Updates from: 09/07/2024 01:08:06
Service Microsoft Docs article Related commit history on GitHub Change details
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
Previously updated : 07/11/2024 Last updated : 09/04/2024
You can easily integrate Azure Application Insights with Azure API Management. A
* Learn strategies for reducing performance impact on your API Management service instance. > [!NOTE]
-> In an API Management [workspace](workspaces-overview.md), a workspace owner can independently integrate Application Insights and enable Application Insights logging for the workspace's APIs. The general guidance to integrate a workspace with Application Insights is similar to the guidance for an API Management instance; however, configuration is scoped to the workspace only. Currently, you must integrate Application Insights in a workspace by configuring an instrumentation key or connection string.
+> In an API Management [workspace](workspaces-overview.md), a workspace owner can independently integrate Application Insights and enable Application Insights logging for the workspace's APIs. The general guidance to integrate a workspace with Application Insights is similar to the guidance for an API Management instance; however, configuration is scoped to the workspace only. Currently, you must integrate Application Insights in a workspace by configuring a connection string (recommended) or an instrumentation key.
> [!WARNING] > When using our [self-hosted gateway](self-hosted-gateway-overview.md), we do not guarantee all telemetry will be pushed to Azure Application Insights given it relies on [Application Insights' in-memory buffering](./../azure-monitor/app/telemetry-channels.md#built-in-telemetry-channels).
You can easily integrate Azure Application Insights with Azure API Management. A
> [!NOTE] > The Application Insights resource **can be** in a different subscription or even a different tenant than the API Management resource.
-* If you plan to configure a managed identity for API Management to use with Application Insights, you need to complete the following steps:
+* If you plan to configure managed identity credentials to use with Application Insights, complete the following steps:
- 1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md) in your API Management instance.
+ 1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md).
* If you enable a user-assigned managed identity, take note of the identity's **Client ID**.
You can easily integrate Azure Application Insights with Azure API Management. A
The following are high level steps for this scenario.
-1. First, you create a connection between Application Insights and API Management
+1. First, create a connection between Application Insights and API Management
You can create a connection between Application Insights and your API Management using the Azure portal, the REST API, or related Azure tools. API Management configures a *logger* resource for the connection.
- > [!NOTE]
- > If your Application Insights resource is in a different tenant, then you must create the logger using the [REST API](#create-a-connection-using-the-rest-api-bicep-or-arm-template) as shown in a later section of this article.
- > [!IMPORTANT]
- > Currently, in the portal, API Management only supports connections to Application Insights using an Application Insights instrumentation key. To use an Application Insights connection string or an API Management managed identity, use the REST API, Bicep, or ARM template to create the logger. [Learn more](../azure-monitor/app/sdk-connection-string.md) about Application Insights connection strings.
+ > Currently, in the portal, API Management only supports connections to Application Insights using an Application Insights instrumentation key. For enhanced security, we recommend using an Application Insights connection string with an API Management managed identity. To configure a connection string with managed identity credentials, use the [REST API](#create-a-connection-using-the-rest-api-bicep-or-arm-template) or related tools as shown in a later section of this article. [Learn more](../azure-monitor/app/sdk-connection-string.md) about Application Insights connection strings.
>
-1. Second, you enable Application Insights logging for your API or APIs.
+ > [!NOTE]
+ > If your Application Insights resource is in a different tenant, then you must create the logger using the [REST API](#create-a-connection-using-the-rest-api-bicep-or-arm-template) or related tools as shown in a later section of this article.
+
+1. Second, enable Application Insights logging for your API or APIs.
In this article, you enable Application Insights logging for your API using the Azure portal. API Management configures a *diagnostic* resource for the API.
The following are high level steps for this scenario.
Follow these steps to use the Azure portal to create a connection between Application Insights and API Management.
+> [!NOTE]
+> Where possible, Microsoft recommends using a connection string with managed identity credentials for enhanced security. To configure these credentials, use the [REST API](#create-a-connection-using-the-rest-api-bicep-or-arm-template) or related tools as shown in a later section of this article.
++ 1. Navigate to your **Azure API Management service instance** in the **Azure portal**. 1. Select **Application Insights** from the menu on the left. 1. Select **+ Add**.
Follow these steps to use the Azure portal to create a connection between Applic
## Create a connection using the REST API, Bicep, or ARM template
-Follow these steps to use the REST API, Bicep, or ARM template to create a connection between Application Insights and API Management. You can configure a logger that uses a connection string, system-assigned managed identity, or user-assigned managed identity.
+Follow these steps to use the REST API, Bicep, or ARM template to create an Application Insights logger for your API Management instance. You can configure a logger that uses a connection string with managed identity credentials (recommended), or a logger that uses only a connection string.
-### Logger with connection string credentials
+### Logger with connection string with managed identity credentials (recommended)
+
+See the [prerequisites](#prerequisites) for using an API Management managed identity.
The Application Insights connection string appears in the **Overview** section of your Application Insights resource.
+#### Connection string with system-assigned managed identity
+ #### [REST API](#tab/rest) Use the API Management [Logger - Create or Update](/rest/api/apimanagement/current-preview/logger/create-or-update) REST API with the following request body.
-If you are configuring the logger for a workspace, use the [Workspace Logger - Create or Update](/rest/api/apimanagement/workspace-logger/create-or-update?view=rest-apimanagement-2023-09-01-preview&preserve-view=true) REST API.
- ```JSON { "properties": { "loggerType": "applicationInsights",
- "description": "adding a new logger with connection string",
+ "description": "Application Insights logger with system-assigned managed identity",
"credentials": {
- "connectionString":"InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;..."
+ "connectionString":"InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...",
+ "identityClientId":"SystemAssigned"
} } }+ ``` #### [Bicep](#tab/bicep)
resource aiLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/log
parent: '<APIManagementInstanceName>' properties: { loggerType: 'applicationInsights'
- description: 'Application Insights logger with connection string'
+ description: 'Application Insights logger with system-assigned managed identity'
credentials: { connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...'
+ identityClientId: 'systemAssigned'
} } }
Include a JSON snippet similar to the following in your Azure Resource Manager t
"name": "ContosoLogger1", "properties": { "loggerType": "applicationInsights",
- "description": "Application Insights logger with connection string",
+ "description": "Application Insights logger with system-assigned managed identity",
"resourceId": "<ApplicationInsightsResourceID>", "credentials": {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;..."
- },
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...",
+ "identityClientId": "SystemAssigned"
+ }
} } ``` -
-### Logger with system-assigned managed identity credentials
-
-See the [prerequisites](#prerequisites) for using an API Management managed identity.
+#### Connection string with user-assigned managed identity
#### [REST API](#tab/rest)
Use the API Management [Logger - Create or Update](/rest/api/apimanagement/curre
{ "properties": { "loggerType": "applicationInsights",
- "description": "adding a new logger with system-assigned managed identity",
+ "description": "Application Insights logger with user-assigned managed identity",
"credentials": { "connectionString":"InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...",
- "identityClientId":"SystemAssigned"
+ "identityClientId":"<ClientID>"
} } }
Use the API Management [Logger - Create or Update](/rest/api/apimanagement/curre
#### [Bicep](#tab/bicep)
-Include a snippet similar to the following in your Bicep template.
+Include a snippet similar to the following in your Bicep template.
```Bicep
-resource aiLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-08-01' = {
+resource aiLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-08-01' = {
name: 'ContosoLogger1' parent: '<APIManagementInstanceName>' properties: { loggerType: 'applicationInsights'
- description: 'Application Insights logger with system-assigned managed identity'
+ description: 'Application Insights logger with user-assigned managed identity'
credentials: { connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...'
- identityClientId: 'systemAssigned'
+ identityClientId: '<ClientID>'
} } }
Include a JSON snippet similar to the following in your Azure Resource Manager t
"name": "ContosoLogger1", "properties": { "loggerType": "applicationInsights",
- "description": "Application Insights logger with system-assigned managed identity",
+ "description": "Application Insights logger with user-assigned managed identity",
"resourceId": "<ApplicationInsightsResourceID>", "credentials": { "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...",
- "identityClientId": "SystemAssigned"
+ "identityClientId": "<ClientID>"
} } } ```
-### Logger with user-assigned managed identity credentials
-See the [prerequisites](#prerequisites) for using an API Management managed identity.
+### Logger with connection string credentials only
+
+The Application Insights connection string appears in the **Overview** section of your Application Insights resource.
#### [REST API](#tab/rest) Use the API Management [Logger - Create or Update](/rest/api/apimanagement/current-preview/logger/create-or-update) REST API with the following request body.
+If you are configuring the logger for a workspace, use the [Workspace Logger - Create or Update](/rest/api/apimanagement/workspace-logger/create-or-update?view=rest-apimanagement-2023-09-01-preview&preserve-view=true) REST API.
+ ```JSON { "properties": { "loggerType": "applicationInsights",
- "description": "adding a new logger with user-assigned managed identity",
+ "description": "Application Insights logger with connection string",
"credentials": {
- "connectionString":"InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...",
- "identityClientId":"<ClientID>"
+ "connectionString":"InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;..."
} } }- ``` #### [Bicep](#tab/bicep)
-Include a snippet similar the following in your Bicep template.
+Include a snippet similar to the following in your Bicep template.
+
+If you are configuring the logger for a workspace, create a `Microsoft.ApiManagement/service.workspace/loggers@2023-09-01-preview` resource instead.
+ ```Bicep
-resource aiLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-08-01' = {
+resource aiLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-08-01' = {
name: 'ContosoLogger1' parent: '<APIManagementInstanceName>' properties: { loggerType: 'applicationInsights'
- description: 'Application Insights logger with user-assigned managed identity'
+ description: 'Application Insights logger with connection string'
credentials: { connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...'
- identityClientId: '<ClientID>'
} } }
resource aiLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/logge
Include a JSON snippet similar to the following in your Azure Resource Manager template.
+If you are configuring the logger for a workspace, create a `Microsoft.ApiManagement/service.workspace/loggers` resource and set `apiVersion` to `2023-09-01-preview` instead.
++ ```JSON { "type": "Microsoft.ApiManagement/service/loggers",
Include a JSON snippet similar to the following in your Azure Resource Manager t
"name": "ContosoLogger1", "properties": { "loggerType": "applicationInsights",
- "description": "Application Insights logger with user-assigned managed identity",
+ "description": "Application Insights logger with connection string",
"resourceId": "<ApplicationInsightsResourceID>", "credentials": {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;...",
- "identityClientId": "<ClientID>"
- }
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/;..."
+ },
} } ``` + ## Enable Application Insights logging for your API Use the following steps to enable Application Insights logging for an API. You can also enable Application Insights logging for all APIs.
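If you prefer to configure the diagnostic in a template instead of the portal, a minimal Bicep sketch might look like the following. The API name, logger name, and sampling values are placeholder assumptions, not values prescribed by this article.

```Bicep
// Hypothetical sketch: enable Application Insights logging for a single API
// by creating a diagnostic that points at an existing logger.
resource apim 'Microsoft.ApiManagement/service@2022-08-01' existing = {
  name: '<APIManagementInstanceName>'
}

resource demoApi 'Microsoft.ApiManagement/service/apis@2022-08-01' existing = {
  parent: apim
  name: '<ApiName>'
}

resource aiLogger 'Microsoft.ApiManagement/service/loggers@2022-08-01' existing = {
  parent: apim
  name: 'ContosoLogger1'
}

resource apiDiagnostic 'Microsoft.ApiManagement/service/apis/diagnostics@2022-08-01' = {
  parent: demoApi
  name: 'applicationinsights'
  properties: {
    loggerId: aiLogger.id
    alwaysLog: 'allErrors'    // always log failed requests
    sampling: {
      samplingType: 'fixed'
      percentage: 100         // sample all requests; lower this to reduce performance impact
    }
  }
}
```

Lowering the sampling percentage is one of the main levers for reducing the performance impact of logging on your gateway.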
Application Insights receives:
> See [Application Insights limits](../azure-monitor/service-limits.md#application-insights) for information about the maximum size and number of metrics and events per Application Insights instance. ## Emit custom metrics
-You can emit [custom metrics](../azure-monitor/essentials/metrics-custom-overview.md) to Application Insights from your API Management instance. API Management emits custom metrics using the [emit-metric](emit-metric-policy.md) policy.
+You can emit [custom metrics](../azure-monitor/essentials/metrics-custom-overview.md) to Application Insights from your API Management instance. API Management emits custom metrics using policies such as [emit-metric](emit-metric-policy.md) and [azure-openai-emit-token-metric](azure-openai-emit-token-metric-policy.md). The following section uses the `emit-metric` policy as an example.
> [!NOTE] > Custom metrics are a [preview feature](../azure-monitor/essentials/metrics-custom-overview.md) of Azure Monitor and subject to [limitations](../azure-monitor/essentials/metrics-custom-overview.md#design-limitations-and-considerations).
To emit custom metrics, perform the following configuration steps.
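As an illustration of the policy piece of that configuration, a minimal `emit-metric` policy in an API's inbound section might look like the following sketch; the metric name, namespace, and dimension are placeholder assumptions rather than values from this article.

```xml
<policies>
    <inbound>
        <base />
        <!-- Hypothetical example: count each request as a custom metric with one dimension -->
        <emit-metric name="Request" value="1" namespace="apim-custom-metrics">
            <dimension name="Client IP" value="@(context.Request.IpAddress)" />
        </emit-metric>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```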
### Limits for custom metrics
-Azure Monitor imposes [usage limits](../azure-monitor/essentials/metrics-custom-overview.md#quotas-and-limits) for custom metrics that may affect your ability to emit metrics from API Management. For example, Azure Monitor currently sets a limit of 10 dimension keys per metric, and a limit of 50,000 total active time series per region in a subscription (within a 12 hour period).
+Azure Monitor imposes [usage limits](../azure-monitor/essentials/metrics-custom-overview.md#quotas-and-limits) for custom metrics that may affect your ability to emit metrics from API Management. For example, Azure Monitor currently sets a limit of 10 dimension keys per metric, and a limit of 50,000 total active time series per region in a subscription (within a 12 hour period).
These limits have the following implications for configuring custom metrics in API Management:
Addressing the issue of telemetry data flow from API Management to Application I
+ Learn more about [Azure Application Insights](../azure-monitor/app/app-insights-overview.md). + Consider [logging with Azure Event Hubs](api-management-howto-log-event-hubs.md).
-+ Learn about visualizing data from Application Insights using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
++ Learn about visualizing data from Application Insights using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md
Previously updated : 07/12/2024 Last updated : 09/04/2024
Azure Event Hubs is a highly scalable data ingress service that can ingest milli
To log events to the event hub, you need to configure credentials for access from API Management. API Management supports either of the two following access mechanisms:
+* A managed identity for your API Management instance (recommended)
* An Event Hubs connection string
-* A managed identity for your API Management instance.
-### Option 1: Configure Event Hubs connection string
-
-To create an Event Hubs connection string, see [Get an Event Hubs connection string](../event-hubs/event-hubs-get-connection-string.md).
-
-* You can use a connection string for the Event Hubs namespace or for the specific event hub you use for logging from API Management.
-* The shared access policy for the connection string must enable at least **Send** permissions.
+> [!NOTE]
+> Where possible, Microsoft recommends using managed identity credentials for enhanced security.
-### Option 2: Configure API Management managed identity
-> [!NOTE]
-> Using an API Management managed identity for logging events to an event hub is supported in API Management REST API version `2022-04-01-preview` or later.
+### Option 1: Configure API Management managed identity
1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md) in your API Management instance.
To create an Event Hubs connection string, see [Get an Event Hubs connection str
1. Assign the identity the **Azure Event Hubs Data sender** role, scoped to the Event Hubs namespace or to the event hub used for logging. To assign the role, use the [Azure portal](../role-based-access-control/role-assignments-portal.yml) or other Azure tools. +
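For example, a hedged Azure PowerShell sketch of that role assignment for a system-assigned identity could look like the following; the resource names and scope are placeholders.

```powershell
# Hypothetical sketch: grant the API Management system-assigned identity
# send access to an Event Hubs namespace. Replace the placeholder values.
$apim = Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "apim-hello-world"

New-AzRoleAssignment -ObjectId $apim.Identity.PrincipalId `
    -RoleDefinitionName "Azure Event Hubs Data Sender" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/<EventHubsNamespace>"
```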
+### Option 2: Configure Event Hubs connection string
+
+To create an Event Hubs connection string, see [Get an Event Hubs connection string](../event-hubs/event-hubs-get-connection-string.md).
+
+* You can use a connection string for the Event Hubs namespace or for the specific event hub you use for logging from API Management.
+* The shared access policy for the connection string must enable at least **Send** permissions.
++ ## Create an API Management logger The next step is to configure a [logger](/rest/api/apimanagement/current-ga/logger) in your API Management service so that it can log events to the event hub. Create and manage API Management loggers by using the [API Management REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) directly or by using tools including [Azure PowerShell](/powershell/module/az.apimanagement/new-azapimanagementlogger), a Bicep template, or an Azure Resource Management template.
-### Logger with connection string credentials
+### Option 1: Logger with managed identity credentials (recommended)
-For prerequisites, see [Configure Event Hubs connection string](#option-1-configure-event-hubs-connection-string).
+You can configure an API Management logger to an event hub using either system-assigned or user-assigned managed identity credentials.
-#### [PowerShell](#tab/PowerShell)
+#### Logger with system-assigned managed identity credentials
-The following example uses the [New-AzApiManagementLogger](/powershell/module/az.apimanagement/new-azapimanagementlogger) cmdlet to create a logger to an event hub by configuring a connection string.
+For prerequisites, see [Configure API Management managed identity](#option-1-configure-api-management-managed-identity).
-```powershell
-# API Management service-specific details
-$apimServiceName = "apim-hello-world"
-$resourceGroupName = "myResourceGroup"
+#### [REST API](#tab/PowerShell)
+
+Use the API Management [Logger - Create or Update](/rest/api/apimanagement/current-preview/logger/create-or-update) REST API with the following request body.
+
+```JSON
+{
+ "properties": {
+ "loggerType": "azureEventHub",
+ "description": "Event Hub logger with system-assigned managed identity",
+ "credentials": {
+ "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net",
+ "identityClientId":"SystemAssigned",
+ "name":"<EventHubName>"
+ }
+ }
+}
-# Create logger
-$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
-New-AzApiManagementLogger -Context $context -LoggerId "ContosoLogger1" -Name "ApimEventHub" -ConnectionString "Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>" -Description "Event hub logger with connection string"
``` #### [Bicep](#tab/bicep)
New-AzApiManagementLogger -Context $context -LoggerId "ContosoLogger1" -Name "Ap
Include a snippet similar to the following in your Bicep template. ```Bicep
-resource ehLoggerWithConnectionString 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+resource ehLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-08-01' = {
name: 'ContosoLogger1' parent: '<APIManagementInstanceName>' properties: { loggerType: 'azureEventHub'
- description: 'Event hub logger with connection string'
+ description: 'Event hub logger with system-assigned managed identity'
credentials: {
- connectionString: 'Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>'
- name: 'ApimEventHub'
+ endpointAddress: '<EventHubsNamespace>.servicebus.windows.net'
+ identityClientId: 'systemAssigned'
+ name: '<EventHubName>'
} } }
Include a JSON snippet similar to the following in your Azure Resource Manager t
```JSON { "type": "Microsoft.ApiManagement/service/loggers",
- "apiVersion": "2022-04-01-preview",
+ "apiVersion": "2022-08-01",
"name": "ContosoLogger1", "properties": { "loggerType": "azureEventHub",
- "description": "Event hub logger with connection string",
- "resourceId": "<EventHubsResourceID>"
+ "description": "Event Hub logger with system-assigned managed identity",
+ "resourceId": "<EventHubsResourceID>",
"credentials": {
- "connectionString": "Endpoint=sb://<EventHubsNamespace>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>",
- "name": "ApimEventHub"
+ "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net",
+ "identityClientId": "SystemAssigned",
+ "name": "<EventHubName>"
}, } } ```
+#### Logger with user-assigned managed identity credentials
-### Logger with system-assigned managed identity credentials
-
-For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity).
+For prerequisites, see [Configure API Management managed identity](#option-1-configure-api-management-managed-identity).
#### [REST API](#tab/PowerShell)
-Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with system-assigned managed identity credentials.
+Use the API Management [Logger - Create or Update](/rest/api/apimanagement/current-preview/logger/create-or-update) REST API with the following request body.
+ ```JSON { "properties": { "loggerType": "azureEventHub",
- "description": "adding a new logger with system assigned managed identity",
+ "description": "Event Hub logger with user-assigned managed identity",
"credentials": { "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net",
- "identityClientId":"SystemAssigned",
+ "identityClientId":"<ClientID>",
"name":"<EventHubName>" } }
Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger
Include a snippet similar to the following in your Bicep template. ```Bicep
-resource ehLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+resource ehLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-08-01' = {
name: 'ContosoLogger1' parent: '<APIManagementInstanceName>' properties: { loggerType: 'azureEventHub'
- description: 'Event hub logger with system-assigned managed identity'
+ description: 'Event Hub logger with user-assigned managed identity'
credentials: { endpointAddress: '<EventHubsNamespace>.servicebus.windows.net'
- identityClientId: 'systemAssigned'
+ identityClientId: '<ClientID>'
name: '<EventHubName>' } }
Include a JSON snippet similar to the following in your Azure Resource Manager t
```JSON { "type": "Microsoft.ApiManagement/service/loggers",
- "apiVersion": "2022-04-01-preview",
+ "apiVersion": "2022-08-01",
"name": "ContosoLogger1", "properties": { "loggerType": "azureEventHub",
- "description": "Event hub logger with system-assigned managed identity",
+ "description": "Event Hub logger with user-assigned managed identity",
"resourceId": "<EventHubsResourceID>", "credentials": { "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net",
- "identityClientId": "SystemAssigned",
+ "identityClientId": "<ClientID>",
"name": "<EventHubName>" }, } } ```
-### Logger with user-assigned managed identity credentials
-For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity).
-#### [REST API](#tab/PowerShell)
+### Option 2: Logger with connection string credentials
-Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with user-assigned managed identity credentials.
+For prerequisites, see [Configure Event Hubs connection string](#option-2-configure-event-hubs-connection-string).
-```JSON
-{
- "properties": {
- "loggerType": "azureEventHub",
- "description": "adding a new logger with user-assigned managed identity",
- "credentials": {
- "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net",
- "identityClientId":"<ClientID>",
- "name":"<EventHubName>"
- }
- }
-}
+> [!NOTE]
+> Where possible, Microsoft recommends configuring the logger with managed identity credentials. See [Configure logger with managed identity credentials](#option-1-logger-with-managed-identity-credentials-recommended), earlier in this article.
+#### [PowerShell](#tab/PowerShell)
+
+The following example uses the [New-AzApiManagementLogger](/powershell/module/az.apimanagement/new-azapimanagementlogger) cmdlet to create a logger to an event hub by configuring a connection string.
+
+```powershell
+# API Management service-specific details
+$apimServiceName = "apim-hello-world"
+$resourceGroupName = "myResourceGroup"
+
+# Create logger
+$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
+New-AzApiManagementLogger -Context $context -LoggerId "ContosoLogger1" -Name "ApimEventHub" -ConnectionString "Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>" -Description "Event hub logger with connection string"
``` #### [Bicep](#tab/bicep)
-Include a snippet similar the following in your Bicep template.
+Include a snippet similar to the following in your Bicep template.
```Bicep
-resource ehLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+resource ehLoggerWithConnectionString 'Microsoft.ApiManagement/service/loggers@2022-08-01' = {
name: 'ContosoLogger1' parent: '<APIManagementInstanceName>' properties: { loggerType: 'azureEventHub'
- description: 'Event hub logger with user-assigned managed identity'
+ description: 'Event Hub logger with connection string credentials'
credentials: {
- endpointAddress: '<EventHubsNamespace>.servicebus.windows.net'
- identityClientId: '<ClientID>'
- name: '<EventHubName>'
+ connectionString: 'Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>'
+ name: 'ApimEventHub'
} } }
Include a JSON snippet similar to the following in your Azure Resource Manager t
```JSON { "type": "Microsoft.ApiManagement/service/loggers",
- "apiVersion": "2022-04-01-preview",
+ "apiVersion": "2022-08-01",
"name": "ContosoLogger1", "properties": { "loggerType": "azureEventHub",
- "description": "Event hub logger with user-assigned managed identity",
- "resourceId": "<EventHubsResourceID>",
+ "description": "Event Hub logger with connection string credentials",
+ "resourceId": "<EventHubsResourceID>"
"credentials": {
- "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net",
- "identityClientId": "<ClientID>",
- "name": "<EventHubName>"
+ "connectionString": "Endpoint=sb://<EventHubsNamespace>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>",
+ "name": "ApimEventHub"
}, } }
api-management Transform Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/transform-api.md
The rest of this section tests policy transformations that you set in this artic
### Test the rate limit (throttling) 1. Select **Demo Conference API** > **Test**.
-1. Select the **GetSpeakers** operation. Select **Send** three times in a row.
+1. Select the **GetSpeakers** operation. Select **Send** four times in a row.
- After sending the request three times, you get the **429 Too Many Requests** response.
+ After sending the request four times, you get the **429 Too Many Requests** response.
:::image type="content" source="media/transform-api/test-throttling-new.png" alt-text="Screenshot showing Too Many Requests in the response in the portal.":::
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Title: Deploy ASP.NET Core and Azure SQL Database app description: Learn how to deploy an ASP.NET Core web app to Azure App Service and connect to an Azure SQL Database. Previously updated : 06/30/2024 Last updated : 09/06/2024 ms.devlang: csharp
In this tutorial, you learn how to:
## Prerequisites + * An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free). * A GitHub account. You can also [get one for free](https://github.com/join). * Knowledge of ASP.NET Core development. * **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available.
-<!-- ## Skip to the end
++
+* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java).
+* [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed. You can follow the steps with the [Azure Cloud Shell](https://shell.azure.com) because it already has Azure Developer CLI installed.
+* Knowledge of ASP.NET Core development.
+* **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available.
++
+## Skip to the end
You can quickly deploy the sample app in this tutorial and see it running in Azure. Just run the following commands in the [Azure Cloud Shell](https://shell.azure.com), and follow the prompt:
cd msdocs-app-service-sqldb-dotnetcore
azd init --template msdocs-app-service-sqldb-dotnetcore azd up ```
- -->
## 1. Run the sample
Having issues? Check the [Troubleshooting section](#troubleshooting).
In this step, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service, Azure SQL Database, and Azure Cache. For the creation process, you'll specify:
-* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`.
-* The **Region** to run the app physically in the world.
+* The **Name** for the web app. It's used as part of the DNS name for your app in the form of `https://<app-name>-<hash>.<region>.azurewebsites.net`.
+* The **Region** to run the app physically in the world. It's also used as part of the DNS name for your app.
* The **Runtime stack** for the app. It's where you select the .NET version to use for your app. * The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app. * The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
- **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service**: Represents your app and runs in the App Service plan. - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic.
- - **Private endpoints**: Access endpoints for the database server and the Redis cache in the virtual network.
+ - **Private endpoints**: Access endpoints for the key vault, the database server, and the Redis cache in the virtual network.
- **Network interfaces**: Represents private IP addresses, one for each of the private endpoints. - **Azure SQL Database server**: Accessible only from behind its private endpoint. - **Azure SQL Database**: A database and a user are created for you on the server. - **Azure Cache for Redis**: Accessible only from behind its private endpoint.
- - **Private DNS zones**: Enable DNS resolution of the database server and the Redis cache in the virtual network.
+ - **Key vault**: Accessible only from behind its private endpoint. Used to manage secrets for the App Service app.
+ - **Private DNS zones**: Enable DNS resolution of the key vault, the database server, and the Redis cache in the virtual network.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-sqldb-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-sqldb-3.png"::: :::column-end::: :::row-end:::
-## 2. Verify connection strings
+## 3. Secure connection secrets
-> [!TIP]
-> The default SQL database connection string uses SQL authentication. For more secure, passwordless authentication, see [How do I change the SQL Database connection to use a managed identity instead?](#how-do-i-change-the-sql-database-connection-to-use-a-managed-identity-instead)
-
-The creation wizard generated connection strings for the SQL database and the Redis cache already. In this step, find the generated connection strings for later.
+The creation wizard generated the connectivity settings for you already as [.NET connection strings](configure-common.md#configure-connection-strings) and [app settings](configure-common.md#configure-app-settings). However, the security best practice is to keep secrets out of App Service completely. You'll move your secrets to a key vault and change your app settings to [Key Vault references](app-service-key-vault-references.md) with the help of Service Connectors.
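For context, a Key Vault reference is simply an app setting or connection string whose value points to a secret in your vault instead of holding the secret itself. The two supported formats look like the following; the vault and secret names are placeholders.

```
@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)
@Microsoft.KeyVault(VaultName=<vault-name>;SecretName=<secret-name>)
```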
:::row::: :::column span="2":::
- **Step 1:** In the App Service page, from the left menu, select **Settings** > **Environment variables**.
+ **Step 1:** In the App Service page,
+ 1. In the left menu, select **Settings > Environment variables > Connection strings**.
+ 1. Select **AZURE_SQL_CONNECTIONSTRING**.
+ 1. In **Add/Edit connection string**, in the **Value** field, find the *Password=* part at the end of the string.
+ 1. Copy the password string after *Password=* for use later.
+ This connection string lets you connect to the SQL database secured behind a private endpoint. However, the password is saved directly in the App Service app, which isn't a security best practice. Likewise, the Redis cache connection string in the **App settings** tab contains a secret. You'll change both in the following steps.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-1.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-1.png" alt-text="A screenshot showing how to see the value of an app setting." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-1.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 2:**
- 1. Find **AZURE_REDIS_CONNECTIONSTRING** in the **App settings** section. This string was generated from the new Redis cache by the creation wizard. To set up your application, this name is all you need.
- 1. Select **Connection strings** and find **AZURE_SQL_CONNECTIONSTRING** in the **Connection strings** section. This string was generated from the new SQL database by the creation wizard. To set up your application, this name is all you need.
- 1. If you want, you can select the setting and see, copy, or edit its value.
- Later, you'll change your application to use `AZURE_SQL_CONNECTIONSTRING` and `AZURE_REDIS_CONNECTIONSTRING`.
+ **Step 2:** Create a key vault for secure management of secrets.
+ 1. In the top search bar, type "*key vault*", then select **Marketplace** > **Key Vault**.
+ 1. In **Resource Group**, select **msdocs-core-sql-tutorial**.
+ 1. In **Key vault name**, type a name that consists of only letters and numbers.
+ 1. In **Region**, set it to the same location as the resource group.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to create an app setting." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-2.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-2.png" alt-text="A screenshot showing how to create a key vault." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:**
+ 1. Select the **Networking** tab.
+ 1. Unselect **Enable public access**.
+ 1. Select **Create a private endpoint**.
+ 1. In the dialog, in **Location**, select the same location as your App Service app.
+ 1. In **Resource Group**, select **msdocs-core-sql-tutorial**.
+ 1. In **Name**, type **msdocs-core-sql-XYZVaultEndpoint**.
+ 1. In **Virtual network**, select **msdocs-core-sql-XYZVnet**.
+ 1. In **Subnet**, select **msdocs-core-sql-XYZSubnet**.
+ 1. Select **OK**.
+ 1. Select **Review + create**, then select **Create**. Wait for the key vault deployment to finish. You should see "Your deployment is complete."
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-3.png" alt-text="A screenshot showing how secure a key vault with a private endpoint." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4:**
+ 1. In the top search bar, type *msdocs-core-sql*, then select the App Service resource called **msdocs-core-sql-XYZ**.
+ 1. In the App Service page, in the left menu, select **Settings > Service Connector**. There are already two connectors, which the app creation wizard created for you.
+ 1. Select the checkbox next to the SQL Database connector, then select **Edit**.
+ 1. Select the **Authentication** tab.
+ 1. Select **Store Secret in Key Vault**.
+ 1. Under **Key Vault Connection**, select **Create new**.
+ A **Create connection** dialog is opened on top of the edit dialog.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-4.png" alt-text="A screenshot showing how to edit the SQL Database service connector with a key vault connection." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5:** In the **Create connection** dialog for the Key Vault connection:
+ 1. In **Key Vault**, select the key vault you created earlier.
+ 1. Select **Review + Create**. You should see that **System assigned managed identity** is set to **Selected**.
+ 1. When validation completes, select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-5.png" alt-text="A screenshot showing how to create a Key Vault service connector." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6:** You're back in the edit dialog for **defaultConnector**.
+ 1. In the **Authentication** tab, wait for the key vault connector to be created. When it's finished, the **Key Vault Connection** dropdown automatically selects it.
+ 1. Select **Next: Networking**.
+ 1. Select **Configure firewall rules to enable access to target service**. The app creation wizard already secured the SQL database with a private endpoint.
+ 1. Select **Save**. Wait until the **Update succeeded** notification appears.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-6.png" alt-text="A screenshot showing the key vault connection selected in the SQL Database service connector." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-6.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 7:** In the Service Connectors page:
+ 1. Select the checkbox next to the Cache for Redis connector, then select **Edit**.
+ 1. Select the **Authentication** tab.
+ 1. Select **Store Secret in Key Vault**.
+ 1. Under **Key Vault Connection**, select the key vault you created.
+ 1. Select **Next: Networking**.
+ 1. Select **Configure firewall rules to enable access to target service**. The app creation wizard already secured the Azure Cache for Redis with a private endpoint.
+ 1. Select **Save**. Wait until the **Update succeeded** notification appears.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-7.png" alt-text="A screenshot showing how to edit the Cache for Redis service connector with a key vault connection." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-7.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 8:** To verify that you secured the secrets:
+ 1. From the left menu, select **Environment variables > Connection strings** again.
+ 1. Next to **AZURE_SQL_CONNECTIONSTRING**, select **Show value**. The value should be `@Microsoft.KeyVault(...)`, which means that it's a [key vault reference](app-service-key-vault-references.md) because the secret is now managed in the key vault.
+ 1. To verify the Redis connection string, select the **App settings** tab. Next to **AZURE_REDIS_CONNECTIONSTRING**, select **Show value**. The value should be `@Microsoft.KeyVault(...)` too.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-8.png" alt-text="A screenshot showing how to see the value of the .NET connection string in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-secure-connection-secrets-8.png":::
:::column-end::: :::row-end:::
With the SQL Database protected by the virtual network, the easiest way to run [
:::row::: :::column span="2":::
- **Step 1:** Back in the App Service page, in the left menu, select **Development Tools** > **SSH**, then select **Go**.
+ **Step 1:** Back in the App Service page, in the left menu, select **Development Tools** > **SSH**, then select **Go**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-1.png":::
Having issues? Check the [Troubleshooting section](#troubleshooting).
:::column span="2"::: **Step 1:** In the App Service page: 1. From the left menu, select **Overview**.
- 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
+ 1. Select the URL of your app.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-browse-app-1.png":::
Having issues? Check the [Troubleshooting section](#troubleshooting).
:::row-end::: > [!TIP]
-> The sample application implements the [cache-aside](/azure/architecture/patterns/cache-aside) pattern. When you visit a data view for the second time, or reload the same page after making data changes, **Processing time** in the webpage shows a much faster time because it's loading the data from the cache instead of the database.
+> The sample application implements the [cache-aside](/azure/architecture/patterns/cache-aside) pattern. When you visit a data view for the second time, or reload the same page after making data changes, **Processing time** in the webpage shows a much faster time because it's loading the data from the cache instead of the database.
## 6. Stream diagnostic logs
The dev container already has the [Azure Developer CLI](/azure/developer/azure-d
||| |The current directory is not empty. Would you like to initialize a project here in '\<your-directory>'? | **Y** | |What would you like to do with these files? | **Keep my existing files unchanged** |
- |Enter a new environment name | Type a unique name. The AZD template uses this name as part of the DNS name of your web app in Azure (`<app-name>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. |
+ |Enter a new environment name | Type a unique name. The AZD template uses this name as part of the DNS name of your web app in Azure (`<app-name>-<hash>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. |
1. Sign into Azure by running the `azd auth login` command and following the prompt:
The dev container already has the [Azure Developer CLI](/azure/developer/azure-d
- **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service**: Represents your app and runs in the App Service plan. - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic.
- - **Private endpoints**: Access endpoints for the database server and the Redis cache in the virtual network.
+ - **Private endpoints**: Access endpoints for the key vault, the database server, and the Redis cache in the virtual network.
- **Network interfaces**: Represents private IP addresses, one for each of the private endpoints. - **Azure SQL Database server**: Accessible only from behind its private endpoint. - **Azure SQL Database**: A database and a user are created for you on the server. - **Azure Cache for Redis**: Accessible only from behind its private endpoint.
- - **Private DNS zones**: Enable DNS resolution of the database server and the Redis cache in the virtual network.
+ - **Key vault**: Accessible only from behind its private endpoint. Used to manage secrets for the App Service app.
+ - **Private DNS zones**: Enable DNS resolution of the key vault, the database server, and the Redis cache in the virtual network.
Having issues? Check the [Troubleshooting section](#troubleshooting).
Having issues? Check the [Troubleshooting section](#troubleshooting).
The AZD template you use generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings) and outputs them to the terminal for your convenience. App settings are one way to keep connection secrets out of your code repository.
-1. In the AZD output, find the settings `AZURE_SQL_CONNECTIONSTRING` and `AZURE_REDIS_CONNECTIONSTRING`. To keep secrets safe, only the setting names are displayed. They look like this in the AZD output:
+1. In the AZD output, find the settings `AZURE_SQL_CONNECTIONSTRING` and `AZURE_REDIS_CONNECTIONSTRING`. Only the setting names are displayed. They look like this in the AZD output:
<pre> App Service app has the following connection strings:
-
- - AZURE_SQL_CONNECTIONSTRING
- - AZURE_REDIS_CONNECTIONSTRING
+ - AZURE_SQL_CONNECTIONSTRING
+ - AZURE_REDIS_CONNECTIONSTRING
+ - AZURE_KEYVAULT_RESOURCEENDPOINT
+ - AZURE_KEYVAULT_SCOPE
</pre> `AZURE_SQL_CONNECTIONSTRING` contains the connection string to the SQL Database in Azure, and `AZURE_REDIS_CONNECTIONSTRING` contains the connection string to the Azure Redis cache. You need to use them in your code later.
With the SQL Database protected by the virtual network, the easiest way to run d
1. In the azd output, find the URL for the SSH session and navigate to it in the browser. It looks like this in the output: <pre>
- Open SSH session to App Service container at: https://&lt;app-name>.scm.azurewebsites.net/webssh/host
+ Open SSH session to App Service container at: https://&lt;app-name>-&lt;hash>.scm.azurewebsites.net/webssh/host
</pre> 1. In the SSH terminal, run the following commands:
Having issues? Check the [Troubleshooting section](#troubleshooting).
Deploying services (azd deploy) (Γ£ô) Done: Deploying service web
- - Endpoint: https://&lt;app-name>.azurewebsites.net/
+ - Endpoint: https://&lt;app-name>-&lt;hash>.azurewebsites.net/
</pre> 2. Add a few tasks to the list.
azd down
Depending on your subscription and the region you select, you might see the deployment status for Azure SQL Database to be `Conflict`, with the following message in Operation details:
-`InternalServerError: An unexpected error occured while processing the request.`
+`Location '<region>' is not accepting creation of new Windows Azure SQL Database servers at this time.`
This error is most likely caused by a limit on your subscription for the region you select. Try choosing a different region for your deployment.
Pricing for the created resources is as follows:
### How do I connect to the Azure SQL Database server that's secured behind the virtual network with other tools? -- For basic access from a command-line tool, you can run `sqlcmd` from the app's SSH terminal. The app's container doesn't come with `sqlcmd`, so you must [install it manually](/sql/linux/sql-server-linux-setup-tools#ubuntu). Remember that the installed client doesn't persist across app restarts.
+- For basic access from a command-line tool, you can run `sqlcmd` from the app's SSH terminal. The app's container doesn't come with `sqlcmd`, so you must [install it manually](/sql/tools/sqlcmd/sqlcmd-utility?tabs=go%2Clinux&pivots=cs1-bash#download-and-install-sqlcmd). Remember that the installed client doesn't persist across app restarts.
- To connect from a SQL Server Management Studio client or from Visual Studio, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network. ### How does local app development work with GitHub Actions?
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
Having issues? Check the [Troubleshooting section](#troubleshooting).
## 3. Secure connection secrets
-The creation wizard generated the connectivity string for you already as [app settings](configure-common.md#configure-app-settings). However, it's In this step, you learn where to find the app settings, and how you can create your own.
-
-App settings are one way to keep connection secrets out of your code repository. When you're ready to move your secrets to a more secure location, you can use [Key Vault references](app-service-key-vault-references.md) instead.
+The creation wizard generated the connection string for you already as an [app setting](configure-common.md#configure-app-settings). However, the security best practice is to keep secrets out of App Service completely. You'll move your secrets to a key vault and change your app setting to a [Key Vault reference](app-service-key-vault-references.md) with the help of Service Connectors.
:::row::: :::column span="2":::
App settings are one way to keep connection secrets out of your code repository.
1. In **Region**, set it to the same location as the resource group. 1. In the dialog, in **Location**, select the same location as your App Service app. 1. In **Resource Group**, select **msdocs-spring-cosmosdb-tutorial**.
- 1. In **Name**, type **msdocs-spring-cosmosdb-XYZVvaultEndpoint**.
+ 1. In **Name**, type **msdocs-spring-cosmosdb-XYZVaultEndpoint**.
1. In **Virtual network**, select **msdocs-spring-cosmosdb-XYZVnet**. 1. In **Subnet**, select **msdocs-spring-cosmosdb-XYZSubnet**. 1. Select **OK**.
App settings are one way to keep connection secrets out of your code repository.
:::column span="2"::: **Step 4:** 1. In the top search bar, type *msdocs-spring-cosmosdb*, then the App Service resource called **msdocs-spring-cosmosdb-XYZ**.
- 1. In the App Service page, in the left menu, select **Settings > Service Connector. There's already a connector, which the app creation wizard created for you.
+ 1. In the App Service page, in the left menu, select **Settings > Service Connector**. There's already a connector, which the app creation wizard created for you.
1. Select the checkbox next to the connector, then select **Edit**. 1. In the **Basics** tab, set **Client type** to **SpringBoot**. This option creates the Spring Boot specific environment variables for you. 1. Select the **Authentication** tab.
- 1. Select Store Secret in Key Vault.
+ 1. Select **Store Secret in Key Vault**.
1. Under **Key Vault Connection**, select **Create new**. A **Create connection** dialog is opened on top of the edit dialog. :::column-end:::
App settings are one way to keep connection secrets out of your code repository.
:::row::: :::column span="2"::: **Step 6:** You're back in the edit dialog for **defaultConnector**.
- 1. In the **Authentication** tab, wait for the key vault connector to be created. When it's finished, the Key Vault Connection dropdown automatically selects it.
+ 1. In the **Authentication** tab, wait for the key vault connector to be created. When it's finished, the **Key Vault Connection** dropdown automatically selects it.
1. Select **Next: Networking**.
- 1. Select **Configure firewall rules to enable access to target service**. If you see the message, "No Private Endpoint on the target service," ignore it. The app creation wizard already secured the Cosmos DB database with a private endpoint.
+ 1. Select **Configure firewall rules to enable access to target service**. If you see the message, "No Private Endpoint on the target service," ignore it. The app creation wizard already secured the Cosmos DB database with a private endpoint.
1. Select **Save**. Wait until the **Update succeeded** notification appears. :::column-end::: :::column:::
Having issues? Check the [Troubleshooting section](#troubleshooting).
:::column span="2"::: **Step 1:** In the App Service page: 1. From the left menu, select **Overview**.
- 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
+ 1. Select the URL of your app.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-browse-app-1.png":::
The dev container already has the [Azure Developer CLI](/azure/developer/azure-d
||| |The current directory is not empty. Would you like to initialize a project here in '\<your-directory>'? | **Y** | |What would you like to do with these files? | **Keep my existing files unchanged** |
- |Enter a new environment name | Type a unique name. The AZD template uses this name as part of the DNS name of your web app in Azure (`<app-name>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. |
+ |Enter a new environment name | Type a unique name. The AZD template uses this name as part of the DNS name of your web app in Azure (`<app-name>-<hash>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. |
1. Sign into Azure by running the `azd auth login` command and following the prompt:
Having issues? Check the [Troubleshooting section](#troubleshooting).
Deploying services (azd deploy) (Γ£ô) Done: Deploying service web
- - Endpoint: https://&lt;app-name>.azurewebsites.net/
+ - Endpoint: https://&lt;app-name>-&lt;hash>.azurewebsites.net/
</pre> 2. Add a few tasks to the list.
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
description: Learn how to enable and manage logs for Azure Application Gateway.
-+ Last updated 06/17/2024
application-gateway Application Gateway Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-metrics.md
description: Learn how to use metrics to monitor performance of application gate
-+ Last updated 06/17/2024
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
description: Azure Application Gateway monitors the health of all resources in i
-+ Last updated 09/14/2023
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md
In this example, you create three virtual machines to be used as backend servers
1. Open the interactive shell and make sure that it's set to **PowerShell**.
- ![Install custom extension](./media/application-gateway-create-url-route-portal/application-gateway-extension.png)
+ ![Screenshot of installing a custom extension.](./media/application-gateway-create-url-route-portal/application-gateway-extension.png)
2. Run the following command to install IIS on the virtual machine:
Review the settings on the **Review + create** tab, and then select **Create** t
1. Select **All resources**, and then select **myAppGateway**.
- ![Record application gateway public IP address](./media/application-gateway-create-url-route-portal/application-gateway-record-ag-address.png)
+ ![Screenshot of record application gateway public IP address.](./media/application-gateway-create-url-route-portal/application-gateway-record-ag-address.png)
2. Copy the public IP address, and then paste it into the address bar of your browser. Such as, http:\//203.0.113.10:8080.
- ![Test base URL in application gateway](./media/application-gateway-create-url-route-portal/application-gateway-iistest.png)
+ ![Screenshot of test base URL in application gateway.](./media/application-gateway-create-url-route-portal/application-gateway-iistest.png)
The listener on port 8080 routes this request to the default backend pool. 3. Change the URL to *http://&lt;ip-address&gt;:8080/images/test.htm*, replacing &lt;ip-address&gt; with the public IP address of **myAppGateway**, and you should see something like the following example:
- ![Test images URL in application gateway](./media/application-gateway-create-url-route-portal/application-gateway-iistest-images.png)
+ ![Screenshot of test images URL in application gateway.](./media/application-gateway-create-url-route-portal/application-gateway-iistest-images.png)
The listener on port 8080 routes this request to the *Images* backend pool. 4. Change the URL to *http://&lt;ip-address&gt;:8080/video/test.htm*, replacing &lt;ip-address&gt; with the public IP address of **myAppGateway**, and you should see something like the following example:
- ![Test video URL in application gateway](./media/application-gateway-create-url-route-portal/application-gateway-iistest-video.png)
+ ![Screenshot of test video URL in application gateway.](./media/application-gateway-create-url-route-portal/application-gateway-iistest-video.png)
The listener on port 8080 routes this request to the *Video* backend pool.
application-gateway Alb Controller Backend Health Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-backend-health-metrics.md
-+ Last updated 06/03/2024
application-gateway Alb Controller Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-release-notes.md
-+ Last updated 5/9/2024
application-gateway Api Specification Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/api-specification-kubernetes.md
-+ Last updated 02/27/2024
application-gateway Application Gateway For Containers Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/application-gateway-for-containers-metrics.md
-+ Last updated 02/27/2024
application-gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/diagnostics.md
-+ Last updated 07/17/2024
application-gateway Ingress Controller Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-annotations.md
description: This article provides documentation on the annotations specific to
-+ Last updated 5/13/2024
application-gateway Ingress Controller Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-migration.md
description: This article provides instructions on how to migrate from AGIC depl
-+ Last updated 07/01/2024
application-gateway Ingress Controller Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-overview.md
description: This article provides an introduction to what Application Gateway I
-+ Last updated 01/31/2024
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
description: This article provides an overview of rewriting HTTP headers and URL
Previously updated : 09/13/2022 Last updated : 09/06/2024 # Rewrite HTTP headers and URL with Application Gateway
-Application Gateway allows you to rewrite selected content of requests and responses. With this feature, you can translate URLs, query string parameters as well as modify request and response headers. It also allows you to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. These conditions are based on the request and response information.
+Application Gateway allows you to rewrite selected content of requests and responses. With this feature, you can translate URLs and query string parameters, and modify request and response headers. It also allows you to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. These conditions are based on the request and response information.
> [!NOTE] > HTTP header and URL rewrite features are only available for the [Application Gateway v2 SKU](application-gateway-autoscaling-zone-redundant.md)
Application Gateway allows you to add, remove, or update HTTP request and respon
To learn how to rewrite request and response headers with Application Gateway using Azure portal, see [here](rewrite-http-headers-portal.md).
-![img](./media/rewrite-http-headers-url/header-rewrite-overview.png)
-
+![A diagram showing headers in request and response packets.](./media/rewrite-http-headers-url/header-rewrite-overview.png)
**Supported headers** You can rewrite all headers in requests and responses, except for the Connection and Upgrade headers. You can also use the application gateway to create custom headers and add them to the requests and responses being routed through it. + ### URL path and query string With URL rewrite capability in Application Gateway, you can: * Rewrite the host name, path and query string of the request URL
-* Choose to rewrite the URL of all requests on a listener or only those requests which match one or more of the conditions you set. These conditions are based on the request properties (request header and server variables).
+* Choose to rewrite the URL of all requests on a listener or only those requests that match one or more of the conditions you set. These conditions are based on the request properties (request header and server variables).
* Choose to route the request (select the backend pool) based on either the original URL or the rewritten URL
To learn how to rewrite URL with Application Gateway using Azure portal, see [he
## Rewrite actions
-You use rewrite actions to specify the URL, request headers or response headers that you want to rewrite and the new value to which you intend to rewrite them to. The value of a URL or a new or existing header can be set to these types of values:
+Rewrite actions are used to specify the URL, request headers, or response headers that you want to rewrite, and the new value that you want to rewrite them to. The value of a URL or a new or existing header can be set to the following types of values:
* Text * Request header. To specify a request header, you need to use the syntax {http_req_*headerName*}
You use rewrite actions to specify the URL, request headers or response headers
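To make the value syntax concrete, the following is a minimal, illustrative sketch of a rewrite action expressed as the `actionSet` fragment of an ARM rewrite rule. The header names are hypothetical; the three values show a literal text value, a request header reference (`{http_req_headerName}`), and a server variable reference (`{var_serverVariableName}`).

```json
{
  "actionSet": {
    "requestHeaderConfigurations": [
      {
        "headerName": "X-Original-Agent",
        "headerValue": "{http_req_User-Agent}"
      },
      {
        "headerName": "X-Request-Scheme",
        "headerValue": "{var_request_scheme}"
      }
    ],
    "responseHeaderConfigurations": [
      {
        "headerName": "X-Served-By",
        "headerValue": "contoso-gateway"
      }
    ]
  }
}
```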
## Rewrite Conditions
-You can use rewrite conditions, an optional configuration, to evaluate the content of HTTP(S) requests and responses and perform a rewrite only when one or more conditions are met. The application gateway uses these types of variables to evaluate the content of requests and responses:
+You can use rewrite conditions to evaluate the content of HTTP(S) requests and responses. This optional configuration enables you to perform a rewrite only when one or more conditions are met. The application gateway uses these types of variables to evaluate the content of requests and responses:
* HTTP headers in the request * HTTP headers in the response
You can use a condition to evaluate whether a specified variable is present, whe
### Pattern Matching
-Application Gateway uses regular expressions for pattern matching in the condition. You should use Regular Expression 2 (RE2) compatible expressions when writing your conditions. If you are running an Application Gateway Web Application Firewall (WAF) with Core Rule Set 3.1 or earlier, you may run into issues when using [Perl Compatible Regular Expressions (PCRE)](https://www.pcre.org/) while doing lookahead and lookbehind (negative or positive) assertions.
+Application Gateway uses regular expressions for pattern matching in the condition. You should use Regular Expression 2 (RE2) compatible expressions when writing your conditions. If you're running an Application Gateway Web Application Firewall (WAF) with Core Rule Set 3.1 or earlier, you might have issues when using [Perl Compatible Regular Expressions (PCRE)](https://www.pcre.org/). Issues can happen when using lookahead and lookbehind (negative or positive) assertions.
### Capturing
To capture a substring for later use, put parentheses around the subpattern that
* (\d)+ # Match a digit one or more times, capturing the last into group 1 > [!Note]
-> Use of */* to prefix and suffix the pattern should not be specified in the pattern to match value. For example, (\d)(\d) will match two digits. /(\d)(\d)/ won't match two digits.
+> Use of */* to prefix and suffix the pattern shouldn't be specified in the pattern to match value. For example, (\d)(\d) matches two digits. /(\d)(\d)/ won't match two digits.
Once captured, you can reference them in the action set using the following format:
Once captured, you can reference them in the action set using the following form
* For a server variable, you must use {var_serverVariableName_groupNumber}. For example, {var_uri_path_1} or {var_uri_path_2} > [!Note]
-> The case of the condition variable needs to match case of the capture variable. For example, if my condition variable is User-Agent, my capture variable must be for User-Agent (i.e. {http_req_User-Agent_2}). If my condition variable is defined as user-agent, my capture variable must be for user-agent (i.e. {http_req_user-agent_2}).
+> The case of the condition variable needs to match the case of the capture variable. For example, if the condition variable is defined as user-agent, the capture variable must be for user-agent ({http_req_user-agent_2}).
-If you want to use the whole value, you should not mention the number. Simply use the format {http_req_headerName}, etc. without the groupNumber.
+If you want to use the whole value, you shouldn't mention the number. Simply use the format {http_req_headerName}, etc. without the groupNumber.
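As a hedged sketch, the following ARM-style fragment puts the capture syntax together: the condition captures the category value from the `query_string` server variable, and the action writes the captured group back into a hypothetical request header by using the `{var_query_string_1}` format. The header name and pattern are illustrative only.

```json
{
  "conditions": [
    {
      "variable": "var_query_string",
      "pattern": "category=([a-z]+)",
      "ignoreCase": true,
      "negate": false
    }
  ],
  "actionSet": {
    "requestHeaderConfigurations": [
      {
        "headerName": "X-Category",
        "headerValue": "{var_query_string_1}"
      }
    ]
  }
}
```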
## Server variables
-Application Gateway uses server variables to store useful information about the server, the connection with the client, and the current request on the connection. Examples of information stored include the client's IP address and the web browser type. Server variables change dynamically, for example, when a new page loads or when a form is posted. You can use these variables to evaluate rewrite conditions and rewrite headers. In order to use the value of server variables to rewrite headers, you will need to specify these variables in the syntax {var_*serverVariableName*}
+Application Gateway uses server variables to store useful information about the server, the connection with the client, and the current request on the connection. Examples of information stored include the client's IP address and the web browser type. Server variables change dynamically, for example, when a new page loads or when a form is posted. You can use these variables to evaluate rewrite conditions and rewrite headers. In order to use the value of server variables to rewrite headers, you need to specify these variables in the syntax {var_*serverVariableName*}
Application gateway supports the following server variables: | Variable name | Description | | - | |
-| add_x_forwarded_for_proxy | The X-Forwarded-For client request header field with the `client_ip` variable (see explanation later in this table) appended to it in the format IP1, IP2, IP3, and so on. If the X-Forwarded-For field isn't in the client request header, the `add_x_forwarded_for_proxy` variable is equal to the `$client_ip` variable. This variable is particularly useful when you want to rewrite the X-Forwarded-For header set by Application Gateway so that the header contains only the IP address without the port information. |
+| add_x_forwarded_for_proxy | The X-Forwarded-For client request header field with the `client_ip` variable (see explanation later in this table) appended to it in the format IP1, IP2, IP3, and so on. If the X-Forwarded-For field isn't in the client request header, the `add_x_forwarded_for_proxy` variable is equal to the `$client_ip` variable. This variable is useful when you want to rewrite the X-Forwarded-For header set by Application Gateway so that the header contains only the IP address without the port information. |
| ciphers_supported | A list of the ciphers supported by the client. | | ciphers_used | The string of ciphers used for an established TLS connection. |
-| client_ip | The IP address of the client from which the application gateway received the request. If there's a reverse proxy before the application gateway and the originating client, `client_ip` will return the IP address of the reverse proxy. |
+| client_ip | The IP address of the client from which the application gateway received the request. If there's a reverse proxy before the application gateway and the originating client, `client_ip` returns the IP address of the reverse proxy. |
| client_port | The client port. | | client_tcp_rtt | Information about the client TCP connection. Available on systems that support the TCP_INFO socket option. | | client_user | When HTTP authentication is used, the user name supplied for authentication. |
-| host | In this order of precedence: the host name from the request line, the host name from the Host request header field, or the server name matching a request. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, host value will be is `contoso.com` |
+| host | In this order of precedence: the host name from the request line, the host name from the Host request header field, or the server name matching a request. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the host value is `contoso.com` |
| cookie_*name* | The *name* cookie. | | http_method | The method used to make the URL request. For example, GET or POST. | | http_status | The session status. For example, 200, 400, or 403. | | http_version | The request protocol. Usually HTTP/1.0, HTTP/1.1, or HTTP/2.0. |
-| query_string | The list of variable/value pairs that follows the "?" in the requested URL. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, query_string value will be `id=123&title=fabrikam` |
+| query_string | The list of variable/value pairs that follows the "?" in the requested URL. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, query_string value is `id=123&title=fabrikam` |
| received_bytes | The length of the request (including the request line, header, and request body). | | request_query | The arguments in the request line. | | request_scheme | The request scheme: http or https. |
-| request_uri | The full original request URI (with arguments). Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam*`, request_uri value will be `/article.aspx?id=123&title=fabrikam` |
+| request_uri | The full original request URI (with arguments). Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam*`, request_uri value is `/article.aspx?id=123&title=fabrikam` |
| sent_bytes | The number of bytes sent to a client. | | server_port | The port of the server that accepted a request. | | ssl_connection_protocol | The protocol of an established TLS connection. | | ssl_enabled | "On" if the connection operates in TLS mode. Otherwise, an empty string. |
-| uri_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, uri_path value will be `/article.aspx` |
+| uri_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, uri_path value is `/article.aspx` |
### Mutual authentication server variables
A rewrite rule set contains:
* **Request routing rule association:** The rewrite configuration is associated to the source listener via the routing rule. When you use a basic routing rule, the rewrite configuration is associated with a source listener and is a global header rewrite. When you use a path-based routing rule, the rewrite configuration is defined on the URL path map. In that case, it applies only to the specific path area of a site. You can create multiple rewrite sets and apply each rewrite set to multiple listeners. But you can apply only one rewrite set to a specific listener.
-* **Rewrite Condition**: It is an optional configuration. Rewrite conditions evaluate the content of the HTTP(S) requests and responses. The rewrite action will occur if the HTTP(S) request or response matches the rewrite condition. If you associate more than one condition with an action, the action occurs only when all the conditions are met. In other words, the operation is a logical AND operation.
+* **Rewrite Condition**: This configuration is optional. Rewrite conditions evaluate the content of the HTTP(S) requests and responses. The rewrite action occurs if the HTTP(S) request or response matches the rewrite condition. If you associate more than one condition with an action, the action occurs only when all the conditions are met. In other words, the operation is a logical AND operation.
* **Rewrite type**: There are 3 types of rewrites available: * Rewriting request headers * Rewriting response headers * Rewriting URL components
- * **URL path**: The value to which the path is to be rewritten to.
- * **URL Query String**: The value to which the query string is to be rewritten to.
- * **Re-evaluate path map**: Used to determine whether the URL path map is to be reevaluated or not. If kept unchecked, the original URL path will be used to match the path-pattern in the URL path map. If set to true, the URL path map will be reevaluated to check the match with the rewritten path. Enabling this switch helps in routing the request to a different backend pool post rewrite.
+ * **URL path**: The value to which the path is to be rewritten.
+ * **URL Query String**: The value to which the query string is to be rewritten.
+ * **Reevaluate path map**: Used to determine whether the URL path map is to be reevaluated or not. If kept unchecked, the original URL path is used to match the path-pattern in the URL path map. If set to true, the URL path map is reevaluated to check the match with the rewritten path. Enabling this switch helps in routing the request to a different backend pool post rewrite.
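The following is a minimal, illustrative sketch of a single URL rewrite rule as it might appear in an ARM template fragment. The rule name, sequence, path, and query string values are placeholders, and `reroute` corresponds to the **Reevaluate path map** setting; verify property names against the current ARM reference before using them.

```json
{
  "name": "rewrite-article-url",
  "ruleSequence": 100,
  "conditions": [],
  "actionSet": {
    "urlConfiguration": {
      "modifiedPath": "/article.aspx",
      "modifiedQueryString": "id=123&title=fabrikam",
      "reroute": false
    }
  }
}
```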
## Rewrite configuration common pitfalls
-* Enabling 'Re-evaluate path map' isn't allowed for basic request routing rules. This is to prevent infinite evaluation loop for a basic routing rule.
+* Enabling 'Reevaluate path map' isn't allowed for basic request routing rules. This restriction prevents an infinite evaluation loop for a basic routing rule.
-* There needs to be at least 1 conditional rewrite rule or 1 rewrite rule which doesn't have 'Re-evaluate path map' enabled for path-based routing rules to prevent infinite evaluation loop for a path-based routing rule.
+* For path-based routing rules, there must be at least one conditional rewrite rule, or one rewrite rule that doesn't have 'Reevaluate path map' enabled, to prevent an infinite evaluation loop for a path-based routing rule.
-* Incoming requests would be terminated with a 500 error code in case a loop is created dynamically based on client inputs. The Application Gateway will continue to serve other requests without any degradation in such a scenario.
+* Incoming requests are terminated with a 500 error code if a loop is created dynamically based on client inputs. The Application Gateway continues to serve other requests without any degradation in such a scenario.
### Using URL rewrite or Host header rewrite with Web Application Firewall (WAF_v2 SKU)
-When you configure URL rewrite or host header rewrite, the WAF evaluation will happen after the modification to the request header or URL parameters (post-rewrite). And when you remove the URL rewrite or host header rewrite configuration on your Application Gateway, the WAF evaluation will be done before the header rewrite (pre-rewrite). This order ensures that WAF rules are applied to the final request that would be received by your backend pool.
+When you configure URL rewrite or host header rewrite, the WAF evaluation happens after the modification to the request header or URL parameters (post-rewrite). And when you remove the URL rewrite or host header rewrite configuration on your Application Gateway, the WAF evaluation is done before the header rewrite (pre-rewrite). This order ensures that WAF rules are applied to the final request that would be received by your backend pool.
For example, say you have the following header rewrite rule for the header `"Accept" : "text/html"` - if the value of header `"Accept"` is equal to `"text/html"`, then rewrite the value to `"image/png"`.
-Here, with only header rewrite configured, the WAF evaluation will be done on `"Accept" : "text/html"`. But when you configure URL rewrite or host header rewrite, then the WAF evaluation will be done on `"Accept" : "image/png"`.
+Here, with only header rewrite configured, the WAF evaluation is done on `"Accept" : "text/html"`. But when you configure URL rewrite or host header rewrite, then the WAF evaluation is done on `"Accept" : "image/png"`.
### Common scenarios for header rewrite
Here, with only header rewrite configured, the WAF evaluation will be done on `"
Application Gateway inserts an X-Forwarded-For header into all requests before it forwards the requests to the backend. This header is a comma-separated list of IP ports. There might be scenarios in which the backend servers only need the headers to contain IP addresses. You can use header rewrite to remove the port information from the X-Forwarded-For header. One way to do this is to set the header to the add_x_forwarded_for_proxy server variable. Alternatively, you can also use the variable client_ip:
-![Remove port](./media/rewrite-http-headers-url/remove-port.png)
+![A screenshot showing a remove port action.](./media/rewrite-http-headers-url/remove-port.png)
#### Modify a redirection URL
Modification of a redirect URL can be useful under certain circumstances. For e
> > The limitations and implications of such a configuration are described in [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation). The recommended setup for App Service is to follow the instructions for **"Custom Domain (recommended)"** in [Configure App Service with Application Gateway](configure-web-app.md). Rewriting the location header on the response as described in the below example should be considered a workaround and doesn't address the root cause.
-When the app service sends a redirection response, it uses the same hostname in the location header of its response as the one in the request it receives from the application gateway. So the client will make the request directly to `contoso.azurewebsites.net/path2` instead of going through the application gateway (`contoso.com/path2`). Bypassing the application gateway isn't desirable.
+When the app service sends a redirection response, it uses the same hostname in the location header of its response as the one in the request it receives from the application gateway. So the client makes the request directly to `contoso.azurewebsites.net/path2` instead of going through the application gateway (`contoso.com/path2`). Bypassing the application gateway isn't desirable.
You can resolve this issue by setting the hostname in the location header to the application gateway's domain name.
Here are the steps for replacing the hostname:
1. Create a rewrite rule with a condition that evaluates if the location header in the response contains azurewebsites.net. Enter the pattern `(https?):\/\/.*azurewebsites\.net(.*)$`. 2. Perform an action to rewrite the location header so that it has the application gateway's hostname. Do this by entering `{http_resp_Location_1}://contoso.com{http_resp_Location_2}` as the header value. Alternatively, you can also use the server variable `host` to set the hostname to match the original request.
-![Modify location header](./media/rewrite-http-headers-url/app-service-redirection.png)
+![A screenshot of the modify location header action.](./media/rewrite-http-headers-url/app-service-redirection.png)
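Expressed as an ARM-style rewrite rule fragment, the steps above look roughly like the following sketch. The condition pattern and header value come directly from the steps (with regex backslashes escaped for JSON); the rule name and sequence are placeholders.

```json
{
  "name": "rewrite-location-header",
  "ruleSequence": 100,
  "conditions": [
    {
      "variable": "http_resp_Location",
      "pattern": "(https?):\\/\\/.*azurewebsites\\.net(.*)$",
      "ignoreCase": true,
      "negate": false
    }
  ],
  "actionSet": {
    "responseHeaderConfigurations": [
      {
        "headerName": "Location",
        "headerValue": "{http_resp_Location_1}://contoso.com{http_resp_Location_2}"
      }
    ]
  }
}
```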
#### Implement security HTTP headers to prevent vulnerabilities You can fix several security vulnerabilities by implementing necessary headers in the application response. These security headers include X-XSS-Protection, Strict-Transport-Security, and Content-Security-Policy. You can use Application Gateway to set these headers for all responses.
-![Security header](./media/rewrite-http-headers-url/security-header.png)
+![A screenshot of a security header.](./media/rewrite-http-headers-url/security-header.png)
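As an illustrative sketch, a response header rewrite action that adds these security headers could look like the following ARM-style fragment. The header values shown are common examples only; choose values that match your application's security policy.

```json
{
  "actionSet": {
    "responseHeaderConfigurations": [
      {
        "headerName": "Strict-Transport-Security",
        "headerValue": "max-age=31536000; includeSubDomains"
      },
      {
        "headerName": "X-XSS-Protection",
        "headerValue": "1; mode=block"
      },
      {
        "headerName": "Content-Security-Policy",
        "headerValue": "default-src 'self'"
      }
    ]
  }
}
```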
### Delete unwanted headers You might want to remove headers that reveal sensitive information from an HTTP response. For example, you might want to remove information like the backend server name, operating system, or library details. You can use the application gateway to remove these headers:
-![Deleting header](./media/rewrite-http-headers-url/remove-headers.png)
+![A screenshot showing the delete header action.](./media/rewrite-http-headers-url/remove-headers.png)
+
+It isn't possible to create a rewrite rule to delete the host header. If you attempt to create a rewrite rule with the action type set to delete and the header set to host, it results in an error.
#### Check for the presence of a header You can evaluate an HTTP request or response header for the presence of a header or server variable. This evaluation is useful when you want to perform a header rewrite only when a certain header is present.
-![Checking presence of a header](./media/rewrite-http-headers-url/check-presence.png)
+![A screenshot showing the check presence of a header action.](./media/rewrite-http-headers-url/check-presence.png)
### Common scenarios for URL rewrite
To accomplish scenarios where you want to choose the backend pool based on the v
**Step 2 (a):** Create a rewrite set which has 3 rewrite rules:
-* The first rule has a condition that checks the *query_string* variable for *category=shoes* and has an action that rewrites the URL path to /*listing1* and has **Re-evaluate path map** enabled
+* The first rule has a condition that checks the *query_string* variable for *category=shoes* and has an action that rewrites the URL path to /*listing1* and has **Reevaluate path map** enabled
-* The second rule has a condition that checks the *query_string* variable for *category=bags* and has an action that rewrites the URL path to /*listing2* and has **Re-evaluate path map** enabled
+* The second rule has a condition that checks the *query_string* variable for *category=bags* and has an action that rewrites the URL path to /*listing2* and has **Reevaluate path map** enabled
-* The third rule has a condition that checks the *query_string* variable for *category=accessories* and has an action that rewrites the URL path to /*listing3* and has **Re-evaluate path map** enabled
+* The third rule has a condition that checks the *query_string* variable for *category=accessories* and has an action that rewrites the URL path to /*listing3* and has **Reevaluate path map** enabled
:::image type="content" source="./media/rewrite-http-headers-url/url-scenario1-2.png" alt-text="URL rewrite scenario 1-2.":::
To accomplish scenarios where you want to choose the backend pool based on the v
:::image type="content" source="./media/rewrite-http-headers-url/url-scenario1-3.png" alt-text="URL rewrite scenario 1-3.":::
-Now, if the user requests *contoso.com/listing?category=any*, then it will be matched with the default path since none of the path patterns in the path map (/listing1, /listing2, /listing3) will match. Since you associated the above rewrite set with this path, this rewrite set will be evaluated. As the query string won't match the condition in any of the 3 rewrite rules in this rewrite set, no rewrite action will take place and therefore, the request will be routed unchanged to the backend associated with the default path (which is *GenericList*).
+If the user requests *contoso.com/listing?category=any*, then it's matched with the default path since none of the path patterns in the path map (/listing1, /listing2, /listing3) are matched. Since you associated the previous rewrite set with this path, this rewrite set is evaluated. Because the query string won't match the condition in any of the 3 rewrite rules in this rewrite set, no rewrite action takes place. Therefore, the request is routed unchanged to the backend associated with the default path (which is *GenericList*).
-If the user requests *contoso.com/listing?category=shoes*, then again the default path will be matched. However, in this case the condition in the first rule will match and therefore, the action associated with the condition will be executed which will rewrite the URL path to /*listing1* and reevaluate the path-map. When the path-map is reevaluated, the request will now match the path associated with pattern */listing1* and the request will be routed to the backend associated with this pattern, which is ShoesListBackendPool.
+If the user requests *contoso.com/listing?category=shoes*, then the default path is matched. However, in this case the condition in the first rule matches. Therefore, the action associated with the condition is executed, which rewrites the URL path to /*listing1* and reevaluates the path-map. When the path-map is reevaluated, the request matches the path associated with pattern */listing1* and the request is routed to the backend associated with this pattern (ShoesListBackendPool).
> [!NOTE] > This scenario can be extended to any header or cookie value, URL path, query string or server variables based on the conditions defined and essentially enables you to route requests based on those conditions.
For a step-by-step guide to achieve the scenario described above, see [Rewrite U
For a URL rewrite, Application Gateway rewrites the URL before the request is sent to the backend. This won't change what users see in the browser because the changes are hidden from the user.
-For a URL redirect, Application Gateway sends a redirect response to the client with the new URL. That, in turn, requires the client to resend its request to the new URL provided in the redirect. The URL that the user sees in the browser will update to the new URL.
+For a URL redirect, Application Gateway sends a redirect response to the client with the new URL. That, in turn, requires the client to resend its request to the new URL provided in the redirect. The URL that the user sees in the browser updates to the new URL.
:::image type="content" source="./media/rewrite-http-headers-url/url-rewrite-vs-redirect.png" alt-text="Rewrite vs Redirect."::: ## Limitations -- If a response has more than one header with the same name, then rewriting the value of one of those headers will result in dropping the other headers in the response. This can usually happen with Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you're using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case the response will contain two Set-Cookie headers: one used by the app service, for example: `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net` and another for application gateway affinity, for example, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response.
+- If a response has more than one header with the same name, rewriting the value of one of those headers results in dropping the other headers in the response. This can happen with the Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you're using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case the response contains two Set-Cookie headers. For example: one used by the app service, `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net`, and another for application gateway affinity, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response.
- Rewrites aren't supported when the application gateway is configured to redirect the requests or to show a custom error page.-- Request header names can contain alphanumeric characters and hyphens. Headers names containing other characters will be discarded when a request is sent to the backend target.
+- Request header names can contain alphanumeric characters and hyphens. Header names containing other characters are discarded when a request is sent to the backend target.
- Response header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27). - Connection and upgrade headers cannot be rewritten - Rewrites aren't supported for 4xx and 5xx responses generated directly from Application Gateway
application-gateway Tcp Tls Proxy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tcp-tls-proxy-overview.md
Previously updated : 03/12/2024 Last updated : 09/06/2024
Process flow:
- A WAF v2 SKU gateway allows the creation of TLS or TCP listeners and backends to support HTTP and non-HTTP traffic through the same resource. However, it does not inspect traffic on TLS and TCP listeners for exploits and vulnerabilities. - The default [draining timeout](configuration-http-settings.md#connection-draining) value for backend servers is 30 seconds. At present, a user-defined draining value is not supported. - Client IP preservation is currently not supported.
+- Application Gateway Ingress Controller (AGIC) isn't supported with TLS and TCP listeners; it works only with the L7 proxy through HTTP(S) listeners.
## Next steps
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Azure Arc supports the following Windows and Linux operating systems. Only x86-6
* AlmaLinux 9 * Amazon Linux 2 and 2023
-* Azure Linux (CBL-Mariner) 2.0
+* Azure Linux (CBL-Mariner) 2.0 and 3.0
* Azure Stack HCI * Debian 11, and 12 * Oracle Linux 7, 8, and 9
azure-functions Functions Add Output Binding Storage Queue Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
You've updated your HTTP triggered function to write data to a Storage queue. No
* [Azure Functions Python developer guide](functions-reference-python.md) ::: zone-end ::: zone pivot="programming-language-powershell"
-* [Examples of complete Function projects in PowerShell](/samples/browse/?products=azure-functions&languages=azurepowershell).
+* [Examples of complete Function projects in PowerShell](/samples/browse/?products=azure-functions&languages=powershell).
* [Azure Functions PowerShell developer guide](functions-reference-powershell.md) ::: zone-end
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
For each scenario, you can target the action against one or more subscriptions,
:::image type="content" source="media/deploy/schedule-recurrence-property.png" alt-text="Configure the recurrence frequency for logic app":::
-> [!NOTE]
-> If you do not provide a start date and time for the first recurrence, a recurrence will immediately run when you save the logic app, which might cause the VMs to start or stop before the scheduled run.
+ > [!NOTE]
+ > If you do not provide a start date and time for the first recurrence, a recurrence will immediately run when you save the logic app, which might cause the VMs to start or stop before the scheduled run.
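For reference, a recurrence trigger with an explicit start time looks similar to the following sketch in the logic app's JSON definition (code view). The frequency, interval, and start time values are placeholders; adjust them to your schedule.

```json
{
  "triggers": {
    "Recurrence": {
      "type": "Recurrence",
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "startTime": "2024-10-01T18:00:00Z"
      }
    }
  }
}
```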
1. In the designer pane, select **Function-Try** to configure the target settings. In the request body, if you want to manage VMs across all resource groups in the subscription, modify the request body as shown in the following example.
In an environment that includes two or more components on multiple Azure Resourc
:::image type="content" source="media/deploy/schedule-recurrence-property.png" alt-text="Configure the recurrence frequency for logic app":::
-> [!NOTE]
-> If you do not provide a start date and time for the first recurrence, a recurrence will immediately run when you save the logic app, which might cause the VMs to start or stop before the scheduled run.
+ > [!NOTE]
+ > If you do not provide a start date and time for the first recurrence, a recurrence will immediately run when you save the logic app, which might cause the VMs to start or stop before the scheduled run.
1. In the designer pane, select **Function-Try** to configure the target settings and then select the **</> Code view** button in the top menu to edit the code for the **Function-Try** element. In the request body, if you want to manage VMs across all resource groups in the subscription, modify the request body as shown in the following example.
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
This new version of Start/Stop VMs v2 provides a decentralized low-cost automati
## Important Start/Stop VMs v2 Updates
-> + No further development, enhancements, or updates will be available for Start/Stop V2 except when required to remain on supported versions of components and Azure services.
+> + No further development, enhancements, or updates will be available for Start/Stop v2 except when required to remain on supported versions of components and Azure services.
>
-> + The TriggerAutoUpdate and UpdateStartStopV2 functions are now deprecated and will be removed in future updates to Start/Stop,V2. To update Start/Stop V2, we recommend that you stop the site, install to the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments), and the start the site. No built-in notification system is available for updates. After an update to Start/Stop V2 becomes available, we will update the [readme.md](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md) in the GitHub repository. Third-party Github file watchers might be available to enable you to be notified of changes. To disable the automatic update functionality, set the Function App's **AzureClientOptions:EnableAutoUpdate** [application setting](../functions-how-to-use-azure-function-app-settings.md?tabs=azure-portal%2Cto-premium#get-started-in-the-azure-portal) to **false**.
+> + The TriggerAutoUpdate and UpdateStartStopV2 functions are now deprecated and will be removed in the future. To update Start/Stop v2, we recommend that you stop the site, install the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments), and then start the site. To disable the automatic update functionality, set the Function App's **AzureClientOptions:EnableAutoUpdate** [application setting](../functions-how-to-use-azure-function-app-settings.md?tabs=azure-portal%2Cto-premium#get-started-in-the-azure-portal) to **false**. No built-in notification system is available for updates. After an update to Start/Stop v2 becomes available, we will update the [readme.md](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md) in the GitHub repository. Third-party GitHub file watchers might be available to notify you of changes.
>
-> + As of August 19, 2024. Start/Stop v2 has been updated to the [.NET 8 isolated worker model](../functions-versions.md?tabs=isolated-process%2Cv4&pivots=programming-language-csharp#languages).
+> + As of August 19, 2024, Start/Stop v2 has been updated to the [.NET 8 isolated worker model](../functions-versions.md?tabs=isolated-process%2Cv4&pivots=programming-language-csharp#languages).
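For example, to turn off the automatic update functionality, you can add the application setting shown in the following sketch (portal **Advanced edit** JSON format). The setting name comes from the update guidance above; only the value is yours to change.

```json
[
  {
    "name": "AzureClientOptions:EnableAutoUpdate",
    "value": "false",
    "slotSetting": false
  }
]
```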
## Overview
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
Title: Statsbeat in Application Insights | Microsoft Docs
-description: Statistics about Application Insights SDKs and AutoInstrumentation
+description: Statistics about Application Insights SDKs and autoinstrumentation
Last updated 08/24/2022
ms.reviewer: heya
# Statsbeat in Application Insights
-Statsbeat collects essential and nonessential [custom metrics](../essentials/metrics-custom-overview.md) about Application Insights SDKs and autoinstrumentation. Statsbeat serves three benefits for Azure Monitor Application Insights customers:
-- Service health and reliability (outside-in monitoring of connectivity to ingestion endpoint)-- Support diagnostics (self-help insights and CSS insights)-- Product improvement (insights for design optimizations)
+Statsbeat collects essential and nonessential [custom metrics](../essentials/metrics-custom-overview.md) about Application Insights SDKs and autoinstrumentation. Statsbeat serves three benefits for Application Insights customers:
-Statsbeat data is stored in a Microsoft data store. It doesn't affect customers' overall monitoring volume and cost.
-
-Statsbeat doesn't support [Azure Private Link](../../automation/how-to/private-link-security.md).
+* Service health and reliability (outside-in monitoring of connectivity to ingestion endpoint)
+* Support diagnostics (self-help insights and CSS insights)
+* Product improvement (insights for design optimizations)
-## What data does Statsbeat collect?
+Statsbeat data is stored in a Microsoft data store. It doesn't affect customers' overall monitoring volume and cost.
-Statsbeat collects essential and nonessential metrics.
+> [!NOTE]
+> Statsbeat doesn't support [Azure Private Link](../../automation/how-to/private-link-security.md).
## Supported languages
-| C# | Java | JavaScript | Node.js | Python |
-||--||--|--|
-| Currently not supported | Supported | Currently not supported | Supported | Supported |
+| C# | Java | JavaScript | Node.js | Python |
+|-|--|-|--|--|
+| Currently not supported | Supported | Currently not supported | Supported | Supported |
## Supported EU regions
-#### [Java](#tab/eu-java)
-
-Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
-
-| Geo name | Region name |
-|||
-| Europe | North Europe |
-| Europe | West Europe |
-| France | France Central |
-| France | France South |
-| Germany | Germany West Central |
-| Norway | Norway East |
-| Norway | Norway West |
-| Sweden | Sweden Central |
-| Switzerland | Switzerland North |
-| Switzerland | Switzerland West |
-| United Kingdom | United Kingdom South |
-| United Kingdom | United Kingdom West |
-
-#### [Node](#tab/eu-node)
-
-Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
-
-| Geo name | Region name |
-|||
-| Europe | North Europe |
-| Europe | West Europe |
-| France | France Central |
-| France | France South |
-| Germany | Germany West Central |
-| Norway | Norway East |
-| Norway | Norway West |
-| Sweden | Sweden Central |
-| Switzerland | Switzerland North |
-| Switzerland | Switzerland West |
-| United Kingdom | United Kingdom South |
-| United Kingdom | United Kingdom West |
-
-#### [Python](#tab/eu-python)
- Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
-| Geo name | Region name |
-|||
-| Europe | North Europe |
-| Europe | West Europe |
-| France | France Central |
-| France | France South |
-| Germany | Germany West Central |
-| Norway | Norway East |
-| Norway | Norway West |
-| Sweden | Sweden Central |
-| Switzerland | Switzerland North |
-| Switzerland | Switzerland West |
-| United Kingdom | United Kingdom South |
-| United Kingdom | United Kingdom West |
--
+| Geo name | Region name |
+|-|-|
+| Europe | North Europe |
+| Europe | West Europe |
+| France | France Central |
+| France | France South |
+| Germany | Germany West Central |
+| Norway | Norway East |
+| Norway | Norway West |
+| Sweden | Sweden Central |
+| Switzerland | Switzerland North |
+| Switzerland | Switzerland West |
+| United Kingdom | United Kingdom South |
+| United Kingdom | United Kingdom West |
+
+## Supported metrics
+
+Statsbeat collects [essential](#essential-statsbeat) and [nonessential](#nonessential-statsbeat) metrics:
+
+| Language | Essential metrics | Nonessential metrics |
+|-|-|--|
+| Java | ✅ | ✅ |
+| Node.js | ✅ | ❌ |
+| Python | ✅ | ❌ |
### Essential Statsbeat #### Network Statsbeat
-|Metric name|Unit|Supported dimensions|
-|--|--|--|
-|Request Success Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Requests Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
-|Request Duration|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Retry Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
-|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
-|Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Exception Type`|
+| Metric name | Unit | Supported dimensions |
+||-||
+| Request Success Count | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host` |
+| Requests Failure Count | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code` |
+| Request Duration | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host` |
+| Retry Count | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code` |
+| Throttle Count | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code` |
+| Exception Count | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Exception Type` |
[!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)] #### Attach Statsbeat
-|Metric name|Unit|Supported dimensions|
-|--|--|--|
-|Attach|Count| `Resource Provider`, `Resource Provider Identifier`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`|
+| Metric name | Unit | Supported dimensions |
+|-|-||
+| Attach | Count | `Resource Provider`, `Resource Provider Identifier`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version` |
#### Feature Statsbeat
-|Metric name|Unit|Supported dimensions|
-|--|--|--|
-|Feature|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Feature`, `Type`, `Operating System`, `Language`, `Version`|
+| Metric name | Unit | Supported dimensions |
+|-|-|--|
+| Feature | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Feature`, `Type`, `Operating System`, `Language`, `Version` |
### Nonessential Statsbeat Track the Disk I/O failure when you use disk persistence for reliable telemetry.
-|Metric name|Unit|Supported dimensions|
-|--|--|--|
-|Read Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`|
-|Write Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`|
+| Metric name | Unit | Supported dimensions |
+||-|-|
+| Read Failure Count | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version` |
+| Write Failure Count | Count | `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version` |
+
+## Firewall configuration
+
+Metrics are sent to the following locations, to which outgoing connections must be opened in firewalls:
-### Configure Statsbeat
+| Location | URL |
+|-|-|
+| Europe | `westeurope-5.in.applicationinsights.azure.com` |
+| Outside of Europe | `westus-0.in.applicationinsights.azure.com` |
-#### [Java](#tab/java)
+## Disable Statsbeat
+
+### [Java](#tab/java)
+
+> [!NOTE]
+> Only nonessential Statsbeat can be disabled in Java.
To disable nonessential Statsbeat, add the following configuration to your config file:
To disable nonessential Statsbeat, add the following configuration to your confi
You can also disable this feature by setting the environment variable `APPLICATIONINSIGHTS_STATSBEAT_DISABLED` to `true`. This setting then takes precedence over `disabled`, which is specified in the JSON configuration.
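As a reference sketch only, the JSON-file approach commonly looks like the following `applicationinsights.json` fragment; the exact location of the `statsbeat` setting can vary by agent version, so confirm it against the Java agent's configuration reference.

```json
{
  "preview": {
    "statsbeat": {
      "disabled": true
    }
  }
}
```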
-#### [Node](#tab/node)
-
-Not supported yet.
-
-#### [Python](#tab/python)
+### [Node](#tab/node)
-Statsbeat is enabled by default. It can be disabled by setting the environment variable <code class="notranslate">APPLICATIONINSIGHTS_STATSBEAT_DISABLED_ALL</code> to <code class="notranslate">true</code>.
+Statsbeat is enabled by default. It can be disabled by setting the environment variable `APPLICATION_INSIGHTS_NO_STATSBEAT` to `true`.
-Metrics are sent to the following locations, to which outgoing connections must be opened in firewalls.
+### [Python](#tab/python)
-|Location |URL |
-|||
-|Europe |<code class="notranslate">westeurope-5.in.applicationinsights.azure.com</code> |
-|Outside Europe |<code class="notranslate">westus-0.in.applicationinsights.azure.com</code> |
+Statsbeat is enabled by default. It can be disabled by setting the environment variable `APPLICATIONINSIGHTS_STATSBEAT_DISABLED_ALL` to `true`.
azure-monitor Metrics Aggregation Explained https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-aggregation-explained.md
Previously updated : 07/13/2024 Last updated : 09/06/2024
azure-monitor Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/insights-overview.md
description: Lists available Azure Monitor "Insights" and other Azure product in
Previously updated : 10/15/2022 Last updated : 09/06/2024
The following table lists the available curated visualizations and information a
| [Azure Backup](../../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. | |**Databases**|||| | [Azure Cosmos DB Insights](/azure/cosmos-db/cosmosdb-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
-| [Azure Monitor for Azure Cache for Redis (preview)](../../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
+| [Azure Monitor for Azure Cache for Redis](../../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
|**Analytics**|||| | [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
-| [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
+| [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights. |
|**Security**|||| | [Azure Key Vault Insights](/azure/key-vault/key-vault-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. | |**Monitor**|||| | [Azure Monitor Application Insights](../app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
-| [Azure activity Log Insights](../essentials/activity-log-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
+| [Azure Activity Log Insights](../essentials/activity-log-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
| [Azure Monitor for Resource Groups](../../azure-resource-manager/management/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. | |**Integration**|||| | [Azure Service Bus Insights](../../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. | |[Azure IoT Edge](../../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. | |**Workloads**|||| | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. |
-| [Azure Monitor for SAP solutions](/azure/virtual-machines/workloads/sap/monitor-sap-on-azure) | Preview | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. |
+| [Azure Monitor for SAP solutions](/azure/virtual-machines/workloads/sap/monitor-sap-on-azure) | GA | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. |
|**Other**|||| | [Azure Virtual Desktop Insights](../../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. | | [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | GA| [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Visualizations such as charts and tables are effective tools for summarizing mon
|[Power BI](logs/log-powerbi.md)|Power BI is a business analytics service that provides interactive visualizations across various data sources. It's an effective means of making data available to others within and outside your organization. You can configure Power BI to automatically import log data from Azure Monitor to take advantage of these visualizations. | |[Grafana](visualize/grafana-plugin.md)|Grafana is an open platform that excels in operational dashboards. All versions of Grafana include the Azure Monitor data source plug-in to visualize your Azure Monitor metrics and logs. Azure Managed Grafana also optimizes this experience for Azure-native data stores such as Azure Monitor and Azure Data Explorer. In this way, you can easily connect to any resource in your subscription and view all resulting monitoring data in a familiar Grafana dashboard. It also supports pinning charts from Azure Monitor metrics and logs to Grafana dashboards. <br/><br/> Grafana has popular plug-ins and dashboard templates for non-Microsoft APM tools such as Dynatrace, New Relic, and AppDynamics as well. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by these other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multicloud monitoring in a single pane of glass.|
+For a more extensive discussion of the recommended visualization tools and when to use them, see [Analyze and visualize monitoring data](best-practices-analysis.md).
+ ### Analyze The Azure portal contains built-in tools that allow you to analyze monitoring data.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation"
description: "What's new in Azure Monitor documentation" Previously updated : 08/21/2024 Last updated : 09/06/2024
This article lists significant changes to Azure Monitor documentation.
## [2024](#tab/2024)
+## August 2024
+
+|Subservice | Article | Description |
+||||
+|Agents|[MM)|New Removal Tool|
+|Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|We've updated the table that converts values used in Classic AI resources to Workspace-based resources.|
+|Application-Insights|[Application Insights for ASP.NET Core applications](app/asp-net-core.md)|The option to disable telemetry correlation has been documented for ASP.NET Core.|
+|Application-Insights|[Configure Application Insights for your ASP.NET website](app/asp-net.md)|The option to disable telemetry correlation has been documented for ASP.NET.|
+|Application-Insights|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|The option to disable telemetry correlation has been documented for Java.|
+|Application-Insights|[Monitor your Node.js services and apps with Application Insights](app/nodejs.md)|The option to disable telemetry correlation has been documented for Node.js.|
+|Application-Insights|[What is autoinstrumentation for Azure Monitor Application Insights?](app/codeless-overview.md)|Further "autoinstrumentation" explanation has been added, alongside an example, to better convey the meaning of this term.|
+|Application-Insights|[Microsoft Entra authentication for Application Insights](app/azure-ad-authentication.md)|Options to enable Entra authentication for .NET and Node.js autoinstrumentation are documented.|
+|Application-Insights|[Application Insights availability tests](app/availability.md)|We clarified information about using an "availability test string identifier", which previously caused some confusion when referred to as a "GUID".|
+|Containers|[Optimize monitoring costs for Container insights](containers/container-insights-cost.md)|Rewritten to consolidate cost saving options and analysis.|
+|Containers|[Configure log collection in Container insights](containers/container-insights-data-collection-configure.md)|New article to consolidate all guidance for configuring container log collection.|
+|Containers|[Filter log collection in Container insights](containers/container-insights-data-collection-filter.md)|New article to describe all options to filter container logs.|
+|Containers|[Container insights log schema](containers/container-insights-logs-schema.md)|Rewritten to focus on definition and configuration of log schema, including metadata option.|
+|Containers|[Access Syslog data in Container Insights](containers/container-insights-syslog.md)|Removed duplicate information on configuration.|
+|Containers|[Data transformations in Container insights](containers/container-insights-transformations.md)|Added details to filtering example and added a new example to send data to multiple tables.|
+|Containers|[Enable private link for Kubernetes monitoring in Azure Monitor](containers/container-insights-private-link.md)|New article to consolidate private link guidance for Container insights.|
+|Containers|[High scale logs collection in Container Insights (Preview)](containers/container-insights-high-scale.md)|New feature.|
+|Containers|[Monitor your Kubernetes cluster performance with Container insights](containers/container-insights-analyze.md)|Added explanation of "Other processes" column.|
+|Essentials |[Manage data collection rules (DCRs) and associations in Azure Monitor](essentials/data-collection-rule-view.md)|Added guidance for new UI feature to manage DCR associations.|
+|Essentials|[Use Azure Policy to install and manage the Azure Monitor agent](agents/azure-monitor-agent-policy.md)|Added information on new UI feature to create associations.|
+|Essentials|[Create and edit data collection rules (DCRs) and associations in Azure Monitor](essentials/data-collection-rule-create-edit.md)|Removed duplicate information.|
+|Essentials|[Data collection rules (DCRs) in Azure Monitor](essentials/data-collection-rule-overview.md)|Added diagram.|
+|Essentials|[Monitor and troubleshoot DCR data collection in Azure Monitor](essentials/data-collection-monitor.md)|Corrected error in KQL using InputStreamId.|
+|General|[Analyze and visualize monitoring data](best-practices-analysis.md)|We've updated our visualization recommendations to better guide customers when to use Azure Managed Grafana and when to use Azure Workbooks.|
+|Logs|[Best practices for Azure Monitor Logs](best-practices-logs.md)|Updated overview of features that enhance resilience of your Log Analytics workspace, including a new video. |
+|Logs|[Set up a table with the Auxiliary plan in your Log Analytics workspace (Preview)](logs/create-custom-table-auxiliary.md)|New article that explains how to set up a table with the Auxiliary plan. |
+|Logs|[Azure Monitor Logs overview](logs/data-platform-logs.md)|Updated Azure Monitor Logs overview provides a high-level overview of data collection, management, retrieval, and consumption for a range of use cases.|
+|Logs|[Run search jobs in Azure Monitor](logs/search-jobs.md)|New video that explains how to use search jobs in Azure Monitor Logs.|
+|Logs|[Aggregate data in a Log Analytics workspace by using summary rules (Preview)](logs/summary-rules.md)|New video that explains how to use summary rules to optimize data in your Log Analytics workspace.|
+++ ## July 2024 |Subservice | Article | Description |
azure-netapp-files Azure Netapp Files Service Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md
Previously updated : 08/20/2024 Last updated : 09/05/2024 # Service levels for Azure NetApp Files
Azure NetApp Files supports three service levels: *Ultra*, *Premium*, and *Stand
The Ultra service level provides up to 128 MiB/s of throughput per 1 TiB of capacity provisioned. * Storage with cool access:
- Cool access storage is available with the Standard, Premium, and Ultra service levels. The throughput experience for any of these service levels with cool access is the same for cool access as it is for data in the hot tier. It may differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md#effects-of-cool-access-on-data).
+ Cool access storage is available with the Standard, Premium, and Ultra service levels. The throughput experience for any of these service levels with cool access is the same for cool access as it is for data in the hot tier. It may differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md) and [Performance considerations for storage with cool access](performance-considerations-cool-access.md).
## Throughput limits
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Azure NetApp Files storage with cool access is supported for the following regio
* West US 2 * West US 3
-## Effects of cool access on data
-
-This section describes a large-duration, large-dataset warming test. It shows an example scenario of a dataset where 100% of the data is in the cool tier and how it warms over time.
-
-Typical randomly accessed data starts as part of a working set (read, modify, and write). As data loses relevance, it becomes "cool" and is eventually tiered off to the cool tier.
-
-Cool data might become hot again. ItΓÇÖs not typical for the entire working set to start as cold, but some scenarios do exist, for example, audits, year-end processing, quarter-end processing, lawsuits, and end-of-year licensure reviews.
-
-This scenario provides insight to the warming performance behavior of a 100% cooled dataset. The insight applies whether it's a small percentage or the entire dataset.
-
-### 4k random-read test
-
-This section describes a 4k random-read test across 160 files totaling 10 TB of data.
-
-#### Setup
-
-**Capacity pool size:** 100-TB capacity pool <br>
-**Volume allocated capacity:** 100-TB volumes <br>
-**Working Dataset:** 10 TB <br>
-**Service Level:** Standard storage with cool access <br>
-**Volume Count/Size:** 1 <br>
-**Client Count:** Four standard 8-s clients <br>
-**OS:** RHEL 8.3 <br>
-**Mount Option:** `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg,hard`
-
-#### Methodology
-
-This test was set up via FIO to run a 4k random-read test across 160 files that total 10 TB of data. FIO was configured to randomly read each block across the entire working dataset. (It can read any block any number of times as part of the test instead of touching each block once). This script was called once every 5 minutes and then a data point collected on performance. When blocks are randomly read, they're moved to the hot tier.
-
-This test had a large dataset and ran several days starting the worst-case most-aged data (all caches dumped). The time component of the X axis has been removed because the total time to rewarm varies due to the dataset size. This curve could be in days, hours, minutes, or even seconds depending on the dataset.
-
-#### Results
-
-The following chart shows a test that ran over 2.5 days on the 10-TB working dataset that has been 100% cooled and the buffers cleared (absolute worst-case aged data).
--
-### 64k sequential-read test
-
-#### Setup
-
-**Capacity pool size:** 100-TB capacity pool <br>
-**Volume allocated capacity:** 100-TB volumes <br>
-**Working Dataset:** 10 TB <br>
-**Service Level:** Standard storage with cool access <br>
-**Volume Count/Size:** 1 <br>
-**Client Count:** One large client <br>
-**OS:** RHEL 8.3 <br>
-**Mount Option:** `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg,hard` <br>
-
-#### Methodology
-
-Sequentially read blocks aren't rewarmed to the hot tier. However, small dataset sizes might see performance improvements because of caching (no performance change guarantees).
-
-This test provides the following data points:
-* 100% hot tier dataset
-* 100% cool tier dataset
-
-This test ran for 30 minutes to obtain a stable performance number.
-
-#### Results
-
-The following table summarizes the test results:
-
-| 64-k sequential | Read throughput |
-|-|-|
-| Hot data | 1,683 MB/s |
-| Cool data | 899 MB/s |
-
-### Test conclusions
-
-Data read from the cool tier experiences a performance hit. If you size your time to cool off correctly, then you might not experience a performance hit at all. You might have little cool tier access, and a 30-day window is perfect for keeping warm data warm.
-
-You should avoid a situation that churns blocks between the hot tier and the cool tier. For instance, you set a workload for data to cool seven days, and you randomly read a large percentage of the dataset every 11 days.
-
-In summary, if your working set is predictable, you can save cost by moving infrequently accessed data blocks to the cool tier. The 7 to 30 day wait range before cooling provides a large window for working sets that are rarely accessed after they're dormant or don't require the hot-tier speeds when they're accessed.
- ## Metrics Cool access offers [performance metrics](azure-netapp-files-metrics.md#cool-access-metrics) to understand usage patterns on a per volume basis:
Your first twelve-month savings:
* [Manage Azure NetApp Files storage with cool access](manage-cool-access.md) * [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md)
+* [Performance considerations for Azure NetApp Files storage with cool access](performance-considerations-cool-access.md)
azure-netapp-files Manage Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md
Azure NetApp Files storage with cool access can be enabled during the creation o
* *Cool access is **enabled***: * If no value is set for cool access retrieval policy:
- The retrieval policy will be set to `Default`, and cold data will be retrieved to the hot tier only when performing random reads. Sequential reads will be served directly from the cool tier.
+ The retrieval policy is set to `Default`. Cold data is retrieved to the hot tier only when random reads are performed; sequential reads are served directly from the cool tier. (A CLI sketch for configuring these options follows this list.)
* If cool access retrieval policy is set to `Default`:
- Cold data will be retrieved only by performing random reads.
+ Cold data is retrieved only by performing random reads.
* If cool access retrieval policy is set to `On-Read`:
- Cold data will be retrieved by performing both sequential and random reads.
+ Cold data is retrieved by performing both sequential and random reads.
* If cool access retrieval policy is set to `Never`:
- Cold data is served directly from the cool tier and not be retrieved to the hot tier.
+ Cold data is served directly from the cool tier and isn't retrieved to the hot tier.
* *Cool access is **disabled**:* * If cool access is disabled, you can set a cool access retrieval policy only if there's existing data on the cool tier. * Once you disable the cool access setting on the volume, the cool access retrieval policy remains the same.
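
As referenced in the list above, the following is a minimal Azure CLI sketch for enabling cool access and setting a retrieval policy on an existing volume. The resource names are placeholders, and the `--cool-access-retrieval-policy` parameter name is an assumption; verify the flags available in your CLI version.

```bash
# Hypothetical example: enable cool access on an existing volume with the Default retrieval policy.
# Resource group, account, pool, and volume names are placeholders.
az netappfiles volume update \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name myCoolAccessPool \
  --name myVolume \
  --cool-access true \
  --coolness-period 31 \
  --cool-access-retrieval-policy Default
```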
In a cool-access enabled capacity pool, you can enable an existing volume to sup
1. Right-click the volume for which you want to enable the cool access. 1. In the **Edit** window that appears, set the following options for the volume: * **Enable Cool Access**
- This option specifies whether the volume will support cool access.
+ This option specifies whether the volume supports cool access.
* **Coolness Period** This option specifies the period (in days) after which infrequently accessed data blocks (cold data blocks) are moved to the Azure storage account. The default value is 31 days. The supported values are between 2 and 183 days. * **Cool Access Retrieval Policy**
Based on the client read/write patterns, you can modify the cool access configur
## Next steps * [Azure NetApp Files storage with cool access](cool-access-introduction.md)
+* [Performance considerations for Azure NetApp Files storage with cool access](performance-considerations-cool-access.md)
azure-netapp-files Performance Considerations Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-considerations-cool-access.md
+
+ Title: Performance considerations for Azure NetApp Files storage with cool access
+description: Understand use cases for cool access and the effect it can have on performance.
++++ Last updated : 09/05/2024++
+# Performance considerations for Azure NetApp Files storage with cool access
+
+Data sets aren't always actively used. Up to 80% of data in a set can be considered "cool," meaning it's not currently in use or hasn't been accessed recently. When cool data is kept on high-performance storage such as Azure NetApp Files, the money spent on that capacity is essentially wasted, because cool data doesn't require high-performance storage until it's accessed again.
+
+[Azure NetApp Files storage with cool access](cool-access-introduction.md) is intended to reduce costs for cloud storage in Azure. However, there are performance implications in specific use cases that you need to consider.
+
+Accessing data that has moved to the cool tier incurs more latency, particularly for random I/O. In a worst-case scenario, all of the data being accessed might be on the cool tier, so every request would need to retrieve the data from it. It's uncommon for all of the data in an actively used dataset to be in the cool tier, so it's unlikely you'll observe such latency.
+
+When the default cool access retrieval policy is selected, sequential I/O reads are served directly from the cool tier and aren't repopulated into the hot tier. Randomly read data is repopulated into the hot tier, increasing the performance of subsequent reads. Optimizations for sequential workloads often reduce the latency incurred by cloud retrieval compared to random reads and improve overall performance.
+
+In a recent test performed using Standard storage with cool access for Azure NetApp Files, the following results were obtained.
+
+## 100% sequential reads on hot/cool tier (single job)
+
+In the following scenario, a single job on one D32_V5 virtual machine (VM) was used on a 50-TiB Azure NetApp Files volume using the Ultra performance tier. Different block sizes were used to test performance on hot and cool tiers.
+
+>[!NOTE]
+>The maximum for the Ultra service level is 128 MiB/s per tebibyte of allocated capacity. An Azure NetApp Files regular volume can manage a throughput up to approximately 5,000 MiB/s.
+
+The following graph shows the cool tier performance for this test using a variety of queue depths. The maximum throughput for a single VM was approximately 400 MiB/s.
++
+Hot tier performance was around 2.75x better, capping out at approximately 1,180 MiB/s.
++
+This graph shows a side-by-side comparison of cool and hot tier performance with a 256K block size.
++
+## 100% sequential reads on hot/cool tier (multiple jobs)
+
+For this scenario, the test was conducted with 16 jobs using a 256-KB block size on a single D32_V5 VM on a 50-TiB Azure NetApp Files volume using the Ultra performance tier.
+
+>[!NOTE]
+>The maximum for the Ultra service level is 128 MiB/s per tebibyte of allocated capacity. An Azure NetApp Files regular volume can manage a throughput of up to approximately 5,000 MiB/s.
+
+It's possible to push for more throughput for the hot and cool tiers using a single VM when running multiple jobs. The performance difference between hot and cool tiers is less drastic when running multiple jobs. The following graph displays results for hot and cool tiers when running 16 jobs with 16 threads at a 256-KB block size.
++
+- Throughput improved by nearly three times for the hot tier.
+- Throughput improved by 6.5 times for the cool tier.
+- The performance difference for the hot and cool tier decreased from 2.9x to just 1.3x.
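+
+A run like this is typically driven with a benchmarking tool. The following is a minimal FIO sketch of a comparable 16-job, 256-KiB sequential-read test; the mount point, file size, and runtime are assumptions rather than the values used in the published results.
+
+```bash
+# Hypothetical FIO invocation approximating a 16-job, 256-KiB sequential-read test against an
+# Azure NetApp Files volume mounted at /mnt/anfvol. All values are placeholders.
+fio --name=seq-read-cool-tier \
+    --directory=/mnt/anfvol \
+    --ioengine=libaio --direct=1 \
+    --rw=read --bs=256k \
+    --numjobs=16 --iodepth=15 \
+    --size=64G \
+    --time_based --runtime=1800 \
+    --group_reporting
+```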
+
+## Maximum viable job scale for cool tier – 100% sequential reads
+
+The cool tier has a limit on how many jobs can be pushed to a single Azure NetApp Files volume before latency starts to spike to levels that are generally unusable for most workloads.
+
+In the case of cool tiering, that limit is around 16 jobs with a queue depth of no more than 15. The following graph shows that latency reaches approximately 23 milliseconds (ms) at 16 jobs with a queue depth of 15, with slightly less throughput than at a queue depth of 14. Latency spikes as high as about 63 ms when pushing 32 jobs, and throughput drops by roughly 14%.
++
+## What causes latency in hot and cool tiers?
+
+Latency in the hot tier is a factor of the storage system itself, where system resources are exhausted when more I/O is sent to the service than can be handled at any given time. As a result, operations need to queue until previously sent operations complete.
+
+Latency in the cool tier is generally seen with the cloud retrieval operations: either requests over the network for I/O to the object store (sequential workloads) or cool block rehydration into the hot tier (random workloads).
+
+## Mixed workload: sequential and random
+
+A mixed workload contains both random and sequential I/O patterns. In mixed workloads, the performance profile for hot and cool tiers can differ drastically from that of a purely sequential I/O workload; instead, it closely resembles a workload that's 100% random.
+
+The following graph shows the results using 16 jobs on a single VM with a queue depth of one and varying random/sequential ratios.
++
+The impact on performance when mixing workloads can also be observed when looking at the latency as the workload mix changes. The graphs show how latency changes for cool and hot tiers as the workload mix goes from 100% sequential to 100% random. Latency starts to spike for the cool tier at around a 60/40 sequential/random mix (greater than 12 ms), while latency remains the same (under 2 ms) for the hot tier.
+++
+## Results summary
+
+- When a workload is 100% sequential, the cool tier's throughput decreases by roughly 47% versus the hot tier (3,330 MiB/s compared to 1,742 MiB/s).
+- When a workload is 100% random, the cool tier's throughput decreases by roughly 88% versus the hot tier (2,479 MiB/s compared to 280 MiB/s).
+- The performance drop for the hot tier between 100% sequential (3,330 MiB/s) and 100% random (2,479 MiB/s) workloads was roughly 25%. The performance drop for the cool tier between 100% sequential (1,742 MiB/s) and 100% random (280 MiB/s) workloads was roughly 84%.
+- Hot tier throughput maintains about 2,300 MiB/s regardless of the workload mix.
+- When a workload contains any percentage of random I/O, overall throughput for the cool tier is closer to 100% random than 100% sequential.
+- Reads from the cool tier dropped by about 50% when moving from 100% sequential to an 80/20 sequential/random mix.
+- Sequential I/O can take advantage of a `readahead` cache in Azure NetApp Files that random I/O doesn't. This benefit to sequential I/O helps reduce the overall performance differences between the hot and cool tiers.
+
+## General recommendations
+
+To avoid worst-case scenario performance with cool access in Azure NetApp Files, follow these recommendations:
+
+- If your workload frequently changes access patterns in an unpredictable manner, cool access may not be ideal due to the performance differences between hot and cool tiers.
+- If your workload contains any percentage of random I/O, performance expectations when accessing data on the cool tier should be adjusted accordingly.
+- Configure the coolness window and cool access retrieval settings to match your workload patterns and to minimize the amount of cool tier retrieval.
+
+## Next steps
+* [Azure NetApp Files storage with cool access](cool-access-introduction.md)
+* [Manage Azure NetApp Files storage with cool access](manage-cool-access.md)
cdn Cdn Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-billing.md
-+ Last updated 03/20/2024
cdn Cdn Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-ddos.md
-+ Last updated 03/20/2024
cdn Cdn How Caching Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-how-caching-works.md
-+ Last updated 03/20/2024
cdn Cdn Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http-debug-headers.md
-+ Last updated 03/20/2024
cdn Cdn Http Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http-variables.md
-+ Last updated 03/20/2024
cdn Cdn Http2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http2.md
-+ Last updated 03/20/2024
cdn Cdn Log Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-log-analysis.md
-+ Last updated 03/20/2024
cdn Cdn Msft Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-msft-http-debug-headers.md
-+ Last updated 03/20/2024
cdn Cdn Pop Abbreviations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-abbreviations.md
-+ Last updated 03/20/2024
cdn Cdn Pop List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-list-api.md
-+ Last updated 03/20/2024
For the syntax of the REST API operation for retrieving the POP list, see [Edge
## Retrieve the current Microsoft POP IP list for Azure Content Delivery Network
-To lock down your application to accept traffic only from Azure Content Delivery Network from Microsoft, you need to set up IP access control lists (ACLs) for your backend. You might also restrict the set of accepted values for the header 'X-Forwarded-Host' sent by Azure Content Delivery Network from Microsoft. These steps are detailed as followed:
+To lock down your application to accept traffic only from point of presence (POP) servers used by Microsoft's content delivery network (CDN) offerings (**Azure Front Door**, **Azure Front Door Classic**, or **Azure CDN from Microsoft**), you need to set up IP access control lists (ACLs) for your backend. You might also restrict the set of accepted values for the header 'X-Forwarded-Host' sent by Azure Content Delivery Network from Microsoft. These steps are detailed as follows:
Configure IP ACLing for your backends to accept traffic from Azure Content Delivery Network from Microsoft's backend IP address space and Azure's infrastructure services only.
-Use the AzureFrontDoor.Backend [service tag](../virtual-network/service-tags-overview.md) with Azure Content Delivery Network from Microsoft to configure Microsoft's backend IP ranges. For a complete list, see [IP Ranges and Service tags](https://www.microsoft.com/en-us/download/details.aspx?id=56519) for Microsoft services.
+To configure Microsoft's backend IP ranges with Azure Content Delivery Network from Microsoft, use the AzureFrontDoor.Backend [service tag](../virtual-network/service-tags-overview.md). For a complete list, see [IP Ranges and Service tags](https://www.microsoft.com/en-us/download/details.aspx?id=56519) for Microsoft services.
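+
+For example, if your backend sits behind a network security group, an inbound rule scoped to the service tag might look like the following Azure CLI sketch; the resource group, NSG name, rule name, and priority are placeholders:
+
+```bash
+# Hypothetical NSG rule that allows inbound HTTPS only from the AzureFrontDoor.Backend service tag.
+# Resource names and the priority value are placeholders.
+az network nsg rule create \
+  --resource-group myResourceGroup \
+  --nsg-name myBackendNsg \
+  --name AllowFrontDoorBackendOnly \
+  --priority 100 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-address-prefixes AzureFrontDoor.Backend \
+  --destination-port-ranges 443
+```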
## Typical use case
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
ms.assetid: 669ef140-a6dd-4b62-9b9d-3f375a14215e -+ Last updated 03/20/2024
cdn Cdn Standard Rules Engine Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-standard-rules-engine-actions.md
-+ Last updated 03/20/2024
cdn Cdn Standard Rules Engine Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-standard-rules-engine-match-conditions.md
-+ Last updated 03/20/2024
cdn Cdn Standard Rules Engine Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-standard-rules-engine-reference.md
description: Reference documentation for match conditions and actions in the Sta
-+ Last updated 03/20/2024
cdn Cdn Verizon Http Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-http-headers.md
-+ Last updated 03/20/2024
cdn Cdn Verizon Premium Rules Engine Reference Conditional Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference-conditional-expressions.md
-+ Last updated 03/20/2024
cdn Cdn Verizon Premium Rules Engine Reference Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference-features.md
-+ Last updated 03/20/2024
cdn Cdn Verizon Premium Rules Engine Reference Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference-match-conditions.md
-+ Last updated 03/20/2024
cdn Cdn Verizon Premium Rules Engine Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference.md
-+ Last updated 03/20/2024
cdn Microsoft Pop Abbreviations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/microsoft-pop-abbreviations.md
description: This article lists Microsoft POP locations, sorted by POP abbreviat
-+ Last updated 03/20/2024
container-registry Intro Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/intro-connected-registry.md
Title: What is a connected registry
-description: Overview and scenarios of the connected registry feature of Azure Container Registry
+ Title: What is a connected registry?
+description: Overview and scenarios of the connected registry feature of Azure Container Registry, including its benefits and use cases.
Last updated 10/31/2023
+#customer intent: As a reader, I want to understand the overview and scenarios of the connected registry feature of Azure Container Registry so that I can utilize it effectively.
# What is a connected registry?
-In this article, you learn about the *connected registry* feature of [Azure Container Registry](container-registry-intro.md). A connected registry is an on-premises or remote replica that synchronizes container images and other OCI artifacts with your cloud-based Azure container registry. Use a connected registry to help speed up access to registry artifacts on-premises and to build advanced scenarios, for example using [nested IoT Edge](../iot-edge/tutorial-nested-iot-edge.md).
+In this article, you learn about the *connected registry* feature of [Azure Container Registry](container-registry-intro.md). A connected registry is an on-premises or remote replica that synchronizes container images with your cloud-based Azure container registry. Use a connected registry to help speed up access to registry artifacts on-premises or in remote locations.
-> [!NOTE]
-> The connected registry is a preview feature of the **Premium** container registry service tier, and subject to [limitations](#limitations). For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md).
+## Billing and support
+
+The connected registry is a preview feature of the **Premium** container registry service tier, and subject to [limitations](#limitations). For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md).
+
+>[!IMPORTANT]
+> There are **important upcoming changes** to connected registry deployment model support and billing starting January 1, 2025. For questions or help with the transition, contact the customer support team.
+
+### Billing
+
+- The connected registry incurs no charges until it reaches general availability (GA).
+- Post-GA, a monthly price of $10 will apply for each connected registry deployed.
+- This price represents Microsoft's commitment to deliver high-quality services and product support.
+- The price is applied to the Azure subscription associated with the parent registry.
+
+### Support
+
+- Microsoft will end support for connected registry deployments on IoT Edge devices on January 1, 2025.
+- After January 1, 2025, the connected registry supports only Arc-enabled Kubernetes clusters as the deployment model.
+- Microsoft advises users to begin planning their transition to Arc-enabled Kubernetes clusters as the deployment model.
## Available regions
-* Canada Central
-* East Asia
-* East US
-* North Europe
-* Norway East
-* Southeast Asia
-* West Central US
-* West Europe
+Connected registry is available in the following continents and regions:
+
+| Continent | Available Regions |
+|--|--|
+| Australia | Australia East |
+| Asia | East Asia |
+| | Japan East |
+| | Japan West |
+| | Southeast Asia |
+| Europe | North Europe |
+| | Norway East |
+| | West Europe |
+| North America | Canada Central |
+| | Central US |
+| | East US |
+| | South Central US |
+| | West Central US |
+| | West US 3 |
+| South America | Brazil South |
## Scenarios
Scenarios for a connected registry include:
## How does the connected registry work?
-The following image shows a typical deployment model for the connected registry.
+The connected registry is deployed on a server or device on-premises, or in an environment that supports container workloads on-premises, such as Azure IoT Edge and Azure Arc-enabled Kubernetes. The connected registry synchronizes container images and other OCI artifacts with a cloud-based Azure container registry.
+
+The following image shows a typical deployment model for the connected registry using IoT Edge.
+
+The following image shows a typical deployment model for the connected registry using Azure Arc-enabled Kubernetes.
+ ### Deployment
-Each connected registry is a resource you manage using a cloud-based Azure container registry. The top parent in the connected registry hierarchy is an Azure container registry in an Azure cloud.
+Each connected registry is a resource you manage within a cloud-based Azure container registry. The top parent in the connected registry hierarchy is an Azure container registry in the Azure cloud. The connected registry can be deployed either on Azure IoT Edge or Arc-enabled Kubernetes clusters.
+
+To install the connected registry, use Azure tools on a server or device on your premises, or in an environment that supports on-premises container workloads, such as [Azure IoT Edge](../iot-edge/tutorial-nested-iot-edge.md).
-Use Azure tools to install the connected registry on a server or device on your premises, or an environment that supports container workloads on-premises such as [Azure IoT Edge](../iot-edge/tutorial-nested-iot-edge.md).
+Deploy the connected registry Arc extension to the Arc-enabled Kubernetes cluster. Secure the connection with TLS using default configurations for read-only access and a continuous sync window. This setup lets the connected registry synchronize images from the cloud-based Azure container registry (ACR) to the on-premises replica, so clients can pull images from the connected registry.
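+
+For reference, creating the connected registry resource on the cloud registry before deployment might look like the following Azure CLI sketch; the registry, connected registry, and repository names are placeholders, not values from this article:
+
+```bash
+# Hypothetical example: create a ReadOnly connected registry resource in the cloud registry,
+# scoped to a single repository. All names are placeholders.
+az acr connected-registry create \
+  --registry mycloudregistry \
+  --name myconnectedregistry \
+  --repository "hello-world" \
+  --mode ReadOnly
+```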
The connected registry's *activation status* indicates whether it's deployed on-premises.
-* **Active** - The connected registry is currently deployed on-premises. It can't be deployed again until it is deactivated.
+* **Active** - The connected registry is currently deployed on-premises. It can't be deployed again until it's deactivated.
* **Inactive** - The connected registry is not deployed on-premises. It can be deployed at this time. ### Content synchronization
It can also be configured to synchronize a subset of the repositories from the c
A connected registry can work in one of two modes: *ReadWrite* or *ReadOnly* -- **ReadWrite mode** - The mode allows clients to pull and push artifacts (read and write) to the connected registry. Artifacts that are pushed to the connected registry will be synchronized with the cloud registry.
-
- The ReadWrite mode is useful when a local development environment is in place. The images are pushed to the local connected registry and from there synchronized to the cloud.
+**ReadOnly mode** - The default mode. When the connected registry is in ReadOnly mode, clients can only pull (read) artifacts. This configuration is used in scenarios where clients need to pull a container image to operate. ReadOnly is the default mode starting with CLI version 2.60.0, in line with the secure-by-default approach.
-- **ReadOnly mode** - When the connected registry is in ReadOnly mode, clients can only pull (read) artifacts. This configuration is used for nested IoT Edge scenarios, or other scenarios where clients need to pull a container image to operate.--- **Default mode** - The ***ReadOnly mode*** is now the default mode for connected registries. This change aligns with our secure-by-default approach and is effective starting with CLI version 2.60.0.
+**ReadWrite mode** - This mode allows clients to pull and push artifacts (read and write) to the connected registry. Artifacts that are pushed to the connected registry are synchronized with the cloud registry. The ReadWrite mode is useful when a local development environment is in place. The images are pushed to the local connected registry and from there synchronized to the cloud.
### Registry hierarchy
-Each connected registry must be connected to a parent. The top parent is the cloud registry. For hierarchical scenarios such as [nested IoT Edge](overview-connected-registry-and-iot-edge.md), you can nest connected registries in either mode. The parent connected to the cloud registry can operate in either mode.
+Each connected registry must be connected to a parent. The top parent is the cloud registry. For hierarchical scenarios such as [nested IoT Edge][overview-connected-registry-and-iot-edge], you can nest connected registries in either mode. The parent connected to the cloud registry can operate in either mode.
-Child registries must be compatible with their parent capabilities. Thus, both ReadWrite and ReadOnly mode connected registries can be children of a connected registry operating in ReadWrite mode, but only a ReadOnly mode registry can be a child of a connected registry operating in ReadOnly mode.
+Child registries must be compatible with their parent capabilities. Thus, both ReadOnly and ReadWrite modes of the connected registries can be children of a connected registry operating in ReadWrite mode, but only a ReadOnly mode registry can be a child of a connected registry operating in ReadOnly mode.
## Client access
-On-premises clients use standard tools such as the Docker CLI to push or pull content from a connected registry. To manage client access, you create Azure container registry [tokens][repository-scoped-permissions] for access to each connected registry. You can scope the client tokens for pull or push access to one or more repositories in the registry.
+On-premises clients use standard tools such as the Docker CLI to push or pull content from a connected registry. To manage client access, you create Azure container registry [tokens][repository-scoped-permissions] for access to each connected registry. You can scope the client tokens for pull or push access to one or more repositories in the registry.
Each connected registry also needs to regularly communicate with its parent registry. For this purpose, the registry is issued a synchronization token (*sync token*) by the cloud registry. This token is used to authenticate with its parent registry for synchronization and management operations.
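
A client token scoped to pull access on a single repository might be created with an Azure CLI sketch like the following; the registry, token, and repository names are placeholders:

```bash
# Hypothetical example: create a token that lets clients of the connected registry
# pull (content/read) from one repository. All names are placeholders.
az acr token create \
  --registry mycloudregistry \
  --name connected-registry-client-token \
  --repository hello-world content/read
```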
For more information, see [Manage access to a connected registry][overview-conne
## Limitations -- Number of tokens and scope maps is [limited](container-registry-skus.md) to 20,000 each for a single container registry. This indirectly limits the number of connected registries for a cloud registry, because every connected registry needs a sync and client token.
+- Number of tokens and scope maps is [limited](container-registry-skus.md) to 20,000 each for a single container registry. This indirectly limits the number of connected registries for a cloud registry, because every connected registry needs a sync and client token.
- Number of repository permissions in a scope map is limited to 500. - Number of clients for the connected registry is currently limited to 20.-- [Image locking](container-registry-image-lock.md) through repository/manifest/tag metadata is not currently supported for connected registries.-- [Repository delete](container-registry-delete.md) is not supported on the connected registry using ReadOnly mode.
+- [Image locking](container-registry-image-lock.md) through repository/manifest/tag metadata isn't currently supported for connected registries.
+- [Repository delete](container-registry-delete.md) isn't supported on the connected registry using ReadOnly mode.
- [Resource logs](monitor-service-reference.md#resource-logs) for connected registries are currently not supported.-- Connected registry is coupled with the registry's home region data endpoint. Automatic migration for [geo-replication](container-registry-geo-replication.md) is not supported.-- Deletion of a connected registry needs manual removal of the containers on-premises as well as removal of the respective scope map or tokens in the cloud.
+- Connected registry is coupled with the registry's home region data endpoint. Automatic migration for [geo-replication](container-registry-geo-replication.md) isn't supported.
+- Deletion of a connected registry needs manual removal of the containers on-premises and removal of the respective scope map or tokens in the cloud.
- Connected registry sync limitations are as follows: - For continuous sync:
- - `minMessageTtl` is 1 day
+ - `minMessageTtl` is one day
- `maxMessageTtl` is 90 days - For occasionally connected scenarios, where you want to specify sync window: - `minSyncWindow` is 1 hr
- - `maxSyncWindow` is 7 days
+ - `maxSyncWindow` is seven days
-## Next steps
+## Conclusion
In this overview, you learned about the connected registry and some basic concepts. Continue to one of the following articles to learn about specific scenarios where the connected registry can be utilized. > [!div class="nextstepaction"]
-> [Overview: Connected registry access][overview-connected-registry-access]
->
-> [!div class="nextstepaction"]
-> [Overview: Connected registry and IoT Edge][overview-connected-registry-and-iot-edge]
- <!-- LINKS - internal --> [overview-connected-registry-access]:overview-connected-registry-access.md [overview-connected-registry-and-iot-edge]:overview-connected-registry-and-iot-edge.md
cost-management-billing Tutorial Improved Exports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-improved-exports.md
Agreement types, scopes, and required roles are explained at [Understand and wor
| **Data types** | **Supported agreement** | **Supported scopes** | | | | |
-| Cost and usage (actual) | ΓÇó EA<br> ΓÇó MCA that you bought through the Azure website <br> ΓÇó MCA enterprise<br> ΓÇó MCA that you buy through a Microsoft partner <br> ΓÇó Microsoft Online Service Program (MOSP), also known as pay-as-you-go <br> ΓÇó Azure internal | ΓÇó EA - Enrollment, department, account, subscription, and resource group <br> ΓÇó MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> ΓÇó Microsoft Partner Agreement (MPA) - Customer, subscription, and resource group |
-| Cost and usage (amortized) | ΓÇó EA <br> ΓÇó MCA that you bought through the Azure website <br> ΓÇó MCA enterprise <br> ΓÇó MCA that you buy through a Microsoft partner <br> ΓÇó Microsoft Online Service Program (MOSP), also known as pay-as-you-go <br> ΓÇó Azure internal | ΓÇó EA - Enrollment, department, account, subscription, and resource group <br> ΓÇó MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> ΓÇó MPA - Customer, subscription, and resource group |
+| Cost and usage (actual) | ΓÇó EA<br> ΓÇó MCA that you bought through the Azure website <br> ΓÇó MCA enterprise<br> ΓÇó MCA that you buy through a Microsoft partner <br> ΓÇó Azure internal | ΓÇó EA - Enrollment, department, account, subscription, and resource group <br> ΓÇó MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> ΓÇó Microsoft Partner Agreement (MPA) - Customer, subscription, and resource group |
+| Cost and usage (amortized) | ΓÇó EA <br> ΓÇó MCA that you bought through the Azure website <br> ΓÇó MCA enterprise <br> ΓÇó MCA that you buy through a Microsoft partner <br> ΓÇó Azure internal | ΓÇó EA - Enrollment, department, account, subscription, and resource group <br> ΓÇó MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> ΓÇó MPA - Customer, subscription, and resource group |
| Cost and usage (FOCUS) | ΓÇó EA <br> ΓÇó MCA that you bought through the Azure website <br> ΓÇó MCA enterprise <br> ΓÇó MCA that you buy through a Microsoft partner| ΓÇó EA - Enrollment, department, account, subscription, and resource group <br> ΓÇó MCA - Billing account, billing profile, invoice section, subscription, and resource group <br> ΓÇó MPA - Customer, subscription, resource group. **NOTE**: The management group scope isn't supported for Cost and usage details (FOCUS) exports. | | All available prices | ΓÇó EA <br> ΓÇó MCA that you bought through the Azure website <br> ΓÇó MCA enterprise <br> ΓÇó MCA that you buy through a Microsoft partner | ΓÇó EA - Billing account <br> ΓÇó All other supported agreements - Billing profile | | Reservation recommendations | ΓÇó EA <br> ΓÇó MCA that you bought through the Azure website <br> ΓÇó MCA enterprise <br> ΓÇó MCA that you buy through a Microsoft partner | ΓÇó EA - Billing account <br> ΓÇó All other supported agreements - Billing profile |
The improved exports experience currently has the following limitations.
- The new exports experience doesn't fully support the management group scope, and it has feature limitations. -- Azure internal and MOSP billing scopes and subscriptions donΓÇÖt support FOCUS datasets.
+- Azure internal accounts and the Microsoft Online Service Program (MOSP), commonly referred to as pay-as-you-go, support only the 'Cost and Usage Details (Usage Only)' dataset for billing scopes and subscriptions.
+ - Shared access signature (SAS) key-based cross-tenant export is only supported for Microsoft partners at the billing account scope. It isn't supported for other partner scenarios like any other scope, EA indirect contract, or Azure Lighthouse. ## FAQ
cost-management-billing Cost Usage Details Pay As You Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/dataset-schema/cost-usage-details-pay-as-you-go.md
# Pay-as-you-go cost and usage details file schema
-This article applies to the pay-as-you-go cost and usage details file schema. Pay-as-you-go is also called Microsoft Online Services Program (MOSP) and Microsoft Online Subscription Agreement (MOSA).
+This article applies to the pay-as-you-go cost and usage details (usage only) file schema. Pay-as-you-go is also called Microsoft Online Services Program (MOSP) and Microsoft Online Subscription Agreement (MOSA).
-The following information lists the cost and usage details (formerly known as usage details) fields found in the pay-as-you-go cost and usage details file. The file contains contains all of the cost details and usage data for the Azure services that were used.
+The schema outlined here is applicable only to the 'Cost and Usage Details (Usage Only)' dataset and does not cover purchase information.
## Version 2019-11-01
cost-management-billing Save Compute Costs Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/save-compute-costs-reservations.md
For more information, see [Self-service exchanges and refunds for Azure Reservat
Software plans: - **SUSE Linux** - A reservation covers the software plan costs. The discounts apply only to SUSE meters and not to the virtual machine usage.-- **Red Hat Plans** (***temporarily unavailable***) - A reservation covers the software plan costs. The discounts apply only to RedHat meters and not to the virtual machine usage.
+- **Red Hat Plans** (***plans and renewal are temporarily unavailable***) - A reservation covers the software plan costs. The discounts apply only to RedHat meters and not to the virtual machine usage.
- **Azure Red Hat OpenShift** - A reservation applies to the OpenShift costs, not to Azure infrastructure costs. For Windows virtual machines and SQL Database, the reservation discount doesn't apply to the software costs. You can cover the licensing costs with [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
cost-management-billing Understand Rhel Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-rhel-reservation-charges.md
# Understand how the Red Hat Linux Enterprise software reservation plan discount is applied for Azure > [!NOTE]
-> The Red Hat Linux Enterprise software reservation plan is temporarily unavailable.
+> The Red Hat Linux Enterprise software reservation plan and renewal are temporarily unavailable. Disregard any renewal emails until the plan is available.
When you buy a Red Hat Linux Enterprise software plan, you get a discount on the cost of running Red Hat software on Azure virtual machines. This article explains how the discount is applied to your Red Hat software costs.
-After you buy a Red Hat Linux plan, the discount is automatically applied to deployed Red Hat virtual machines (VMs) that match the reservation. A Red Hat Linux plan covers the cost of running the Red Hat software on an Azure VM.
+After you buy a Red Hat Linux plan, the discount is automatically applied to deployed Red Hat virtual machines (VMs) that match the reservation. A Red Hat Linux plan covers the cost of running the Red Hat software on an Azure VM.
To buy the right Red Hat Linux plan, you need to understand what Red Hat VMs you run and the number of vCPUs on those VMs. Use the following sections to help identify from your usage CSV file what plan to buy. ## Discount applies to different VM sizes
-Like Reserved VM Instances, Red Hat plan purchases offer instance size flexibility. This means that your discount applies even when you deploy a VM with a different vCPU count. The discount applies to different VM sizes within the software plan.
+Like Reserved VM Instances, Red Hat plan purchases offer instance size flexibility. With instance size flexibility, your discount applies even when you deploy a VM with a different vCPU count, and it extends to different VM sizes within the software plan.
The discount amount depends on the VM vCPU ratio listed at [Instance size flexibility ratio for VMs](/azure/virtual-machines/reserved-vm-instance-size-flexibility#instance-size-flexibility-ratio-for-vms). Use the ratio value to calculate how many VM instances get the Red Hat Linux plan discount.
To learn more about reservations, see the following articles:
- [Prepay for Red Hat software plans with Azure reservations](/azure/virtual-machines/linux/prepay-suse-software-charges) - [Prepay for Virtual Machines with Azure Reserved VM Instances](/azure/virtual-machines/prepay-reserved-vm-instances) - [Manage reservations for Azure](manage-reserved-vm-instance.md)-- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+- [Understand reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md)
- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md) ## Related content
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-vertica.md
Previously updated : 10/20/2023 Last updated : 08/12/2024 # Copy data from Vertica using Azure Data Factory or Synapse Analytics
This Vertica connector is supported for the following capabilities:
| Supported capabilities|IR | || --|
-|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
-|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; (only for version 1.0) &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; (only for version 1.0) &#9313;|
*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activi
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
+For version 2.0 (Preview), you need to [install a Vertica ODBC driver](#install-vertica-odbc-driver-for-the-version-20-preview) manually. For version 1.0, the service provides a built-in driver to enable connectivity; therefore, you don't need to manually install any driver.
+ ## Prerequisites
+If your data store is located inside an on-premises network, an Azure virtual network, or Amazon Virtual Private Cloud, you need to configure a [self-hosted integration runtime](create-self-hosted-integration-runtime.md) to connect to it. If you use version 2.0 (Preview), your self-hosted integration runtime version should be 5.44.8984.1 or above.
+
+For more information about the network security mechanisms and options supported by Data Factory, see [Data access strategies](data-access-strategies.md).
+
+### For version 1.0
+
+If your data store is a managed cloud data service, you can use the Azure Integration Runtime. If the access is restricted to IPs that are approved in the firewall rules, you can add [Azure Integration Runtime IPs](azure-integration-runtime-ip-addresses.md) to the allowlist.
+
+ You can also use the [managed virtual network integration runtime](tutorial-managed-virtual-network-on-premise-sql-server.md) feature in Azure Data Factory to access the on-premises network without installing and configuring a self-hosted integration runtime.
++
+### Install Vertica ODBC driver for the version 2.0 (Preview)
+
+To use the Vertica connector with version 2.0 (Preview), install the Vertica ODBC driver on the machine running the self-hosted integration runtime by following these steps:
+
+1. Download the Vertica client setup for the ODBC driver from [Client Drivers | OpenText™ Vertica™](https://www.vertica.com/download/vertica/client-drivers/). The following uses the Windows system setup as an example:
+
+ :::image type="content" source="media/connector-vertica/download.png" alt-text="Screenshot of a Windows system setup example.":::
+
+1. Open the downloaded .exe to begin the installation process.
+
+ :::image type="content" source="media/connector-vertica/install.png" alt-text="Screenshot of the installation process.":::
+
+1. Select **ODBC driver** under Vertica Component List, then select **Next** to start the installation.
+
+ :::image type="content" source="media/connector-vertica/select-odbc-driver.png" alt-text="Screenshot of selecting ODBC driver.":::
+
+1. After the installation completes successfully, go to **Start** > **ODBC Data Source Administrator** to confirm the successful installation.
+
+ :::image type="content" source="media/connector-vertica/confirm-the successful-installation.png" alt-text="Screenshot of confirming the successful installation.":::
## Getting started
The following sections provide details about properties that are used to define
## Linked service properties
-The following properties are supported for Vertica linked service:
+If you use version 2.0 (Preview), the following properties are supported for the Vertica linked service:
| Property | Description | Required | |: |: |: | | type | The type property must be set to: **Vertica** | Yes |
-| connectionString | An ODBC connection string to connect to Vertica.<br/>You can also put password in Azure Key Vault and pull the `pwd` configuration out of the connection string. Refer to the following samples and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article with more details. | Yes |
-| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |
+| server | The name or the IP address of the server to which you want to connect. | Yes |
+| port | The port number of the server listener. | No, default is 5433 |
+| database | Name of the Vertica database. | Yes |
+| uid | The user ID that is used to connect to the database. | Yes |
+| pwd | The password that the application uses to connect to the database. | Yes |
+| version | The version when you select version 2.0 (Preview). The value is `2.0`. | Yes |
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. You can only use the self-hosted integration runtime and its version should be 5.44.8984.1 or above. |No |
**Example:**
The following properties are supported for Vertica linked service:
"name": "VerticaLinkedService", "properties": { "type": "Vertica",
+ "version": "2.0",
"typeProperties": {
- "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<user name>;PWD=<password>"
+ "server": "<server>",
+ "port": 5433,
+ "uid": "<username>",
+ "database": "<database>",
+ "pwd": {
+ "type": "SecureString",
+ "value": "<password>"
+ }
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
The following properties are supported for Vertica linked service:
"name": "VerticaLinkedService", "properties": { "type": "Vertica",
+ "version": "2.0",
"typeProperties": {
- "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<user name>;",
- "pwd": { 
- "type": "AzureKeyVaultSecret", 
- "store": { 
- "referenceName": "<Azure Key Vault linked service name>", 
- "type": "LinkedServiceReference" 
- }, 
- "secretName": "<secretName>" 
+ "server": "<server>",
+ "port": 5433,
+ "uid": "<username>",
+ "database": "<database>",
+ "pwd": {
+ "type": "AzureKeyVaultSecret",
+ "store": {
+ "referenceName": "<Azure Key Vault linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "secretName": "<secretName>"
} }, "connectVia": {
The following properties are supported for Vertica linked service:
} ```
+If you use version 1.0, the following properties are supported:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to: **Vertica** | Yes |
+| connectionString | An ODBC connection string to connect to Vertica.<br/>You can also put the password in Azure Key Vault and pull the `pwd` configuration out of the connection string. Refer to the following samples and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article for more details. | Yes |
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |
+
+**Example:**
+
+```json
+{
+ "name": "VerticaLinkedService",
+ "properties": {
+ "type": "Vertica",
+ "typeProperties": {
+ "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<user name>;PWD=<password>"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see the [datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by Vertica dataset.
To copy data from Vertica, set the type property of the dataset to **VerticaTabl
| type | The type property of the dataset must be set to: **VerticaTable** | Yes | | schema | Name of the schema. |No (if "query" in activity source is specified) | | table | Name of the table. |No (if "query" in activity source is specified) |
-| tableName | Name of the table with schema. This property is supported for backward compatibility. Use `schema` and `table` for new workload. | No (if "query" in activity source is specified) |
**Example**
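The following is a minimal sketch of a Vertica dataset definition assembled from the properties in the preceding table; the dataset and linked service names are placeholders, not values from this article:

```json
{
    "name": "VerticaDataset",
    "properties": {
        "type": "VerticaTable",
        "typeProperties": {
            "schema": "<schema name>",
            "table": "<table name>"
        },
        "linkedServiceName": {
            "referenceName": "<Vertica linked service name>",
            "type": "LinkedServiceReference"
        }
    }
}
```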
To copy data from Vertica, set the source type in the copy activity to **Vertica
| Property | Description | Required | |: |: |: | | type | The type property of the copy activity source must be set to: **VerticaSource** | Yes |
-| query | Use the custom SQL query to read data. For example: `"SELECT * FROM MyTable"`. | No (if "tableName" in dataset is specified) |
+| query | Use the custom SQL query to read data. For example: `"SELECT * FROM MyTable"`. | No (if "schema+table" in dataset is specified) |
**Example:**
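The following is a minimal sketch of a copy activity that uses a **VerticaSource** source, based on the properties above; the activity, dataset, and sink names are placeholders, and the sink type depends on your destination store:

```json
"activities":[
    {
        "name": "CopyFromVertica",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<Vertica input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "VerticaSource",
                "query": "SELECT * FROM MyTable"
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]
```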
To copy data from Vertica, set the source type in the copy activity to **Vertica
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
+## Upgrade the Vertica version
+
+Here are steps that help you upgrade your Vertica version:
+
+1. Install a Vertica ODBC driver by following the steps in [Prerequisites](#install-vertica-odbc-driver-for-the-version-20-preview).
+1. On the **Edit linked service** page, select **2.0 (Preview)** under **Version** and configure the linked service by referring to [Linked service properties](#linked-service-properties).
+1. Use a self-hosted integration runtime with version 5.44.8984.1 or above. The Azure integration runtime is not supported for version 2.0 (Preview).
+ ## Related content For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
Previously updated : 01/05/2024 Last updated : 09/03/2024
Here's a high-level summary of the data-flow steps for copying with a self-hoste
## Prerequisites - The supported versions of Windows are:
- - Windows 8.1
- Windows 10 - Windows 11
- - Windows Server 2012
- - Windows Server 2012 R2
- Windows Server 2016 - Windows Server 2019 - Windows Server 2022
databox Data Box Disk Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-copy-data.md
Title: Tutorial to copy data to Azure Data Box Disk| Microsoft Docs
+ Title: Tutorial to copy data to Azure Data Box Disk | Microsoft Docs
description: In this tutorial, learn how to copy data from your host computer to Azure Data Box Disk and then generate checksums to verify data integrity.
After the disks are connected and unlocked, you can copy data from your source d
::: zone target="docs" > [!IMPORTANT]
-> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
+> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
>
->For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section. Follow the steps containined within the [Copy data to disks](#copy-data-to-disks) section to copy your data to the appropriate access tier.
+> Access tier assignment is not supported when copying data using the Data Box Split Copy Tool. If your use case requires access tier assignment, follow the steps contained within the [Copy data to disks](#copy-data-to-disks) section to copy your data to the appropriate access tier using the Robocopy utility.
+>
+> For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section.
> > The information contained within this section applies to orders placed after April 1, 2024.
The Data Box Split Copy tool helps split and copy data across two or more Azure
>[!IMPORTANT] > The Data Box Split Copy tool can also validate your data. If you use Data Box Split Copy tool to copy data, you can skip the [validation step](#validate-data).
-> The Split Copy tool is not supported with managed disks.
+>
+> Access tier assignment is not supported when copying data using the Data Box Split Copy Tool. If your use case requires access tier assignment, follow the steps contained within the [Copy data to disks](#copy-data-to-disks) section to copy your data to the appropriate access tier using the Robocopy utility.
+>
+> The Data Box Split Copy tool is not supported with managed disks.
1. On your Windows computer, ensure that you have the Data Box Split Copy tool downloaded and extracted in a local folder. This tool is included within the Data Box Disk toolset for Windows. 1. Open File Explorer. Make a note of the data source drive and drive letters assigned to Data Box Disk.
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
Before you can run any of the following CLI commands, you'll need access to the
While this article lists the command syntax for each user, we recommend using the *admin* user for all CLI commands where the *admin* user is supported.
-If you're using an older version of the sensor software, you may have access to the legacy *support* user. In such cases, any commands that are listed as supported for the *admin* user are supported for the legacy *support* user.
- For more information, see [Access the CLI](../references-work-with-defender-for-iot-cli-commands.md#access-the-cli) and [Privileged user access for OT monitoring](references-work-with-defender-for-iot-cli-commands.md#privileged-user-access-for-ot-monitoring). - ## Appliance maintenance ### Check OT monitoring services health
Health checks are also available from the OT sensor console. For more informatio
|**admin** | `system sanity` | No attributes | |**cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-sanity` | No attributes | - The following example shows the command syntax and response for the *admin* user: ```bash
-root@xsense: system sanity
+shell> system sanity
[+] C-Cabra Engine | Running for 17:26:30.191945 [+] Cache Layer | Running for 17:26:32.352745 [+] Core API | Running for 17:26:28
root@xsense: system sanity
System is UP! (medium) ``` -
-### Restart and shutdown
-
-#### Restart an appliance
+### Restart an appliance
Use the following commands to restart the OT sensor appliance.
Use the following commands to restart the OT sensor appliance.
|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo reboot` | No attributes | |**cyberx_host** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo reboot` | No attributes | - For example, for the *admin* user: ```bash
-root@xsense: system reboot
+shell> system reboot
```
-#### Shut down an appliance
+### Shut down an appliance
Use the following commands to shut down the OT sensor appliance.
Use the following commands to shut down the OT sensor appliance.
|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo shutdown -r now` | No attributes | |**cyberx_host**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo shutdown -r now` | No attributes | - For example, for the *admin* user: ```bash
-root@xsense: system shutdown
+shell> system shutdown
```
-### Software versions
-
-#### Show installed software version
+### Show installed software version
Use the following commands to list the Defender for IoT software version installed on your OT sensor.
Use the following commands to list the Defender for IoT software version install
|**admin** | `system version` | No attributes | |**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-version` | No attributes | - For example, for the *admin* user: ```bash
-root@xsense: system version
+shell> system version
Version: 22.2.5.9-r-2121448 ```
-#### Update sensor software from CLI
-
-For more information, see [Update your sensors](update-ot-software.md#update-ot-sensors).
-
-### Date, time, and NTP
-
-#### Show current system date/time
+### Show current system date/time
Use the following commands to show the current system date and time on your OT network sensor, in GMT format.
Use the following commands to show the current system date and time on your OT n
|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `date` | No attributes | |**cyberx_host** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `date` | No attributes | - For example, for the *admin* user: ```bash
-root@xsense: date
+shell> date
Thu Sep 29 18:38:23 UTC 2022
-root@xsense:
+shell>
```
-#### Turn on NTP time sync
+### Turn on NTP time sync
Use the following commands to turn on synchronization for the appliance time with an NTP server.
In these commands, `<IP address>` is the IP address of a valid IPv4 NTP server u
For example, for the *admin* user: ```bash
-root@xsense: ntp enable 129.6.15.28
-root@xsense:
+shell> ntp enable 129.6.15.28
+shell>
```
-#### Turn off NTP time sync
+### Turn off NTP time sync
Use the following commands to turn off the synchronization for the appliance time with an NTP server.
In these commands, `<IP address>` is the IP address of a valid IPv4 NTP server u
For example, for the *admin* user: ```bash
-root@xsense: ntp disable 129.6.15.28
-root@xsense:
+shell> ntp disable 129.6.15.28
+shell>
``` ## Backup and restore
Backup files include a full snapshot of the sensor state, including configuratio
>[!CAUTION] > Do not interrupt a system backup or restore operation as this may cause the system to become unusable.
-### List current backup files
-
-Use the following commands to list the backup files currently stored on your OT network sensor.
-
-|User |Command |Full command syntax |
-||||
-|**admin** | `system backup-list` | No attributes |
-|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | ` cyberx-xsense-system-backup-list` | No attributes |
--
-For example, for the *admin* user:
-
-```bash
-root@xsense: system backup-list
-backup files:
- e2e-xsense-1664469968212-backup-version-22.3.0.318-r-71e6295-2022-09-29_18:30:20.tar
- e2e-xsense-1664469968212-backup-version-22.3.0.318-r-71e6295-2022-09-29_18:29:55.tar
-root@xsense:
-```
-- ### Start an immediate, unscheduled backup
-Use the following commands to start an immediate, unscheduled backup of the data on your OT sensor. For more information, see [Set up backup and restore files](../how-to-manage-individual-sensors.md#set-up-backup-and-restore-files).
+Use the following command to start an immediate, unscheduled backup of the data on your OT sensor. For more information, see [Set up backup and restore files](../how-to-manage-individual-sensors.md#set-up-backup-and-restore-files).
> [!CAUTION] > Make sure not to stop or power off the appliance while backing up data. |User |Command |Full command syntax | ||||
-|**admin** | `system backup` | No attributes |
+|**admin** | `system backup create` | No attributes |
|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | ` cyberx-xsense-system-backup` | No attributes | - For example, for the *admin* user: ```bash
-root@xsense: system backup
+shell> system backup create
Backing up DATA_KEY ... ... Finished backup. Backup is stored at /var/cyberx/backups/e2e-xsense-1664469968212-backup-version-22.2.6.318-r-71e6295-2022-09-29_18:29:55.tar Setting backup status 'SUCCESS' in redis
-root@xsense:
+shell>
+```
+
+### List current backup files
+
+Use the following commands to list the backup files currently stored on your OT network sensor.
+
+|User |Command |Full command syntax |
+||||
+|**admin** | `system backup list` | No attributes |
+|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-system-backup-list` | No attributes |
+
+For example, for the *admin* user:
+
+```bash
+shell> system backup list
+backup files:
+ e2e-xsense-1664469968212-backup-version-22.3.0.318-r-71e6295-2022-09-29_18:30:20.tar
+ e2e-xsense-1664469968212-backup-version-22.3.0.318-r-71e6295-2022-09-29_18:29:55.tar
+shell>
``` ### Restore data from the most recent backup
-Use the following commands to restore data on your OT network sensor using the most recent backup file. When prompted, confirm that you want to proceed.
+Use the following command to restore data on your OT network sensor using the most recent backup file. When prompted, confirm that you want to proceed.
> [!CAUTION] > Make sure not to stop or power off the appliance while restoring data.
Use the following commands to restore data on your OT network sensor using the m
|**admin** | `system restore` | No attributes | |**cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | ` cyberx-xsense-system-restore` | `-f` `<filename>` | - For example, for the *admin* user: ```bash
-root@xsense: system restore
+shell> system restore
Waiting for redis to start... Redis is up Use backup file as "/var/cyberx/backups/e2e-xsense-1664469968212-backup-version-22.2.6.318-r-71e6295-2022-09-29_18:30:20.tar" ? [Y/n]: y
WARNING - the following procedure will restore data. do not stop or power off th
... watchdog started starting components
-root@xsense:
+shell>
``` - ### Display backup disk space allocation The following command lists the current backup disk space allocation, including the following details:
The following command lists the current backup disk space allocation, including
|User |Command |Full command syntax | ||||
-|**cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | ` cyberx-backup-memory-check` | No attributes |
+| **admin** | `cyberx-backup-memory-check` | No attributes |
-For example, for the *cyberx* user:
+For example, for the *admin* user:
```bash
-root@xsense:/# cyberx-backup-memory-check
+shell> cyberx-backup-memory-check
2.1M /var/cyberx/backups Backup limit is: 20Gb
-root@xsense:/#
-```
--
-## TLS/SSL certificates
--
-### Import TLS/SSL certificates to your OT sensor
-
-Use the following command to import TLS/SSL certificates to the sensor from the CLI.
-
-To use this command:
--- Verify that the certificate file you want to import is readable on the appliance. Upload certificate files to the appliance using tools such as WinSCP or Wget.-- Confirm with your IT office that the appliance domain as it appears in the certificate is correct for your DNS server and the corresponding IP address.-
-For more information, see [Prepare CA-signed certificates](best-practices/plan-prepare-deploy.md#prepare-ca-signed-certificates) and [Create SSL/TLS certificates for OT appliances](ot-deploy/create-ssl-certificates.md).
-
-|User |Command |Full command syntax |
-||||
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-certificate-import` | cyberx-xsense-certificate-import [-h] [--crt &lt;PATH&gt;] [--key &lt;FILE NAME&gt;] [--chain &lt;PATH&gt;] [--pass &lt;PASSPHRASE&gt;] [--passphrase-set &lt;VALUE&gt;]`
-
-In this command:
--- `-h`: Shows the full command help syntax-- `--crt`: The path to the certificate file you want to upload, with a `.crt` extension-- `--key`: The `\*.key` file you want to use for the certificate. Key length must be a minimum of 2,048 bits-- `--chain`: The path to a certificate chain file. Optional.-- `--pass`: A passphrase used to encrypt the certificate. Optional. -
- The following characters are supported for creating a key or certificate with a passphrase:
- - ASCII characters, including **a-z**, **A-Z**, **0-9**
- - The following special characters: **! # % ( ) + , - . / : = ? @ [ \ ] ^ _ { } ~**
-- `--passphrase-set`: Unused and set to *False* by default. Set to *True* to use passphrase supplied with the previous certificate. Optional.-
-For example, for the *cyberx* user:
-
-```bash
-root@xsense:/# cyberx-xsense-certificate-import
+shell>
```
-### Restore the default self-signed certificate
-
-Use the following command to restore the default, self-signed certificates on your sensor appliance. We recommend that you use this activity for troubleshooting only, and not on production environments.
-
-|User |Command |Full command syntax |
-||||
-|**cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-create-self-signed-certificate` | No attributes |
-
-For example, for the *cyberx* user:
-
-```bash
-root@xsense:/# cyberx-xsense-create-self-signed-certificate
-Creating a self-signed certificate for Apache2...
-random directory name for the new certificate is 348
-Generating a RSA private key
-................+++++
-....................................+++++
-writing new private key to '/var/cyberx/keys/certificates/348/apache.key'
-executing a query to add the certificate to db
-finished
-root@xsense:/#
-```
-- ## Local user management ### Change local user passwords
-Use the following commands to change passwords for local users on your OT sensor.
-
-When you change the password for the *admin*, *cyberx*, or *cyberx_host* user, the password is changed for both SSH and web access.
+Use the following commands to change passwords for local users on your OT sensor. The new password must be at least 8 characters long and contain lowercase and uppercase alphabetic characters, numbers, and symbols.
+When you change the password for the *admin* user, the password is changed for both SSH and web access.
|User |Command |Full command syntax | ||||
-|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-users-password-reset` | `cyberx-users-password-reset -u <user> -p <password>` |
-|**cyberx_host**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `passwd` | No attributes |
+|**admin** | `system password` | `system password <username>` |
-
-The following example shows the *cyberx* user resetting the *admin* user's password to `jI8iD9kE6hB8qN0h`:
-
-```bash
-root@xsense:/# cyberx-users-password-reset -u admin -p jI8iD9kE6hB8qN0h
-resetting the password of OS user "admin"
-Sending USER_PASSWORD request to OS manager
-Open UDS connection with /var/cyberx/system/os_manager.sock
-Received data: b'ack'
-resetting the password of UI user "admin"
-root@xsense:/#
-```
-
-The following example shows the *cyberx_host* user changing the *cyberx_host* user's password.
+The following example shows the *admin* user changing another user's password. The new password doesn't appear on the screen when you type it, so make a note of it and make sure that you type it correctly when asked to reenter it.
```bash
-cyberx_host@xsense:/# passwd
-Changing password for user cyberx_host.
-(current) UNIX password:
-New password:
-Retype new password:
-passwd: all authentication tokens updated successfully.
-cyberx_host@xsense:/#
+shell> system password user1
+Enter New Password for user1:
+Reenter Password:
+shell>
``` -
-### Control user session timeouts
-
-Define the time after which users are automatically signed out of the OT sensor. Define this value in a properties file saved on the sensor.
-not that
-For more information, see [Control user session timeouts](manage-users-sensor.md#control-user-session-timeouts).
-
-### Define maximum number of failed sign-ins
-
-Define the number of maximum failed sign-ins before an OT sensor will prevent the user from signing in again from the same IP address. Define this value in a properties file saved on the sensor.
-
-For more information, see [Define maximum number of failed sign-ins](manage-users-sensor.md#define-maximum-number-of-failed-sign-ins).
- ## Network configuration
-### Network settings
-
-#### Change networking configuration or reassign network interface roles
+### Change networking configuration or reassign network interface roles
Use the following command to rerun the OT monitoring software configuration wizard, which helps you define or reconfigure the following OT sensor settings:
Use the following command to rerun the OT monitoring software configuration wiza
|User |Command |Full command syntax | ||||
-|**cyberx_host**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo dpkg-reconfigure iot-sensor` | No attributes |
+|**admin** | `sudo dpkg-reconfigure iot-sensor` | No attributes |
-For example, with the **cyberx_host** user:
+For example, with the **admin** user:
```bash
-root@xsense:/# sudo dpkg-reconfigure iot-sensor
+shell> sudo dpkg-reconfigure iot-sensor
``` The configuration wizard starts automatically after you run this command. For more information, see [Install OT monitoring software](../how-to-install-software.md#install-ot-monitoring-software). -
-#### Validate and show network interface configuration
+### Validate and show network interface configuration
Use the following commands to validate and show the current network interface configuration on the OT sensor.
Use the following commands to validate and show the current network interface co
For example, for the *admin* user: ```bash
-root@xsense: network validate
+shell> network validate
Success! (Appliance configuration matches the network settings) Current Network Settings: interface: eth0
subnet: 255.255.192.0
default gateway: 10.1.0.1 dns: 168.63.129.16 monitor interfaces mapping: local_listener=adiot0
-root@xsense:
+shell>
```
-### Network connectivity
-#### Check network connectivity from the OT sensor
+### Check network connectivity from the OT sensor
-Use the following commands to send a ping message from the OT sensor.
+Use the following command to send a ping message from the OT sensor.
|User |Command |Full command syntax | ||||
Use the following commands to send a ping message from the OT sensor.
In these commands, `<IP address>` is the IP address of a valid IPv4 network host accessible from the management port on your OT sensor.
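For example, for the *admin* user, the following is a minimal sketch that assumes the standard `ping` utility is the command listed for your user and uses the default gateway address from the earlier `network validate` output as the target host:

```bash
shell> ping 10.1.0.1
```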
-#### Check network interface current load
-
-Use the following command to display network traffic and bandwidth using a six-second test.
-
-|User |Command |Full command syntax |
-||||
-|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-nload` | No attributes |
-
-```bash
-root@xsense:/# cyberx-nload
-eth0:
- Received: 66.95 KBit/s Sent: 87.94 KBit/s
- Received: 58.95 KBit/s Sent: 107.25 KBit/s
- Received: 43.67 KBit/s Sent: 107.86 KBit/s
- Received: 87.00 KBit/s Sent: 191.47 KBit/s
- Received: 79.71 KBit/s Sent: 85.45 KBit/s
- Received: 54.68 KBit/s Sent: 48.77 KBit/s
-local_listener (virtual adiot0):
- Received: 0.0 Bit Sent: 0.0 Bit
- Received: 0.0 Bit Sent: 0.0 Bit
- Received: 0.0 Bit Sent: 0.0 Bit
- Received: 0.0 Bit Sent: 0.0 Bit
- Received: 0.0 Bit Sent: 0.0 Bit
- Received: 0.0 Bit Sent: 0.0 Bit
-root@xsense:/#
-```
-
-#### Check internet connection
-
-Use the following command to check the internet connectivity on your appliance.
-
-|User |Command |Full command syntax |
-||||
-|**cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-internet-connectivity` | No attributes |
-
-```bash
-root@xsense:/# cyberx-xsense-internet-connectivity
-Checking internet connectivity...
-The machine was successfully able to connect the internet.
-root@xsense:/#
-```
--
-### Set bandwidth limit for the management network interface
-
-Use the following command to set the outbound bandwidth limit for uploads from the OT sensor's management interface to the Azure portal or an on-premises management console.
-
-Setting outbound bandwidth limits can be helpful in maintaining networking quality of service (QoS). This command is supported only in bandwidth-constrained environments, such as over a satellite or serial link.
-
-|User |Command |Full command syntax |
-||||
-|**cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-limit-interface` | `cyberx-xsense-limit-interface [-h] --interface <INTERFACE VALUE> [--limit <LIMIT VALUE] [--clear]` |
-
-In this command:
--- `-h` or `--help`: Shows the command help syntax--- `--interface <INTERFACE VALUE>`: Is the interface you want to limit, such as `eth0`--- `--limit <LIMIT VALUE>`: The limit you want to set, such as `30kbit`. Use one of the following units:-
- - `kbps`: Kilobytes per second
- - `mbps`: Megabytes per second
- - `kbit`: Kilobits per second
- - `mbit`: Megabits per second
- - `bps` or a bare number: Bytes per second
--- `--clear`: Clears all settings for the specified interface--
-For example, for the *cyberx* user:
-
-```bash
-root@xsense:/# cyberx-xsense-limit-interface -h
-usage: cyberx-xsense-limit-interface [-h] --interface INTERFACE [--limit LIMIT] [--clear]
-
-optional arguments:
- -h, --help show this help message and exit
- --interface INTERFACE
- interface (e.g. eth0)
- --limit LIMIT limit value (e.g. 30kbit). kbps - Kilobytes per second, mbps - Megabytes per second, kbit -
- Kilobits per second, mbit - Megabits per second, bps or a bare number - Bytes per second
- --clear flag, will clear settings for the given interface
-root@xsense:/#
-root@xsense:/# cyberx-xsense-limit-interface --interface eth0 --limit 1000mbps
-setting the bandwidth limit of interface "eth0" to 1000mbps
-```
----
-### Physical interfaces
-
-#### Locate a physical port by blinking interface lights
+### Locate a physical port by blinking interface lights
Use the following command to locate a specific physical interface by causing the interface lights to blink.
In this command, `<INT>` is a physical ethernet port on the appliance.
The following example shows the *admin* user blinking the *eth0* interface: ```bash
-root@xsense: network blink eth0
+shell> network blink eth0
Blinking interface for 20 seconds ... ```
-#### List connected physical interfaces
+### List connected physical interfaces
-Use the following commands to list the connected physical interfaces on your OT sensor.
+Use the following command to list the connected physical interfaces on your OT sensor.
|User |Command |Full command syntax | ||||
Use the following commands to list the connected physical interfaces on your OT
For example, for the *admin* user: ```bash
-root@xsense: network list
+shell> network list
adiot0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST> mtu 4096 ether be:b1:01:1f:91:88 txqueuelen 1000 (Ethernet) RX packets 2589575 bytes 740011013 (740.0 MB)
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
TX packets 837196 bytes 259542408 (259.5 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-root@xsense:
-```
-
-## Traffic capture filters
--
-To reduce alert fatigue and focus your network monitoring on high priority traffic, you may decide to filter the traffic that streams into Defender for IoT at the source. Capture filters allow you to block high-bandwidth traffic at the hardware layer, optimizing both appliance performance and resource usage.
-
-Use include an/or exclude lists to create and configure capture filters on your OT network sensors, making sure that you don't block any of the traffic that you want to monitor.
-
-The basic use case for capture filters uses the same filter for all Defender for IoT components. However, for advanced use cases, you may want to configure separate filters for each of the following Defender for IoT components:
--- `horizon`: Captures deep packet inspection (DPI) data-- `collector`: Captures PCAP data-- `traffic-monitor`: Captures communication statistics-
-> [!NOTE]
-> - Capture filters don't apply to [Defender for IoT malware alerts](../alert-engine-messages.md#malware-engine-alerts), which are triggered on all detected network traffic.
->
-> - The capture filter command has a character length limit that's based on the complexity of the capture filter definition and the available network interface card capabilities. If your requested filter commmand fails, try grouping subnets into larger scopes and using a shorter capture filter command.
-
-### Create a basic filter for all components
-
-The method used to configure a basic capture filter differs, depending on the user performing the command:
--- **cyberx** user: Run the specified command with specific attributes to configure your capture filter.-- **admin** user: Run the specified command, and then enter values as [prompted by the CLI](#create-a-basic-capture-filter-using-the-admin-user), editing your include and exclude lists in a nano editor.-
-Use the following commands to create a new capture filter:
-
-|User |Command |Full command syntax |
-||||
-| **admin** | `network capture-filter` | No attributes.|
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -m MODE [-S]` |
-
-Supported attributes for the *cyberx* user are defined as follows:
-
-|Attribute |Description |
-|||
-|`-h`, `--help` | Shows the help message and exits. |
-|`-i <INCLUDE>`, `--include <INCLUDE>` | The path to a file that contains the devices and subnet masks you want to include, where `<INCLUDE>` is the path to the file. For example, see [Sample include or exclude file](#txt). |
-|`-x EXCLUDE`, `--exclude EXCLUDE` | The path to a file that contains the devices and subnet masks you want to exclude, where `<EXCLUDE>` is the path to the file. For example, see [Sample include or exclude file](#txt). |
-|- `-etp <EXCLUDE_TCP_PORT>`, `--exclude-tcp-port <EXCLUDE_TCP_PORT>` | Excludes TCP traffic on any specified ports, where the `<EXCLUDE_TCP_PORT>` defines the port or ports you want to exclude. Delimitate multiple ports by commas, with no spaces. |
-|`-eup <EXCLUDE_UDP_PORT>`, `--exclude-udp-port <EXCLUDE_UDP_PORT>` | Excludes UDP traffic on any specified ports, where the `<EXCLUDE_UDP_PORT>` defines the port or ports you want to exclude. Delimitate multiple ports by commas, with no spaces. |
-|`-itp <INCLUDE_TCP_PORT>`, `--include-tcp-port <INCLUDE_TCP_PORT>` | Includes TCP traffic on any specified ports, where the `<INCLUDE_TCP_PORT>` defines the port or ports you want to include. Delimitate multiple ports by commas, with no spaces. |
-|`-iup <INCLUDE_UDP_PORT>`, `--include-udp-port <INCLUDE_UDP_PORT>` | Includes UDP traffic on any specified ports, where the `<INCLUDE_UDP_PORT>` defines the port or ports you want to include. Delimitate multiple ports by commas, with no spaces. |
-|`-vlan <INCLUDE_VLAN_IDS>`, `--include-vlan-ids <INCLUDE_VLAN_IDS>` | Includes VLAN traffic by specified VLAN IDs, `<INCLUDE_VLAN_IDS>` defines the VLAN ID or IDs you want to include. Delimitate multiple VLAN IDs by commas, with no spaces. |
-|`-p <PROGRAM>`, `--program <PROGRAM>` | Defines the component for which you want to configure a capture filter. Use `all` for basic use cases, to create a single capture filter for all components. <br><br>For advanced use cases, create separate capture filters for each component. For more information, see [Create an advanced filter for specific components](#create-an-advanced-filter-for-specific-components).|
-|`-m <MODE>`, `--mode <MODE>` | Defines an include list mode, and is relevant only when an include list is used. Use one of the following values: <br><br>- `internal`: Includes all communication between the specified source and destination <br>- `all-connected`: Includes all communication between either of the specified endpoints and external endpoints. <br><br>For example, for endpoints A and B, if you use the `internal` mode, included traffic will only include communications between endpoints **A** and **B**. <br>However, if you use the `all-connected` mode, included traffic will include all communications between A *or* B and other, external endpoints. |
-
-<a name="txt"></a>**Sample include or exclude file**
-
-For example, an include or exclude **.txt** file might include the following entries:
-
-```txt
-192.168.50.10
-172.20.248.1
+shell>
```
-#### Create a basic capture filter using the admin user
-
-If you're creating a basic capture filter as the *admin* user, no attributes are passed in the [original command](#create-a-basic-filter-for-all-components). Instead, a series of prompts is displayed to help you create the capture filter interactively.
-
-Reply to the prompts displayed as follows:
-
-1. `Would you like to supply devices and subnet masks you wish to include in the capture filter? [Y/N]:`
-
- Select `Y` to open a new include file, where you can add a device, channel, and/or subnet that you want to include in monitored traffic. Any other traffic, not listed in your include file, isn't ingested to Defender for IoT.
-
- The include file is opened in the [Nano](https://www.nano-editor.org/dist/latest/cheatsheet.html) text editor. In the include file, define devices, channels, and subnets as follows:
-
- |Type |Description |Example |
- ||||
- |**Device** | Define a device by its IP address. | `1.1.1.1` includes all traffic for this device. |
- |**Channel** | Define a channel by the IP addresses of its source and destination devices, separated by a comma. | `1.1.1.1,2.2.2.2` includes all of the traffic for this channel. |
- |**Subnet** | Define a subnet by its network address. | `1.1.1` includes all traffic for this subnet. |
-
- List multiple arguments in separate rows.
-
-1. `Would you like to supply devices and subnet masks you wish to exclude from the capture filter? [Y/N]:`
-
- Select `Y` to open a new exclude file where you can add a device, channel, and/or subnet that you want to exclude from monitored traffic. Any other traffic, not listed in your exclude file, is ingested to Defender for IoT.
-
- The exclude file is opened in the [Nano](https://www.nano-editor.org/dist/latest/cheatsheet.html) text editor. In the exclude file, define devices, channels, and subnets as follows:
-
- |Type |Description |Example |
- ||||
- | **Device** | Define a device by its IP address. | `1.1.1.1` excludes all traffic for this device. |
- | **Channel** | Define a channel by the IP addresses of its source and destination devices, separated by a comma. | `1.1.1.1,2.2.2.2` excludes all of the traffic between these devices. |
- | **Channel by port** | Define a channel by the IP addresses of its source and destination devices, and the traffic port. | `1.1.1.1,2.2.2.2,443` excludes all of the traffic between these devices and using the specified port.|
- | **Subnet** | Define a subnet by its network address. | `1.1.1` excludes all traffic for this subnet. |
- | **Subnet channel** | Define subnet channel network addresses for the source and destination subnets. | `1.1.1,2.2.2` excludes all of the traffic between these subnets. |
-
- List multiple arguments in separate rows.
-
-1. Reply to the following prompts to define any TCP or UDP ports to include or exclude. Separate multiple ports by comma, and press ENTER to skip any specific prompt.
-
- - `Enter tcp ports to include (delimited by comma or Enter to skip):`
- - `Enter udp ports to include (delimited by comma or Enter to skip):`
- - `Enter tcp ports to exclude (delimited by comma or Enter to skip):`
- - `Enter udp ports to exclude (delimited by comma or Enter to skip):`
- - `Enter VLAN ids to include (delimited by comma or Enter to skip):`
-
- For example, enter multiple ports as follows: `502,443`
-
-1. `In which component do you wish to apply this capture filter?`
-
- Enter `all` for a basic capture filter. For [advanced use cases](#create-an-advanced-capture-filter-using-the-admin-user), create capture filters for each Defender for IoT component separately.
-
-1. `Type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/N]:`
-
- This prompt allows you to configure which traffic is in scope. Define whether you want to collect traffic where both endpoints are in scope, or only one of them is in the specified subnet. Supported values include:
-
- - `internal`: Includes all communication between the specified source and destination
- - `all-connected`: Includes all communication between either of the specified endpoints and external endpoints.
-
- For example, for endpoints A and B, if you use the `internal` mode, included traffic will only include communications between endpoints **A** and **B**. <br>However, if you use the `all-connected` mode, included traffic will include all communications between A *or* B and other, external endpoints.
-
- The default mode is `internal`. To use the `all-connected` mode, select `Y` at the prompt, and then enter `all-connected`.
-
-The following example shows a series of prompts that creates a capture filter to exclude subnet `192.168.x.x` and port `9000:`
-
-```bash
-root@xsense: network capture-filter
-Would you like to supply devices and subnet masks you wish to include in the capture filter? [y/N]: n
-Would you like to supply devices and subnet masks you wish to exclude from the capture filter? [y/N]: y
-You've exited the editor. Would you like to apply your modifications? [y/N]: y
-Enter tcp ports to include (delimited by comma or Enter to skip):
-Enter udp ports to include (delimited by comma or Enter to skip):
-Enter tcp ports to exclude (delimited by comma or Enter to skip):9000
-Enter udp ports to exclude (delimited by comma or Enter to skip):9000
-Enter VLAN ids to include (delimited by comma or Enter to skip):
-In which component do you wish to apply this capture filter?all
-Would you like to supply a custom base capture filter for the collector component? [y/N]: n
-Would you like to supply a custom base capture filter for the traffic_monitor component? [y/N]: n
-Would you like to supply a custom base capture filter for the horizon component? [y/N]: n
-type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/n]: internal
-Please respond with 'yes' or 'no' (or 'y' or 'n').
-type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/n]: y
-starting "/usr/local/bin/cyberx-xsense-capture-filter --exclude /var/cyberx/media/capture-filter/exclude --exclude-tcp-port 9000 --exclude-udp-port 9000 --program all --mode internal --from-shell"
-No include file given
-Loaded 1 unique channels
-(000) ret #262144
-(000) ldh [12]
-......
-......
-......
-debug: set new filter for horizon '(((not (net 192.168))) and (not (tcp port 9000)) and (not (udp port 9000))) or (vlan and ((not (net 192.168))) and (not (tcp port 9000)) and (not (udp port 9000)))'
-root@xsense:
-```
-
-### Create an advanced filter for specific components
-
-When configuring advanced capture filters for specific components, you can use your initial include and exclude files as a base, or template, capture filter. Then, configure extra filters for each component on top of the base as needed.
-
-To create a capture filter for *each* component, make sure to repeat the entire process for each component.
-
-> [!NOTE]
-> If you've created different capture filters for different components, the mode selection is used for all components. Defining the capture filter for one component as `internal` and the capture filter for another component as `all-connected` isn't supported.
-
-|User |Command |Full command syntax |
-||||
-| **admin** | `network capture-filter` | No attributes.|
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -p PROGRAM [-o BASE_HORIZON] [-s BASE_TRAFFIC_MONITOR] [-c BASE_COLLECTOR] -m MODE [-S]` |
-
-The following extra attributes are used for the *cyberx* user to create capture filters for each component separately:
-
-|Attribute |Description |
-|||
-|`-p <PROGRAM>`, `--program <PROGRAM>` | Defines the component for which you want to configure a capture filter, where `<PROGRAM>` has the following supported values: <br>- `traffic-monitor` <br>- `collector` <br>- `horizon` <br>- `all`: Creates a single capture filter for all components. For more information, see [Create a basic filter for all components](#create-a-basic-filter-for-all-components).|
-|`-o <BASE_HORIZON>`, `--base-horizon <BASE_HORIZON>` | Defines a base capture filter for the `horizon` component, where `<BASE_HORIZON>` is the filter you want to use. <br> Default value = `""` |
-|`-s BASE_TRAFFIC_MONITOR`, `--base-traffic-monitor BASE_TRAFFIC_MONITOR` | Defines a base capture filter for the `traffic-monitor` component. <br> Default value = `""` |
-|`-c BASE_COLLECTOR`, `--base-collector BASE_COLLECTOR` | Defines a base capture filter for the `collector` component. <br> Default value = `""` |
-
-Other attribute values have the same descriptions as in the basic use case, described [earlier](#create-a-basic-filter-for-all-components).
-
-#### Create an advanced capture filter using the admin user
-
-If you're creating a capture filter for each component separately as the *admin* user, no attributes are passed in the [original command](#create-an-advanced-filter-for-specific-components). Instead, a series of prompts is displayed to help you create the capture filter interactively.
-
-Most of the prompts are identical to [basic use case](#create-a-basic-capture-filter-using-the-admin-user). Reply to the following extra prompts as follows:
-
-1. `In which component do you wish to apply this capture filter?`
-
- Enter one of the following values, depending on the component you want to filter:
-
- - `horizon`
- - `traffic-monitor`
- - `collector`
-
-1. You're prompted to configure a custom base capture filter for the selected component. This option uses the capture filter you configured in the previous steps as a base, or template, where you can add extra configurations on top of the base.
-
- For example, if you'd selected to configure a capture filter for the `collector` component in the previous step, you're prompted: `Would you like to supply a custom base capture filter for the collector component? [Y/N]:`
-
- Enter `Y` to customize the template for the specified component, or `N` to use the capture filter you'd configured earlier as it is.
-
-Continue with the remaining prompts as in the [basic use case](#create-a-basic-capture-filter-using-the-admin-user).
-
-### List current capture filters for specific components
-
-Use the following commands to show details about the current capture filters configured for your sensor.
-
-|User |Command |Full command syntax |
-||||
-| **admin** | Use the following commands to view the capture filters for each component: <br><br>- **horizon**: `edit-config horizon_parser/horizon.properties` <br>- **traffic-monitor**: `edit-config traffic_monitor/traffic-monitor` <br>- **collector**: `edit-config dumpark.properties` | No attributes |
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | Use the following commands to view the capture filters for each component: <br><br>-**horizon**: `nano /var/cyberx/properties/horizon_parser/horizon.properties` <br>- **traffic-monitor**: `nano /var/cyberx/properties/traffic_monitor/traffic-monitor.properties` <br>- **collector**: `nano /var/cyberx/properties/dumpark.properties` | No attributes |
-
-These commands open the following files, which list the capture filters configured for each component:
-
-|Name |File |Property |
-||||
-|**horizon** | `/var/cyberx/properties/horizon.properties` | `horizon.processor.filter` |
-|**traffic-monitor** | `/var/cyberx/properties/traffic-monitor.properties` | `horizon.processor.filter` |
-|**collector** | `/var/cyberx/properties/dumpark.properties` | `dumpark.network.filter` |
-
-For example with the **admin** user, with a capture filter defined for the *collector* component that excludes subnet 192.168.x.x and port 9000:
-
-```bash
-
-root@xsense: edit-config dumpark.properties
- GNU nano 2.9.3 /tmp/tmpevt4igo7/tmpevt4igo7
-
-dumpark.network.filter=(((not (net 192.168))) and (not (tcp port 9000)) and (not
-dumpark.network.snaplen=4096
-dumpark.packet.filter.data.transfer=false
-dumpark.infinite=true
-dumpark.output.session=false
-dumpark.output.single=false
-dumpark.output.raw=true
-dumpark.output.rotate=true
-dumpark.output.rotate.history=300
-dumpark.output.size=20M
-dumpark.output.time=30S
-```
-
-### Reset all capture filters
-
-Use the following command to reset your sensor to the default capture configuration with the *cyberx* user, removing all capture filters.
-
-|User |Command |Full command syntax |
-||||
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-capture-filter -p all -m all-connected` | No attributes |
-
-If you want to modify the existing capture filters, run the [earlier](#create-a-basic-filter-for-all-components) command again, with new attribute values.
-
-To reset all capture filters using the *admin* user, run the [earlier](#create-a-basic-filter-for-all-components) command again and respond `N` to all [prompts](#create-a-basic-capture-filter-using-the-admin-user).
-
-The following example shows the command syntax and response for the *cyberx* user:
-
-```bash
-root@xsense:/# cyberx-xsense-capture-filter -p all -m all-connected
-starting "/usr/local/bin/cyberx-xsense-capture-filter -p all -m all-connected"
-No include file given
-No exclude file given
-(000) ret #262144
-(000) ret #262144
-debug: set new filter for dumpark ''
-No include file given
-No exclude file given
-(000) ret #262144
-(000) ret #262144
-debug: set new filter for traffic-monitor ''
-No include file given
-No exclude file given
-(000) ret #262144
-(000) ret #262144
-debug: set new filter for horizon ''
-root@xsense:/#
-```
-
-## Alerts
-
-### Trigger a test alert
-
-Use the following command to test connectivity and alert forwarding from the sensor to management consoles, including the Azure portal, a Defender for IoT on-premises management console, or a third-party SIEM.
-
-|User |Command |Full command syntax |
-||||
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-trigger-test-alert` | No attributes |
-
-The following example shows the command syntax and response for the *cyberx* user:
-
-```bash
-root@xsense:/# cyberx-xsense-trigger-test-alert
-Triggering Test Alert...
-Test Alert was successfully triggered.
-```
-
-### Alert exclusion rules from an OT sensor
-
-The following commands support alert exclusion features on your OT sensor, including showing current exclusion rules, adding and editing rules, and deleting rules.
-
-> [!NOTE]
-> Alert exclusion rules defined on an OT sensor can be overwritten by alert exclusion rules defined on your on-premises management console.
-
-#### Show current alert exclusion rules
-
-Use the following command to display a list of currently configured exclusion rules.
-
-|User |Command |Full command syntax |
-||||
-|**admin** | `alerts exclusion-rule-list` | `alerts exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
-|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `alerts cyberx-xsense-exclusion-rule-list` | `alerts cyberx-xsense-exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
-
-The following example shows the command syntax and response for the *admin* user:
-
-```bash
-root@xsense: alerts exclusion-rule-list
-starting "/usr/local/bin/cyberx-xsense-exclusion-rule-list"
-root@xsense:
-```
-
-#### Create a new alert exclusion rule
-
-Use the following commands to create a local alert exclusion rule on your sensor.
-
-|User |Command |Full command syntax |
-||||
-| **admin** | `cyberx-xsense-exclusion-rule-create` | `cyberx-xsense-exclusion-rule-create [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) |`cyberx-xsense-exclusion-rule-create` |`cyberx-xsense-exclusion-rule-create [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
-
-Supported attributes are defined as follows:
-
-|Attribute |Description |
-|||
-|`-h`, `--help` | Shows the help message and exits. |
-|`[-n <NAME>]`, `[--name <NAME>]` | Defines the rule's name.|
-|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `hh:mm-hh:mm, hh:mm-hh:mm` |
-|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`|
-|`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`|
-| `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
-
-The following shows the command syntax for the *admin* user:
-
-```bash
-alerts exclusion-rule-create [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
-[-dev DEVICES] [-a ALERTS]
-```
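For instance, a hypothetical rule that suppresses a single alert for one device during a nightly maintenance window might look like the following. The rule name, time span, device address, and alert ID are placeholder values, not taken from a real sensor:

```bash
# Sketch: exclude alert 0x00000004 for device 10.1.1.2 between 22:00 and 06:00, in both directions
alerts exclusion-rule-create -n maintenance-window -ts 22:00-06:00 -dir both -dev ip-10.1.1.2 -a 0x00000004
```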
-
-#### Modify an alert exclusion rule
-
-Use the following commands to modify an existing local alert exclusion rule on your sensor.
-
-|User |Command |Full command syntax |
-||||
-| **admin** | `exclusion-rule-append` | `exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) |`exclusion-rule-append` |`exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
-
-Supported attributes are defined as follows:
-
-|Attribute |Description |
-|||
-|`-h`, `--help` | Shows the help message and exits. |
-|`[-n <NAME>]`, `[--name <NAME>]` | The name of the rule you want to modify.|
-|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `hh:mm-hh:mm, hh:mm-hh:mm` |
-|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`|
-|`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`|
-| `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
-
-Use the following command syntax with the *admin* user:
-
-```bash
-alerts exclusion-rule-append [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
-[-dev DEVICES] [-a ALERTS]
-```
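As a hypothetical follow-on to the creation example earlier (again with placeholder values), appending a second device to the same rule could look like this:

```bash
# Sketch: add another device address to the existing maintenance-window rule
alerts exclusion-rule-append -n maintenance-window -dev ip-10.1.1.3
```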
-
-#### Delete an alert exclusion rule
-
-Use the following commands to delete an existing local alert exclusion rule on your sensor.
-
-|User |Command |Full command syntax |
-||||
-| **admin** | `exclusion-rule-remove` | `exclusion-rule-remove [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
-| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) |`exclusion-rule-remove` |`exclusion-rule-remove [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
-
-Supported attributes are defined as follows:
-
-|Attribute |Description |
-|||
-|`-h`, `--help` | Shows the help message and exits. |
-|`[-n <NAME>]`, `[--name <NAME>]` | The name of the rule you want to delete.|
-|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `hh:mm-hh:mm, hh:mm-hh:mm` |
-|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`|
-|`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`|
-| `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
-
-The following shows the command syntax for the *admin* user:
-
-```bash
-alerts exclusion-rule-remove [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
-[-dev DEVICES] [-a ALERTS]
-```
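Continuing the hypothetical example, deleting that same rule by its placeholder name would then be:

```bash
# Sketch: remove the exclusion rule by name
alerts exclusion-rule-remove -n maintenance-window
```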
-- ## Next steps > [!div class="nextstepaction"]
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
If the traffic shown on the **Deployment** page isn't what you expect, you might
After analyzing the traffic your sensor is monitoring and fine-tuning the deployment, you might need to further refine your subnet list. Use this procedure to ensure that your subnets are configured correctly.
-While your OT sensor automatically learns your network subnets during the initial deployment, we recommend analyzing the detected traffic and updating them as needed to optimize your map views and device inventory.
+While your OT sensor automatically learns your network subnets during the initial deployment, we recommend analyzing the detected traffic and updating them as needed to optimize your map views and device inventory.
Also use this procedure to define subnet settings, which determine how devices are displayed in the sensor's [device map](how-to-work-with-the-sensor-device-map.md) and the [Azure device inventory](device-inventory.md).
While the OT network sensor automatically learns the subnets in your network, we
| **Mask**| Define the subnet's IP mask. | | **Name**| We recommend that you enter a meaningful name that specifies the subnet's network role. Subnet names can have up to 60 characters.| |**Segregated** | Select to show this subnet separately when displaying the device map according to Purdue level. |
- | **Remove subnet** | Select to remove any subnets that aren't related to your IoT/OT network scope.|
+ | **Remove subnet** | Select to remove any subnets that aren't related to your IoT/OT network scope.|
In the subnet grid, subnets marked as **ICS subnet** are recognized as OT networks. This option is read-only in this grid, but you can [manually define a subnet as ICS](#manually-define-a-subnet-as-ics) if there's an OT subnet not being recognized correctly.
To reduce alert fatigue and focus your network monitoring on high priority traff
For more information, see: - [Defender for IoT CLI users and access](references-work-with-defender-for-iot-cli-commands.md)-- [Traffic capture filters](cli-ot-sensor.md#traffic-capture-filters) ## Next steps
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
If you're working with a production environment, you'd deployed a CA-signed SSL/
The following procedures describe how to deploy updated SSL/TLS certificates, such as if the certificate has expired.
-> [!TIP]
-> You can also [import the certificate to your OT sensor using CLI commands](references-work-with-defender-for-iot-cli-commands.md#tlsssl-certificate-commands).
->
- # [Deploy a CA-signed certificate](#tab/ca-signed) **To deploy a CA-signed SSL/TLS certificate:**
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
To access the Defender for IoT CLI, you'll need access to the sensor or on-premi
## Privileged user access for OT monitoring
-Use the *admin* user when using the Defender for IoT CLI, which is an administrative account with access to all CLI commands. On the on-premises management console, use either the *support* or the *cyberx* user.
+When using the Defender for IoT CLI, use the *admin* user, which is an administrative account with access to all CLI commands. On the on-premises management console, use the *cyberx* user.
If you're using a legacy software version, you may have one or more of the following users: |Legacy scenario |Description | ||| |**Sensor versions earlier than 23.2.0** | In sensor versions earlier than [23.2.0](whats-new.md#default-privileged-user-is-now-admin-instead-of-support), the default *admin* user is named *support*. The *support* user is available and supported only on versions earlier than 23.2.0.<br><br>Documentation refers to the *admin* user to match the latest version of the software. |
-|**Sensor software versions earlier than 23.1.x** | In sensor software versions earlier than [23.1.x](whats-new.md#july-2023), the *cyberx* and *cyberx_host* privileged users are also in use. <br><br>In newly installed versions 23.1.x and higher, the *cyberx* and *cyberx_host* users are available, but not enabled by default. <br><br>To enable these extra privileged users, such as to use the [Defender for IoT CLI](references-work-with-defender-for-iot-cli-commands.md), change their passwords. For more information, see [Recover privileged access to a sensor](manage-users-sensor.md#recover-privileged-access-to-a-sensor). |
Other CLI users cannot be added.
The following tables list the activities available by CLI and the privileged use
|Service area |Users |Actions | |||| |Sensor health | *admin*, *cyberx* | [Check OT monitoring services health](cli-ot-sensor.md#check-ot-monitoring-services-health) |
-|Restart and shutdown | *admin*, *cyberx*, *cyberx_host* | [Restart an appliance](cli-ot-sensor.md#restart-an-appliance)<br>[Shut down an appliance](cli-ot-sensor.md#shut-down-an-appliance) |
+|Reboot and shutdown | *admin*, *cyberx*, *cyberx_host* | [Restart an appliance](cli-ot-sensor.md#restart-an-appliance)<br>[Shut down an appliance](cli-ot-sensor.md#shutdown-an-appliance) |
|Software versions | *admin*, *cyberx* | [Show installed software version](cli-ot-sensor.md#show-installed-software-version) <br>[Update software version](update-ot-software.md) | |Date and time | *admin*, *cyberx*, *cyberx_host* | [Show current system date/time](cli-ot-sensor.md#show-current-system-datetime) | |NTP | *admin*, *cyberx* | [Turn on NTP time sync](cli-ot-sensor.md#turn-on-ntp-time-sync)<br>[Turn off NTP time sync](cli-ot-sensor.md#turn-off-ntp-time-sync) |
The following tables list the activities available by CLI and the privileged use
|Service area |Users |Actions | ||||
-|Backup files | *admin*, *cyberx* | [List current backup files](cli-ot-sensor.md#list-current-backup-files) <br>[Start an immediate, unscheduled backup](cli-ot-sensor.md#start-an-immediate-unscheduled-backup) |
+|List backup files | *admin*, *cyberx* | [List current backup files](cli-ot-sensor.md#list-current-backup-files) <br>[Start an immediate, unscheduled backup](cli-ot-sensor.md#start-an-immediate-unscheduled-backup) |
|Restore | *admin*, *cyberx* | [Restore data from the most recent backup](cli-ot-sensor.md#restore-data-from-the-most-recent-backup) | |Backup disk space | *cyberx* | [Display backup disk space allocation](cli-ot-sensor.md#display-backup-disk-space-allocation) |
-### TLS/SSL certificate commands
-
-|Service area |Users |Actions |
-||||
-|Certificate management | *cyberx* | [Import TLS/SSL certificates to your OT sensor](cli-ot-sensor.md#import-tlsssl-certificates-to-your-ot-sensor)<br>[Restore the default self-signed certificate](cli-ot-sensor.md#restore-the-default-self-signed-certificate) |
- ### Local user management commands |Service area |Users |Actions |
The following tables list the activities available by CLI and the privileged use
| Network setting configuration | *cyberx_host* | [Change networking configuration or reassign network interface roles](cli-ot-sensor.md#change-networking-configuration-or-reassign-network-interface-roles) | |Network setting configuration | *admin* | [Validate and show network interface configuration](cli-ot-sensor.md#validate-and-show-network-interface-configuration) | |Network connectivity | *admin*, *cyberx* | [Check network connectivity from the OT sensor](cli-ot-sensor.md#check-network-connectivity-from-the-ot-sensor) |
-|Network connectivity | *cyberx* | [Check network interface current load](cli-ot-sensor.md#check-network-interface-current-load) <br>[Check internet connection](cli-ot-sensor.md#check-internet-connection) |
-|Network bandwidth limit | *cyberx* | [Set bandwidth limit for the management network interface](cli-ot-sensor.md#set-bandwidth-limit-for-the-management-network-interface) |
|Physical interfaces management | *admin* | [Locate a physical port by blinking interface lights](cli-ot-sensor.md#locate-a-physical-port-by-blinking-interface-lights) | |Physical interfaces management | *admin*, *cyberx* | [List connected physical interfaces](cli-ot-sensor.md#list-connected-physical-interfaces) |
-### Traffic capture filter commands
-
-|Service area |Users |Actions |
-||||
-| Capture filter management | *admin*, *cyberx* | [Create a basic filter for all components](cli-ot-sensor.md#create-a-basic-filter-for-all-components)<br>[Create an advanced filter for specific components](cli-ot-sensor.md#create-an-advanced-filter-for-specific-components) <br>[List current capture filters for specific components](cli-ot-sensor.md#list-current-capture-filters-for-specific-components) <br> [Reset all capture filters](cli-ot-sensor.md#reset-all-capture-filters) |
-
-### Alert commands
-
-|Service area |Users |Actions |
-||||
-|Alert functionality testing | *cyberx* | [Trigger a test alert](cli-ot-sensor.md#trigger-a-test-alert) |
-| Alert exclusion rules | *admin*, *cyberx* | [Show current alert exclusion rules](cli-ot-sensor.md#show-current-alert-exclusion-rules) <br>[Create a new alert exclusion rule](cli-ot-sensor.md#create-a-new-alert-exclusion-rule)<br>[Modify an alert exclusion rule](cli-ot-sensor.md#modify-an-alert-exclusion-rule)<br>[Delete an alert exclusion rule](cli-ot-sensor.md#delete-an-alert-exclusion-rule)
- ## Defender for IoT CLI access To access the Defender for IoT CLI, sign in to your OT or Enterprise IoT sensor or your on-premises management console using a terminal emulator and SSH.
Each CLI command on an OT network sensor or on-premises management console is su
## Access the system root as an *admin* user
-When signing in as the *admin* user, run the following command to access the host machine as the root user. Access the host machine as the root user enables you to run CLI commands that aren't available to the *admin* user.
+When signing in as the *admin* user, run the following command to access the host machine as the root user. Accessing the host machine as the root user enables you to run CLI commands that aren't available to the *admin* user.
Run:
Run:
system shell ```
-OT sensor versions earlier than [23.2.0](whats-new.md#default-privileged-user-is-now-admin-instead-of-support) include the *support* privileged user instead of the *admin* user. If you're using an older version of the sensor software, any commands that are listed as supported for the *admin* user are also supported for the legacy *support* user.
- ## Sign out of the CLI Make sure to properly sign out of the CLI when you're done using it. You're automatically signed out after an inactive period of 300 seconds.
deployment-environments Ade Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/ade-roadmap.md
Title: Roadmap for Azure Deployment Environments
-description: Learn about features coming soon and in development for Azure Deployment Environments.
+description: Learn about planned features coming soon and features in development for Azure Deployment Environments.
Previously updated : 08/26/2024 Last updated : 09/06/2024 #customer intent: As a customer, I want to understand upcoming features and enhancements in Azure Deployment Environments so that I can plan and optimize development and deployment strategies.
Last updated 08/26/2024
# Azure Deployment Environments Roadmap
-This roadmap presents a set of planned feature releases that underscores Microsoft's commitment to revolutionizing the way enterprise developers provision application infrastructure, offering a seamless and intuitive experience that also ensures robust centralized management and governance. This feature list offers a glimpse into our plans for the next six months, highlighting key features we're developing. It's not exhaustive but shows major investments. Some features might release as previews and evolve based on your feedback before becoming generally available. We always listen to your input, so the timing, design, and delivery of some features might change.
+This roadmap presents a set of planned feature releases aimed at improving how enterprise developers set up application infrastructure. It focuses on making the process easier and ensuring strong centralized management and governance. This list highlights key features planned for the next six months. It isn't exhaustive but shows major areas of investment. Some features might release as previews and evolve based on your feedback before becoming generally available. We always listen to your input, so the timing, design, and delivery of some features might change.
The key deliverables focus on the following themes:
The key deliverables focus on the following themes:
## Self-serve app infrastructure
-Navigating complex dependencies, opaque configurations, and compatibility issues, alongside managing security risks, has long made deploying app infrastructure a challenging endeavor. Azure Deployment Environments aims to eliminate these obstacles and supercharge enterprise developer agility. By enabling developers to swiftly and effortlessly self-serve the infrastructure needed to deploy, test, and run cloud-based applications, we're transforming the development landscape. Our ongoing investment in this area underscores our commitment to optimizing and enhancing the end-to-end developer experience, empowering teams to innovate without barriers.
--- Enhanced integration with Azure Developer CLI (azd) will support ADE's extensibility model, enabling deployments using any preferred IaC framework. The extensibility model allows enterprise development teams to deploy their code onto newly provisioned or existing environments with simple commands like `azd up` and `azd deploy`. By facilitating real-time testing, rapid issue identification, and swift resolution, developers can deliver higher-quality applications faster than ever before. -- Ability to track and manage environment operations, logs, and the deployment outputs directly in the developer portal will make it easier for dev teams to troubleshoot any potential issues and fix their deployments.
+Deploying app infrastructure has been challenging due to complex dependencies, unclear configurations, compatibility issues, and managing security risks. Azure Deployment Environments (ADE) aims to remove these obstacles and make developers more agile. By allowing developers to quickly and easily set up the infrastructure needed to deploy, test, and run cloud-based applications, we're changing the development process. Our ongoing investment in this area shows our commitment to improving the end-to-end developer experience and helping teams innovate without barriers.
+- Enhanced Integration with Azure Developer CLI (azd):
+ - Supports ADE's extensibility model.
+ - Enables deployments using any preferred Infrastructure as Code (IaC) framework.
+ - Allows simple commands like `azd up` and `azd deploy` for deploying code (see the sketch after this list).
+ - Facilitates real-time testing, rapid issue identification, and quick resolution.
+- Tracking and Managing Environment Operations:
+ - Logs and deployment outputs can be managed directly in the developer portal.
+ - Makes it easier for dev teams to troubleshoot and fix deployments.
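To make the azd flow concrete, here's a minimal sketch of the command sequence, assuming a repository that's already initialized for azd and mapped to an ADE environment definition; the project and environment names are whatever you choose:

```bash
# Sketch: provision an ADE environment and deploy the application to it
azd init      # initialize the project from a template or existing source
azd up        # provision the environment and deploy the app in one step
azd deploy    # push code changes to the already-provisioned environment
```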
## Standardized deployments and customized templates
-Azure Deployment Environments empowers platform engineers and dev leads to securely provide curated, project-specific IaC templates directly from source control repositories. With the support for an extensibility model, organizations can now use their preferred IaC frameworks, including popular third-party options like Pulumi and Terraform, to execute deployments seamlessly.
-
-While the extensibility model already allows for customized deployments, we're committed to making it exceptionally easy for platform engineers and dev leads to tailor their deployments, ensuring they can securely meet the unique needs of their organization or development team.
--- Configuring pre- and post-deployment scripts as part of environment definitions will unlock the power to integrate more logic, validations, and custom actions into deployments, leveraging internal APIs and systems for more customized and efficient workflows. -- Support for private registries will allow platform engineers to store custom container images in a private Azure Container Registry (ACR), ensuring controlled and secure access.
+Azure Deployment Environments allows platform engineers and dev leads to securely provide curated, project-specific IaC templates directly from source control repositories. With support for an extensibility model, organizations can use their preferred IaC frameworks, including third-party options like Pulumi and Terraform, for seamless deployments.
+Customized deployments make it easy for platform engineers and dev leads to tailor deployments and ensure they can securely meet the unique needs of their organization or development team.
+- Pre- and Post-Deployment Scripts:
+ - Configure as part of environment definitions.
+ - Allow integration of more logic, validations, and custom actions into deployments.
+ - Leverage internal APIs and systems for more customized and efficient workflows.
+- Support for Private Registries:
+ - Allow platform engineers to store custom container images in a private Azure Container Registry (ACR).
+ - Ensure controlled and secure access.
## Enterprise management Balancing developer productivity with security, compliance, and cost management is crucial for organizations. Deployment Environments boosts productivity while upholding organizational security and compliance standards by centralizing environment management and governance for platform engineers.
-We're committed to further investing in capabilities that strengthen both security and cost controls, ensuring a secure and efficient development ecosystem.
--- Ability to configure a private virtual network for the runner executing the template deployments puts enterprises in control while accessing confidential data and resources from internal systems. -- Default autodeletion eliminates orphaned cloud resources, safeguarding enterprises from unnecessary costs and ensuring budget efficiency.
+- Private virtual network configuration for the runner executing the template deployments:
+ - Allows enterprises to control access to confidential data and resources from internal systems.
+- Default Autodeletion:
+ - Eliminates orphaned cloud resources.
+ - Ensures budget efficiency by avoiding unnecessary costs.
This roadmap outlines our current priorities, and we remain flexible to adapt based on customer feedback. We invite you to [share your thoughts and suggest more capabilities you would like to see](https://developercommunity.microsoft.com/deploymentenvironments/suggest). Your insights help us refine our focus and deliver even greater value.
dev-box Concept Dev Box Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-deployment-guide.md
Some usage scenarios for conditional access in Microsoft Dev Box include:
Learn how you can [configure conditional access policies for Dev Box](./how-to-configure-intune-conditional-access-policies.md).
-#### Back up and restore a dev box
-
-Microsoft Intune provides backup functionality for dev boxes. It automatically sets regular restore points, and enables you to create a manual restore point, just as you would for a [Cloud PC](/windows-365/enterprise/create-manual-restore-point).
-
-Restore functionality for dev boxes is provided by sharing Cloud PC restore points to a storage account. For more information, see: [Share Cloud PC restore points to an Azure Storage Account](/windows-365/enterprise/share-restore-points-storage)
- #### Privilege management You can configure Microsoft Intune Endpoint Privilege Management (EPM) for dev boxes so that dev box users don't need local administrative privileges. Microsoft Intune Endpoint Privilege Management allows your organization's users to run as a standard user (without administrator rights) and complete tasks that require elevated privileges. Tasks that commonly require administrative privileges are application installs (like Microsoft 365 Applications), updating device drivers, and running certain Windows diagnostics.
dns Dns Alerts Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alerts-metrics.md
-+ Last updated 11/30/2023
dns Dns Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alias.md
description: In this article, learn about support for alias records in Microsoft
-+ Last updated 03/08/2024
To learn more about alias records, see the following articles:
- [Tutorial: Configure an alias record to refer to an Azure public IP address](tutorial-alias-pip.md) - [Tutorial: Configure an alias record to support apex domain names with Traffic Manager](tutorial-alias-tm.md)-- [DNS FAQ](./dns-faq.yml)
+- [DNS FAQ](./dns-faq.yml)
dns Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-for-azure-services.md
tags: azure dns
ms.assetid: e9b5eb94-7984-4640-9930-564bb9e82b78 -+ Last updated 11/30/2023
dns Dns Reverse Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md
description: In this learning path, get started learning how reverse DNS works a
-+ Last updated 06/10/2024
dns Dns Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-sdk.md
ms.assetid: eed99b87-f4d4-4fbf-a926-263f7e30b884 ms.devlang: csharp-+ Last updated 11/30/2023
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-autoregistration.md
description: Overview of auto registration feature in Azure DNS private zones.
-+ Last updated 06/28/2024
dns Private Dns Privatednszone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-privatednszone.md
description: Overview of Private DNS zones
-+ Last updated 10/12/2023
dns Private Dns Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-resiliency.md
description: In this article, learn about resiliency in Azure Private DNS zones.
-+ Last updated 06/09/2023
dns Private Dns Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-scenarios.md
description: In this article, learn about common scenarios for using Azure Priva
-+ Last updated 04/25/2023
dns Private Dns Virtual Network Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-virtual-network-links.md
description: Overview of virtual network link sub resource an Azure DNS private
-+ Last updated 05/15/2024
education-hub It Admin Allocate Credit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/it-admin-allocate-credit.md
- Title: "Quickstart: Allocate credits to educators"
-description: This article shows IT administrators at a university how to assign credits to educators to use in Azure Education Hub labs.
---- Previously updated : 08/07/2024---
-# Quickstart: Allocate credits to educators
-
-In this quickstart, you allocate credits to educators in the Azure Education Hub. Educators use these credits in labs to distribute money to students for the deployment of Azure resources. These instructions are for IT administrators.
-
-## Prerequisites
--- Sign a Microsoft Customer Agreement.-- Be in direct field-led motion.-- Have a billing profile and a billing account.-- Assign educators as owners on the billing profile.-
-## Go to the Education Hub
-
-The first step in assigning credit to educators is to go to the Education Hub:
-
-1. Go to the [Azure portal](https://ms.portal.azure.com/).
-2. Sign in with the account that's associated with Azure Dev Tools for Teaching.
-3. Search for **education** on the search bar, and then select the **Education** result.
-
-## Add a credit
-
-Assigning credit means that you're allowing educators to use a certain amount of money from your billing profile to create labs in the Education Hub. Educators must be in the same tenant as you to receive the credit. They must also be owners of the billing profile where you want to create the credit.
-
-1. Go to the **Credits** section of the Education Hub.
-2. Select **Add** to begin adding a new credit.
-3. Choose the billing profile that you want the educators to draw the money from.
-4. Set the amount of credit.
-
- > [!NOTE]
- > Because of latency issues, there might be cases where the money spent is slightly higher than the set budget.
-
-5. Set an expiration date for this credit. You can extend the date later, if necessary.
-6. Select **Next** and confirm details.
-7. Select **Create** to finish creating the credit.
-
-## Modify credits
-
-After you create credits, they appear as rows on the **Credits** tab. You can modify them if necessary:
-
-1. Select the **Edit** button to the right of a credit.
-2. Change the end date or the credit amount.
-
- > [!NOTE]
- > You can only extend the credit end date. You can't shorten it.
-
-## Modify access
-
-You can modify which educators have access to a credit:
-
-1. Go to **Cost Management**.
-1. Add or remove educators from the billing profile that's associated with a credit.
-
- Added educators receive an email that invites them to visit the Education Hub to begin using the credit. Ensure that the educators sign in to the Azure portal by using the account that's associated with the credit's billing profile.
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Create an assignment and allocate credit](create-assignment-allocate-credit.md)
energy-data-services How To Enable External Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-enable-external-data-sources.md
External Data Sources (EDS) is a capability in [OSDU&reg;](https://osduforum.org
For more information about External Data Sources (EDS), see [The OSDU Forum 2Q 2022 Newsletter - EDS](https://osduforum.org/wp-content/uploads/2022/06/The-OSDU-Forum-2Q-2022-Newsletter.pdf). > [!NOTE]
-> OSDU community shipped EDS as a preview feature in M18 Release, and it is available as a preview feature on Azure Data Manager for Energy in Developer tier only.
+> EDS M23 preview is now available on Azure Data Manager for Energy in Developer tier only.
> [!IMPORTANT] > Limit your Identity Provider (IdP) token to read operations only.
For more information about External Data Sources (EDS), see [The OSDU Forum 2Q 2
To enable External Data Sources Preview on your Azure Data Manager for Energy, create an Azure support ticket with the following information: - Subscription ID - Azure Data Manager for Energy developer tier resource name-- Data partition name (the data partition in which EDS needs to be enabled)
+- Data partition name (the data partition in which EDS needs to be enabled for automated triggering of EDS-Scheduler)
- Key Vault name (created in [Prerequisites](#prerequisites)) > [!NOTE]
-> EDS does not have [multi data partition support](https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/51)
+> Support for [multiple data partitions](https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/51) is currently enabled for manual triggering of the EDS Ingest DAG, but this feature has not yet been implemented for the EDS Scheduler.
We notify you once EDS preview is enabled in your Azure Data Manager for Energy resource. ## Known issues-- Below issues are specific to [OSDU&reg;](https://osduforum.org/) M18 release:
+- Below issues are specific to [OSDU&reg;](https://osduforum.org/) M23 release:
 - EDS ingest DAG results in failures when the data supplier's wrapper Search service is unavailable. - EDS Dataset service response provides an empty response when data supplier's Dataset wrapper service is unavailable. - Secret service responds with 5xx HTTP response code instead of 4xx in some cases. For example,
We notify you once EDS preview is enabled in your Azure Data Manager for Energy
- When an application tries to get an invalid deleted secret. ## Limitations
-Some EDS capabilities like **Naturalization, Reverse Naturalization, Reference data mapping** are unavailable in the M18 [OSDU&reg;](https://osduforum.org/) release (available in later releases), and hence unavailable in Azure Data Manager for Energy M18 release. These features are available once we upgrade to subsequent [OSDU&reg;](https://osduforum.org/) milestone release.
-
+The Naturalization DAG workflow won't be included in the M23 release.
+
## FAQ See [External data sources FAQ.](faq-energy-data-services.yml#external-data-sources)
energy-data-services How To Register External Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-register-external-data-sources.md
Use **getRetrievalInstructions** API in `005: Dataset Service collection` to ret
## References * [External data sources FAQ](faq-energy-data-services.yml#external-data-sources) * [EDS documentation 1.0](https://gitlab.opengroup.org/osdu/subcommittees/ea/projects/extern-data/docs/-/blob/master/Design%20Documents/Training/EDS_Documentation-1.0.docx)
-* [EDS M18 release notes](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M18-Release-Notes#external-data-sources-eds)
+* [OSDU EDS Documentation](https://osdu.pages.opengroup.org/platform/data-flow/ingestion/osdu-airflow-lib/)
+* [EDS M23 release notes](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M23-Release-Notes)
* [EDS Postman collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M20/QA_Artifacts_M20/eds_testing_doc/EDS_Ingest_M20_Pre-Shipping_Setup_and_Testing.postman_collection.json?ref_type=heads) * [EDS supplier enablement guide](https://gitlab.opengroup.org/osdu/r3-program-activities/docs/-/raw/master/R3%20Document%20Snapshot/23-osdu-eds-data-supplier-enablement-guide.pdf)
-* [EDS issues](https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues)
+* [EDS issues](https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues)
energy-data-services Osdu Services On Adme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/osdu-services-on-adme.md
Azure Data Manager for Energy is currently compliant with the M23 OSDU® milesto
### Ingestion services - **EDS DMS [[Preview]](how-to-enable-external-data-sources.md)**: Pulls specified data (metadata) from OSDU-compliant data sources via scheduled jobs while leaving associated dataset files (LAS, SEG-Y, etc.) stored at the external source for retrieval on demand.
- - **EDS Fetch & Ingest DAG**: Facilitates fetching data from external providers and ingesting it into the OSDU platform. It involves steps like registering with providers, creating data jobs, and triggering ingestion.
+ - **EDS Fetch & Ingest DAG**: Facilitates fetching data from external providers and ingesting it into the OSDU platform. It involves steps like registering with providers, creating data jobs, and triggering ingestion. With the M23 release, EDS Fetch and Ingest DAG includes new features like Parent and Reference data mapping.
- **EDS Scheduler DAG**: Automates data fetching based on predefined schedules and sends emails to recipients as needed. It ensures data remains current without manual intervention - **Ingestion Workflow**: Initiates business processes within the system. During the prototype phase, it facilitates CRUD operations on workflow metadata and triggers workflows in Apache Airflow. Additionally, the service manages process startup records, acting as a wrapper around Airflow functions. - **Manifest Ingestion DAG**: Used for ingesting single or multiple metadata artifacts about datasets in Azure Data Manager for Energy instance. Learn more about [Manifest-based ingestion](concepts-manifest-ingestion.md).
expressroute Cross Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/cross-network-connectivity.md
Title: 'Azure cross-network connectivity'
description: This page describes an application scenario for cross network connectivity and solution based on Azure networking features. -+ Last updated 06/30/2023
Global Reach is rolled out on a country/region by country/region basis. To see i
[Subscription limits]: ../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits [Connect-ER-VNet]: ./expressroute-howto-linkvnet-portal-resource-manager.md [ER-FAQ]: ./expressroute-faqs.md
-[VNet-FAQ]: ../virtual-network/virtual-networks-faq.md
+[VNet-FAQ]: ../virtual-network/virtual-networks-faq.md
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
description: This page provides architectural recommendations for disaster recov
-+ Last updated 06/15/2023
expressroute Designing For High Availability With Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-high-availability-with-expressroute.md
description: This page provides architectural recommendations for high availabil
-+ Last updated 06/15/2023
expressroute Expressroute Asymmetric Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-asymmetric-routing.md
description: This article walks you through the issues you might face with asymm
-+ Last updated 07/11/2024
expressroute Expressroute Bfd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-bfd.md
description: This article provides instructions on how to configure BFD (Bidirec
-+ Last updated 06/03/2024
expressroute Expressroute Config Samples Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-config-samples-nat.md
description: This page provides router configuration samples for Cisco and Junip
-+ Last updated 12/28/2023
expressroute Expressroute Config Samples Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-config-samples-routing.md
description: Use these interface and routing configuration samples for Cisco IOS
-+ Last updated 06/30/2023
expressroute Expressroute Connect Azure To Public Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-connect-azure-to-public-cloud.md
description: Describe various ways to connect Azure to other public clouds
-+ Last updated 06/30/2023
See [Set up direct connection between Azure and Oracle Cloud][ER-OCI] for connec
<!--Link References--> [ER-FAQ]: ./expressroute-faqs.md
-[ER-OCI]: /azure/virtual-machines/workloads/oracle/configure-azure-oci-networking
+[ER-OCI]: /azure/virtual-machines/workloads/oracle/configure-azure-oci-networking
expressroute Expressroute For Cloud Solution Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-for-cloud-solution-providers.md
-+ Last updated 06/30/2023
firewall-manager Threat Intelligence Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/threat-intelligence-settings.md
description: Learn how to configure threat intelligence-based filtering for your
-+ Last updated 10/17/2022
firewall-manager Vhubs And Vnets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/vhubs-and-vnets.md
description: Compare and contrast using hub virtual network or secured virtual h
-+ Last updated 03/08/2024
firewall Compliance Certifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/compliance-certifications.md
description: A list of Azure Firewall certifications for PCI, SOC, and ISO.
-+ Last updated 04/28/2023
firewall Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/diagnostic-logs.md
description: Diagnostic logs are the original Azure Firewall log queries that ou
-+ Last updated 12/04/2023
firewall Dns Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/dns-settings.md
For example, to use FQDNs in network rule, DNS proxy should be enabled. But if a
DNS proxy configuration requires three steps: 1. Enable the DNS proxy in Azure Firewall DNS settings. 2. Optionally, configure your custom DNS server or use the provided default.
-3. Configure the Azure Firewall private IP address as a custom DNS address in your virtual network DNS server settings. This setting ensures DNS traffic is directed to Azure Firewall.
+3. Configure the Azure Firewall private IP address as a custom DNS address in your virtual network DNS server settings to direct DNS traffic to the Azure Firewall (a CLI sketch follows the note below).
+
+> [!NOTE]
+> If you choose to use a custom DNS server, select any IP address within the virtual network, excluding those in the Azure Firewall subnet.
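As an illustration of step 3, here's a sketch using Azure CLI. The resource group, virtual network name, and firewall private IP address (10.0.1.4) are placeholder values:

```azurecli-interactive
# Sketch: point the virtual network's DNS servers at the firewall's private IP
# so that DNS traffic from the virtual network goes through the DNS proxy
az network vnet update --resource-group MyResourceGroup --name MyVNet --dns-servers 10.0.1.4
```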
#### [Portal](#tab/browser)
firewall Forced Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/forced-tunneling.md
description: You can configure forced tunneling to route Internet-bound traffic
-+ Last updated 03/22/2024
firewall Fqdn Filtering Network Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/fqdn-filtering-network-rules.md
description: How to use Azure Firewall FQDN filtering in network rules
-+ Last updated 05/10/2024
firewall Fqdn Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/fqdn-tags.md
description: An FQDN tag represents a group of fully qualified domain names (FQD
-+ Last updated 06/07/2024
firewall Infrastructure Fqdns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/infrastructure-fqdns.md
description: Azure Firewall includes a built-in rule collection for infrastructu
-+ Last updated 11/19/2019
You can override this built-in infrastructure rule collection by creating a deny
## Next steps -- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
+- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
firewall Logs And Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/logs-and-metrics.md
description: You can monitor Azure Firewall using firewall logs. You can also us
-+ Last updated 12/04/2023
firewall Long Running Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/long-running-sessions.md
description: There are few scenarios where Azure Firewall can potentially drop l
-+ Last updated 01/04/2023
firewall Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/metrics.md
description: Metrics in Azure Monitor are numerical values that describe some as
-+ Last updated 12/04/2023
firewall Policy Rule Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-rule-sets.md
description: Azure Firewall policy has a hierarchy of rule collection groups, ru
-+ Last updated 05/09/2024
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
description: Azure Firewall has NAT rules, network rules, and applications rules
-+ Last updated 07/02/2024
firewall Service Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/service-tags.md
description: A service tag represents a group of IP address prefixes to help min
-+ Last updated 08/31/2023
firewall Threat Intel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/threat-intel.md
description: You can enable Threat intelligence-based filtering for your firewal
-+ Last updated 08/01/2022
frontdoor Front Door Cdn Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-cdn-comparison.md
description: This article provides a comparison between the different Azure Fron
-+ Last updated 10/13/2023
frontdoor How To Enable Private Link Storage Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-enable-private-link-storage-static-website.md
Last updated 03/31/2024
+zone_pivot_groups: front-door-dev-exp-portal-cli
# Connect Azure Front Door Premium to a storage static website with Private Link + This article guides you through how to configure Azure Front Door Premium tier to connect to your storage static website privately using the Azure Private Link service. ## Prerequisites
When creating a private endpoint connection to the storage static website's seco
Once the origin is added and the private endpoint connection is approved, you can test your private link connection to your storage static website. ++
+This article guides you through how to configure Azure Front Door Premium tier to connect to your storage account privately by using the Azure Private Link service with Azure CLI.
+
+## Prerequisites - CLI
++
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Have a functioning Azure Front Door Premium profile, an endpoint and an origin group. For more information on how to create an Azure Front Door profile, see [Create a Front Door - CLI](create-front-door-cli.md).
+
+## Enable Private Link to a Storage Static Website in Azure Front Door Premium
+
+1. Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to create a new Azure Front Door origin. Enter the following settings to configure the storage static website that you want Azure Front Door Premium to connect to privately. Note that `private-link-location` must be in one of the [available regions](private-link.md#region-availability) and `private-link-sub-resource-type` must be **web**.
+
+```azurecli-interactive
+az afd origin create --enabled-state Enabled \
+ --resource-group testRG \
+ --origin-group-name default-origin-group \
+ --origin-name pvtStaticSite \
+ --profile-name testAFD \
+ --host-name example.z13.web.core.windows.net \
+ --origin-host-header example.z13.web.core.windows.net \
+ --http-port 80 \
+ --https-port 443 \
+ --priority 1 \
+ --weight 500 \
+ --enable-private-link true \
+ --private-link-location EastUS \
+ --private-link-request-message 'AFD Storage static website origin Private Link request.' \
+ --private-link-resource /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/testRG/providers/Microsoft.Storage/storageAccounts/testingafdpl \
+ --private-link-sub-resource-type web
+
+```
+
+## Approve Private Endpoint Connection from Storage Account
+
+1. Run [az network private-endpoint-connection list](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-list) to list the private endpoint connections for your storage account. Note down the 'Resource ID' of the private endpoint connection available in your storage account, in the first line of your output.
+
+```azurecli-interactive
+ az network private-endpoint-connection list -g testRG -n testingafdpl --type Microsoft.Storage/storageAccounts
+
+```
+
+2. Run [az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve) to approve the private endpoint connection.
+
+```azurecli-interactive
+ az network private-endpoint-connection approve --id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testRG/providers/Microsoft.Storage/storageAccounts/testingafdpl/privateEndpointConnections/testingafdpl.00000000-0000-0000-0000-000000000000
+
+```
+
+## Create Private Endpoint Connection to Web_Secondary
+
+When creating a private endpoint connection to the storage static website's secondary sub-resource, you need to add a **-secondary** suffix to the origin host header. For example, if your origin host header is `example.z13.web.core.windows.net`, you need to change it to `example-secondary.z13.web.core.windows.net`.
+
+Once the origin is added and the private endpoint connection is approved, you can test your private link connection to your storage static website.
++ ## Next steps Learn about [Private Link service with storage account](../storage/common/storage-private-endpoints.md).
governance Protect Resource Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/how-to/protect-resource-hierarchy.md
with policy assignments or Azure role assignments that are more suited to a new
1. Use the search bar to search for and select **Management groups**.
-1. On the root management group, select **details** next to the name of the management group.
+1. Select the root management group.
-1. Under **Settings**, select **Hierarchy settings**.
+1. Select **Settings** on the left side of the page.
1. Select the **Change default management group** button.
management group hierarchy. To create child management groups, a user requires t
1. Use the search bar to search for and select **Management groups**.
-1. On the root management group, select **details** next to the name of the management group.
+1. Select the root management group.
-1. Under **Settings**, select **Hierarchy settings**.
+1. Select **Settings** on the left side of the page.
-1. Turn on the **Require permissions for creating new management groups** toggle.
+1. Turn on the **Permissions for creating new management groups** toggle.
- If the **Require permissions for creating new management groups** toggle is unavailable, the cause is one of these conditions:
+ If the **Require write permissions for creating new management groups** toggle is unavailable, the cause is one of these conditions:
- The management group that you're viewing isn't the root management group. - Your security principal doesn't have the necessary permissions to alter the hierarchy settings.
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/manage.md
You can change the name of the management group by using the Azure portal, Azure
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All services** > **Management groups**.
+1. Select **All services**. In the **Filter services** text box, enter *Management groups* and select it from the list.
1. Select the management group that you want to rename.
To delete a management group, you must meet the following requirements:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All services** > **Management groups**.
+1. Select **All services**. In the **Filter services** text box, enter *Management groups* and select it from the list.
1. Select the management group that you want to delete.
You can view any management group if you have a direct or inherited Azure role o
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All services** > **Management groups**.
+1. Select **All services**. In the **Filter services** text box, enter *Management groups* and select it from the list.
1. The page for management group hierarchy appears. On this page, you can explore all the management groups and subscriptions that you have access to. Selecting the group name takes you to a
To see what permissions you have in the Azure portal, select the management grou
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All services** > **Management groups**.
+1. Select **All services**. In the **Filter services** text box, enter *Management groups* and select it from the list.
1. Select the management group that you want to be the parent. 1. At the top of the page, select **Add subscription**.
-1. Select the subscription in the list with the correct ID.
+1. From **Add subscription**, select the subscription in the list with the correct ID.
:::image type="content" source="./media/add_context_sub.png" alt-text="Screenshot of the box for selecting an existing subscription to add to a management group." border="false":::
To see what permissions you have in the Azure portal, select the management grou
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All services** > **Management groups**.
+1. Select **All services**. In the **Filter services** text box, enter *Management groups* and select it from the list.
1. Select the management group that's the current parent.
resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All services** > **Management groups**.
+1. Select **All services**. In the **Filter services** text box, enter *Management groups* and select it from the list.
1. Select the management group that you want to be the parent.
-1. At the top of the page, select **Add management group**.
+1. At the top of the page, select **Create**.
-1. On the **Add management group** pane, choose whether you want to use a new or existing management group:
+1. On the **Create management group** pane, choose whether you want to use a new or existing management group:
- Selecting **Create new** creates a new management group.
- - Selecting **Use existing** presents you with a dropdown list of all the management groups that you
- can move to this management group.
+ - Selecting **Use existing** presents you with a dropdown list of all the management groups that you can move to this management group.
:::image type="content" source="./media/add_context_MG.png" alt-text="Screenshot of the pane for adding a management group." border="false":::
GET https://management.azure.com/providers/Microsoft.Management/managementgroups
To learn more about management groups, see: -- [Create management groups to organize Azure resources](./create-management-group-portal.md)
+- [Quickstart: Create a management group](./create-management-group-portal.md)
- [Review management groups in the Azure PowerShell Az.Resources module](/powershell/module/az.resources#resources) - [Review management groups in the REST API](/rest/api/managementgroups/managementgroups) - [Review management groups in the Azure CLI](/cli/azure/account/management-group)
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Previously updated : 07/03/2024 Last updated : 09/05/2024
-# Azure Policy assignment structure
-Policy assignments define which resources are to be evaluated by a
-policy definition or initiaitve. Further, the policy assignment can determine the values of parameters for that group of
-resources at assignment time, making it possible to reuse policy definitions that address the same resource properties with different needs for compliance.
+# Azure Policy assignment structure
+Policy assignments define which resources are evaluated by a policy definition or initiative. Further, the policy assignment can determine the values of parameters for that group of resources at assignment time, making it possible to reuse policy definitions that address the same resource properties with different needs for compliance.
You use JavaScript Object Notation (JSON) to create a policy assignment. The policy assignment contains elements for:
For example, the following JSON shows a sample policy assignment request in _DoN
```json {
- "properties": {
- "displayName": "Enforce resource naming rules",
- "description": "Force resource names to begin with DeptA and end with -LC",
- "definitionVersion": "1.*.*",
- "metadata": {
- "assignedBy": "Cloud Center of Excellence"
- },
- "enforcementMode": "DoNotEnforce",
- "notScopes": [],
- "policyDefinitionId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyDefinitions/ResourceNaming",
- "nonComplianceMessages": [
- {
- "message": "Resource names must start with 'DeptA' and end with '-LC'."
- }
- ],
- "parameters": {
- "prefix": {
- "value": "DeptA"
- },
- "suffix": {
- "value": "-LC"
- }
- },
- "identity": {
- "type": "SystemAssigned"
- },
- "resourceSelectors": [],
- "overrides": []
- }
+ "properties": {
+ "displayName": "Enforce resource naming rules",
+ "description": "Force resource names to begin with DeptA and end with -LC",
+ "definitionVersion": "1.*.*",
+ "metadata": {
+ "assignedBy": "Cloud Center of Excellence"
+ },
+ "enforcementMode": "DoNotEnforce",
+ "notScopes": [],
+ "policyDefinitionId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyDefinitions/ResourceNaming",
+ "nonComplianceMessages": [
+ {
+ "message": "Resource names must start with 'DeptA' and end with '-LC'."
+ }
+ ],
+ "parameters": {
+ "prefix": {
+ "value": "DeptA"
+ },
+ "suffix": {
+ "value": "-LC"
+ }
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "resourceSelectors": [],
+ "overrides": []
+ }
} ```
-## Scope
-The scope used for assignment resource creation time is the primary driver of resource applicability. For more information on assignment scope, see
-> [Understand scope in Azure Policy](./scope.md#assignment-scopes).
+
+## Scope
+
+The scope used for assignment resource creation time is the primary driver of resource applicability. For more information on assignment scope, see [Understand scope in Azure Policy](./scope.md#assignment-scopes).
## Policy definition ID and version (preview)
-This field must be the full path name of either a policy definition or an initiative definition.
-`policyDefinitionId` is a string and not an array. The latest content of the assigned policy
-definition or initiative is retrieved each time the policy assignment is evaluated. It's
-recommended that if multiple policies are often assigned together, to use an
-[initiative](./initiative-definition-structure.md) instead.
-For built-in definitions and initiative, you can use specific the `definitionVersion` of which to assess on. By default, the version will set to the latest major version and autoingest minor and patch changes.
+This field must be the full path name of either a policy definition or an initiative definition. The `policyDefinitionId` is a string and not an array. The latest content of the assigned policy definition or initiative is retrieved each time the policy assignment is evaluated. If multiple policies are often assigned together, the recommendation is to use an [initiative](./initiative-definition-structure.md) instead.
-To autoingest any minor changes of the definition, the version number would be `#.*.*`. Wildcard represents autoingesting updates.
-To pin to a minor version path, the version format would be `#.#.*`.
-All patch changes must be autoinjested for security purposes. Patch changes are limited to text changes and break glass scenarios.
+For built-in definitions and initiatives, you can specify the `definitionVersion` to assess against. By default, the version is set to the latest major version and autoingests minor and patch changes.
+
+- To autoingest any minor changes of the definition, the version number would be `#.*.*`. The wildcard represents autoingesting updates.
+- To pin to a minor version path, the version format would be `#.#.*`.
+- All patch changes must be autoingested for security purposes. Patch changes are limited to text changes and break glass scenarios.
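For example, a minimal assignment fragment that pins to the `1.0` minor version line (so only patch updates are ingested automatically) could look like the following sketch; the built-in definition ID is a placeholder:

```json
{
  "properties": {
    "displayName": "Enforce resource naming rules",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{builtInDefinitionGUID}",
    "definitionVersion": "1.0.*"
  }
}
```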
## Display name and description
-You use **displayName** and **description** to identify the policy assignment and provide context
-for its use with the specific set of resources. **displayName** has a maximum length of _128_
-characters and **description** a maximum length of _512_ characters.
+You use `displayName` and `description` to identify the policy assignment and provide context for its use with the specific set of resources. `displayName` has a maximum length of _128_ characters and `description` a maximum length of _512_ characters.
## Metadata
-The optional `metadata` property stores information about the policy assignment. Customers can
-define any properties and values useful to their organization in `metadata`. However, there are some
-_common_ properties used by Azure Policy. Each `metadata` property has a limit of 1,024 characters.
+The optional `metadata` property stores information about the policy assignment. Customers can define any properties and values useful to their organization in `metadata`. However, there are some _common_ properties used by Azure Policy. Each `metadata` property has a limit of 1,024 characters.
### Common metadata properties - `assignedBy` (string): The friendly name of the security principal that created the assignment. - `createdBy` (string): The GUID of the security principal that created the assignment. - `createdOn` (string): The Universal ISO 8601 DateTime format of the assignment creation time.-- `updatedBy` (string): The friendly name of the security principal that updated the assignment, if
- any.
-- `updatedOn` (string): The Universal ISO 8601 DateTime format of the assignment update time, if
- any.
+- `updatedBy` (string): The friendly name of the security principal that updated the assignment, if any.
+- `updatedOn` (string): The Universal ISO 8601 DateTime format of the assignment update time, if any.
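As an illustration, a populated `metadata` block that mixes one organization-defined property (the `costCenter` name here is purely hypothetical) with the common properties above might look like this sketch; the GUID and timestamp are placeholders:

```json
"metadata": {
  "assignedBy": "Cloud Center of Excellence",
  "createdBy": "{principalObjectIdGUID}",
  "createdOn": "2024-09-05T00:00:00.0000000Z",
  "costCenter": "CC-1234"
}
```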
### Scenario specific metadata properties-- `parameterScopes` (object): A collection of key-value pairs where the key matches a
- [strongType](./definition-structure-parameters.md#strongtype) configured parameter name and the value defines
- the resource scope used in Portal to provide the list of available resources by matching
- _strongType_. Portal sets this value if the scope is different than the assignment scope. If set,
- an edit of the policy assignment in Portal automatically sets the scope for the parameter to this
- value. However, the scope isn't locked to the value and it can be changed to another scope.
- The following example of `parameterScopes` is for a _strongType_ parameter named
- `backupPolicyId` that sets a scope for resource selection when the assignment is edited in the
- Portal.
+- `parameterScopes` (object): A collection of key-value pairs where the key matches a [strongType](./definition-structure-parameters.md#strongtype) configured parameter name and the value defines the resource scope used in Portal to provide the list of available resources by matching _strongType_. Portal sets this value if the scope is different than the assignment scope. If set, an edit of the policy assignment in Portal automatically sets the scope for the parameter to this value. However, the scope isn't locked to the value and it can be changed to another scope.
+
+ The following example of `parameterScopes` is for a _strongType_ parameter named `backupPolicyId` that sets a scope for resource selection when the assignment is edited in the portal.
```json "metadata": { "parameterScopes": {
- "backupPolicyId": "/subscriptions/{SubscriptionID}/resourcegroups/{ResourceGroupName}"
+ "backupPolicyId": "/subscriptions/{SubscriptionID}/resourcegroups/{ResourceGroupName}"
} } ```
_common_ properties used by Azure Policy. Each `metadata` property has a limit o
## Resource selectors
-The optional `resourceSelectors` property facilitates safe deployment practices (SDP) by enabling
-you to gradually roll out policy assignments based on factors like resource location, resource type,
-or whether a resource has a location. When resource selectors are used, Azure Policy will only
-evaluate resources that are applicable to the specifications made in the resource selectors.
-Resource selectors can also be used to narrow down the scope of [exemptions](exemption-structure.md) in the same way.
+The optional `resourceSelectors` property facilitates safe deployment practices (SDP) by enabling you to gradually roll out policy assignments based on factors like resource location, resource type, or whether a resource has a location. When resource selectors are used, Azure Policy only evaluates resources that are applicable to the specifications made in the resource selectors. Resource selectors can also be used to narrow down the scope of [exemptions](exemption-structure.md) in the same way.
In the following example scenario, the new policy assignment is evaluated only if the resource's location is either **East US** or **West US**. ```json {
- "properties": {
- "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
- "definitionVersion": "1.1.*",
- "resourceSelectors": [
- {
- "name": "SDPRegions",
- "selectors": [
- {
- "kind": "resourceLocation",
- "in": [ "eastus", "westus" ]
- }
- ]
- }
+ "properties": {
+ "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
+ "definitionVersion": "1.1.*",
+ "resourceSelectors": [
+ {
+ "name": "SDPRegions",
+ "selectors": [
+ {
+ "kind": "resourceLocation",
+ "in": [
+ "eastus",
+ "westus"
+ ]
+ }
]
- },
- "systemData": { ... },
- "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/ResourceLimit",
- "type": "Microsoft.Authorization/policyAssignments",
- "name": "ResourceLimit"
+ }
+ ]
+ },
+ "systemData": { ...
+ },
+ "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/ResourceLimit",
+ "type": "Microsoft.Authorization/policyAssignments",
+ "name": "ResourceLimit"
} ```
When you're ready to expand the evaluation scope for your policy, you just have
```json {
- "properties": {
- "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
- "definitionVersion": "1.1.*",
- "resourceSelectors": [
- {
- "name": "SDPRegions",
- "selectors": [
- {
- "kind": "resourceLocation",
- "in": [ "eastus", "westus", "centralus", "southcentralus" ]
- }
- ]
- }
+ "properties": {
+ "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
+ "definitionVersion": "1.1.*",
+ "resourceSelectors": [
+ {
+ "name": "SDPRegions",
+ "selectors": [
+ {
+ "kind": "resourceLocation",
+ "in": [
+ "eastus",
+ "westus",
+ "centralus",
+ "southcentralus"
+ ]
+ }
]
- },
- "systemData": { ... },
- "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/ResourceLimit",
- "type": "Microsoft.Authorization/policyAssignments",
- "name": "ResourceLimit"
+ }
+ ]
+ },
+ "systemData": { ...
+ },
+ "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/ResourceLimit",
+ "type": "Microsoft.Authorization/policyAssignments",
+ "name": "ResourceLimit"
} ``` Resource selectors have the following properties:+ - `name`: The name of the resource selector. - `selectors`: (Optional) The property used to determine which subset of resources applicable to the policy assignment should be evaluated for compliance.
Resource selectors have the following properties:
- `notIn`: The list of not-allowed values for the specified `kind`. Can't be used with `in`. Can contain up to 50 values.
-A **resource selector** can contain multiple **selectors**. To be applicable to a resource selector, a resource must meet requirements specified by all its selectors. Further, up to 10 **resource selectors** can be specified in a single assignment. In-scope resources are evaluated when they satisfy any one of these resource selectors.
+A **resource selector** can contain multiple `selectors`. To be applicable to a resource selector, a resource must meet requirements specified by all its selectors. Further, up to 10 `resourceSelectors` can be specified in a single assignment. In-scope resources are evaluated when they satisfy any one of these resource selectors.
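Selectors can also exclude values with `notIn`. The following fragment is a sketch that assumes `resourceType` is one of the supported selector kinds; the selector name and resource type shown are illustrative only:

```json
"resourceSelectors": [
  {
    "name": "SkipClassicCompute",
    "selectors": [
      {
        "kind": "resourceType",
        "notIn": [ "Microsoft.ClassicCompute/virtualMachines" ]
      }
    ]
  }
]
```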
## Overrides
The optional `overrides` property allows you to change the effect of a policy de
A common use case for overrides on effect is policy initiatives with a large number of associated policy definitions. In this situation, managing multiple policy effects can consume significant administrative effort, especially when the effect needs to be updated from time to time. Overrides can be used to simultaneously update the effects of multiple policy definitions within an initiative.
-Let's take a look at an example. Imagine you have a policy initiative named _CostManagement_ that includes a custom policy definition with `policyDefinitionReferenceId` _corpVMSizePolicy_ and a single effect of `audit`. Suppose you want to assign the _CostManagement_ initiative, but don't yet want to see compliance reported for this policy. This policy's 'audit' effect can be replaced by 'disabled' through an override on the initiative assignment, as shown in the following sample:
+Let's take a look at an example. Imagine you have a policy initiative named _CostManagement_ that includes a custom policy definition with `policyDefinitionReferenceId` _corpVMSizePolicy_ and a single effect of `audit`. Suppose you want to assign the _CostManagement_ initiative, but don't yet want to see compliance reported for this policy. This policy's `audit` effect can be replaced by `disabled` through an override on the initiative assignment, as shown in the following sample:
```json {
- "properties": {
- "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policySetDefinitions/CostManagement",
- "overrides": [
- {
- "kind": "policyEffect",
- "value": "disabled",
- "selectors": [
- {
- "kind": "policyDefinitionReferenceId",
- "in": [ "corpVMSizePolicy" ]
- }
- ]
- }
+ "properties": {
+ "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policySetDefinitions/CostManagement",
+ "overrides": [
+ {
+ "kind": "policyEffect",
+ "value": "disabled",
+ "selectors": [
+ {
+ "kind": "policyDefinitionReferenceId",
+ "in": [
+ "corpVMSizePolicy"
+ ]
+ }
]
- },
- "systemData": { ... },
- "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
- "type": "Microsoft.Authorization/policyAssignments",
- "name": "CostManagement"
+ }
+ ]
+ },
+ "systemData": { ...
+ },
+ "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
+ "type": "Microsoft.Authorization/policyAssignments",
+ "name": "CostManagement"
} ``` Another common use case for overrides is rolling out a new version of a definition. For recommended steps on safely updating an assignment version, see [Policy Safe deployment](../how-to/policy-safe-deployment-practices.md#steps-for-safely-updating-built-in-definition-version-within-azure-policy-assignment). Overrides have the following properties:-- `kind`: The property the assignment will override. The supported kinds are `policyEffect` and `policyVersion`.+
+- `kind`: The property the assignment overrides. The supported kinds are `policyEffect` and `policyVersion`.
- `value`: The new value that overrides the existing value. For `kind: policyEffect`, the supported values are [effects](effect-basics.md). For `kind: policyVersion`, the supported version number must be greater than or equal to the `definitionVersion` specified in the assignment. - `selectors`: (Optional) The property used to determine what scope of the policy assignment should take on the override.
- - `kind`: The property of a selector that describes what characteristic will narrow down the scope of the override. Allowed values for `kind: policyEffect`:
+ - `kind`: The property of a selector that describes which characteristic narrows down the scope of the override. Allowed values for `kind: policyEffect`:
- - `policyDefinitionReferenceId`: This specifies which policy definitions within an initiative assignment should take on the effect override.
+ - `policyDefinitionReferenceId`: This property specifies which policy definitions within an initiative assignment should take on the effect override.
    - `resourceLocation`: This property is used to select resources based on their location. Can't be used in the same resource selector as `resourceWithoutLocation`.
- Allowed value for `kind: policyVersion`:
+ Allowed value for `kind: policyVersion`:
    - `resourceLocation`: This property is used to select resources based on their location. Can't be used in the same resource selector as `resourceWithoutLocation`.
Overrides have the following properties:
- `notIn`: The list of not-allowed values for the specified `kind`. Can't be used with `in`. Can contain up to 50 values.
-One override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they're specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list (in cases where the effect is [parameterized](./definition-structure-parameters.md)).
+One override can be used to replace the effect of many policies by specifying multiple values in the `policyDefinitionReferenceId` array. A single override can be used for up to 50 `policyDefinitionReferenceId` values, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they're specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list (in cases where the effect is [parameterized](./definition-structure-parameters.md)).
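For the `policyVersion` kind, a sketch of an override that evaluates a newer definition version only in one region might look like the following; the assignment details are placeholders, and the override value is chosen to satisfy the rule that it must be greater than or equal to the assignment's `definitionVersion`:

```json
{
  "properties": {
    "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
    "definitionVersion": "1.1.*",
    "overrides": [
      {
        "kind": "policyVersion",
        "value": "2.0.*",
        "selectors": [
          {
            "kind": "resourceLocation",
            "in": [ "eastus" ]
          }
        ]
      }
    ]
  }
}
```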
## Enforcement mode
-The **enforcementMode** property provides customers the ability to test the outcome of a policy on
-existing resources without initiating the policy effect or triggering entries in the
-[Azure Activity log](../../../azure-monitor/essentials/platform-logs-overview.md).
+The `enforcementMode` property provides customers the ability to test the outcome of a policy on existing resources without initiating the policy effect or triggering entries in the [Azure Activity log](../../../azure-monitor/essentials/platform-logs-overview.md).
-This scenario is
-commonly referred to as "What If" and aligns to safe deployment practices. **enforcementMode** is
-different from the [Disabled](./effects.md#disabled) effect, as that effect prevents resource
-evaluation from happening at all.
+This scenario is commonly referred to as _What If_ and aligns to safe deployment practices. `enforcementMode` is different from the [Disabled](./effects.md#disabled) effect, as that effect prevents resource evaluation from happening at all.
This property has the following values:
This property has the following values:
|Enabled |Default |string |Yes |Yes |The policy effect is enforced during resource creation or update. |
|Disabled |DoNotEnforce |string |Yes |No |The policy effect isn't enforced during resource creation or update. |
-If **enforcementMode** isn't specified in a policy or initiative definition, the value _Default_ is
-used. [Remediation tasks](../how-to/remediate-resources.md) can be started for
-[deployIfNotExists](./effects.md#deployifnotexists) policies, even when **enforcementMode** is set
-to _DoNotEnforce_.
+If `enforcementMode` isn't specified in a policy or initiative definition, the value _Default_ is used. [Remediation tasks](../how-to/remediate-resources.md) can be started for [deployIfNotExists](./effects.md#deployifnotexists) policies, even when `enforcementMode` is set to _DoNotEnforce_.
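For example, to run the _What If_ style evaluation described above, an assignment only needs this property set in its `properties` body, as in the sample at the top of this article:

```json
"enforcementMode": "DoNotEnforce"
```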
## Excluded scopes
-The **scope** of the assignment includes all child resource containers and child resources. If a
-child resource container or child resource shouldn't have the definition applied, each can be
-_excluded_ from evaluation by setting **notScopes**. This property is an array to enable excluding
-one or more resource containers or resources from evaluation. **notScopes** can be added or updated
-after creation of the initial assignment.
+The **scope** of the assignment includes all child resource containers and child resources. If a child resource container or child resource shouldn't have the definition applied, each can be _excluded_ from evaluation by setting `notScopes`. This property is an array to enable excluding one or more resource containers or resources from evaluation. `notScopes` can be added or updated after creation of the initial assignment.
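For instance, an assignment scoped to a subscription could carve a single resource group out of evaluation with a `notScopes` entry like the following sketch; the subscription ID and resource group name are placeholders:

```json
"notScopes": [
  "/subscriptions/{mySubscriptionID}/resourceGroups/{excludedResourceGroup}"
]
```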
> [!NOTE] > An _excluded_ resource is different from an _exempted_ resource. For more information, see
after creation of the initial assignment.
## Non-compliance messages
-To set a custom message that describes why a resource is non-compliant with the policy or initiative
-definition, set `nonComplianceMessages` in the assignment definition. This node is an array of
-`message` entries. This custom message is in addition to the default error message for
-non-compliance and is optional.
+To set a custom message that describes why a resource is non-compliant with the policy or initiative definition, set `nonComplianceMessages` in the assignment definition. This node is an array of `message` entries. This custom message is in addition to the default error message for non-compliance and is optional.
> [!IMPORTANT] > Custom messages for non-compliance are only supported on definitions or initiatives with
non-compliance and is optional.
```json "nonComplianceMessages": [
- {
- "message": "Default message"
- }
+ {
+ "message": "Default message"
+ }
] ```
-If the assignment is for an initiative, different messages can be configured for each policy
-definition in the initiative. The messages use the `policyDefinitionReferenceId` value configured in
-the initiative definition. For details, see
-[policy definitions properties](./initiative-definition-structure.md#policy-definition-properties).
+If the assignment is for an initiative, different messages can be configured for each policy definition in the initiative. The messages use the `policyDefinitionReferenceId` value configured in the initiative definition. For more information, see [policy definitions properties](./initiative-definition-structure.md#policy-definition-properties).
```json "nonComplianceMessages": [
- {
- "message": "Default message"
- },
- {
- "message": "Message for just this policy definition by reference ID",
- "policyDefinitionReferenceId": "10420126870854049575"
- }
+ {
+ "message": "Default message"
+ },
+ {
+ "message": "Message for just this policy definition by reference ID",
+ "policyDefinitionReferenceId": "10420126870854049575"
+ }
] ``` ## Parameters
-This segment of the policy assignment provides the values for the parameters defined in the
-[policy definition or initiative definition](./definition-structure-parameters.md). This design
-makes it possible to reuse a policy or initiative definition with different resources, but check for
-different business values or outcomes.
+This segment of the policy assignment provides the values for the parameters defined in the [policy definition or initiative definition](./definition-structure-parameters.md). This design makes it possible to reuse a policy or initiative definition with different resources, but check for different business values or outcomes.
```json "parameters": {
- "prefix": {
- "value": "DeptA"
- },
- "suffix": {
- "value": "-LC"
- }
+ "prefix": {
+ "value": "DeptA"
+ },
+ "suffix": {
+ "value": "-LC"
+ }
} ```
-In this example, the parameters previously defined in the policy definition are `prefix` and
-`suffix`. This particular policy assignment sets `prefix` to **DeptA** and `suffix` to **-LC**. The
-same policy definition is reusable with a different set of parameters for a different department,
-reducing the duplication and complexity of policy definitions while providing flexibility.
+In this example, the parameters previously defined in the policy definition are `prefix` and `suffix`. This particular policy assignment sets `prefix` to **DeptA** and `suffix` to **-LC**. The same policy definition is reusable with a different set of parameters for a different department, reducing the duplication and complexity of policy definitions while providing flexibility.
## Identity
-For policy assignments with effect set to **deployIfNotExist** or **modify**, it's required to have an identity property to do remediation on non-compliant resources. When using an identity, the user must also specify a location for the assignment.
+For policy assignments with the effect set to `deployIfNotExists` or `modify`, an identity property is required to remediate non-compliant resources. When an assignment uses an identity, the user must also specify a location for the assignment.
> [!NOTE] > A single policy assignment can be associated with only one system- or user-assigned managed identity. However, that identity can be assigned more than one role if necessary.
For policy assignments with effect set to **deployIfNotExist** or **modify**, it
```json # System-assigned identity "identity": {
- "type": "SystemAssigned"
- }
+ "type": "SystemAssigned"
+}
# User-assigned identity "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/SubscriptionID/resourceGroups/testResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/test-identity": {}
- }
- },
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/SubscriptionID/resourceGroups/{rgName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/test-identity": {}
+ }
+},
``` ## Next steps
For policy assignments with effect set to **deployIfNotExist** or **modify**, it
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Compliance States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/compliance-states.md
Title: Azure Policy compliance states description: This article describes the concept of compliance states in Azure Policy. Previously updated : 04/05/2023 Last updated : 09/05/2024
## How compliance works
-When initiative or policy definitions are assigned, Azure Policy determines which resources are [applicable](./policy-applicability.md) then evaluates those which haven't been [excluded](./assignment-structure.md#excluded-scopes) or [exempted](./exemption-structure.md). Evaluation yields **compliance states** based on conditions in the policy rule and each resources' adherence to those requirements.
+When initiative or policy definitions are assigned, Azure Policy determines which resources are [applicable](./policy-applicability.md), then evaluates those resources that aren't [excluded](./assignment-structure.md#excluded-scopes) or [exempted](./exemption-structure.md). Evaluation yields **compliance states** based on conditions in the policy rule and each resource's adherence to those requirements.
## Available compliance states ### Non-compliant
-Policy assignments with `audit`, `auditIfNotExists`, or `modify` effects are considered non-compliant for _new_, _updated_, or _existing_ resources when the conditions of the policy rule evaluate to **TRUE**.
+Policy assignments with `audit`, `auditIfNotExists`, or `modify` effects are considered non-compliant for _new_, _updated_, or _existing_ resources when the conditions of the policy rule evaluate to `TRUE`.
-Policy assignments with `append`, `deny`, and `deployIfNotExists` effects are considered non-compliant for _existing_ resources when the conditions of the policy rule evaluate to **TRUE**. _New_ and _updated_ resources are automatically remediated or denied at request time to enforce compliance. When a previously existing non-compliant resource is updated, the compliance state remains non-compliant until the resource deployment and Policy evaluation complete.
+Policy assignments with `append`, `deny`, and `deployIfNotExists` effects are considered non-compliant for _existing_ resources when the conditions of the policy rule evaluate to `TRUE`. _New_ and _updated_ resources are automatically remediated or denied at request time to enforce compliance. When a previously existing non-compliant resource is updated, the compliance state remains non-compliant until the resource deployment and Policy evaluation complete.
> [!NOTE]
-> The DeployIfNotExist and AuditIfNotExist effects require the IF statement to be TRUE and the
+> The `deployIfNotExists` and `auditIfNotExists` effects require the IF statement to be TRUE and the
> existence condition to be FALSE to be non-compliant. When TRUE, the IF condition triggers > evaluation of the existence condition for the related resources. Policy assignments with `manual` effects are considered non-compliant under two circumstances:
-1. The policy definition has a default compliance state of non-compliant and there is no active [attestation](./attestation-structure.md) for the applicable resource stating otherwise.
-1. The resource has been attested as non-compliant.
-To determine
-the reason a resource is non-compliant or to find the change responsible, see
-[Determine non-compliance](../how-to/determine-non-compliance.md). To [remediate](./remediation-structure.md) non-compliant resources for `deployIfNotExists` and `modify` policies, see [Remediate non-compliant resources with Azure Policy](../how-to/remediate-resources.md).
+1. The policy definition has a default compliance state of non-compliant and there's no active [attestation](./attestation-structure.md) for the applicable resource stating otherwise.
+1. The resource was attested as non-compliant.
+
+To determine the reason a resource is non-compliant or to find the change responsible, see [Determine causes of non-compliance](../how-to/determine-non-compliance.md). To [remediate](./remediation-structure.md) non-compliant resources for `deployIfNotExists` and `modify` policies, see [Remediate non-compliant resources with Azure Policy](../how-to/remediate-resources.md).
### Compliant
-Policy assignments with `append`, `audit`, `auditIfNotExists`, `deny`, `deployIfNotExists`, or `modify` effects are considered compliant for _new_, _updated_, or _existing_ resources when the conditions of the policy rule evaluate to **FALSE**.
+Policy assignments with `append`, `audit`, `auditIfNotExists`, `deny`, `deployIfNotExists`, or `modify` effects are considered compliant for _new_, _updated_, or _existing_ resources when the conditions of the policy rule evaluate to `FALSE`.
Policy assignments with `manual` effects are considered compliant under two circumstances:
-1. The policy definition has a default compliance state of compliant and there is no active [attestation](./attestation-structure.md) for the applicable resource stating otherwise.
-1. The resource has been attested as compliant.
+
+1. The policy definition has a default compliance state of compliant and there's no active [attestation](./attestation-structure.md) for the applicable resource stating otherwise.
+1. The resource was attested as compliant.
### Error
A policy assignment is considered conflicting when there are two or more policy
An applicable resource has a compliance state of exempt for a policy assignment when it is in the scope of an [exemption](./exemption-structure.md). > [!NOTE]
-> _Exempt_ is different than _excluded_. For more details, see [scope](./scope.md).
+> _Exempt_ is different than _excluded_. For more information, see [Understand scope in Azure Policy](./scope.md).
### Unknown
- Unknown is the default compliance state for definitions with `manual` effect, unless the default has been explicitly set to compliant or non-compliant. This state indicates that an [attestation](./attestation-structure.md) of compliance is warranted. This compliance state only occurs for policy assignments with `manual` effect.
+Unknown is the default compliance state for definitions with `manual` effect, unless the default was explicitly set to compliant or non-compliant. This state indicates that an [attestation](./attestation-structure.md) of compliance is warranted. This compliance state only occurs for policy assignments with `manual` effect.
### Protected
- Protected state signifies that the resource is covered under an assignment with a [denyAction](./effects.md#denyaction) effect.
+Protected state signifies that the resource is covered under an assignment with a [denyAction](./effect-deny-action.md) effect.
### Not registered
-This compliance state is visible in portal when the Azure Policy Resource Provider hasn't been registered, or when the account logged in doesn't have permission to read compliance data.
+This compliance state is visible in the Azure portal when the Azure Policy Resource Provider isn't registered, or when the signed-in account doesn't have permission to read compliance data.
> [!NOTE] > If compliance state is being reported as **Not registered**, verify that the
-> **Microsoft.PolicyInsights** Resource Provider is registered and that the user has the appropriate Azure role-based access control (Azure RBAC) permissions as described in
+> `Microsoft.PolicyInsights` Resource Provider is registered and that the user has the appropriate Azure role-based access control (Azure RBAC) permissions as described in
> [Azure RBAC permissions in Azure Policy](../overview.md#azure-rbac-permissions-in-azure-policy).
-> To register Microsoft.PolicyInsights, [follow these steps](../../../azure-resource-manager/management/resource-providers-and-types.md).
+> To register `Microsoft.PolicyInsights`, follow the steps in [Azure resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md).
### Not started
-This compliance state indicates that the evaluation cycle hasn't started for the policy or resource.
+This compliance state indicates that the evaluation cycle hasn't started for the policy or resource.
## Example
-Now that you have an understanding of what compliance states exist and what each one means, let's look at an example using compliant and non-compliant states.
+Now that you have an understanding of what compliance states exist and what each one means, let's look at an example using compliant and non-compliant states.
Suppose you have a resource group - ContosoRG, with some storage accounts (highlighted in red) that are exposed to public networks.
Suppose you have a resource group - ContosoRG, with some storage accounts
Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three are blue, while storage accounts two, four, and five are red. :::image-end:::
-In this example, you need to be wary of security risks. Assume you assign a policy definition that audits for storage accounts that are exposed to public networks, and that no exemptions are created for this assignment. The policy checks for applicable resources (which includes all storage accounts in the ContosoRG resource group), then evaluates those resources that aren't excluded from evaluation. It audits the three storage accounts exposed to public networks, changing their compliance states to **Non-compliant.** The remainder are marked **compliant**.
+In this example, you need to be wary of security risks. Assume you assign a policy definition that audits for storage accounts that are exposed to public networks, and that no exemptions are created for this assignment. The policy checks for applicable resources (which include all storage accounts in the ContosoRG resource group), then evaluates those resources that aren't excluded from evaluation. It audits the three storage accounts exposed to public networks, changing their compliance states to **Non-compliant**. The remaining storage accounts are marked **Compliant**.
:::image type="complex" source="../media/getting-compliance-data/resource-group03.png" alt-text="Diagram of storage account compliance in the Contoso R G resource group." border="false"::: Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three now have green checkmarks beneath them, while storage accounts two, four, and five now have red warning signs beneath them.
There are several ways to view aggregated compliance results in the portal:
### Comparing different compliance states
-So how is the aggregate compliance state determined if multiple resources or policies have different compliance states themselves? Azure Policy ranks each compliance state so that one "wins" over another in this situation. The rank order is:
+So how is the aggregate compliance state determined if multiple resources or policies have different compliance states themselves? Azure Policy ranks each compliance state so that one _wins_ over another in this situation. The rank order is:
+ 1. Non-compliant 1. Compliant 1. Error
So how is the aggregate compliance state determined if multiple resources or pol
> [!NOTE] > [Not started](#not-started) and [not registered](#not-registered) aren't considered in compliance rollup calculations.
-With this ranking, if there are both non-compliant and compliant states, then the rolled up aggregate would be non-compliant, and so on. Let's look at an example:
+With this rank order, if there are both non-compliant and compliant states, then the rolled up aggregate would be non-compliant, and so on. Let's look at an example:
Assume an initiative contains 10 policies, and a resource is exempt from one policy but compliant to the remaining nine. Because a compliant state has a higher rank than an exempted state, the resource would register as compliant in the rolled-up summary of the initiative. So, a resource only shows as exempt for the entire initiative if it's exempt from, or has unknown compliance to, every other single applicable policy in that initiative. On the other extreme, a resource that is non-compliant to at least one applicable policy in the initiative has an overall compliance state of non-compliant, regardless of the remaining applicable policies.
-### Compliance percentage
+### Compliance percentage
-The compliance percentage is determined by dividing **Compliant**, **Exempt**, and **Unknown** resources by _total resources_. _Total resources_ include resources with **Compliant**, **Non-compliant**, **Unknown**,
-**Exempt**, **Conflicting**, and **Error** states.
+The compliance percentage is determined by dividing **Compliant**, **Exempt**, and **Unknown** resources by _total resources_. _Total resources_ include resources with **Compliant**, **Non-compliant**, **Unknown**, **Exempt**, **Conflicting**, and **Error** states.
```text overall compliance % = (compliant + exempt + unknown + protected) / (compliant + exempt + unknown + non-compliant + conflicting + error + protected) ```
-In the image shown, there are 20 distinct resources that are applicable and only one is **Non-compliant**.
-The overall resource compliance is 95% (19 out of 20).
+In the image shown, there are 20 distinct resources that are applicable and only one is **Non-compliant**. The overall resource compliance is 95% (19 out of 20).
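Substituting that breakdown into the formula, and assuming the other 19 resources are all in the **Compliant** state (any mix of compliant, exempt, unknown, and protected resources in the numerator yields the same total):

```text
overall compliance % = (19 + 0 + 0 + 0) / (19 + 0 + 0 + 1 + 0 + 0 + 0) = 19 / 20 = 95%
```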
:::image type="content" source="../media/getting-compliance-data/simple-compliance.png" alt-text="Screenshot of policy compliance details from Compliance page." border="false":::
The overall resource compliance is 95% (19 out of 20).
- Learn how to [get compliance data](../how-to/get-compliance-data.md) - Learn how to [determine causes of non-compliance](../how-to/determine-non-compliance.md)-- Get compliance data through [ARG query samples](../samples/resource-graph-samples.md)
+- Get compliance data through [Azure Resource Graph sample queries for Azure Policy](../samples/resource-graph-samples.md)
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
Title: Details of the policy exemption structure description: Describes the policy exemption definition used by Azure Policy to exempt resources from evaluation of initiatives or definitions. Previously updated : 11/03/2022 Last updated : 09/05/2024 + # Azure Policy exemption structure
-The Azure Policy exemptions feature is used to _exempt_ a resource hierarchy or an
-individual resource from evaluation of initiatives or definitions. Resources that are _exempt_ count
-toward overall compliance, but can't be evaluated or have a temporary waiver. For more information,
-see [Understand applicability in Azure Policy](./policy-applicability.md). Azure Policy exemptions also work with the following
-[Resource Manager modes](./definition-structure.md#resource-manager-modes): Microsoft.Kubernetes.Data, Microsoft.KeyVault.Data and Microsoft.Network.Data.
+The Azure Policy exemptions feature is used to _exempt_ a resource hierarchy or an individual resource from evaluation of initiatives or definitions. Resources that are _exempt_ count toward overall compliance, but can't be evaluated or have a temporary waiver. For more information, see [Understand applicability in Azure Policy](./policy-applicability.md). Azure Policy exemptions also work with the following [Resource Manager modes](./definition-structure.md#resource-manager-modes): `Microsoft.Kubernetes.Data`, `Microsoft.KeyVault.Data`, and `Microsoft.Network.Data`.
You use JavaScript Object Notation (JSON) to create a policy exemption. The policy exemption contains elements for:
You use JavaScript Object Notation (JSON) to create a policy exemption. The poli
- [assignment scope validation](#assignment-scope-validation-preview)
-A policy exemption is created as a child object on the resource hierarchy or the individual resource granted the exemption. Exemptions cannot be created at the Resource Provider mode component level.
-If the parent resource to which the exemption applies is removed, then the exemption is removed as well.
+A policy exemption is created as a child object on the resource hierarchy or the individual resource granted the exemption. Exemptions can't be created at the Resource Provider mode component level. If the parent resource to which the exemption applies is removed, then the exemption is removed as well.
-For example, the following JSON shows a policy exemption in the **waiver** category of a resource to
-an initiative assignment named `resourceShouldBeCompliantInit`. The resource is _exempt_ from only
-two of the policy definitions in the initiative, the `customOrgPolicy` custom policy definition
-( `policyDefinitionReferenceId`: `requiredTags`) and the **Allowed locations** built-in policy definition ( `policyDefinitionReferenceId` : `allowedLocations`):
+For example, the following JSON shows a policy exemption in the **waiver** category of a resource to an initiative assignment named `resourceShouldBeCompliantInit`. The resource is _exempt_ from only two of the policy definitions in the initiative, the `customOrgPolicy` custom policy definition (`policyDefinitionReferenceId`: `requiredTags`) and the **Allowed locations** built-in policy definition (`policyDefinitionReferenceId`: `allowedLocations`):
```json {
- "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.Authorization/policyExemptions/resourceIsNotApplicable",
- "apiVersion": "2020-07-01-preview",
- "name": "resourceIsNotApplicable",
- "type": "Microsoft.Authorization/policyExemptions",
- "properties": {
- "displayName": "This resource is scheduled for deletion",
- "description": "This resources is planned to be deleted by end of quarter and has been granted a waiver to the policy.",
- "metadata": {
- "requestedBy": "Storage team",
- "approvedBy": "IA",
- "approvedOn": "2020-07-26T08:02:32.0000000Z",
- "ticketRef": "4baf214c-8d54-4646-be3f-eb6ec7b9bc4f"
- },
- "policyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit",
- "policyDefinitionReferenceIds": [
- "requiredTags",
- "allowedLocations"
- ],
- "exemptionCategory": "waiver",
- "expiresOn": "2020-12-31T23:59:00.0000000Z",
- "assignmentScopeValidation": "Default"
- }
+ "id": "/subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.Authorization/policyExemptions/resourceIsNotApplicable",
+ "apiVersion": "2020-07-01-preview",
+ "name": "resourceIsNotApplicable",
+ "type": "Microsoft.Authorization/policyExemptions",
+ "properties": {
+ "displayName": "This resource is scheduled for deletion",
+    "description": "This resource is planned to be deleted by end of quarter and has been granted a waiver to the policy.",
+ "metadata": {
+ "requestedBy": "Storage team",
+ "approvedBy": "IA",
+ "approvedOn": "2020-07-26T08:02:32.0000000Z",
+ "ticketRef": "4baf214c-8d54-4646-be3f-eb6ec7b9bc4f"
+ },
+ "policyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit",
+    "policyDefinitionReferenceIds": [
+ "requiredTags",
+ "allowedLocations"
+ ],
+ "exemptionCategory": "waiver",
+ "expiresOn": "2020-12-31T23:59:00.0000000Z",
+ "assignmentScopeValidation": "Default"
+ }
} ``` ## Display name and description
-You use **displayName** and **description** to identify the policy exemption and provide context for
-its use with the specific resource. **displayName** has a maximum length of _128_ characters and
-**description** a maximum length of _512_ characters.
+You use `displayName` and `description` to identify the policy exemption and provide context for its use with the specific resource. `displayName` has a maximum length of _128_ characters and `description` a maximum length of _512_ characters.
## Metadata
-The **metadata** property allows creating any child property needed for storing relevant
-information. In the example, properties **requestedBy**, **approvedBy**, **approvedOn**, and
-**ticketRef** contains customer values to provide information on who requested the exemption, who
-approved it and when, and an internal tracking ticket for the request. These **metadata** properties
-are examples, but they aren't required and **metadata** isn't limited to these child properties.
+The `metadata` property allows creating any child property needed for storing relevant information. In the example, properties `requestedBy`, `approvedBy`, `approvedOn`, and `ticketRef` contain customer values to provide information on who requested the exemption, who approved it and when, and an internal tracking ticket for the request. These `metadata` properties are examples, but they aren't required and `metadata` isn't limited to these child properties.
## Policy assignment ID
-This field must be the full path name of either a policy assignment or an initiative assignment.
-`policyAssignmentId` is a string and not an array. This property defines which assignment the parent
-resource hierarchy or individual resource is _exempt_ from.
+This field must be the full path name of either a policy assignment or an initiative assignment. The `policyAssignmentId` is a string and not an array. This property defines which assignment the parent resource hierarchy or individual resource is _exempt_ from.
## Policy definition IDs
-If the `policyAssignmentId` is for an initiative assignment, the **policyDefinitionReferenceIds** property may be used to specify which policy definition(s) in the initiative the subject resource
-has an exemption to. As the resource may be exempted from one or more included policy definitions,
-this property is an _array_. The values must match the values in the initiative definition in the
-`policyDefinitions.policyDefinitionReferenceId` fields.
+If the `policyAssignmentId` is for an initiative assignment, the `policyDefinitionReferenceIds` property might be used to specify which policy definitions in the initiative the subject resource has an exemption to. As the resource might be exempted from one or more included policy definitions, this property is an _array_. The values must match the values in the initiative definition in the `policyDefinitions.policyDefinitionReferenceId` fields.
## Exemption category Two exemption categories exist and are used to group exemptions: -- **Mitigated**: The exemption is granted because the policy intent is met through another method.-- **Waiver**: The exemption is granted because the non-compliance state of the resource is
- temporarily accepted. Another reason to use this category is for a resource or resource hierarchy
- that should be excluded from one or more definitions in an initiative, but shouldn't be excluded
- from the entire initiative.
+- Mitigated: The exemption is granted because the policy intent is met through another method.
+- Waiver: The exemption is granted because the non-compliance state of the resource is temporarily accepted. Another reason to use this category is to exclude a resource or resource hierarchy from one or more definitions in an initiative when it shouldn't be excluded from the entire initiative.
## Expiration
-To set when a resource hierarchy or an individual resource is no longer _exempt_ from an assignment,
-set the **expiresOn** property. This optional property must be in the Universal ISO 8601 DateTime
-format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
+To set when a resource hierarchy or an individual resource is no longer _exempt_ from an assignment, set the `expiresOn` property. This optional property must be in the Universal ISO 8601 DateTime format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
> [!NOTE]
-> The policy exemptions isn't deleted when the `expiresOn` date is reached. The object is preserved
-> for record-keeping, but the exemption is no longer honored.
+> The policy exemption isn't deleted when the `expiresOn` date is reached. The object is preserved for record-keeping, but the exemption is no longer honored.
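For example, an exemption set to lapse at the end of 2025 (an arbitrary illustrative date) would include:

```json
"expiresOn": "2025-12-31T23:59:00.0000000Z"
```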
## Resource selectors
-Exemptions support an optional property `resourceSelectors`. This property works the same way in exemptions as it does in assignments, allowing for gradual rollout or rollback of an _exemption_ to certain subsets of resources in a controlled manner based on resource type, resource location, or whether the resource has a location. More details about how to use resource selectors can be found in the [assignment structure](assignment-structure.md#resource-selectors). Here is an example exemption JSON, which uses resource selectors. In this example, only resources in `westcentralus` will be exempt from the policy assignment:
+Exemptions support an optional property `resourceSelectors` that works the same way in exemptions as it does in assignments. The property allows for gradual rollout or rollback of an _exemption_ to certain subsets of resources in a controlled manner based on resource type, resource location, or whether the resource has a location. More details about how to use resource selectors can be found in the [assignment structure](assignment-structure.md#resource-selectors). The following JSON is an example exemption that uses resource selectors. In this example, only resources in `westcentralus` are exempt from the policy assignment:
```json {
- "properties": {
- "policyAssignmentId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
- "policyDefinitionReferenceIds": [
- "limitSku", "limitType"
- ],
- "exemptionCategory": "Waiver",
- "resourceSelectors": [
- {
- "name": "TemporaryMitigation",
- "selectors": [
- {
- "kind": "resourceLocation",
- "in": [ "westcentralus" ]
- }
- ]
- }
+ "properties": {
+ "policyAssignmentId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
+    "policyDefinitionReferenceIds": [
+ "limitSku",
+ "limitType"
+ ],
+ "exemptionCategory": "Waiver",
+ "resourceSelectors": [
+ {
+ "name": "TemporaryMitigation",
+ "selectors": [
+ {
+ "kind": "resourceLocation",
+ "in": [
+ "westcentralus"
+ ]
+ }
]
- },
- "systemData": { ... },
- "id": "/subscriptions/{subId}/resourceGroups/demoCluster/providers/Microsoft.Authorization/policyExemptions/DemoExpensiveVM",
- "type": "Microsoft.Authorization/policyExemptions",
- "name": "DemoExpensiveVM"
+ }
+ ]
+ },
+  "systemData": { ... },
+ "id": "/subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.Authorization/policyExemptions/DemoExpensiveVM",
+ "type": "Microsoft.Authorization/policyExemptions",
+ "name": "DemoExpensiveVM"
} ```
Regions can be added or removed from the `resourceLocation` list in the example.
## Assignment scope validation (preview)
-In most scenarios, the exemption scope is validated to ensure it is at or under the policy assignment scope. The optional `assignmentScopeValidation` property can allow an exemption to bypass this validation and be created outside of the assignment scope. This is intended for situations where a subscription needs to be moved from one management group (MG) to another, but the move would be blocked by policy due to properties of resources within the subscription. In this scenario, an exemption could be created for the subscription in its current MG to exempt its resources from a policy assignment on the destination MG. That way, when the subscription is moved into the destination MG, the operation is not blocked because resources are already exempt from the policy assignment in question. The use of this property is illustrated below:
+In most scenarios, the exemption scope is validated to ensure it's at or under the policy assignment scope. The optional `assignmentScopeValidation` property can allow an exemption to bypass this validation and be created outside of the assignment scope. Bypassing the validation is intended for situations where a subscription needs to be moved from one management group (MG) to another, but the move would be blocked by policy due to properties of resources within the subscription. In this scenario, an exemption could be created for the subscription in its current MG to exempt its resources from a policy assignment on the destination MG. That way, when the subscription is moved into the destination MG, the operation isn't blocked because resources are already exempt from the policy assignment in question. The use of this property is shown in the following example:
```json {
- "properties": {
- "policyAssignmentId": "/providers/Microsoft.Management/managementGroups/{mgB}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
- "policyDefinitionReferenceIds": [
- "limitSku", "limitType"
- ],
- "exemptionCategory": "Waiver",
- "assignmentScopeValidation": "DoNotValidate",
- },
- "systemData": { ... },
- "id": "/subscriptions/{subIdA}/providers/Microsoft.Authorization/policyExemptions/DemoExpensiveVM",
- "type": "Microsoft.Authorization/policyExemptions",
- "name": "DemoExpensiveVM"
+ "properties": {
+ "policyAssignmentId": "/providers/Microsoft.Management/managementGroups/{mgName}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
+    "policyDefinitionReferenceIds": [
+ "limitSku",
+ "limitType"
+ ],
+ "exemptionCategory": "Waiver",
+    "assignmentScopeValidation": "DoNotValidate"
+ },
+  "systemData": { ... },
+ "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyExemptions/DemoExpensiveVM",
+ "type": "Microsoft.Authorization/policyExemptions",
+ "name": "DemoExpensiveVM"
} ```
-Allowed values for `assignmentScopeValidation` are `Default`and `DoNotValidate`. If not specified, the default validation process will occur.
+Allowed values for `assignmentScopeValidation` are `Default` and `DoNotValidate`. If not specified, the default validation process occurs.
## Required permissions
-The Azure RBAC permissions needed to manage Policy exemption objects are in the
-`Microsoft.Authorization/policyExemptions` operation group. The built-in roles
-[Resource Policy Contributor](../../../role-based-access-control/built-in-roles.md#resource-policy-contributor)
-and [Security Admin](../../../role-based-access-control/built-in-roles.md#security-admin) both have
-the `read` and `write` permissions and
-[Policy Insights Data Writer (Preview)](../../../role-based-access-control/built-in-roles.md#policy-insights-data-writer-preview)
-has the `read` permission.
+The Azure role-based access control (Azure RBAC) permissions needed to manage Policy exemption objects are in the `Microsoft.Authorization/policyExemptions` operation group. The built-in roles [Resource Policy Contributor](../../../role-based-access-control/built-in-roles.md#resource-policy-contributor) and [Security Admin](../../../role-based-access-control/built-in-roles.md#security-admin) both have the `read` and `write` permissions and [Policy Insights Data Writer (Preview)](../../../role-based-access-control/built-in-roles.md#policy-insights-data-writer-preview) has the `read` permission.
-Exemptions have extra security measures because of the impact of granting an exemption. Beyond
-requiring the `Microsoft.Authorization/policyExemptions/write` operation on the resource hierarchy
-or individual resource, the creator of an exemption must have the `exempt/Action` verb on the target
-assignment.
+Exemptions have extra security measures because of the effect of granting an exemption. Beyond requiring the `Microsoft.Authorization/policyExemptions/write` operation on the resource hierarchy or individual resource, the creator of an exemption must have the `exempt/Action` verb on the target assignment.
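One way to grant these permissions is to assign one of the built-in roles listed above. As a hedged sketch with the Azure CLI, assigning the Resource Policy Contributor role at subscription scope could look like the following; the principal object ID and subscription ID are placeholders:

```azurecli
# Grant the built-in Resource Policy Contributor role at the subscription scope (IDs are placeholders)
az role assignment create \
    --assignee "<principal-object-id>" \
    --role "Resource Policy Contributor" \
    --scope "/subscriptions/{subId}"
```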
## Exemption creation and management
-Exemptions are recommended for time-bound or specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there's a specific reason it shouldn't be assessed for compliance. For example, if an environment has the built-in definition `Storage accounts should disable public network access` (ID: `b2982f36-99f2-4db5-8eff-283140c09693`) assigned with _effect_ set to _audit_. Upon compliance assessment, resource "StorageAcc1" is non-compliant, but StorageAcc1 must have public network access enable for business purposes. At that time, a request should be submitted to create an exemption resource that targets StorageAcc1. Once the exemption is created, StorageAcc1 will be shown as _exempt_ in compliance review.
+Exemptions are recommended for time-bound or specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there's a specific reason it shouldn't be assessed for compliance. For example, consider an environment that has the built-in definition `Storage accounts should disable public network access` (ID: `b2982f36-99f2-4db5-8eff-283140c09693`) assigned with _effect_ set to _audit_. Upon compliance assessment, resource `StorageAcc1` is non-compliant, but `StorageAcc1` must have public network access enabled for business purposes. At that time, a request should be submitted to create an exemption resource that targets `StorageAcc1`. After the exemption is created, `StorageAcc1` is shown as _exempt_ in compliance review.
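A hedged sketch of creating such an exemption with the Azure CLI follows. The names, scope, and expiration date are illustrative placeholders, not values from this article:

```azurecli
# Create a time-bound waiver for a single storage account (all names and IDs are placeholders)
az policy exemption create \
    --name "StorageAcc1PublicAccessWaiver" \
    --policy-assignment "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/AuditStoragePublicAccess" \
    --exemption-category "Waiver" \
    --scope "/subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.Storage/storageAccounts/StorageAcc1" \
    --expires-on "2025-01-01T00:00:00Z"
```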
-Regularly revisit your exemptions to ensure that all eligible items are appropriately exempted and promptly remove any no longer qualifying for exemption. At that time, exemption resources that have expired could be deleted as well.
+Regularly revisit your exemptions to ensure that all eligible items are appropriately exempted, and promptly remove any that no longer qualify for exemption. During that review, you can also delete expired exemption resources.
## Next steps -- Leverage [Azure Resource Graph queries on exemptions](../samples/resource-graph-samples.md#azure-policy-exemptions).
+- Learn about [Azure Resource Graph queries on exemptions](../samples/resource-graph-samples.md#azure-policy-exemptions).
- Learn about [the difference between exclusions and exemptions](./scope.md#scope-comparison).-- Study the [Microsoft.Authorization policyExemptions resource type](/azure/templates/microsoft.authorization/policyexemptions?tabs=json).
+- Review the [Microsoft.Authorization policyExemptions resource type](/azure/templates/microsoft.authorization/policyexemptions?tabs=json).
- Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/scope.md
Title: Understand scope in Azure Policy description: Describes the concept of scope in Azure Resource Manager and how it applies to Azure Policy to control which resources Azure Policy evaluates. Previously updated : 06/15/2023 Last updated : 09/05/2024 # Understand scope in Azure Policy
-There are many settings that determine which resources are capable of being evaluated and which
-resources are evaluated by Azure Policy. The primary concept for these controls is _scope_. Scope in
-Azure Policy is based on how scope works in Azure Resource Manager. For a high-level overview, see
-[Scope in Azure Resource Manager](../../../azure-resource-manager/management/overview.md#understand-scope).
+There are many settings that determine which resources are capable of being evaluated and which resources Azure Policy evaluates. The primary concept for these controls is _scope_. Scope in Azure Policy is based on how scope works in Azure Resource Manager. For a high-level overview, see [Scope in Azure Resource Manager](../../../azure-resource-manager/management/overview.md#understand-scope).
-This article explains the importance of _scope_ in Azure Policy and it's related objects and
-properties.
+This article explains the importance of _scope_ in Azure Policy and the related objects and properties.
## Definition location
-The first instance scope used by Azure Policy is when a policy definition is created. The definition
-may be saved in either a management group or a subscription. The location determines the scope to
-which the initiative or policy can be assigned. Resources must be within the resource hierarchy of
-the definition location to target for assignment. The [resources covered by Azure Policy](../overview.md#resources-covered-by-azure-policy) describes how policies are evaluated.
+The first scope Azure Policy uses is the definition location, which is set when a policy definition is created. The definition might be saved in either a management group or a subscription. The location determines the scope to which the initiative or policy can be assigned. Resources must be within the resource hierarchy of the definition location to target for assignment. For how policies are evaluated, see [resources covered by Azure Policy](../overview.md#resources-covered-by-azure-policy).
If the definition location is a: -- **Subscription** - The subscription where policy is defined and resources within that subscription can be assigned the policy definition.-- **Management group** - The management group where the policy is defined and resources within child management groups and child subscriptions can
- be assigned the policy definition. If you plan to apply the policy definition to several
- subscriptions, the location must be a management group that contains each subscription.
+- Subscription: The policy definition can be assigned to the subscription where it's defined and to resources within that subscription.
+- Management group: The policy definition can be assigned to the management group where it's defined and to resources within its child management groups and child subscriptions. If you plan to apply the policy definition to several subscriptions, the location must be a management group that contains each subscription.
-The location should be the resource container shared by all resources you want to use the policy
-definition on exist. This resource container is typically a management group near the root
-management group.
+The location should be the resource container shared by all resources you want to apply the policy definition to. This resource container is typically a management group near the root management group.
## Assignment scopes
-An assignment has several properties that set a scope. The use of these properties determines which
-resource for Azure Policy to evaluate and which resources count toward compliance. These properties
-map to the following concepts:
--- Inclusion - A resource hierarchy or individual resource should be evaluated for compliance by the
- definition. The scope of where the assignment object lives on determines what to include and
- evaluate for compliance. For more information, see
- [Assignment definition](./assignment-structure.md).
--- Exclusion - A resource hierarchy or individual resource shouldn't be evaluated for compliance by
- the definition. The `properties.notScopes` _array_ property on an assignment object determines
- what to exclude. Resources within these scopes aren't evaluated or included in the compliance
- count. For more information, see
- [Assignment definition - excluded scopes](./assignment-structure.md#excluded-scopes).
-
-In addition to the properties on the policy assignment, is the
-[policy exemption](./exemption-structure.md) object. Exemptions enhance the scope story by providing
-a method to identify a portion of an assignment to not be evaluated.
--- Exemption - A resource hierarchy or individual resource should be
- evaluated for compliance by the definition, but won't be evaluated for a reason such as having a
- waiver or being mitigated through another method. Resources in this state show as **Exempted** in
- compliance reports so that they can be tracked. The exemption object is created on the resource
- hierarchy or individual resource as a child object, which determines the scope of the exemption. A
- resource hierarchy or individual resource can be exempt from multiple assignments. The exemption
- may be configured to expire on a schedule by using the `expiresOn` property. For more information,
- see [Exemption definition](./exemption-structure.md).
-
- > [!NOTE]
- > Due to the impact of granting an exemption for a resource hierarchy or individual resource,
- > exemptions have additional security measures. In addition to requiring the
- > `Microsoft.Authorization/policyExemptions/write` operation on the resource hierarchy or
- > individual resource, the creator of an exemption must have the `exempt/Action` verb on the
- > target assignment.
+An assignment has several properties that set a scope. The use of these properties determines which resources Azure Policy evaluates and which resources count toward compliance. These properties map to the following concepts:
+
+- Inclusion: A definition evaluates compliance for a resource hierarchy or individual resource. The assignment object's scope determines what to include and evaluate for compliance. For more information, see [Azure Policy assignment structure](./assignment-structure.md).
+- Exclusion: A definition shouldn't evaluate compliance for a resource hierarchy or individual resource. The `properties.notScopes` _array_ property on an assignment object determines what to exclude. Resources within these scopes aren't evaluated or included in the compliance count. For more information, see [Azure Policy assignment structure excluded scopes](./assignment-structure.md#excluded-scopes). A sample fragment follows this list.
+
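As a minimal, hedged sketch, an assignment that excludes a hypothetical test resource group through `notScopes` could contain a fragment like the following; the definition ID, subscription ID, and resource group name are placeholders:

```json
{
  "properties": {
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{definitionId}",
    "notScopes": [
      "/subscriptions/{subId}/resourceGroups/testEnvironmentRG"
    ]
  }
}
```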
+In addition to the properties on the policy assignment, there's the [Azure Policy exemption structure](./exemption-structure.md) object. Exemptions enhance the scope options by providing a method to identify a portion of an assignment that isn't evaluated.
+
+Exemption: A resource hierarchy or individual resource would otherwise be evaluated for compliance by the definition, but isn't evaluated for a reason such as a waiver or mitigation through another method. Resources in this state show as **Exempted** in compliance reports so that they can be tracked. The exemption object is created on the resource hierarchy or individual resource as a child object, which determines the scope of the exemption. A resource hierarchy or individual resource can be exempt from multiple assignments. The exemption might be configured to expire on a schedule by using the `expiresOn` property. For more information, see [Azure Policy exemption structure](./exemption-structure.md).
+
+> [!NOTE]
+> Due to the impact of granting an exemption for a resource hierarchy or individual resource, exemptions have additional security measures. In addition to requiring the `Microsoft.Authorization/policyExemptions/write` operation on the resource hierarchy or individual resource, the creator of an exemption must have the `exempt/Action` verb on the target assignment.
## Scope comparison The following table is a comparison of the scope options:
-| | Inclusion | Exclusion (notScopes) | Exemption |
+| Resources | Inclusion | Exclusion (notScopes) | Exemption |
||::|::|::|
-|**Resources are evaluated** | &#10004; | - | - |
-|**Resource Manager object** | - | - | &#10004; |
-|**Requires modifying policy assignment object** | &#10004; | &#10004; | - |
+| Resources are evaluated | &#10004; | - | - |
+| Resource Manager object | - | - | &#10004; |
+| Requires modifying policy assignment object | &#10004; | &#10004; | - |
So how do you choose whether to use an exclusion or exemption? Typically exclusions are recommended to permanently bypass evaluation for a broad scope like a test environment that doesn't require the same level of governance. Exemptions are recommended for time-bound or more specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there's a specific reason it shouldn't be assessed for compliance. ## Next steps -- Learn about the [policy definition structure](./definition-structure.md).
+- Learn about the [policy definition structure](./definition-structure-basics.md).
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+- Learn more about how to [Organize your resources with Azure management groups](../../management-groups/overview.md).
hdinsight Azure Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/azure-cli-samples.md
Title: 'Azure HDInsight: Azure CLI samples'
description: Azure CLI examples for common tasks in Azure HDInsight. Previously updated : 09/19/2023 Last updated : 09/06/2024
This article provides sample scripts for common tasks. For each example, update
* Optional: Bash. The examples in this article use the Bash shell on Windows 10. See [Windows Subsystem for Linux Installation Guide for Windows 10](/windows/wsl/install-win10) for installation steps. The examples work from a Windows Command prompt with some slight modifications.
-## az login
+## `az login`
-[Log in to Azure](/cli/azure/reference-index#az-login).
+[Sign in to Azure](/cli/azure/reference-index#az-login).
```azurecli az login
az login
# az account set --subscription "SUBSCRIPTIONID" ```
-## az hdinsight create
+## `az hdinsight create`
[Creates a new cluster](/cli/azure/hdinsight#az-hdinsight-create).
az hdinsight create \
--cluster-configuration $clusterConfiguration ```
-## az hdinsight application create
+## `az hdinsight application create`
[Create an application for a HDInsight cluster](/cli/azure/hdinsight/application#az-hdinsight-application-create).
az hdinsight application create \
--sub-domain-suffix $subDomainSuffix ```
-## az hdinsight script-action execute
+## `az hdinsight script-action execute`
[Execute script actions on the specified HDInsight cluster](/cli/azure/hdinsight/script-action#az-hdinsight-script-action-execute).
hdinsight Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/azure-monitor-agent.md
Title: Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters
description: Learn how to migrate to Azure Monitor Agent (AMA) in Azure HDInsight clusters. Previously updated : 09/03/2024 Last updated : 09/06/2024 # Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
description: Learn how to plan Azure HDInsight security with Enterprise Security
Previously updated : 08/22/2024 Last updated : 09/06/2024 # Use Enterprise Security Package in HDInsight
hdinsight Apache Domain Joined Configure Using Azure Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md
description: Learn how to set up and configure an HDInsight cluster integrated with Active Directory by using Microsoft Entra Domain Services and the Enterprise Security Package feature. Previously updated : 09/21/2023 Last updated : 09/06/2024 # Configure HDInsight clusters for Microsoft Entra integration with Enterprise Security Package
-This article provides a summary and overview of the process of creating and configuring an HDInsight cluster integrated with Microsoft Entra ID. This integration relies on a HDInsight feature called Enterprise Security Package (ESP), Microsoft Entra Domain Services and your pre-existing on-premises Active Directory.
+This article provides a summary and overview of the process of creating and configuring an HDInsight cluster integrated with Microsoft Entra ID. This integration relies on an HDInsight feature called Enterprise Security Package (ESP), Microsoft Entra Domain Services, and your preexisting on-premises Active Directory.
For a detailed, step-by-step tutorial on setting up and configuring a domain in Azure and creating an ESP enabled cluster and then syncing on-premises users, see [Create and configure Enterprise Security Package clusters in Azure HDInsight](apache-domain-joined-create-configure-enterprise-security-cluster.md).
There are a few prerequisites to complete before you can create an ESP-enabled H
- Create and authorize a managed identity. - Complete Networking setup for DNS and related issues.
-Each of these items are discussed in details. For a walkthrough of completing all of these steps, see [Create and configure Enterprise Security Package clusters in Azure HDInsight](apache-domain-joined-create-configure-enterprise-security-cluster.md).
+Each of these items is discussed in detail. For a walkthrough of completing all of these steps, see [Create and configure Enterprise Security Package clusters in Azure HDInsight](apache-domain-joined-create-configure-enterprise-security-cluster.md).
<a name='enable-azure-ad-ds'></a>
Change the configuration of the DNS servers in the Microsoft Entra Domain Servic
It's easier to place both the Microsoft Entra Domain Services instance and the HDInsight cluster in the same Azure virtual network. If you plan to use different virtual networks, you must peer those virtual networks so that the domain controller is visible to HDInsight VMs. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md).
-After the virtual networks are peered, configure the HDInsight virtual network to use a custom DNS server. And enter the Microsoft Entra Domain Services private IPs as the DNS server addresses. When both virtual networks use the same DNS servers, your custom domain name resolves to the right IP and it is reachable from HDInsight. For example, if your domain name is `contoso.com`, then after this step, `ping contoso.com` should resolve to the right Microsoft Entra Domain Services IP.
+After the virtual networks are peered, configure the HDInsight virtual network to use a custom DNS server, and enter the Microsoft Entra Domain Services private IPs as the DNS server addresses. When both virtual networks use the same DNS servers, your custom domain name resolves to the right IP and it's reachable from HDInsight. For example, if your domain name is `contoso.com`, then after this step, `ping contoso.com` should resolve to the right Microsoft Entra Domain Services IP.
:::image type="content" source="./media/apache-domain-joined-configure-using-azure-adds/hdinsight-aadds-peered-vnet-configuration.png" alt-text="Configuring custom DNS servers for a peered virtual network." border="true":::
hdinsight Apache Domain Joined Create Configure Enterprise Security Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-create-configure-enterprise-security-cluster.md
description: Learn how to create and configure Enterprise Security Package clust
Previously updated : 08/22/2024 Last updated : 09/06/2024
hdinsight Domain Joined Authentication Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/domain-joined-authentication-issues.md
Title: Authentication issues in Azure HDInsight
description: Authentication issues in Azure HDInsight Previously updated : 08/22/2024 Last updated : 09/06/2024 # Authentication issues in Azure HDInsight
hdinsight Ssh Domain Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/ssh-domain-accounts.md
Title: Manage SSH access for domain accounts in Azure HDInsight
description: Steps to manage SSH access for Microsoft Entra accounts in HDInsight. Previously updated : 09/19/2023 Last updated : 09/06/2024 # Manage SSH access for domain accounts in Azure HDInsight
hdinsight Enable Private Link On Kafka Rest Proxy Hdi Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enable-private-link-on-kafka-rest-proxy-hdi-cluster.md
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Enable Private Link on an HDInsight Kafka Rest Proxy cluster
hdinsight Enterprise Security Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enterprise-security-package.md
Title: Enterprise Security Package for Azure HDInsight
description: Learn the Enterprise Security Package components and versions in Azure HDInsight. Previously updated : 09/19/2023 Last updated : 09/06/2024 # Enterprise Security Package for Azure HDInsight
hdinsight Find Host Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/find-host-name.md
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Find the host names of cluster nodes
hdinsight Apache Hadoop Deep Dive Advanced Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-deep-dive-advanced-analytics.md
description: Learn how advanced analytics uses algorithms to process big data in
Previously updated : 08/13/2023 Last updated : 09/06/2024 # Deep dive - advanced analytics
HDInsight has several machine learning options for an advanced analytics workflo
There are three scalable machine learning libraries that bring algorithmic modeling capabilities to this distributed environment: * [**MLlib**](https://spark.apache.org/docs/latest/ml-guide.html) - MLlib contains the original API built on top of Spark RDDs.
-* [**SparkML**](https://spark.apache.org/docs/1.2.2/ml-guide.html) - SparkML is a newer package that provides a higher-level API built on top of Spark DataFrames for constructing ML pipelines.
+* **SparkML** - SparkML is a newer package that provides a higher-level API built on top of Spark DataFrames for constructing ML pipelines.
* [**MMLSpark**](https://github.com/Azure/mmlspark) - The Microsoft Machine Learning library for Apache Spark (MMLSpark) is designed to make data scientists more productive on Spark, to increase the rate of experimentation, and to leverage cutting-edge machine learning techniques, including deep learning, on large datasets. The MMLSpark library simplifies common modeling tasks for building models in PySpark. ### Azure Machine Learning and Apache Hive
hdinsight Apache Hadoop Dotnet Csharp Mapreduce Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-dotnet-csharp-mapreduce-streaming.md
description: Learn how to use C# to create MapReduce solutions with Apache Hadoo
Previously updated : 09/14/2023 Last updated : 09/06/2024 # Use C# with MapReduce streaming on Apache Hadoop in HDInsight
namespace mapper
} ```
-After you create the application, build it to produce the */bin/Debug/mapper.exe* file in the project directory.
+After you create the application, build it to produce the `/bin/Debug/mapper.exe` file in the project directory.
## Create the reducer
namespace reducer
} ```
-After you create the application, build it to produce the */bin/Debug/reducer.exe* file in the project directory.
+After you create the application, build it to produce the `/bin/Debug/reducer.exe` file in the project directory.
## Upload to storage
hdinsight Apache Hadoop Linux Tutorial Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started.md
description: In this quickstart, you create Apache Hadoop cluster in Azure HDIns
Previously updated : 09/15/2023 Last updated : 09/06/2024 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Resource Manager template
hdinsight Apache Hadoop Run Samples Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-run-samples-linux.md
description: Get started using MapReduce samples in jar files included in HDInsi
Previously updated : 09/14/2023 Last updated : 09/06/2024 # Run the MapReduce examples included in HDInsight
hdinsight Apache Hadoop Use Hive Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-curl.md
description: Learn how to remotely submit Apache Pig jobs to Azure HDInsight usi
Previously updated : 09/14/2023 Last updated : 09/06/2024 # Run Apache Hive queries with Apache Hadoop in HDInsight using REST
$clusterName
1. Once the state of the job has changed to **SUCCEEDED**, you can retrieve the results of the job from Azure Blob storage. The `statusdir` parameter passed with the query contains the location of the output file; in this case, `/example/rest`. This address stores the output in the `example/curl` directory in the clusters default storage.
- You can list and download these files by using the [Azure CLI](/cli/azure/install-azure-cli). For more information on using the Azure CLI with Azure Storage, see the [Use Azure CLI with Azure Storage](../../storage/blobs/storage-quickstart-blobs-cli.md) document.
+ You can list and download these files by using the [Azure CLI](/cli/azure/install-azure-cli). For more information, see [Use Azure CLI with Azure Storage](../../storage/blobs/storage-quickstart-blobs-cli.md).
## Next steps
hdinsight Apache Hadoop Use Hive Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-powershell.md
description: Use PowerShell to run Apache Hive queries in Apache Hadoop in Azure
Previously updated : 09/14/2023 Last updated : 09/06/2024 # Run Apache Hive queries using PowerShell
hdinsight Apache Hadoop Use Mapreduce Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-mapreduce-ssh.md
description: Learn how to use SSH to run MapReduce jobs using Apache Hadoop on H
Previously updated : 09/27/2023 Last updated : 09/06/2024 # Use MapReduce with Apache Hadoop on HDInsight with SSH
hdinsight Apache Hadoop Use Sqoop Mac Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md
description: Learn how to use Apache Sqoop to import and export between Apache H
Previously updated : 08/13/2023 Last updated : 09/06/2024 # Use Apache Sqoop to import and export data between Apache Hadoop on HDInsight and Azure SQL Database
hdinsight Hdinsight Troubleshoot Data Lake Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-data-lake-files.md
Title: Unable to access Data Lake storage files in Azure HDInsight
description: Unable to access Data Lake storage files in Azure HDInsight Previously updated : 09/13/2023 Last updated : 09/06/2024 # Unable to access Data Lake storage files in Azure HDInsight
hdinsight Hdinsight Use Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-use-sqoop.md
description: Learn how to use Azure PowerShell from a workstation to run Sqoop i
Previously updated : 09/18/2023 Last updated : 09/06/2024 # Use Apache Sqoop with Hadoop in HDInsight
HDInsight cluster comes with some sample data. You use the following two samples
| state |string | | country |string | | querydwelltime |double |
- | sessionid |bigint |
+ | `sessionid` |bigint |
| sessionpagevieworder |bigint | In this article, you use these two datasets to test Sqoop import and export. ## <a name="create-cluster-and-sql-database"></a>Set up test environment
-The cluster, SQL database, and other objects are created through the Azure portal using an Azure Resource Manager template. The template can be found in [Azure quickstart templates](https://azure.microsoft.com/resources/templates/hdinsight-linux-with-sql-database/). The Resource Manager template calls a bacpac package to deploy the table schemas to an SQL database. If you want to use a private container for the bacpac files, use the following values in the template:
+The cluster, SQL database, and other objects are created through the Azure portal using an Azure Resource Manager template. The template can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/hdinsight-linux-with-sql-database/). The Resource Manager template calls a bacpac package to deploy the table schemas to an SQL database. If you want to use a private container for the bacpac files, use the following values in the template:
```json "storageKeyType": "Primary",
The cluster, SQL database, and other objects are created through the Azure porta
|Resource group |Select your resource group from the drop-down list, or create a new one| |Location |Select a region from the drop-down list.| |Cluster Name |Enter a name for the Hadoop cluster. Use lowercase letter only.|
- |Cluster Login User Name |Keep the pre-populated value `admin`.|
- |Cluster Login Password |Enter a password.|
- |Ssh User Name |Keep the pre-populated value `sshuser`.|
+ |Cluster sign-in User Name |Keep the prepopulated value `admin`.|
+ |Cluster sign-in Password |Enter a password.|
+ |Ssh User Name |Keep the prepopulated value `sshuser`.|
|Ssh Password |Enter a password.|
- |Sql Admin Login |Keep the pre-populated value `sqluser`.|
+ |Sql Admin sign-in |Keep the prepopulated value `sqluser`.|
|Sql Admin Password |Enter a password.| |_artifacts Location | Use the default value unless you want to use your own bacpac file in a different location.| |_artifacts Location Sas Token |Leave blank.|
HDInsight can run Sqoop jobs by using various methods. Use the following table t
| **Use this** if you want... | ...an **interactive** shell | ...**batch** processing | ...from this **client operating system** | |: |::|::|: |: |
-| [SSH](apache-hadoop-use-sqoop-mac-linux.md) |? |? |Linux, Unix, Mac OS X, or Windows |
+| [SSH](apache-hadoop-use-sqoop-mac-linux.md) |&#10004; |&#10004; |Linux, Unix, macOS, or Windows |
| [.NET SDK for Hadoop](apache-hadoop-use-sqoop-dotnet-sdk.md) |&nbsp; |&#10004; |Windows (for now) | | [Azure PowerShell](apache-hadoop-use-sqoop-powershell.md) |&nbsp; |&#10004; |Windows |
hdinsight Python Udf Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/python-udf-hdinsight.md
Title: Python UDF with Apache Hive and Apache Pig - Azure HDInsight
description: Learn how to use Python User Defined Functions (UDF) from Apache Hive and Apache Pig in HDInsight, the Apache Hadoop technology stack on Azure. Previously updated : 09/15/2023 Last updated : 09/06/2024
The script output is a concatenation of the input values for `devicemake` and `d
### Upload file (shell)
-The following command, replaces `sshuser` with the actual username if different. Replace `mycluster` with the actual cluster name. Ensure your working directory is where the file is located.
+In the following command, replace `sshuser` with the actual username, if different. Replace `mycluster` with the actual cluster name. Ensure your working directory is where the file is located.
1. Use `scp` to copy the files to your HDInsight cluster. Edit and enter the command:
In the commands below, replace `sshuser` with the actual username if different.
DUMP DETAILS; ```
-3. After entering the following line, the job should start. Once the job completes, it returns output similar to the following data:
+3. After you enter the following line, the job should start. Once the job completes, it returns output similar to the following data:
```output ((2012-02-03,20:11:56,SampleClass5,[TRACE],verbose detail for id 990982084))
You can use the following PowerShell statements to remove the CR characters befo
Both of the example PowerShell scripts used to run the examples contain a commented line that displays error output for the job. If you aren't seeing the expected output for the job, uncomment the following line and see if the error information indicates a problem.
-[!code-powershell[main](../../../powershell_scripts/hdinsight/run-python-udf/run-python-udf.ps1?range=135-139)]
+[!Code-powershell[main](../../../powershell_scripts/hdinsight/run-python-udf/run-python-udf.ps1?range=135-139)]
The error information (STDERR) and the result of the job (STDOUT) are also logged to your HDInsight storage.
hdinsight Troubleshoot Invalidnetworkconfigurationerrorcode Cluster Creation Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails.md
Title: InvalidNetworkConfigurationErrorCode error - Azure HDInsight
description: Various reasons for failed cluster creations with InvalidNetworkConfigurationErrorCode in Azure HDInsight Previously updated : 09/27/2023 Last updated : 09/06/2024 # Cluster creation fails with InvalidNetworkConfigurationErrorCode in Azure HDInsight
Error description contains "HostName Resolution failed."
### Cause
-This error points to a problem with custom DNS configuration. DNS servers within a virtual network can forward DNS queries to Azure's recursive resolvers to resolve hostnames within that virtual network (see [Name Resolution in Virtual Networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) for details). Access to Azure's recursive resolvers is provided via the virtual IP 168.63.129.16. This IP is only accessible from the Azure VMs. It is nonfunctional if you are using an OnPrem DNS server, or your DNS server is an Azure VM, which is not part of the cluster's virtual network.
+This error points to a problem with custom DNS configuration. DNS servers within a virtual network can forward DNS queries to Azure's recursive resolvers to resolve hostnames within that virtual network (see [Name Resolution in Virtual Networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) for details). Access to Azure's recursive resolvers is provided via the virtual IP 168.63.129.16. This IP is only accessible from Azure VMs. It's nonfunctional if you're using an on-premises DNS server, or if your DNS server is an Azure VM that isn't part of the cluster's virtual network.
### Resolution
Azure Storage and SQL don't have fixed IP Addresses, so we need to allow outboun
### Resolution
-* If your cluster uses a [Network Security Group (NSG)](../../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* If your cluster uses a [Network Security Group (NSG)](../../virtual-network/virtual-network-vnet-plan-design-arm.md)
Go to the Azure portal and identify the NSG that is associated with the subnet where the cluster is being deployed. In the **Outbound security rules** section, allow outbound access to internet without limitation (note that a smaller **priority** number here means higher priority). Also, in the **subnets** section, confirm if this NSG is applied to the cluster subnet.
-* If your cluster uses a [User-defined Routes (UDR)](../../virtual-network/virtual-networks-udr-overview.md).
+* If your cluster uses a [User-defined Routes (UDR)](../../virtual-network/virtual-networks-udr-overview.md)
Go to the Azure portal and identify the route table that is associated with the subnet where the cluster is being deployed. Once you find the route table for the subnet, inspect the **routes** section in it.
Error description contains "Failed to establish an outbound connection from the
### Cause
-When using Private Linked HDInsight clusters, outbound access from the cluster must be configured to allow connections to be made to the HDInsight resource provider.
+When you use Private Linked HDInsight clusters, outbound access from the cluster must be configured to allow connections to be made to the HDInsight resource provider.
### Resolution * To resolve this issue, refer to the HDInsight Private Link setup steps at [private link setup](../hdinsight-private-link.md)
-## "Virtual network configuration is not compatible with HDInsight requirement"
+## "Virtual network configuration isn't compatible with HDInsight requirement"
### Issue
Validate that 168.63.129.16 is in the custom DNS chain. DNS servers within a vir
Based on the result - choose one of the following steps to follow:
-#### 168.63.129.16 is not in this list
+#### 168.63.129.16 isn't in this list
**Option 1** Add 168.63.129.16 as the first custom DNS for the virtual network using the steps described in [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md). These steps are applicable only if your custom DNS server runs on Linux.
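As a hedged sketch, one way to put 168.63.129.16 ahead of an existing custom DNS server is to update the virtual network's DNS settings with the Azure CLI; the resource group, virtual network name, and the second IP address are placeholders:

```azurecli
# List 168.63.129.16 first, followed by the existing custom DNS server (names and IPs are placeholders)
az network vnet update \
    --resource-group MyResourceGroup \
    --name MyVNet \
    --dns-servers 168.63.129.16 10.0.0.4
```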
hdinsight Apache Hbase Accelerated Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-accelerated-writes.md
Title: Azure HDInsight Accelerated Writes for Apache HBase
description: Gives an overview of the Azure HDInsight Accelerated Writes feature, which uses premium managed disks to improve performance of the Apache HBase Write Ahead Log. Previously updated : 08/13/2023 Last updated : 09/06/2024 # Azure HDInsight Accelerated Writes for Apache HBase
hdinsight Apache Hbase Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-advisor.md
Previously updated : 09/15/2023 Last updated : 09/06/2024 #Customer intent: The azure advisories help to tune the cluster/query. This doc gives a much deeper understanding of the various advisories including the recommended configuration tunings. # Apache HBase advisories in Azure HDInsight
This article describes several advisories to help you optimize the Apache HBase
## Optimize HBase to read most recently written data
-If your use case involves reading the most recently written data from HBase, this advisory can help you. For high performance, it is optimal that HBase reads are to be served from `memstore`, instead of the remote storage.
+If your use case involves reading the most recently written data from HBase, this advisory can help you. For high performance, it's optimal for HBase reads to be served from `memstore` instead of from remote storage.
-The query advisory indicates that for a given column family in a table > 75% reads that are getting served from `memstore`. This indicator suggests that even if a flush happens on the `memstore` the recent file needs to be accessed and that needs to be in cache. The data is first written to `memstore` the system accesses the recent data there. There's a chance that the internal HBase flusher threads detect that a given region has reached 128M (default) size and can trigger a flush. This scenario happens to even the most recent data that was written when the `memstore` was around 128M in size. Therefore, a later read of those recent records may require a file read rather than from `memstore`. Hence it is best to optimize that even recent data that is recently flushed can reside in the cache.
+The query advisory indicates that, for a given column family in a table, more than 75% of reads are served from `memstore`. This indicator suggests that even if a flush happens on the `memstore`, the recent file needs to be accessed and needs to be in cache. The data is first written to `memstore`, and the system accesses the recent data there. There's a chance that the internal HBase flusher threads detect that a given region has reached 128M (default) size and trigger a flush. This scenario happens even to the most recent data that was written when the `memstore` was around 128M in size. Therefore, a later read of those recent records may require a file read rather than a read from `memstore`. Hence it's best to optimize so that even recently flushed data can reside in the cache.
To optimize the recent data in cache, consider the following configuration settings:
To optimize the recent data in cache, consider the following configuration setti
5. Block cache can be turned off for a given family in a table. Ensure that it's turned **ON** for families that have reads of the most recent data. By default, block cache is turned ON for all families in a table. If you disabled the block cache for a family and need to turn it ON, use the `alter` command from the HBase shell, as shown in the sketch after this list.
- These configurations help ensure that the data is available in cache and that the recent data does not undergo compaction. If a TTL is possible in your scenario, then consider using date-tiered compaction. For more information, see [Apache HBase Reference Guide: Date Tiered Compaction](https://hbase.apache.org/book.html#ops.date.tiered)
+ These configurations help ensure that the data is available in cache and that the recent data doesn't undergo compaction. If a TTL is possible in your scenario, then consider using date-tiered compaction. For more information, see [Apache HBase Reference Guide: Date Tiered Compaction](https://hbase.apache.org/book.html#ops.date.tiered)
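For example, a hedged sketch of turning the block cache back on for a column family from the HBase shell; the table and column family names are hypothetical placeholders:

```
# Re-enable the block cache for column family 'cf1' on table 'mytable' (placeholder names)
alter 'mytable', {NAME => 'cf1', BLOCKCACHE => 'true'}
```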
## Optimize the flush queue
This advisory indicates that HBase flushes may need tuning. The current configur
In the region server UI, notice if the flush queue grows beyond 100. This threshold indicates the flushes are slow and you may have to tune the `hbase.hstore.flusher.count` configuration. By default, the value is 2. Ensure that the max flusher threads don't increase beyond 6.
-Additionally, see if you have a recommendation for region count tuning. If yes, we suggest you to try the region tuning to see if that helps in faster flushes. Otherwise, tuning the flusher threads may help you.
+Additionally, see if you have a recommendation for region count tuning. If yes, we suggest you try the region tuning to see if that helps in faster flushes. Otherwise, tuning the flusher threads may help you.
## Region count tuning
The advisory means that it would be good to reconsider the number of regions per
If the HBase compaction queue grows to more than 2000 and happens periodically, you can increase the compaction threads to a larger value.
-When there's an excessive number of files for compaction, it may lead to more heap usage related to how the files interact with the Azure file system. So it is better to complete the compaction as quickly as possible. Some times in older clusters the compaction configurations related to throttling might lead to slower compaction rate.
+When there's an excessive number of files for compaction, it may lead to more heap usage related to how the files interact with the Azure file system. So it's better to complete the compaction as quickly as possible. Sometimes, in older clusters, the compaction configurations related to throttling might lead to a slower compaction rate.
-Check the configurations `hbase.hstore.compaction.throughput.lower.bound` and `hbase.hstore.compaction.throughput.higher.bound`. If they are already set to 50M and 100M, leave them as it is. However, if you configured those settings to a lower value (which was the case with older clusters), change the limits to 50M and 100M respectively.
+Check the configurations `hbase.hstore.compaction.throughput.lower.bound` and `hbase.hstore.compaction.throughput.higher.bound`. If they're already set to 50M and 100M, leave them as they are. However, if you configured those settings to a lower value (which was the case with older clusters), change the limits to 50M and 100M respectively.
The configurations are `hbase.regionserver.thread.compaction.small` and `hbase.regionserver.thread.compaction.large` (the defaults are 1 each). Cap the max value for this configuration to be less than 3.
The full table scan advisory indicates that over 75% of the scans issued are ful
* Use the **MultiRowRangeFilter** API so that you can query different ranges in one scan call. For more information, see [MultiRowRangeFilter API documentation](https://hbase.apache.org/2.1/apidocs/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html).
-* In cases where you need a full table or region scan, check if there's a possibility to avoid cache usage for those queries, so that other queries that use of the cache might not evict the blocks that are hot. To ensure the scans do not use cache, use the **scan** API with the **setCaching(false)** option in your code:
+* In cases where you need a full table or region scan, check if there's a possibility to avoid cache usage for those queries, so that other queries that use the cache don't evict the blocks that are hot. To ensure the scans don't use cache, use the **scan** API with the **setCaching(false)** option in your code:
``` scan#setCaching(false)
hdinsight Apache Hbase Migrate New Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-migrate-new-version.md
description: Learn how to migrate Apache HBase clusters in Azure HDInsight to a
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Migrate an Apache HBase cluster to a new version
hdinsight Apache Hbase Query With Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-query-with-phoenix.md
description: In this quickstart, you learn how to use Apache Phoenix in HDInsigh
Previously updated : 09/15/2023 Last updated : 09/06/2024 #Customer intent: As a HBase user, I want learn Apache Phoenix so that I can run HBase queries in Azure HDInsight.
hdinsight Hbase Troubleshoot Hbase Hbck Inconsistencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-hbase-hbck-inconsistencies.md
Title: hbase hbck returns inconsistencies in Azure HDInsight
-description: hbase hbck returns inconsistencies in Azure HDInsight
+ Title: HBase hbck returns inconsistencies in Azure HDInsight
+description: HBase hbck returns inconsistencies in Azure HDInsight
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Scenario: `hbase hbck` command returns inconsistencies in Azure HDInsight
-This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. If you are using hbase-2.x, see [How to use Apache HBase HBCK2 tool](./how-to-use-hbck2-tool.md)
+This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. If you're using hbase-2.x, see [How to use Apache HBase HBCK2 tool](./how-to-use-hbck2-tool.md)
-## Issue: Region is not in `hbase:meta`
+## Issue: Region isn't in `hbase:meta`
Region xxx on HDFS, but not listed in `hbase:meta` or deployed on any region server.
RegionB, startkey:001, endkey:080,
RegionC, startkey:010, endkey:080. ```
-In this scenario, you need to merge RegionA and RegionC and get RegionD with the same key range as RegionB, then merge RegionB and RegionD. xxxxxxx and yyyyyy are the hash string at the end of each region name. Be careful here not to merge two discontinuous regions. After each merge, like merge A and C, HBase will start a compaction on RegionD. Wait for the compaction to finish before doing another merge with RegionD. You can find the compaction status on that region server page in HBase HMaster UI.
+In this scenario, you need to merge RegionA and RegionC to get RegionD with the same key range as RegionB, then merge RegionB and RegionD. `xxxxxxx` and `yyyyyy` are the hash strings at the end of each region name. Be careful here not to merge two discontinuous regions. After each merge, like merging A and C, HBase starts a compaction on RegionD. Wait for the compaction to finish before doing another merge with RegionD. You can find the compaction status on that region server page in the HBase HMaster UI.
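As a hedged sketch, the merges described above can be run from the HBase shell by passing the encoded region names (the hash strings); the values below are placeholders:

```
# Merge RegionA and RegionC first (arguments are the encoded region name hashes, shown as placeholders)
merge_region 'ENCODED_REGIONA_HASH', 'ENCODED_REGIONC_HASH'
# After compaction on the resulting RegionD finishes, merge it with RegionB
merge_region 'ENCODED_REGIONB_HASH', 'ENCODED_REGIOND_HASH'
```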
Can't load `.regioninfo` for region `/hbase/data/default/tablex/regiony`.
### Cause
-It is most likely due to region partial deletion when RegionServer crashes or VM reboots. Currently, the Azure Storage is a flat blob file system and some file operations are not atomic.
+It's most likely due to partial region deletion when the RegionServer crashes or the VM reboots. Currently, Azure Storage is a flat blob file system, and some file operations aren't atomic.
### Resolution
hdinsight Hbase Troubleshoot Unassigned Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-unassigned-regions.md
Title: Issues with region servers in Azure HDInsight
description: Issues with region servers in Azure HDInsight Previously updated : 09/19/2023 Last updated : 09/06/2024 # Issues with region servers in Azure HDInsight
hdinsight Troubleshoot Data Retention Issues Expired Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/troubleshoot-data-retention-issues-expired-data.md
Title: Troubleshoot data retention (TTL) issues with expired data not being dele
description: Troubleshoot various data-retention (TTL) issues with expired data not being deleted from storage on Azure HDInsight Previously updated : 09/14/2023 Last updated : 09/06/2024 # Troubleshoot data retention (TTL) issues with expired data not being deleted from storage on Azure HDInsight
hdinsight Hdinsight Create Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-create-virtual-network.md
description: Learn how to create an Azure Virtual Network to connect HDInsight t
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Create virtual networks for Azure HDInsight clusters
Before executing any of the code samples in this article, have an understanding
Other prerequisites for the samples in this article include the following items:
-* If you're using PowerShell, you'll need to install the [AZ Module](/powershell/azure/).
+* If you're using PowerShell, you need to install the [AZ Module](/powershell/azure/).
* If you want to use Azure CLI and haven't yet installed it, see [Install the Azure CLI](/cli/azure/install-azure-cli). > [!IMPORTANT]
-> If you are looking for step by step guidance on connecting HDInsight to your on-premises network using an Azure Virtual Network, see the [Connect HDInsight to your on-premises network](connect-on-premises-network.md) document.
+> If you are looking for step by step guidance on connecting HDInsight to your on-premises network using an Azure Virtual Network, see [How to connect HDInsight to your on-premises network](connect-on-premises-network.md).
## <a id="hdinsight-nsg"></a>Example: network security groups with HDInsight
Use the following steps to create a virtual network that restricts inbound traff
Once this command completes, you can install HDInsight into the Virtual Network.
-These steps only open access to the HDInsight health and management service on the Azure cloud. Any other access to the HDInsight cluster from outside the Virtual Network is blocked. To enable access from outside the virtual network, you must add additional Network Security Group rules.
+These steps only open access to the HDInsight health and management service on the Azure cloud. Any other access to the HDInsight cluster from outside the Virtual Network is blocked. To enable access from outside the virtual network, you must add more Network Security Group rules.
The following code demonstrates how to enable SSH access from the Internet:
After completing these steps, you can connect to resources in the virtual networ
## Test your settings before deploying an HDInsight cluster
-Before deploying your cluster, you can check that your many of your network configuration settings are correct by running the [HDInsight Network Validator tool](https://aka.ms/hnv/v2) on an Azure Linux virtual machine in the same VNet and subnet as the planned cluster.
+Before deploying your cluster, you can check that many of your network configuration settings are correct by running the [HDInsight Network Validator tool](https://aka.ms/hnv/v2) on an Azure Linux virtual machine in the same virtual network and subnet as the planned cluster.
## Next steps
hdinsight Hdinsight Custom Ambari Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-custom-ambari-db.md
description: Learn how to create HDInsight clusters with your own custom Apache
Previously updated : 09/29/2023 Last updated : 09/06/2024 # Set up HDInsight clusters with a custom Ambari DB
hdinsight Hdinsight Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-delete-cluster.md
description: Information on the various ways that you can delete an Azure HDInsi
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Delete an HDInsight cluster using your browser, PowerShell, or the Azure CLI
hdinsight Hdinsight Hadoop Collect Debug Heap Dump Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-collect-debug-heap-dump-linux.md
description: Enable heap dumps for Apache Hadoop services from Linux-based HDIns
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Enable heap dumps for Apache Hadoop services on Linux-based HDInsight
hdinsight Hdinsight Hadoop Create Linux Clusters Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-arm-templates.md
description: Learn how to create clusters for HDInsight by using Resource Manage
Previously updated : 08/13/2023 Last updated : 09/06/2024 # Create Apache Hadoop clusters in HDInsight by using Resource Manager templates
hdinsight Hdinsight Hadoop Linux Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-information.md
description: Get implementation tips for using Linux-based HDInsight (Hadoop) cl
Previously updated : 08/13/2023 Last updated : 09/06/2024 # Information about using HDInsight on Linux
hdinsight Hdinsight Hadoop Migrate Dotnet To Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-migrate-dotnet-to-linux.md
description: Learn how to use .NET applications for streaming MapReduce on Linux
Previously updated : 09/14/2023 Last updated : 09/06/2024 # Migrate .NET solutions for Windows-based HDInsight to Linux-based HDInsight
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
description: Learn how to use Azure Monitor logs to monitor jobs running in an H
Previously updated : 09/03/2024 Last updated : 09/06/2024 # Use Azure Monitor logs to monitor HDInsight clusters
hdinsight Hdinsight Hadoop Oms Log Analytics Use Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-use-queries.md
description: Learn how to run queries on Azure Monitor logs to monitor jobs runn
Previously updated : 09/15/2023 Last updated : 09/06/2024 # Query Azure Monitor logs to monitor HDInsight clusters
hdinsight Hdinsight Hadoop Port Settings For Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md
description: This article provides a list of ports used by Apache Hadoop service
Previously updated : 09/15/2023 Last updated : 09/06/2024 # Ports used by Apache Hadoop services on HDInsight
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-portal.md
Previously updated : 08/13/2023 Last updated : 09/06/2024 # Create a cluster with Data Lake Storage Gen2 using the Azure portal
hdinsight Hdinsight Hadoop Windows Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-windows-tools.md
description: Work from a Windows PC in Hadoop on HDInsight. Manage and query clu
Previously updated : 09/14/2023 Last updated : 09/06/2024 # Work in the Apache Hadoop ecosystem on HDInsight from a Windows PC
hdinsight Hdinsight High Availability Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-high-availability-components.md
Title: High availability components in Azure HDInsight
description: Overview of the various high availability components used by HDInsight clusters. Previously updated : 09/28/2023 Last updated : 09/06/2024 # High availability services supported by Azure HDInsight
hdinsight Hdinsight Migrate Granular Access Cluster Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-migrate-granular-access-cluster-configurations.md
Title: Granular role-based access Azure HDInsight cluster configurations
description: Learn about the changes required as part of the migration to granular role-based access for HDInsight cluster configurations. Previously updated : 09/19/2023 Last updated : 09/06/2024 # Migrate to granular role-based access for cluster configurations
hdinsight Hdinsight Multiple Clusters Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-multiple-clusters-data-lake-store.md
description: Learn how to use more than one HDInsight cluster with a single Data
Previously updated : 09/15/2023 Last updated : 09/06/2024 # Use multiple HDInsight clusters with an Azure Data Lake Storage account
hdinsight Hdinsight Plan Virtual Network Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-plan-virtual-network-deployment.md
description: Learn how to plan an Azure Virtual Network deployment to connect HD
Previously updated : 09/15/2023 Last updated : 09/06/2024 # Plan a virtual network for Azure HDInsight
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 09/02/2024 Last updated : 09/06/2024 # Archived release notes
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 08/30/2024 Last updated : 09/06/2024 # Azure HDInsight release notes
hdinsight Hdinsight Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-guide.md
Title: Azure HDInsight troubleshooting guides
description: Troubleshoot Azure HDInsight. Step-by-step documentation shows you how to use HDInsight to solve common problems with Apache Hive, Apache Spark, Apache YARN, Apache HBase, and HDFS. Previously updated : 09/19/2023 Last updated : 09/06/2024 # Troubleshoot Azure HDInsight
hdinsight Apache Hive Warehouse Connector Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-operations.md
Previously updated : 08/13/2023 Last updated : 09/06/2024 # Apache Spark operations supported by Hive Warehouse Connector in Azure HDInsight
hdinsight Apache Hive Warehouse Connector Zeppelin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-zeppelin.md
Previously updated : 09/27/2023 Last updated : 09/06/2024 # Integrate Apache Zeppelin with Hive Warehouse Connector in Azure HDInsight
hdinsight Hive Default Metastore Export Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-default-metastore-export-import.md
Previously updated : 09/15/2023 Last updated : 09/06/2024 # Migrate default Hive metastore DB to external metastore DB
hdinsight Hive Migration Across Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-migration-across-storage-accounts.md
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Hive workload migration to new account in Azure Storage
hdinsight Hive Warehouse Connector Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-warehouse-connector-apis.md
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Hive Warehouse Connector APIs in Azure HDInsight
hdinsight Hive Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-workload-management.md
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Hive LLAP Workload Management (WLM) feature
hdinsight Interactive Query Troubleshoot Hive Logs Diskspace Full Headnodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-hive-logs-diskspace-full-headnodes.md
Previously updated : 09/18/2023 Last updated : 09/06/2024 # Scenario: Apache Hive logs are filling up the disk space on the head nodes in Azure HDInsight
hdinsight Troubleshoot Workload Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/troubleshoot-workload-management-issues.md
Previously updated : 09/14/2023 Last updated : 09/06/2024 # Troubleshoot Hive LLAP Workload Management issues
hdinsight Apache Kafka Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-performance-tuning.md
description: Provides an overview of techniques for optimizing Apache Kafka work
Previously updated : 09/15/2023 Last updated : 09/06/2024 # Performance optimization for Apache Kafka HDInsight clusters
hdinsight Apache Kafka Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-bicep.md
Previously updated : 09/15/2023 Last updated : 09/06/2024 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
hdinsight Connect Kafka With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/connect-kafka-with-vnet.md
Title: Connect HDInsight Kafka cluster with client VM in different VNet on Azure
description: Learn how to connect HDInsight Kafka cluster with Client VM in different VNet on Azure HDInsight Previously updated : 08/13/2023 Last updated : 09/06/2024 # Connect HDInsight Kafka cluster with client VM in different VNet
hdinsight Kafka Troubleshoot Full Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/kafka-troubleshoot-full-disk.md
Previously updated : 09/19/2023 Last updated : 09/06/2024 # Scenario: Brokers are unhealthy or can't restart due to disk space full issue
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
Previously updated : 08/12/2024 Last updated : 09/06/2024 # Log Analytics migration guide for Azure HDInsight clusters
hdinsight Network Virtual Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/network-virtual-appliance.md
Title: Configure network virtual appliance in Azure HDInsight
description: Learn how to configure extra features for your network virtual appliance in Azure HDInsight. Previously updated : 09/20/2023 Last updated : 09/06/2024 # Configure network virtual appliance in Azure HDInsight
hdinsight Optimize Hive Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/optimize-hive-ambari.md
Title: Optimize Apache Hive with Apache Ambari in Azure HDInsight
description: Use the Apache Ambari web UI to configure and optimize Apache Hive. Previously updated : 09/15/2023 Last updated : 09/06/2024 # Optimize Apache Hive with Apache Ambari in Azure HDInsight
hdinsight Selective Logging Analysis Azure Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/selective-logging-analysis-azure-logs.md
description: Learn how to use the selective logging feature with a script action
Previously updated : 09/13/2023 Last updated : 09/06/2024 # Use selective logging with a script action in Azure HDInsight
hdinsight Selective Logging Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/selective-logging-analysis.md
description: Learn how to use the selective logging feature with a script action
Previously updated : 08/05/2024 Last updated : 09/06/2024 # Use selective logging with a script action for Azure Monitor Agent (AMA) in Azure HDInsight
hdinsight Share Hive Metastore With Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/share-hive-metastore-with-synapse.md
description: Learn how to share existing Azure HDInsight external Hive Metastore
keywords: external Hive metastore,share,Synapse Previously updated : 08/16/2024 Last updated : 09/06/2024 # Share Hive Metastore with Synapse Spark Pool (Preview)
hdinsight Apache Azure Spark History Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-azure-spark-history-server.md
description: Use the extended features in the Apache Spark History Server to deb
Previously updated : 09/13/2023 Last updated : 09/06/2024 # Use the extended features of the Apache Spark History Server to debug and diagnose Spark applications
hdinsight Apache Spark Ipython Notebook Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-ipython-notebook-machine-learning.md
description: Tutorial - Step-by-step instructions on how to build Apache Spark m
Previously updated : 09/14/2023 Last updated : 09/06/2024 # Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to create a simple machine learning Spark application.
The application uses the sample **HVAC.csv** data that is available on all clust
## Develop a Spark machine learning application using Spark MLlib
-This application uses a Spark [ML pipeline](https://spark.apache.org/docs/2.2.0/ml-pipeline.html) to do a document classification. ML Pipelines provide a uniform set of high-level APIs built on top of DataFrames. The DataFrames help users create and tune practical machine learning pipelines. In the pipeline, you split the document into words, convert the words into a numerical feature vector, and finally build a prediction model using the feature vectors and labels. Do the following steps to create the application.
+This application uses a Spark [ML pipeline](https://downloads.apache.org/spark/docs/3.3.1/ml-pipeline.html) to perform document classification. ML Pipelines provide a uniform set of high-level APIs built on top of DataFrames, which help users create and tune practical machine learning pipelines. In the pipeline, you split each document into words, convert the words into a numerical feature vector, and finally build a prediction model using the feature vectors and labels. Follow these steps to create the application.
1. Create a Jupyter Notebook using the PySpark kernel. For the instructions, see [Create a Jupyter Notebook file](./apache-spark-jupyter-spark-sql.md#create-a-jupyter-notebook-file).
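To give a sense of the pipeline's overall shape, here's a minimal PySpark sketch with toy data and illustrative column names (not the tutorial's HVAC sample): a tokenizer, a feature hasher, and a logistic regression estimator chained into one pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()

# Toy training data: (id, document text, label). Column names are illustrative only.
training = spark.createDataFrame(
    [(0, "error in hvac unit", 1.0), (1, "system operating normally", 0.0)],
    ["id", "text", "label"],
)

# Split each document into words, hash the words into a feature vector,
# then fit a logistic regression model on the features and labels.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashing_tf = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.01)

model = Pipeline(stages=[tokenizer, hashing_tf, lr]).fit(training)
model.transform(training).select("text", "prediction").show()
```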
hdinsight Apache Spark Job Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-job-debugging.md
description: Use YARN UI, Spark UI, and Spark History server to track and debug
Previously updated : 08/13/2023 Last updated : 09/06/2024 # Debug Apache Spark jobs running on Azure HDInsight
hdinsight Apache Spark Jupyter Spark Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md
Title: 'Quickstart: Create Apache Spark cluster using template - Azure HDInsight' description: This quickstart shows how to use Resource Manager template to create an Apache Spark cluster in Azure HDInsight, and run a Spark SQL query. Previously updated : 09/15/2023 Last updated : 09/06/2024
hdinsight Apache Spark Jupyter Spark Use Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-use-bicep.md
Title: 'Quickstart: Create Apache Spark cluster using Bicep - Azure HDInsight'
description: This quickstart shows how to use Bicep to create an Apache Spark cluster in Azure HDInsight, and run a Spark SQL query. Previously updated : 09/15/2023 Last updated : 09/06/2024
hdinsight Apache Spark Perf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-perf.md
Title: Optimize Spark jobs for performance - Azure HDInsight
description: Show common strategies for the best performance of Apache Spark clusters in Azure HDInsight. Previously updated : 09/15/2023 Last updated : 09/06/2024 # Optimize Apache Spark applications in HDInsight
hdinsight Apache Spark Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-shell.md
description: An interactive Spark Shell provides a read-execute-print process fo
Previously updated : 09/13/2023 Last updated : 09/06/2024 # Run Apache Spark from the Spark Shell
hdinsight Apache Spark Structured Streaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-structured-streaming-overview.md
description: How to use Spark Structured Streaming applications on HDInsight Spa
Previously updated : 09/14/2023 Last updated : 09/05/2024 # Overview of Apache Spark Structured Streaming
This query yields results similar to the following:
|{u'start': u'2016-07-26T07:00:00.000Z', u'end'... |95 | 96.980971 | 99 | |{u'start': u'2016-07-26T08:00:00.000Z', u'end'... |95 | 96.965997 | 99 |
-For details on the Spark Structured Stream API, along with the input data sources, operations, and output sinks it supports, see [Apache Spark Structured Streaming Programming Guide](https://spark.apache.org/docs/2.1.0/structured-streaming-programming-guide.html).
+For details on the Spark Structured Streaming API, along with the input data sources, operations, and output sinks it supports, see the [Apache Spark Structured Streaming Programming Guide](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html).
## Checkpointing and write-ahead logs
The status of all applications can also be checked with a GET request against a
## Next steps * [Create an Apache Spark cluster in HDInsight](../hdinsight-hadoop-create-linux-clusters-portal.md)
-* [Apache Spark Structured Streaming Programming Guide](https://spark.apache.org/docs/2.1.0/structured-streaming-programming-guide.html)
+* [Apache Spark Structured Streaming Programming Guide](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html)
* [Launch Apache Spark jobs remotely with Apache LIVY](apache-spark-livy-rest-interface.md)
hdinsight Apache Spark Troubleshoot Illegalargumentexception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-illegalargumentexception.md
Title: IllegalArgumentException error for Apache Spark - Azure HDInsight
description: IllegalArgumentException for Apache Spark activity in Azure HDInsight for Azure Data Factory Previously updated : 09/19/2023 Last updated : 09/06/2024 # Scenario: IllegalArgumentException for Apache Spark activity in Azure HDInsight
hdinsight Apache Spark Troubleshoot Rpctimeoutexception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-rpctimeoutexception.md
Title: RpcTimeoutException for Apache Spark thrift - Azure HDInsight
description: You see 502 errors when processing large data sets using Apache Spark thrift server Previously updated : 09/15/2023 Last updated : 09/06/2024 # Scenario: RpcTimeoutException for Apache Spark thrift server in Azure HDInsight
hdinsight Optimize Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/optimize-data-storage.md
Title: Optimize data storage for Apache Spark - Azure HDInsight
description: Learn how to optimize data storage for use with Apache Spark on Azure HDInsight. Previously updated : 09/15/2023 Last updated : 09/06/2024 # Data storage optimization for Apache Spark
hdinsight Optimize Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/optimize-memory-usage.md
Title: Optimize memory usage in Apache Spark - Azure HDInsight
description: Learn how to optimize memory usage in Apache Spark on Azure HDInsight. Previously updated : 09/15/2023 Last updated : 09/06/2024 # Memory usage optimization for Apache Spark
hdinsight Troubleshoot Debug Wasb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/troubleshoot-debug-wasb.md
Title: Debug WASB file operations in Azure HDInsight
description: Describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. Previously updated : 09/19/2023 Last updated : 09/06/2024 # Debug WASB file operations in Azure HDInsight
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/carin-implementation-guide-blue-button-tutorial.md
Last updated 06/06/2022
# CARIN Implementation Guide for Blue Button&#174;
-In this tutorial, we'll walk through setting up the FHIR service in Azure Health Data Services (hereby called the FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [CARIN Implementation Guide for Blue Button](https://build.fhir.org/ig/HL7/carin-bb/https://docsupdatetracker.net/index.html) (C4BB IG).
+In this tutorial, we walk through setting up the FHIR&reg; service in Azure Health Data Services to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [CARIN Implementation Guide for Blue Button](https://build.fhir.org/ig/HL7/carin-bb/index.html) (C4BB IG).
## Touchstone capability statement
-The first test that we'll focus on is testing FHIR service against the [C4BB IG capability statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test against the FHIR service without any updates, the test will fail due to missing search parameters and missing profiles.
+We first focus on testing FHIR service against the [C4BB IG capability statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test against the FHIR service without any updates, the test fails due to missing search parameters and missing profiles.
### Define search parameters
-As part of the C4BB IG, you'll need to define three [new search parameters](how-to-do-custom-search.md) for the `ExplanationOfBenefit` resource. Two of these are tested in the capability statement (type and service-date), and one is needed for `_include` searches (insurer).
+As part of the C4BB IG, you'll need to define three [new search parameters](how-to-do-custom-search.md) for the `ExplanationOfBenefit` resource. Two of these (type and service-date) are tested in the capability statement, and one (insurer) is needed for `_include` searches.
* [type](https://build.fhir.org/ig/HL7/carin-bb/SearchParameter-explanationofbenefit-type.json) * [service-date](https://build.fhir.org/ig/HL7/carin-bb/SearchParameter-explanationofbenefit-service-date.json)
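As a rough sketch of how one of these could be created against the FHIR service's REST API, the following Python snippet posts a SearchParameter resource. The service URL, token, and abbreviated resource body are placeholders; the complete definitions come from the IG links above.

```python
import requests

# Placeholder values - use your FHIR service URL and a valid Microsoft Entra access token.
fhir_url = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
headers = {
    "Authorization": "Bearer <access-token>",
    "Content-Type": "application/fhir+json",
}

# Abbreviated SearchParameter resource; copy the full definition from the C4BB IG.
search_parameter = {
    "resourceType": "SearchParameter",
    "url": "http://hl7.org/fhir/us/carin-bb/SearchParameter/explanationofbenefit-type",
    "name": "type",
    "status": "active",
    "description": "The type of the ExplanationOfBenefit",
    "code": "type",
    "base": ["ExplanationOfBenefit"],
    "type": "token",
    "expression": "ExplanationOfBenefit.type",
}

response = requests.post(f"{fhir_url}/SearchParameter", json=search_parameter, headers=headers)
print(response.status_code)
```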
Outside of defining search parameters, the other update you need to make to pass
### Sample rest file
-To assist with creation of these search parameters and profiles, we have a [sample http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB.http) that includes all the steps outlined above in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone.
+To assist with creation of these search parameters and profiles, we have a [sample http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB.http) that includes all the steps previously outlined in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone.
:::image type="content" source="media/centers-medicare-services-tutorials/capability-test-script-execution-results.png" alt-text="Capability test script execution results."::: ## Touchstone read test
-After testing the capabilities statement, we'll test the [read capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/01-Read&activeOnly=false&contentEntry=TEST_SCRIPTS) of the FHIR service against the C4BB IG. This test is testing conformance against the eight profiles you loaded in the first test. You'll need to have resources loaded that conform to the profiles. The best path would be to test against resources that you already have in your database, but we also have an [http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB_Sample_Resources.http) available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
+After testing the capability statement, we'll test the [read capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/01-Read&activeOnly=false&contentEntry=TEST_SCRIPTS) of the FHIR service against the C4BB IG. This tests conformance against the eight profiles you loaded in the first test. You'll need to have resources loaded that conform to the profiles. The best path would be to test against resources that you already have in your database. We also have an [http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB_Sample_Resources.http) available with sample resources pulled from the examples in the IG that you can use to create the resources to test against.
:::image type="content" source="media/centers-medicare-services-tutorials/test-execution-results-touchstone.png" alt-text="Touchstone read test execution results."::: ## Touchstone EOB query test
-The next test we'll review is the [EOB query test](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/02-EOBQuery&activeOnly=false&contentEntry=TEST_SCRIPTS). If you've already completed the read test, you have all the data loaded that you'll need. This test validates that you can search for specific `Patient` and `ExplanationOfBenefit` resources using various parameters.
+The next test we'll review is the [EOB query test](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/02-EOBQuery&activeOnly=false&contentEntry=TEST_SCRIPTS). If you've already completed the read test, you already have all the data that you need loaded. This test validates that you can search for specific `Patient` and `ExplanationOfBenefit` resources using various parameters.
:::image type="content" source="media/centers-medicare-services-tutorials/test-execution-touchstone-eob-query-test.png" alt-text="Touchstone EOB query execution results."::: ## Touchstone error handling test
-The final test we'll walk through is testing [error handling](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/99-ErrorHandling&activeOnly=false&contentEntry=TEST_SCRIPTS). The only step you need to do is delete an ExplanationOfBenefit resource from your database and use the ID of the deleted `ExplanationOfBenefit` resource in the test.
+The final test we'll cover is testing [error handling](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/99-ErrorHandling&activeOnly=false&contentEntry=TEST_SCRIPTS). The only step is to delete an `ExplanationOfBenefit` resource from your database and then use the ID of the deleted resource in the test.
:::image type="content" source="media/centers-medicare-services-tutorials/test-execution-touchstone-error-handling.png" alt-text="Touchstone EOB error handling results.":::
In this tutorial, we walked through how to pass the CARIN IG for Blue Button tes
>[!div class="nextstepaction"] >[DaVinci Drug Formulary](davinci-drug-formulary-tutorial.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
-
+
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/centers-for-medicare-tutorial-introduction.md
Last updated 06/06/2022
# Introduction: Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule
-In this series of tutorials, we'll cover a high-level summary of the Center for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule, and the technical requirements outlined in this rule. We'll walk through the various implementation guides referenced for this rule. We'll also provide details on how to configure FHIR service in Azure Health Data Services (hereby called FHIR service) to support these implementation guides.
+This series of tutorials covers a high-level summary of the Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule, and the technical requirements outlined in this rule. We walk through various implementation guides referenced for this rule. We also provide details on how to configure the FHIR&reg; service in Azure Health Data Services to support these implementation guides.
## Rule overview
-The CMS released the [Interoperability and Patient Access rule](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index) on May 1, 2020. This rule requires free and secure data flow between all parties involved in patient care (patients, providers, and payers) to allow patients to access their health information when they need it. Interoperability has plagued the healthcare industry for decades, resulting in siloed data that causes negative health outcomes with higher and unpredictable costs for care. CMS is using their authority to regulate Medicare Advantage (MA), Medicaid, Children's Health Insurance Program (CHIP), and Qualified Health Plan (QHP) issuers on the Federally Facilitated Exchanges (FFEs) to enforce this rule.
+The CMS released the [Interoperability and Patient Access rule](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index) on May 1, 2020. This rule requires free and secure data flow between all parties involved in patient care (patients, providers, and payers) to allow patients to access their health information. Interoperability has plagued the healthcare industry for decades, resulting in siloed data that causes negative health outcomes with higher and unpredictable costs for care. CMS is using their authority to regulate Medicare Advantage (MA), Medicaid, Children's Health Insurance Program (CHIP), and Qualified Health Plan (QHP) issuers on the Federally Facilitated Exchanges (FFEs) to enforce this rule.
In August 2020, CMS detailed how organizations can meet the mandate. To ensure that data can be exchanged securely and in a standardized manner, CMS identified FHIR version release 4 (R4) as the foundational standard required for the data exchange. There are three main pieces to the Interoperability and Patient Access ruling:
-* **Patient Access API (Required July 1, 2021)** ΓÇô CMS-regulated payers (as defined above) are required to implement and maintain a secure, standards-based API that allows patients to easily access their claims and encounter information, including cost, as well as a defined subset of their clinical information through third-party applications of their choice.
+* **Patient Access API (Required July 1, 2021)** – CMS-regulated payers (as previously defined) are required to implement and maintain a secure, standards-based API that allows patients to easily access their claims and encounter information, including cost, as well as a defined subset of their clinical information through third-party applications of their choice.
-* **Provider Directory API (Required July 1, 2021)** ΓÇô CMS-regulated payers are required by this portion of the rule to make provider directory information publicly available via a standards-based API. Through making this information available, third-party application developers will be able to create services that help patients find providers for specific care needs and clinicians find other providers for care coordination.
+* **Provider Directory API (Required July 1, 2021)** – CMS-regulated payers are required by this portion of the rule to make provider directory information publicly available via a standards-based API. Through making this information available, third-party application developers will be able to create services that help patients find providers for specific care needs, and clinicians find other providers for care coordination.
-* **Payer-to-Payer Data Exchange (Originally required Jan 1, 2022 - [Currently Delayed](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index))** ΓÇô CMS-regulated payers are required to exchange certain patient clinical data at the patientΓÇÖs request with other payers. While there's no requirement to follow any kind of standard, applying FHIR to exchange this data is encouraged.
+* **Payer-to-Payer Data Exchange (Originally required Jan 1, 2022 - [Currently Delayed](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index))** – CMS-regulated payers are, at the patient’s request, required to exchange certain patient clinical data with other payers. While there's no requirement to follow any kind of standard, applying FHIR to exchange this data is encouraged.
## Key FHIR concepts
-As mentioned above, FHIR R4 is required to meet this mandate. In addition, there have been several implementation guides developed that provide guidance for the rule. [Implementation guides](https://www.hl7.org/fhir/implementationguide.html) provide extra context on top of the base FHIR specification. This includes defining additional search parameters, profiles, extensions, operations, value sets, and code systems.
+As mentioned previously, FHIR R4 is required to meet this mandate. In addition, several implementation guides have been developed that provide guidance for the rule. [Implementation guides](https://www.hl7.org/fhir/implementationguide.html) provide extra context on top of the base FHIR specification, including definitions of additional search parameters, profiles, extensions, operations, value sets, and code systems.
-The FHIR service has the following capabilities to help you configure your database for the various implementation guides:
+The FHIR service has the following capabilities to help you configure your database for the various implementation guides.
* [Support for RESTful interactions](fhir-features-supported.md) * [Storing and validating profiles](validation-against-profiles.md)
The FHIR service has the following capabilities to help you configure your datab
The Patient Access API describes adherence to four FHIR implementation guides: * [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they're referenced.
-* [HL7 FHIR Da Vinci PDex IG](http://hl7.org/fhir/us/davinci-pdex/STU1/https://docsupdatetracker.net/index.html): The Payer Data Exchange Implementation Guide (PDex IG) is focused on ensuring that payers provide all relevant patient clinical data to meet the requirements for the Patient Access API. This uses the US Core profiles on R4 Resources and includes (at a minimum) encounters, providers, organizations, locations, dates of service, diagnoses, procedures, and observations. While this data may be available in FHIR format, it may also come from other systems in the format of claims data, HL7 V2 messages, and C-CDA documents.
-* [HL7 US Core IG](https://www.hl7.org/fhir/us/core/toc.html): The HL7 US Core Implementation Guide (US Core IG) is the backbone for the PDex IG described above. While the PDex IG limits some resources even further than the US Core IG, many resources just follow the standards in the US Core IG.
-
-* [HL7 FHIR Da Vinci - PDex US Drug Formulary IG](http://hl7.org/fhir/us/Davinci-drug-formulary/https://docsupdatetracker.net/index.html): Part D Medicare Advantage plans have to make formulary information available via the Patient API. They do this using the PDex US Drug Formulary Implementation Guide (USDF IG). The USDF IG defines a FHIR interface to a health insurer’s drug formulary information, which is a list of brand-name and generic prescription drugs that a health insurer agrees to pay for. The main use case of this is so that patients can understand if there are alternative drug available to one that has been prescribed to them and to compare drug costs.
+* [HL7 FHIR Da Vinci PDex IG](http://hl7.org/fhir/us/davinci-pdex/STU1/index.html): The Payer Data Exchange Implementation Guide (PDex IG) is focused on ensuring that payers provide all relevant patient clinical data to meet the requirements for the Patient Access API. This uses the US Core profiles on R4 Resources, and includes (at a minimum) encounters, providers, organizations, locations, dates of service, diagnoses, procedures, and observations. While this data may be available in FHIR format, it may also come from other systems in the format of claims data, HL7 V2 messages, and C-CDA documents.
+* [HL7 US Core IG](https://www.hl7.org/fhir/us/core/toc.html): The HL7 US Core Implementation Guide (US Core IG) is the backbone for the PDex IG previously described. While the PDex IG limits some resources even further than the US Core IG, many resources just follow the standards in the US Core IG.
+* [HL7 FHIR Da Vinci - PDex US Drug Formulary IG](http://hl7.org/fhir/us/Davinci-drug-formulary/index.html): Part D Medicare Advantage plans have to make formulary information available via the Patient API. They do this using the PDex US Drug Formulary Implementation Guide (USDF IG). The USDF IG defines a FHIR interface to a health insurer’s drug formulary information, which is a list of brand-name and generic prescription drugs that a health insurer agrees to pay for. The main use case is to let patients determine whether an alternative to a prescribed drug is available, and to compare drug costs.
## Provider Directory API Implementation Guide
The Provider Directory API describes adherence to one implementation guide:
## Touchstone
-To test adherence to the various implementation guides, [Touchstone](https://touchstone.aegis.net/touchstone/) is a great resource. Throughout the upcoming tutorials, we'll focus on ensuring that the FHIR service is configured to successfully pass various Touchstone tests. The Touchstone site has a lot of great documentation to help you get up and running.
+[Touchstone](https://touchstone.aegis.net/touchstone/) is a great resource for testing adherence to the various implementation guides. Throughout the upcoming tutorials, we focus on ensuring that the FHIR service is configured to successfully pass various Touchstone tests. The Touchstone site has a lot of great documentation to help you get up and running.
## Next steps
-Now that you have a basic understanding of the Interoperability and Patient Access rule, implementation guides, and available testing tool (Touchstone), we'll walk through setting up FHIR service for the CARIN IG for Blue Button.
+Now that you have a basic understanding of the Interoperability and Patient Access rule, implementation guides, and available testing tool (Touchstone), we next walk through setting up FHIR service for the CARIN IG for Blue Button.
>[!div class="nextstepaction"] >[CARIN Implementation Guide for Blue Button](carin-implementation-guide-blue-button-tutorial.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-cross-origin-resource-sharing.md
## What is cross-origin resource sharing in FHIR service?
-FHIR service in Azure Health Data Services (hereby called FHIR service) supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
+FHIR&reg; service in Azure Health Data Services supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
CORS is often used in a single-page app that must call a RESTful API to a different domain. ## Cross-origin resource sharing configuration settings
-To configure a CORS setting in the FHIR service, specify the following settings:
+To configure a CORS setting in the FHIR service, specify the following settings.
- **Origins (Access-Control-Allow-Origin)**. A list of domains allowed to make cross-origin requests to the FHIR service. Each domain (origin) must be entered in a separate line. You can enter an asterisk (*) to allow calls from any domain, but we don't recommend it because it's a security risk. -- **Headers (Access-Control-Allow-Headers)**. A list of headers that the origin request will contain. To allow all headers, enter an asterisk (*).
+- **Headers (Access-Control-Allow-Headers)**. A list of headers that the origin request contains. To allow all headers, enter an asterisk (*).
- **Methods (Access-Control-Allow-Methods)**. The allowed methods (PUT, GET, POST, and so on) in an API call. Choose **Select all** for all methods.
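Once the settings are saved, one quick way to check them is to simulate a browser preflight request and inspect the CORS response headers. A minimal Python sketch, with a placeholder service URL and origin:

```python
import requests

# Placeholder values - use your FHIR service URL and an origin you configured.
fhir_url = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"

# Simulate a browser preflight (OPTIONS) request from an allowed origin.
response = requests.options(
    f"{fhir_url}/Patient",
    headers={
        "Origin": "https://myapp.contoso.com",
        "Access-Control-Request-Method": "GET",
        "Access-Control-Request-Headers": "authorization",
    },
)
print(response.status_code)
print(response.headers.get("Access-Control-Allow-Origin"))
print(response.headers.get("Access-Control-Allow-Methods"))
```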
In this tutorial, we walked through how to configure a CORS setting in the FHIR
>[!div class="nextstepaction"] >[CARIN Implementation Guide for Blue Button&#174;](carin-implementation-guide-blue-button-tutorial.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-customer-managed-keys.md
# Configure customer-managed keys for the FHIR service
-By using customer-managed keys (CMK), you can protect and control access to your organization's data with keys that you create and manage. You use [Azure Key Vault](/azure/key-vault/) to create and manage CMK and then use the keys to encrypt the data stored by the FHIR&reg; service.
+By using customer-managed keys (CMK), you can protect and control access to your organization's data with keys that you create and manage. You use [Azure Key Vault](/azure/key-vault/) to create and manage CMK, and then use the keys to encrypt the data stored by the FHIR&reg; service.
Customer-managed keys enable you to:
Customer-managed keys enable you to:
- Verify you're assigned the [Azure Contributor](../../role-based-access-control/role-assignments-steps.md) RBAC role, which lets you create and modify Azure resources. -- Add a key for the FHIR service in Azure Key Vault. For steps, see [Add a key in Azure Key Vault](/azure/key-vault/keys/quick-create-portal#add-a-key-to-key-vault). Customer-managed keys must meet these requirements:
+- Add a key for the FHIR service in Azure Key Vault. For steps, see [Add a key in Azure Key Vault](/azure/key-vault/keys/quick-create-portal#add-a-key-to-key-vault). Customer-managed keys must meet the following requirements.
- The key is versioned.
Customer-managed keys enable you to:
- When using a key vault with a firewall to disable public access, the option to **Allow trusted Microsoft services to bypass this firewall** must be enabled.
- - To prevent losing the encryption key for the FHIR service, the key vault or managed HSM must have **soft delete** and **purge protection** enabled. These features allow you to recover deleted keys for a certain time (default 90 days) and block permanent deletion until that time is over.
+ - To prevent losing the encryption key for the FHIR service, the key vault or managed HSM must have **soft delete** and **purge protection** enabled. These features allow you to recover deleted keys for a certain time (default is 90 days) and block permanent deletion until that time is over.
> [!NOTE] > The FHIR service supports attaching one identity type (either a system-assigned or user-assigned identity). Changing the identity type might impact background jobs such as export and import if the identity type is already mapped.
After you add the key, you need to update the FHIR service with the key URL.
:::image type="content" source="media/configure-customer-managed-keys/key-vault-url.png" alt-text="Screenshot showing the key version details and the copy action for the Key Identifier." lightbox="media/configure-customer-managed-keys/key-vault-url.png":::
-You update the key for the FHIR service by using the Azure portal or an ARM template. During the update, you choose whether to use a system-assigned or user-assigned managed identity. For a system-assigned managed identity, make sure to assign the **Key Vault Crypto Service Encryption User** role. For more information, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+To update the key for the FHIR service, use the Azure portal or an ARM template. During the update, choose whether to use a system-assigned or user-assigned managed identity. For a system-assigned managed identity, make sure to assign the **Key Vault Crypto Service Encryption User** role. For more information, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
### Update the key by using the Azure portal
You update the key for the FHIR service by using the Azure portal or an ARM temp
1. Select **Customer-managed key** for the Encryption type.
-1. Select a key vault and key or enter the Key URI for the key that was created previously.
+1. Select a key vault and key, or enter the Key URI for the key that was created previously.
-1. Select an identity type, either System-assigned or User-assigned, that matches the type of managed identity configured previously.
+1. Select an identity type, either System-assigned or User-assigned, that matches the type of managed identity previously configured.
1. Select **Save** to update the FHIR service to use the customer-managed key.
If you use a user-assigned managed identity with the FHIR service, you can confi
2. Choose **Next: Security**.
- :::image type="content" source="media/configure-customer-managed-keys/deploy-name.png" alt-text="Screenshot of the Create FHIR service view with the FHIR service name filled in." lightbox="media/configure-customer-managed-keys/deploy-name.png":::
+ :::image type="content" source="media/configure-customer-managed-keys/deploy-name.png" alt-text="Screenshot of the Created FHIR service view with the FHIR service name filled in." lightbox="media/configure-customer-managed-keys/deploy-name.png":::
3. On the **Security** tab, in the **Encryption section** select **Customer-managed key**.
For the FHIR service to operate properly, it must always have access to the key
- The FHIR service system-assigned managed identity loses access to the key vault.
-In any scenario where the FHIR service can't access the key, API requests return with `500` errors and the data is inaccessible until access to the key is restored.
+In any scenario where the FHIR service can't access the key, API requests return `500` errors and the data is inaccessible until access to the key is restored.
If key access is lost, ensure you updated the key and required resources so they're accessible by the FHIR service. ## Resolve common errors
-Common errors that cause databases to become inaccessible are usually due to configuration issues. For more information, see [Common errors with customer-managed keys](/sql/relational-databases/security/encryption/troubleshoot-tde).
+Common errors that cause databases to become inaccessible are often due to configuration issues. For more information, see [Common errors with customer-managed keys](/sql/relational-databases/security/encryption/troubleshoot-tde).
[!INCLUDE [FHIR trademark statement](../includes/healthcare-apis-fhir-trademark.md)]
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
# Configure export settings and set up a storage account
-The FHIR service supports the `$export` operation [specified by HL7](https://www.hl7.org/fhir/uv/bulkdata/) for exporting FHIR data from a FHIR server. In the FHIR service implementation, calling the `$export` endpoint causes the FHIR service to export data into a pre-configured Azure storage account.
+The FHIR&reg; service supports the `$export` operation [specified by HL7](https://www.hl7.org/fhir/uv/bulkdata/) for exporting FHIR data from a FHIR server. In the FHIR service implementation, calling the `$export` endpoint causes the FHIR service to export data into a pre-configured Azure storage account.
-Ensure you are granted with application role - 'FHIR Data exporter role' prior to configuring export. To understand more on application roles, see [Authentication and Authorization for FHIR service](../../healthcare-apis/authentication-authorization.md).
+Ensure you're granted the 'FHIR Data Exporter' application role before configuring export. To learn more about application roles, see [Authentication and Authorization for FHIR service](../../healthcare-apis/authentication-authorization.md).
-Three steps in setting up the `$export` operation for the FHIR service-
+There are three steps in setting up the `$export` operation for the FHIR service.
- Enable a managed identity for the FHIR service. - Configure a new or existing Azure Data Lake Storage Gen2 (ADLS Gen2) account and give permission for the FHIR service to access the account.
Three steps in setting up the `$export` operation for the FHIR service-
## Enable managed identity for the FHIR service
-The first step in configuring your environment for FHIR data export is to enable a system-wide managed identity for the FHIR service. This managed identity is used to authenticate the FHIR service to allow access to the ADLS Gen2 account during an `$export` operation. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+The first step in configuring your environment for FHIR data export is to enable a system-wide managed identity for the FHIR service. This managed identity is used to authenticate the FHIR service, allowing access to the ADLS Gen2 account during an `$export` operation. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
-In this step, browse to your FHIR service in the Azure portal and select the **Identity** blade. Set the **Status** option to **On**, and then click **Save**. When the **Yes** and **No** buttons display, select **Yes** to enable the managed identity for the FHIR service. Once the system identity has been enabled, you'll see an **Object (principal) ID** value for your FHIR service.
+In this step, browse to your FHIR service in the Azure portal and select **Identity**. Set the **Status** option to **On**, and then click **Save**. When the **Yes** and **No** buttons display, select **Yes** to enable the managed identity for the FHIR service. Once the system identity has been enabled, you'll see an **Object (principal) ID** value for your FHIR service.
[![Enable Managed Identity](media/export-data/fhir-mi-enabled.png)](media/export-data/fhir-mi-enabled.png#lightbox)
In this step, browse to your FHIR service in the Azure portal and select the **I
6. Select your Azure subscription.
-7. Select **System-assigned managed identity**, and then select the managed identity that you enabled earlier for your FHIR service.
+7. Select **System-assigned managed identity**, and then select the managed identity that you previously enabled for your FHIR service.
8. On the **Review + assign** tab, click **Review + assign** to assign the **Storage Blob Data Contributor** role to your FHIR service.
Now you're ready to configure the FHIR service by setting the ADLS Gen2 account
## Specify the storage account for FHIR service export
-The final step is to specify the ADLS Gen2 account that the FHIR service uses when exporting data.
+The final step is to specify the ADLS Gen2 account the FHIR service uses when exporting data.
> [!NOTE] > In the storage account, if you haven't assigned the **Storage Blob Data Contributor** role to the FHIR service, the `$export` operation will fail. 1. Go to your FHIR service settings.
-2. Select the **Export** blade.
+2. Select **Export**.
3. Select the name of the storage account from the list. If you need to search for your storage account, use the **Name**, **Resource group**, or **Region** filters. [![Screen shot showing user interface of FHIR Export Storage.](media/export-data/fhir-export-storage.png)](media/export-data/fhir-export-storage.png#lightbox)
-After you've completed this final configuration step, you're ready to export data from the FHIR service. See [How to export FHIR data](./export-data.md) for details on performing `$export` operations with the FHIR service.
+After you've completed this configuration step, you're ready to export data from the FHIR service. See [How to export FHIR data](./export-data.md) for details on performing `$export` operations with the FHIR service.
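To illustrate what a kickoff request looks like, here's a minimal Python sketch following the HL7 bulk data pattern, with a placeholder service URL and token; see [How to export FHIR data](./export-data.md) for the supported parameters.

```python
import requests

# Placeholder values - use your FHIR service URL and a token for an identity
# that has been granted the FHIR Data Exporter role.
fhir_url = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
headers = {
    "Authorization": "Bearer <access-token>",
    "Accept": "application/fhir+json",
    "Prefer": "respond-async",
}

# Kick off a system-level export; the service replies 202 Accepted with a
# Content-Location header pointing at the status URL to poll for the export job.
response = requests.get(f"{fhir_url}/$export", headers=headers)
print(response.status_code)
print(response.headers.get("Content-Location"))
```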
> [!NOTE] > Only storage accounts in the same subscription as the FHIR service are allowed to be registered as the destination for `$export` operations. ## Securing the FHIR service `$export` operation
-For securely exporting from the FHIR service to an ADLS Gen2 account, there are two main options:
+For securely exporting from the FHIR service to an ADLS Gen2 account, there are two options:
* Allowing the FHIR service to access the storage account as a Microsoft Trusted Service.
-* Allowing specific IP addresses associated with the FHIR service to access the storage account.
-This option permits two different configurations depending on whether or not the storage account is in the same Azure region as the FHIR service.
+* Allowing specific IP addresses associated with the FHIR service to access the storage account. This option permits two different configurations depending on whether or not the storage account is in the same Azure region as the FHIR service.
### Allowing FHIR service as a Microsoft Trusted Service
-Go to your ADLS Gen2 account in the Azure portal and select the **Networking** blade. Select **Enabled from selected virtual networks and IP addresses** under the **Firewalls and virtual networks** tab.
+Go to your ADLS Gen2 account in the Azure portal and select **Networking**. Select **Enabled from selected virtual networks and IP addresses** under the **Firewalls and virtual networks** tab.
:::image type="content" source="media/export-data/storage-networking-1.png" alt-text="Screenshot of Azure Storage Networking Settings." lightbox="media/export-data/storage-networking-1.png":::
Under the **Exceptions** section, select the box **Allow Azure services on the t
:::image type="content" source="media/export-data/exceptions.png" alt-text="Allow trusted Microsoft services to access this storage account.":::
-Next, run the following PowerShell command to install the `Az.Storage` PowerShell module in your local environment. This allows you to configure your Azure storage account(s) using PowerShell.
+Next, run the following PowerShell command to install the `Az.Storage` PowerShell module in your local environment. This allows you to configure your Azure storage accounts using PowerShell.
```PowerShell
Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force
```
-Now, use the PowerShell command below to set the selected FHIR service instance as a trusted resource for the storage account. Make sure that all listed parameters are defined in your PowerShell environment.
+Now, use the following PowerShell command to set the selected FHIR service instance as a trusted resource for the storage account. Make sure that all listed parameters are defined in your PowerShell environment.
You'll need to run the `Add-AzStorageAccountNetworkRule` command as an administrator in your local environment. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md).
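As a rough sketch of that call, the example below uses the resource instance rule parameters of `Add-AzStorageAccountNetworkRule`. Every value is a placeholder, and the FHIR service resource ID format assumes a FHIR service deployed in an Azure Health Data Services workspace; verify both against your environment before running the command.

```PowerShell
# Placeholder values; replace with your subscription, tenant, resource group, and resource names.
$subscriptionId    = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$tenantId          = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$resourceGroupName = "my-resource-group"
$storageAccount    = "mystorageaccount"

# Assumed resource ID format for a FHIR service in an Azure Health Data Services workspace.
$fhirResourceId = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.HealthcareApis/workspaces/myworkspace/fhirservices/myfhirservice"

# Add the FHIR service as a resource instance (trusted resource) rule on the storage account.
Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $storageAccount `
    -TenantId $tenantId -ResourceId $fhirResourceId
```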
The storage account is on selected networks and isn't publicly accessible. To se
## Next steps
-In this article, you learned about the three steps in configuring your environment to allow export of data from your FHIR service to an Azure storage account. For more information about Bulk Export capabilities in the FHIR service, see
+In this article, you learned about the three steps in configuring your environment to allow export of data from your FHIR service to an Azure storage account. For more information about Bulk Export capabilities in the FHIR service, see the following.
>[!div class="nextstepaction"] >[How to export FHIR data](export-data.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-identity-providers.md
# Configure multiple service identity providers
-In addition to [Microsoft Entra ID](/entra/fundamentals/whatis), you can configure up to two additional identity providers for a FHIR service, whether the service already exists or is newly created.
+In addition to [Microsoft Entra ID](/entra/fundamentals/whatis), you can configure up to two additional identity providers for a FHIR&reg; service, whether the service already exists or is newly created.
## Identity providers prerequisite

Identity providers must support OpenID Connect (OIDC), and must be able to issue JSON Web Tokens (JWT) with a `fhirUser` claim, an `azp` or `appid` claim, and an `scp` claim with [SMART on FHIR v1 Scopes](https://www.hl7.org/fhir/smart-app-launch/1.0.0/scopes-and-launch-context/index.html#scopes-for-requesting-clinical-data).
Add the `smartIdentityProviders` element to the FHIR service `authenticationConf
#### Configure the `smartIdentityProviders` array
-If you don't need any identity providers besides Microsoft Entra ID, set the `smartIdentityProviders` array to null, or omit it from the provisioning request. Otherwise, include at least one valid identity provider configuration object in the array. You can configure up to two additional identity providers.
+If you don't need any identity providers besides Microsoft Entra ID, set the `smartIdentityProviders` array to null, or omit it from the provisioning request. Otherwise, include at least one valid identity provider configuration object in the array. You can configure up to two additional identity providers.
#### Specify the `authority`
https://yourIdentityProvider.com/authority/v2.0/.well-known/openid-configuration
#### Configure the `applications` array
-You must include at least one application configuration and can add upto 25 applications in the `applications` array. Each application configuration has values that validate access token claims and an array that defines the permissions for the application to access FHIR resources.
+You must include at least one application configuration and can add up to 25 applications in the `applications` array. Each application configuration has values that validate access token claims, and an array that defines the permissions for the application to access FHIR resources.
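To make the shape of the configuration concrete, here's a hedged sketch that patches a FHIR service with one additional identity provider containing a single application entry, using Az PowerShell. The `clientId`, `audience`, and `allowedDataActions` values are explained in the following sections; the authority URL, IDs, resource path, and API version are placeholders to replace with values from your own environment (use an API version that supports `smartIdentityProviders`).

```PowerShell
# Placeholder resource path; substitute your subscription, resource group, workspace, service name,
# and an API version that supports the smartIdentityProviders element.
$path = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.HealthcareApis/workspaces/<workspace>/fhirservices/<fhir-service>?api-version=<api-version>"

# Sketch of the smartIdentityProviders configuration with one application entry.
$payload = @"
{
  "properties": {
    "authenticationConfiguration": {
      "smartIdentityProviders": [
        {
          "authority": "https://yourIdentityProvider.com/authority/v2.0",
          "applications": [
            {
              "clientId": "00000000-0000-0000-0000-000000000000",
              "audience": "00000000-0000-0000-0000-000000000000",
              "allowedDataActions": [ "Read" ]
            }
          ]
        }
      ]
    }
  }
}
"@

# Apply the change (assumes you're signed in with Connect-AzAccount).
Invoke-AzRestMethod -Path $path -Method PATCH -Payload $payload
```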
#### Identify the application with the `clientId` string
-The identity provider defines the application with a unique identifier called the `clientId` string (or application ID). The FHIR service validates the access token by checking the `authorized party` (azp) or `application id` (appid) claim against the `clientId` string. The FHIR service rejects requests with a `401 Unauthorized` error code if the `clientId` string and the token claim don't match exactly.
+The identity provider defines the application with a unique identifier called the `clientId` string (or application ID). The FHIR service validates the access token by checking the `authorized party` (azp) or `application id` (appid) claim against the `clientId` string. If the `clientId` string and the token claim don't match exactly, the FHIR service rejects the request with a `401 Unauthorized` error code.
#### Validate the access token with the `audience` string
-The `aud` claim in an access token identifies the intended recipient of the token. The `audience` string is the unique identifier for the recipient. The FHIR service validates the access token by checking the `audience` string against the `aud` claim. The FHIR service rejects requests with a `401 Unauthorized` error code if the `audience` string and the `aud` claim don't match exactly.
+The `aud` claim in an access token identifies the intended recipient of the token. The `audience` string is the unique identifier for the recipient. The FHIR service validates the access token by checking the `audience` string against the `aud` claim. If the `audience` string and the `aud` claim don't match exactly, the FHIR service rejects requests with a `401 Unauthorized` error code.
#### Specify the permissions with the `allowedDataActions` array
-Include at least one permission string in the `allowedDataActions` array. You can include any valid permission strings, but avoid duplicates.
+Include at least one permission string in the `allowedDataActions` array. You can include any valid permission strings. Avoid duplicates.
| **Valid permission string** | **Description** |
|---|---|
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
# Configure FHIR import settings
-This article walks you through the steps to configure settings on the FHIR service for `import` operations. To configure settings, you need to:
+This article walks you through the steps to configure settings on the FHIR&reg; service for `import` operations. To configure settings, you need to:
1. Enable a managed identity on the FHIR service.
-1. Create an Azure storage account or use an existing storage account, and then grant permissions to the FHIR service to access it.
+1. Create an Azure storage account or use an existing storage account, and grant permissions for the FHIR service to access it.
1. Set the import configuration of the FHIR service. 1. Use one of the options to securely import FHIR data into the FHIR service from an Azure Data Lake Storage Gen2 account.
After you enable the managed identity, a system-assigned GUID value appears.
## Step 2: Assign permissions to the FHIR service
-Use the following steps to assign permissions to access the storage account:
+Use the following steps to assign permissions to access the storage account.
1. In the storage account, browse to **Access Control (IAM)**. 2. Select **Add role assignment**. If the option for adding a role assignment is unavailable, ask your Azure administrator to assign you permission to perform this step.
Now you're ready to select the storage account for import.
> [!NOTE] > If you haven't assigned storage access permissions to the FHIR service, the `import` operation will fail.
-For this step, you need to get the request URL and JSON body:
+For this step, you need to get the request URL and JSON body.
1. In the Azure portal, browse to your FHIR service.
2. Select **Overview**.
3. Select **JSON View**.
4. Select the API version as **2022-06-01** or later.

   Because the JSON view is in **READ** (read-only) mode, to specify the Azure storage account you need to use the [REST API](/rest/api/healthcareapis/services/create-or-update) to update the FHIR service.

[![Screenshot of selections for opening the JSON view.](media/bulk-import/fhir-json-view.png)](media/bulk-import/fhir-json-view.png#lightbox)
The following steps walk you through setting configurations for initial and incr
### Set the import configuration for initial import mode
-Make the following changes to JSON:
+Make the following changes to JSON.
1. In `importConfiguration`, set `enabled` to `true`. 2. Update `integrationDataStore` with the target storage account name.
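If you'd rather apply this change with a script than edit the JSON by hand, the following sketch sends the updated `importConfiguration` with Az PowerShell. The resource path and storage account name are placeholders, and the `initialImportMode` property shown here is an assumption for initial mode; confirm the exact property names against your service's JSON view before running it.

```PowerShell
# Sketch of the importConfiguration change for initial import mode; property names should be
# verified against your FHIR service's JSON view (initialImportMode is assumed here).
$importConfigPatch = @"
{
  "properties": {
    "importConfiguration": {
      "enabled": true,
      "initialImportMode": true,
      "integrationDataStore": "<your-storage-account-name>"
    }
  }
}
"@

# Placeholder resource path; use API version 2022-06-01 or later.
$path = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.HealthcareApis/workspaces/<workspace>/fhirservices/<fhir-service>?api-version=2022-06-01"

Invoke-AzRestMethod -Path $path -Method PATCH -Payload $importConfigPatch
```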
You're now ready to perform initial-mode import by using `import`.
### Set the import configuration for incremental import mode
-Make the following changes to JSON:
+Make the following changes to JSON.
1. In `importConfiguration`, set `enabled` to `true`. 2. Update `integrationDataStore` with the target storage account name.
To securely import FHIR data into the FHIR service from an Azure Data Lake Stora
### Enable the FHIR service as a trusted Microsoft service
-1. In the Azure portal, go to your Data Lake Storage Gen2 account in the Azure portal.
+1. In the Azure portal, go to your Data Lake Storage Gen2 account.
1. On the left menu, select **Networking**.
You're now ready to securely import FHIR data from the storage account. The stor
## Next steps
-In this article, you learned how the FHIR service supports the `import` operation and how you can import data into the FHIR service from a storage account. You also learned about the steps for configuring import settings in the FHIR service. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse Analytics, see:
+In this article, you learned how the FHIR service supports the `import` operation, and how you can import data into the FHIR service from a storage account. You also learned about the steps for configuring import settings in the FHIR service. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse Analytics, see:
>[!div class="nextstepaction"] >[Import FHIR data](import-data.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Convert Data Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data-azure-data-factory.md
In this article, we detail how to use [Azure Data Factory (ADF)](../../data-fact
## Prerequisites
-Before getting started, do these steps:
+Before getting started, follow these steps.
1. Deploy an instance of the [FHIR service](fhir-portal-quickstart.md). The FHIR service is used to invoke the [`$convert-data`](convert-data-overview.md) operation.
2. By default, the ADF pipeline in this scenario uses the [predefined templates provided by Microsoft](convert-data-configuration.md#default-templates) for conversion. If your use case requires customized templates, set up your [Azure Container Registry instance to host your own templates](convert-data-configuration.md#host-your-own-templates) to be used for the conversion operation.
3. Create storage accounts with [Azure Data Lake Storage Gen2 (ADLS Gen2) capabilities](../../storage/blobs/create-data-lake-storage-account.md) by enabling a hierarchical namespace and container to store the data to read from and write to.
- You can create and use either one or separate ADLS Gen2 accounts and containers to:
- - Store the HL7v2 data to be transformed (for example: the source account and container the pipeline reads the data to be transformed from).
- - Store the transformed FHIR R4 bundles (for example: the destination account and container the pipeline writes the transformed result to).
- - Store the errors encountered during the transformation (for example: the destination account and container the pipeline writes execution errors to).
+ You can create and use one or separate ADLS Gen2 accounts and containers to:
+ - Store the HL7v2 data to be transformed (for example: the source account and container from which the pipeline reads the data to be transformed).
+ - Store the transformed FHIR R4 bundles (for example: the destination account and container to which the pipeline writes the transformed result).
+ - Store the errors encountered during the transformation (for example: the destination account and container to which the pipeline writes execution errors).
-4. Create an instance of [ADF](../../data-factory/quickstart-create-data-factory.md), which serves as a business logic orchestrator. Ensure that a [system-assigned managed identity](../../data-factory/data-factory-service-identity.md) is enabled.
+4. Create an instance of [ADF](../../data-factory/quickstart-create-data-factory.md), which serves to orchestrate business logic. Ensure that a [system-assigned managed identity](../../data-factory/data-factory-service-identity.md) is enabled.
5. Add the following [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) assignments to the ADF system-assigned managed identity: * **FHIR Data Converter** role to [grant permission to the FHIR service](../../healthcare-apis/configure-azure-rbac.md#assign-roles-for-the-fhir-service). * **Storage Blob Data Contributor** role to [grant permission to the ADLS Gen2 account](../../storage/blobs/assign-azure-role-data-access.md?tabs=portal). ## Configure an Azure Data Factory pipeline
-In this example, an ADF [pipeline](../../data-factory/concepts-pipelines-activities.md?tabs=data-factory) is used to transform HL7v2 data and persist transformed FHIR R4 bundle in a JSON file within the configured destination ADLS Gen2 account and container.
+In this example, an ADF [pipeline](../../data-factory/concepts-pipelines-activities.md?tabs=data-factory) is used to transform HL7v2 data, and persist a transformed FHIR R4 bundle in a JSON file within the configured destination ADLS Gen2 account and container.
1. From the Azure portal, open your Azure Data Factory instance and select **Launch Studio** to begin.
In this example, an ADF [pipeline](../../data-factory/concepts-pipelines-activ
## Create a pipeline
-Azure Data Factory pipelines are a collection of activities that perform a task. This section details the creation of a pipeline that performs the task of transforming HL7v2 data to FHIR R4 bundles. Pipeline execution can be in an improvised fashion or regularly based on defined triggers.
+Azure Data Factory pipelines are a collection of activities that perform a task. This section details the creation of a pipeline that performs the task of transforming HL7v2 data to FHIR R4 bundles. Pipeline execution can be done manually, or regularly based on defined triggers.
1. Select **Author** from the navigation menu. In the **Factory Resources** pane, select the **+** to add a new resource. Select **Pipeline** and then **Template gallery** from the menu.
Azure Data Factory pipelines are a collection of activities that perform a t
If needed, you can make any modifications to the pipelines/activities to fit your scenario (for example: if you don't intend to persist the results in a destination ADLS Gen2 storage account, you can modify the pipeline to remove the **Write converted result to ADLS Gen2** pipeline altogether).
-4. Select the **Parameters** tab and provide values based your configuration/setup. Some of the values are based on the resources setup as part of the [prerequisites](#prerequisites).
+4. Select the **Parameters** tab and provide values based on your configuration. Some of the values are based on the resources set up as part of the [prerequisites](#prerequisites).
:::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/input-pipeline-parameters.png" alt-text="Screenshot showing the pipeline parameters options." lightbox="media/convert-data/convert-data-with-azure-data-factory/input-pipeline-parameters.png"::: * **fhirService** ΓÇô Provide the URL of the FHIR service to target for the `$convert-data` operation. For example: `https://**myservice-fhir**.fhir.azurehealthcareapis.com/`
- * **acrServer** ΓÇô Provide the name of the ACR server to pull the Liquid templates to use for conversion. By default, option is set to `microsofthealth`, which contains the predefined template collection published by Microsoft. To use your own template collection, replace this value with your ACR instance that hosts your templates and is registered to your FHIR service.
- * **templateReference** ΓÇô Provide the reference to the image within the ACR that contains the Liquid templates to use for conversion. By default, this option is set to `hl7v2templates:default` to pull the latest published Liquid templates for HL7v2 conversion by Microsoft. To use your own template collection, replace this value with the reference to the image within your ACR that hosts your templates and is registered to your FHIR service.
+ * **acrServer** – Provide the name of the ACR server to pull the Liquid templates to use for conversion. The default option is set to `microsofthealth`, which contains the predefined template collection published by Microsoft. To use your own template collection, replace this value with your ACR instance that hosts your templates and is registered to your FHIR service.
+ * **templateReference** – Provide the reference to the image within the ACR that contains the Liquid templates to use for conversion. The default option is set to `hl7v2templates:default` to pull the latest published Liquid templates for HL7v2 conversion by Microsoft. To use your own template collection, replace this value with the reference to the image within your ACR that hosts your templates and is registered to your FHIR service.
* **inputStorageAccount** – The primary endpoint of the ADLS Gen2 storage account containing the input HL7v2 data to transform. For example: `https://**mystorage**.blob.core.windows.net`.
- * **inputStorageFolder** ΓÇô The container and folder path within the configured. For example: `**mycontainer**/**myHL7v2folder**`.
+ * **inputStorageFolder** – The configured container and folder path. For example: `**mycontainer**/**myHL7v2folder**`.
> [!NOTE]
- > This can be a static folder path or can be left blank here and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
+ > This can be a static folder path, or can be left blank and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
* **inputStorageFile** – The name of the file within the configured container, **inputStorageAccount**, and **inputStorageFolder**, that contains the HL7v2 data to transform. For example: `**myHL7v2file**.hl7`.

> [!NOTE]
- > This can be a static folder path or can be left blank here and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
+ > This can be a static folder path, or can be left blank and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
* **outputStorageAccount** – The primary endpoint of the ADLS Gen2 storage account to store the transformed FHIR bundle. For example: `https://**mystorage**.blob.core.windows.net`.
* **outputStorageFolder** – The container and folder path within the configured **outputStorageAccount** to which the transformed FHIR bundle JSON files are written.
* **rootTemplate** – The root template to use while transforming the provided HL7v2 data. For example: ADT_A01, ADT_A02, ADT_A03, ADT_A04, ADT_A05, ADT_A08, ADT_A11, ADT_A13, ADT_A14, ADT_A15, ADT_A16, ADT_A25, ADT_A26, ADT_A27, ADT_A28, ADT_A29, ADT_A31, ADT_A47, ADT_A60, OML_O21, ORU_R01, ORM_O01, VXU_V04, SIU_S12, SIU_S13, SIU_S14, SIU_S15, SIU_S16, SIU_S17, SIU_S26, MDM_T01, MDM_T02.

> [!NOTE]
- > This can be a static folder path or can be left blank here and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
+ > This can be a static folder path, or can be left blank and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
- * **errorStorageFolder** - The container and folder path within the configured **outputStorageAccount** to which the errors encountered during execution are written to. For example: `**mycontainer**/**myerrorfolder**`.
+ * **errorStorageFolder** - The container and folder path within the configured **outputStorageAccount** to which the errors encountered during execution are written. For example: `**mycontainer**/**myerrorfolder**`.
5. You can configure more pipeline settings under the **Settings** tab based on your requirements.
Azure Data Factory pipelines are a collection of activities that perform a t
## Executing a pipeline
-You can execute (or run) a pipeline either manually or by using a trigger. There are different types of triggers that can be created to help automate your pipeline execution. For example:
+You can execute (or run) a pipeline either manually, or by using a trigger. There are different types of triggers that can be created to help automate your pipeline execution. For example:
* **Manual trigger** * **Schedule trigger**
For more information on the different trigger types and how to configure them, s
By setting triggers, you can simulate batch transformation of HL7v2 data. The pipeline executes automatically based on the configured trigger parameters without requiring individual invocation of the `$convert-data` operation for each input message.

> [!IMPORTANT]
-> In a scenario with batch processing of HL7v2 messages, this template does not take sequencing into account, so post processing will be needed if sequencing is a requirement.
+> In a scenario with batch processing of HL7v2 messages, this template does not take sequencing into account. Post processing will be needed if sequencing is a requirement.
## Create a new storage event trigger
-In the following example, a storage event trigger is used. The storage event trigger automatically triggers the pipeline whenever a new HL7v2 data blob file to be processed is uploaded to the ADLS Gen2 storage account.
+In the following example, a storage event trigger is used. The storage event trigger automatically triggers the pipeline whenever a new HL7v2 data blob file is uploaded for processing to the ADLS Gen2 storage account.
-To configure the pipeline to automatically run whenever a new HL7v2 blob file in the source ADLS Gen2 storage account is available to transform, follow these steps:
+To configure the pipeline to automatically run whenever a new HL7v2 blob file in the source ADLS Gen2 storage account is available to transform, follow these steps.
1. Select **Author** from the navigation menu. Select the pipeline configured in the previous section and select **Add trigger** and **New/Edit** from the menu bar.
After the trigger is published, it can be triggered manually using the **Trigg
## Monitoring pipeline runs
-Trigger runs and their associated pipeline runs can be viewed in the **Monitor** tab. Here, users can browse when each pipeline ran, how long it took to execute, and potentially debug any problems that arose.
+Triggered runs and their associated pipeline runs can be viewed in the **Monitor** tab. Here, users can browse when each pipeline ran, how long it took to execute, and potentially debug any problems that arose.
:::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/monitor-pipeline-runs.png" alt-text="Screenshot showing monitoring Azure Data Factory pipeline runs." lightbox="media/convert-data/convert-data-with-azure-data-factory/monitor-pipeline-runs.png":::
Successful pipeline executions result in the transformed FHIR R4 bundles as JSON
### Errors
-Errors encountered during conversion, as part of the pipeline execution, result in error details captured as JSON file in the configured error destination ADLS Gen2 storage account and container. For information on how to troubleshoot `$convert-data`, see [Troubleshoot $convert-data](convert-data-troubleshoot.md).
+Errors encountered during conversion as part of the pipeline execution result in error details captured as a JSON file in the configured error destination ADLS Gen2 storage account and container. For information on how to troubleshoot `$convert-data`, see [Troubleshoot $convert-data](convert-data-troubleshoot.md).
:::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/pipeline-errors.png" alt-text="Screenshot showing Azure Data Factory errors." lightbox="media/convert-data/convert-data-with-azure-data-factory/pipeline-errors.png":::
healthcare-apis Convert Data Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data-configuration.md
[!INCLUDE [Converter redirect statement](../includes/converter-redirect-statement.md)]
-In this article, learn how to configure settings for `$convert-data` using the Azure portal to convert health data into [FHIR&reg; R4](https://www.hl7.org/fhir/R4/https://docsupdatetracker.net/index.html).
+This article illustrates how to configure settings for `$convert-data` using the Azure portal to convert health data into [FHIR&reg; R4](https://www.hl7.org/fhir/R4/index.html).
## Default templates
To access and use the default templates for your conversion requests, ensure tha
> [!WARNING]
> Default templates are released under the MIT License and are *not* supported by Microsoft Support.
>
-> The default templates are provided only to help you get started with your data conversion workflow. These default templates are not intended for production and might change when Microsoft releases updates for the FHIR service. To have consistent data conversion behavior across different versions of the FHIR service, you must do the following:
+> The default templates are provided to help you get started with your data conversion workflow. These default templates are _not_ intended for production and might change when Microsoft releases updates for the FHIR service. To have consistent data conversion behavior across different versions of the FHIR service, you must do the following.
>
> 1. Host your own copy of the templates in an [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md) instance.
> 2. Register the templates to the FHIR service.
You can use the [FHIR Converter Visual Studio Code extension](https://marketplac
> [!NOTE] > The FHIR Converter extension for Visual Studio Code is available for HL7v2, C-CDA, and JSON Liquid templates. FHIR STU3 to FHIR R4 Liquid templates are currently not supported.
-The provided default templates can be used as a base starting point if needed, on top of which your customizations can be added. When making updates to the templates, consider following these guidelines to avoid unintended conversion results. The template should be authored in a way such that it yields a valid structure for a FHIR bundle resource.
+The provided default templates can be used as a starting point if needed, on top of which your customizations can be added. When making updates to the templates, consider following these guidelines to avoid unintended conversion results.
+
+The template should be authored in a way such that it yields a valid structure for a FHIR bundle resource.
For instance, the Liquid templates should have a format such as the following code:
For instance, the Liquid templates should have a format such as the following co
} ```
-The overall template follows the structure and expectations for a FHIR bundle resource, with the FHIR bundle JSON being at the root of the file. If you choose to add custom fields to the template that arenΓÇÖt part of the FHIR specification for a bundle resource, the conversion request could still succeed. However, the converted result could potentially have unexpected output and wouldn't yield a valid FHIR bundle resource that can be persisted in the FHIR service as is.
+The overall template follows the structure and expectations for a FHIR bundle resource, with the FHIR bundle JSON being at the root of the file. If you choose to add custom fields to the template that aren't part of the FHIR specification for a bundle resource, the conversion request could still succeed. However, the converted result could potentially have unexpected output, and wouldn't yield a valid FHIR bundle resource that can be persisted in the FHIR service as is.
For example, consider the following code:
For example, consider the following code:
} ```
-In the example code, two example custom fields `customfield_message` and `customfield_data` that aren't FHIR properties per the specification and the FHIR bundle resource seem to be nested under `customfield_data` (that is, the FHIR bundle JSON isn't at the root of the file). This template doesnΓÇÖt align with the expected structure around a FHIR bundle resource. As a result, the conversion request might succeed using the provided template. However, the returned converted result could potentially have unexpected output (due to certain post conversion processing steps being skipped). It wouldn't be considered a valid FHIR bundle (since it's nested and has non FHIR specification properties) and attempting to persist the result in your FHIR service fails.
+In the example code, the two custom fields `customfield_message` and `customfield_data` aren't FHIR properties per the specification, and the FHIR bundle resource seems to be nested under `customfield_data` (that is, the FHIR bundle JSON isn't at the root of the file). This template doesn't align with the expected structure around a FHIR bundle resource. The conversion request might succeed using the provided template. However, the returned converted result could potentially have unexpected output (due to certain post conversion processing steps being skipped). It wouldn't be considered a valid FHIR bundle (since it's nested and has non-FHIR specification properties), and attempting to persist the result in your FHIR service fails.
## Host your own templates We recommend that you host your own copy of templates in an [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md) instance. ACR can be used to host your custom templates and support with versioning.
-Hosting your own templates and using them for `$convert-data` operations involves the following seven steps:
+Hosting your own templates and using them for `$convert-data` operations involves the following seven steps.
1. [Create an Azure Container Registry instance](#step-1-create-an-azure-container-registry-instance) 2. [Push the templates to your Azure Container Registry instance](#step-2-push-the-templates-to-your-azure-container-registry-instance)
To reference specific template versions in the API, be sure to use the exact ima
:::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png" alt-text="Screenshot showing the add role assignment pane." lightbox="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png":::
-4. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+4. On the **Members** tab, select **Managed identity**, and then **Select members**.
5. Select your Azure subscription.
There are many methods for securing ACR using the built-in firewall depending on
Make a call to the `$convert-data` operation by specifying your template reference in the `templateCollectionReference` parameter:
-`<RegistryServer>/<imageName>@<imageDigest>`
+`<RegistryServer>/<imageName>@<imageDigest>`.
You should receive a `bundle` response that contains the health data converted into the FHIR format.
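For illustration, a hedged PowerShell sketch of such a call is shown below. The FHIR service URL, template image reference, and HL7v2 payload are placeholders, `ADT_A01` is used as an example root template, and token acquisition assumes you're signed in with Az PowerShell.

```PowerShell
# Placeholder FHIR service URL and template image reference.
$fhirUrl           = "https://myservice-fhir.fhir.azurehealthcareapis.com"
$templateReference = "<RegistryServer>/<imageName>@<imageDigest>"

# Acquire a token for the FHIR service (assumes Connect-AzAccount has been run).
$token = (Get-AzAccessToken -ResourceUrl $fhirUrl).Token

# FHIR Parameters resource for the request; inputData holds truncated sample HL7v2 content.
$body = @"
{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "inputData", "valueString": "MSH|^~\\&|..." },
    { "name": "inputDataType", "valueString": "Hl7v2" },
    { "name": "templateCollectionReference", "valueString": "$templateReference" },
    { "name": "rootTemplate", "valueString": "ADT_A01" }
  ]
}
"@

Invoke-RestMethod -Method Post -Uri "$fhirUrl/`$convert-data" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $body
```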
healthcare-apis Convert Data Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data-faq.md
For more information, see [Access the FHIR service in Azure Health Data Services
## What's the difference between the $convert-data endpoint in Azure API for FHIR versus the FHIR service in Azure Health Data Services?
-The experience and core `$convert-data` operation functionality is similar for both Azure API for FHIR and the FHIR service in Azure Health Data Services(../../healthcare-apis/fhir/overview.md). The only difference exists in the setup for the Azure API for FHIR version of the `$convert-data` operation, which requires assigning permissions to the right resources.
+The experience and core `$convert-data` operation functionality is similar for both Azure API for FHIR and the FHIR service in [Azure Health Data Services](../../healthcare-apis/fhir/overview.md). The only difference exists in the setup for the Azure API for FHIR version of the `$convert-data` operation, which requires assigning permissions to the right resources.
Learn more:
Yes. It's possible to store and reference custom templates. For more informati
## Why are my dates being converted when transforming JSON data?
-It's possible for dates supplied within JSON data to be returned in a different format than what was supplied. During deserialization of the JSON payload strings that are identified as dates get converted into .NET DateTime objects. These objects then get converted back to strings before going through the Liquid template engine. This conversion can cause the date value to be reformatted and represented in the local timezone of the FHIR service.
+It's possible for dates supplied within JSON data to be returned in a different format than what was supplied. During deserialization of the JSON payload, strings that are identified as dates get converted into .NET DateTime objects. These objects then get converted back to strings before going through the Liquid template engine. This conversion can cause the date value to be reformatted and represented in the local timezone of the FHIR service.
The coercion of strings to .NET DateTime objects can be disabled using the boolean parameter `jsonDeserializationTreatDatesAsStrings`. When set to `true`, the supplied data is treated as a string and won't be modified before being supplied to the Liquid engine.
healthcare-apis Convert Data Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data-overview.md
The health data for conversion is delivered to the FHIR service in the body of t
## Parameters
-A `$convert-data` operation call packages the health data for conversion inside a JSON-formatted [parameters](http://hl7.org/fhir/parameters.html) in the body of the request. The parameters are described in the following table:
+A `$convert-data` operation call packages the health data for conversion inside JSON-formatted [parameters](http://hl7.org/fhir/parameters.html) in the body of the request. The parameters are described in the following table.
| Parameter name | Description&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Accepted values |
| -- | -- | -- |
A `$convert-data` operation call packages the health data for conversion inside
- **FHIR STU3 to FHIR R4 templates are Liquid templates** that provide mappings of field differences only between a FHIR STU3 resource and its equivalent resource in the FHIR R4 specification. Some of the FHIR STU3 resources are renamed or removed from FHIR R4. For more information about the resource differences and constraints for FHIR STU3 to FHIR R4 conversion, see [Resource differences and constraints for FHIR STU3 to FHIR R4 conversion](https://github.com/microsoft/FHIR-Converter/blob/main/docs/Stu3R4-resources-differences.md). -- **JSON templates are sample templates for use in building your own conversion mappings.** They aren't default templates that adhere to any predefined health data message types. JSON itself isn't specified as a health data format, unlike HL7v2 or C-CDA. Therefore, instead of providing default JSON templates, we provide some sample JSON templates as a starting point for your own customized mappings.
+- **JSON templates are sample templates for use in building your own conversion mappings.** They aren't default templates that adhere to any predefined health data message types. JSON itself isn't specified as a health data format, unlike HL7v2 or C-CDA. As a result, instead of providing default JSON templates, we provide some sample JSON templates as a starting point for your own customized mappings.
> [!WARNING] > Default templates are released under the MIT License and aren't supported by Microsoft.
healthcare-apis Convert Data Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data-troubleshoot.md
Two main factors come into play that determine how long a `$convert-data` operat
Any loops or iterations in the templates can have large impacts on performance. The `$convert-data` operation has a post processing step that is run after the template is applied. In particular, the deduping step can mask template issues that cause performance problems. Updating the template so duplicates aren't generated can greatly increase performance. For more information and details about the post processing step, see [Post processing](#post-processing).

## Post processing
-The `$convert-data` operation applies post processing logic after the template is applied to the input. This post processing logic can result in the output looking different or unexpected errors compared to if you ran the default Liquid template directly. Post processing ensures the output is valid JSON and removes any duplicates based on the ID properties generated for resources in the template. To see the post processing logic in more detail, see the [FHIR-Converter GitHub repository](https://github.com/microsoft/FHIR-Converter/blob/main/src/Microsoft.Health.Fhir.Liquid.Converter/OutputProcessors/PostProcessor.cs).
+The `$convert-data` operation applies post processing logic after the template is applied to the input. This post processing logic can result in output that looks different, or in unexpected errors, compared to running the default Liquid template directly. Post processing ensures the output is valid JSON and removes any duplicates based on the ID properties generated for resources in the template. To see the post processing logic in more detail, see the [FHIR-Converter GitHub repository](https://github.com/microsoft/FHIR-Converter/blob/main/src/Microsoft.Health.Fhir.Liquid.Converter/OutputProcessors/PostProcessor.cs).
## Message size
-There isnΓÇÖt a hard limit on the size of the messages allowed for the `$convert-data` operation, however, for content with a request size greater than 10 MB, server 500 errors are possible. If you're receiving 500 server errors, ensure your requests are under 10 MB.
+There isn't a hard limit on the size of the messages allowed for the `$convert-data` operation. However, for content with a request size greater than 10 MB, `500` server errors are possible. If you're receiving `500` server errors, ensure your requests are under 10 MB.
## Why are my dates being converted when transforming JSON data?
-It's possible for dates supplied within JSON data to be returned in a different format than what was supplied. During deserialization of the JSON payload strings that are identified as dates get converted into .NET DateTime objects. These objects then get converted back to strings before going through the Liquid template engine. This conversion can cause the date value to be reformatted and represented in the local timezone of the FHIR service.
+It's possible for dates supplied within JSON data to be returned in a different format than what was supplied. During deserialization of the JSON payload, strings that are identified as dates get converted into .NET DateTime objects. These objects then get converted back to strings before going through the Liquid template engine. This conversion can cause the date value to be reformatted, and represented in the local timezone of the FHIR service.
The coercion of strings to .NET DateTime objects can be disabled using the boolean parameter `jsonDeserializationTreatDatesAsStrings`. When set to `true`, the supplied data is treated as a string and won't be modified before being supplied to the Liquid engine.
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
# Copy data from FHIR service to Azure Synapse Analytics
-In this article, youΓÇÖll learn three ways to copy data from the FHIR service in Azure Health Data Services to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
+In this article, you learn three ways to copy data from the FHIR&reg; service in Azure Health Data Services to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) OSS tool * Use the [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) OSS tool
Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipel
> [!Note] > [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
-The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
+The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from a CDM folder to a Synapse dedicated SQL pool.
This solution enables you to transform the data into tabular format as it gets written to CDM folder. You should consider this solution if you want to transform FHIR data into a custom schema after it's extracted from the FHIR server.
Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipel
## Loading exported data to Synapse using T-SQL
-In this approach, you use the FHIR `$export` operation to copy FHIR resources into a **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Subsequently, you load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. You can convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
+In this approach, you use the FHIR `$export` operation to copy FHIR resources into an **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Then, you load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. You can convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
:::image type="content" source="media/export-data/export-azure-storage-option.png" alt-text="Azure storage to Synapse using $export." lightbox="media/export-data/export-azure-storage-option.png":::
After configuring your FHIR server, you can follow the [documentation](./export-
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}} ```
-You can also use `_type` parameter in the `$export` call above to restrict the resources that you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Observation` resources:
+You can also use the `_type` parameter in the preceding `$export` call to restrict the resources that you want to export. For example, the following call exports only `Patient`, `MedicationRequest`, and `Observation` resources:
```rest https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&
For more information on the different parameters supported, check out our `$expo
#### Creating a Synapse workspace
-Before using Synapse, you'll need a Synapse workspace. You'll create an Azure Synapse Analytics service on Azure portal. More step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+Before using Synapse, you need a Synapse workspace. Create an Azure Synapse Analytics service in the Azure portal. For step-by-step guidance, see [Create a Synapse workspace](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace uses this storage account to store your Synapse workspace data.
After creating a workspace, you can view your workspace in Synapse Studio by signing into your workspace on [https://web.azuresynapse.net](https://web.azuresynapse.net), or launching Synapse Studio in the Azure portal.
Now that you have a linked service between your ADL Gen 2 storage and Synapse, y
#### Decide between serverless and dedicated SQL pool
-Azure Synapse Analytics offers two different SQL pools, serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture.
+Azure Synapse Analytics offers two different SQL pools: serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture.
#### Using serverless SQL pool
WITH (
) ```
-In the query above, the `OPENROWSET` function accesses files in Azure Storage, and `OPENJSON` parses JSON text and returns the JSON input properties as rows and columns. Every time this query is executed, the serverless SQL pool reads the file from the blob storage, parses the JSON, and extracts the fields.
+In the preceding query, the `OPENROWSET` function accesses files in Azure Storage, and `OPENJSON` parses JSON text and returns the JSON input properties as rows and columns. Every time this query is executed, the serverless SQL pool reads the file from the blob storage, parses the JSON, and extracts the fields.
-You can also materialize the results in Parquet format in an [External Table](../../synapse-analytics/sql/develop-tables-external-tables.md) to get better query performance, as shown below:
+You can also materialize the results in Parquet format in an [External Table](../../synapse-analytics/sql/develop-tables-external-tables.md) to get better query performance, as follows.
```sql -- Create External data source where the parquet file will be written
OPENROWSET(bulk 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}
Dedicated SQL pool supports managed tables and a hierarchical cache for in-memory performance. You can import big data with simple T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics.
-The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the example query below, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
+The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the following example query, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
```sql -- Create table with HEAP, which is not indexed and does not have a column width limitation of NVARCHAR(4000)
FIELDTERMINATOR = '0x00'
GO ```
-Once you have the JSON rows in the `StagingPatient` table above, you can create different tabular formats of the data using the `OPENJSON` function and storing the results into tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
+Once you have the JSON rows in the preceding `StagingPatient` table, you can create different tabular formats of the data using the `OPENJSON` function and storing the results into tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
```sql SELECT RES.*
Next, you can learn about how you can de-identify your FHIR data while exporting
>[!div class="nextstepaction"] >[Exporting de-identified data](./de-identified-export.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/customer-managed-keys.md
# Best practices for using customer-managed keys for the FHIR service
-Customer-managed keys (CMK) are encryption keys that you create and manage in your own key store. By using CMK, you can have more flexibility and control over the encryption and access of your organizationΓÇÖs data. You use [Azure Key Vault](/azure/key-vault/) to create and manage CMK and then use the keys to encrypt the data stored by the FHIR&reg; service.
+Customer-managed keys (CMK) are encryption keys that you create and manage in your own key store. By using CMK, you have more flexibility and control over the encryption and access of your organizationΓÇÖs data. You use [Azure Key Vault](/azure/key-vault/) to create and manage CMK, and then use the keys to encrypt the data stored by the FHIR&reg; service.
## Rotate keys often
To rotate the key by generating a new version of the key, use the 'az keyvault k
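If you manage keys with Az PowerShell instead of the Azure CLI, a minimal sketch of generating a new key version is shown below; the vault and key names are placeholders, and it assumes the signed-in identity has permission to create keys in the vault. Creating a key with an existing name produces a new version of that key.

```PowerShell
# Placeholder vault and key names; creating a key with an existing name generates a new key version.
Add-AzKeyVaultKey -VaultName "my-key-vault" -Name "my-cmk-key" -Destination "Software"
```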
## Update the FHIR service after changing a managed identity
-If you change the managed identity in any way, such as moving your FHIR service to a different tenant or subscription, the FHIR service isn't able to access your keys until you update the service manually with an ARM template deployment. For steps, see [Use an ARM template to update the encryption key](configure-customer-managed-keys.md#update-the-key-by-using-an-arm-template).
+If you change the managed identity in any way, such as moving your FHIR service to a different tenant or subscription, the FHIR service isn't able to access your keys. You must update the service manually with an ARM template deployment. For steps, see [Use an ARM template to update the encryption key](configure-customer-managed-keys.md#update-the-key-by-using-an-arm-template).
## Disable public access with a firewall
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md
Last updated 06/06/2022
# Tutorial for Da Vinci Drug Formulary
-In this tutorial, we'll walk through setting up the FHIR service in Azure Health Data Services (hereby called FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
+In this tutorial, we'll walk through setting up the FHIR&reg; service in Azure Health Data Services (FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
## Touchstone capability statement
-The first test that we'll focus on is testing FHIR service against the [Da Vinci Drug Formulary capability
-statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test will fail due to
-missing search parameters and missing profiles.
+The first test focuses on testing the FHIR service against the [Da Vinci Drug Formulary capability
+statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test fails due to missing search parameters and missing profiles.
### Define search parameters
-As part of the Da Vinci Drug Formulary IG, you'll need to define three [new search parameters](how-to-do-custom-search.md) for the FormularyDrug resource. All three of these are tested in the
-capability statement.
+As part of the Da Vinci Drug Formulary IG, you'll need to define three [new search parameters](how-to-do-custom-search.md) for the FormularyDrug resource. All three of these are tested in the capability statement.
* [DrugTier](http://hl7.org/fhir/us/davinci-drug-formulary/STU1.0.1/SearchParameter-DrugTier.json.html)
* [DrugPlan](http://hl7.org/fhir/us/davinci-drug-formulary/STU1.0.1/SearchParameter-DrugPlan.json.html)
* [DrugName](http://hl7.org/fhir/us/davinci-drug-formulary/STU1.0.1/SearchParameter-DrugName.json.html)
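A hedged sketch of creating one of these custom search parameters is shown below: it POSTs a SearchParameter definition (assumed to have been downloaded from the implementation guide to a local file) to the FHIR service. The service URL and file name are placeholders, token acquisition assumes Az PowerShell, and newly created search parameters typically need a reindex before they can be used.

```PowerShell
# Placeholder FHIR service URL and local file name for the DrugTier SearchParameter definition.
$fhirUrl = "https://myservice-fhir.fhir.azurehealthcareapis.com"
$token   = (Get-AzAccessToken -ResourceUrl $fhirUrl).Token

# Definition downloaded from the implementation guide (hypothetical local file).
$searchParameter = Get-Content -Raw -Path ".\SearchParameter-DrugTier.json"

# Create the custom search parameter; repeat for DrugPlan and DrugName.
Invoke-RestMethod -Method Post -Uri "$fhirUrl/SearchParameter" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $searchParameter
```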
-The rest of the search parameters needed for the Da Vinci Drug Formulary IG are defined by the base specification and are already available in FHIR service without any more updates.
+The rest of the search parameters needed for the Da Vinci Drug Formulary IG are defined by the base specification and are already available in FHIR service without updates.
### Store profiles
Outside of defining search parameters, the only other update you need to make to
### Sample rest file
-To assist with creation of these search parameters and profiles, we have the [Da Vinci Formulary](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary.http) sample HTTP file on the open-source site that includes all the steps outlined above in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone. You should get a successful run:
+To assist with creation of these search parameters and profiles, we have the [Da Vinci Formulary](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary.http) sample HTTP file on the open-source site that includes all the steps previously outlined in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone. You should get a successful run:
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-test-script-execution.png" alt-text="Da Vinci test script execution."::: ## Touchstone query test
-The second test is the [query capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/01-Query&activeOnly=false&contentEntry=TEST_SCRIPTS). This test validates that you can search for specific Coverage Plan and Drug resources using various parameters. The best path would be to test against resources that you already have in your database, but we also have the [Da VinciFormulary_Sample_Resources](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary_Sample_Resources.http) HTTP file available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
+The second test is the [query capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/01-Query&activeOnly=false&contentEntry=TEST_SCRIPTS). This test validates that you can search for specific Coverage Plan and Drug resources using various parameters. The best path is to test against resources that you already have in your database, but we also have the [DaVinciFormulary_Sample_Resources](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary_Sample_Resources.http) HTTP file available with sample resources pulled from the examples in the IG, which you can use to create the resources and test against.
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-test-execution-results.png" alt-text="Da Vinci test execution results.":::
In this tutorial, we walked through how to pass the Da Vinci Payer Data Exchange
>[!div class="nextstepaction"] >[Da Vinci PDex](davinci-pdex-tutorial.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-pdex-tutorial.md
Last updated 06/06/2022
# Da Vinci PDex
-In this tutorial, we'll walk through setting up the FHIR service in Azure Health Data Services (hereby called FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange Implementation Guide](http://hl7.org/fhir/us/davinci-pdex/toc.html) (PDex IG).
+In this tutorial, we walk through setting up the FHIR&reg; service in Azure Health Data Services (FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange Implementation Guide](http://hl7.org/fhir/us/davinci-pdex/toc.html) (PDex IG).
> [!NOTE] > FHIR service only supports JSON. The Microsoft open-source FHIR service supports both JSON and XML, and in open-source you can use the _format parameter to view the XML capability statement: `GET {fhirurl}/metadata?_format=xml` ## Touchstone capability statement
-The first set of tests that we'll focus on is testing the FHIR service against the PDex IG capability statement. This includes three tests:
+The first set of tests focuses on testing the FHIR service against the PDex IG capability statement. This includes three tests:
-* The first test validates the basic capability statement against the IG requirements and will pass without any updates.
+* The first test validates the basic capability statement against the IG requirements and passes without any updates.
-* The second test validates all the profiles have been added for US Core. This test will pass without updates but will include a bunch of warnings. To have these warnings removed, you need to [load the US Core profiles](validation-against-profiles.md). We've created a [sample HTTP file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) that walks through creating all the profiles. You can also get the [profiles](http://hl7.org/fhir/us/core/STU3.1.1/profiles.html#profiles) from the HL7 site directly, which will have the most current versions.
+* The second test validates all the profiles have been added for US Core. This test passes without updates but will include warnings. To have these warnings removed, you need to [load the US Core profiles](validation-against-profiles.md). We've created a [sample HTTP file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) that walks through creating all the profiles. You can also get the [profiles](http://hl7.org/fhir/us/core/STU3.1.1/profiles.html#profiles) from the HL7 site directly, which will have the most current versions.
* The third test validates that the [$patient-everything operation](patient-everything.md) is supported.
The first set of tests that we'll focus on is testing the FHIR service against t
The [second test](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/01-Member-Match&activeOnly=false&contentEntry=TEST_SCRIPTS) in the Payer Data Exchange section tests the existence of the [$member-match operation](http://hl7.org/fhir/us/davinci-hrex/2020Sep/OperationDefinition-member-match.html). You can read more about the $member-match operation in our [$member-match operation overview](tutorial-member-match.md).
-In this test, you'll need to load some sample data for the test to pass. We have a rest file [here](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/membermatch.http) with the patient and coverage linked that you'll need for the test. Once this data is loaded, you'll be able to successfully pass this test. If the data isn't loaded, you'll receive a 422 response due to not finding an exact match.
+In this test, you need to load some sample data for the test to pass. We have a rest file [here](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/membermatch.http) with the patient and coverage linked that you need for the test. Once this data is loaded, you'll be able to successfully pass this test. If the data isn't loaded, you receive a `422` response due to not finding an exact match.
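As a rough sketch, invoking the operation after the sample data is loaded might look like the following; the service URL, token, and Parameters file name are placeholders, and the request path follows the HRex operation definition.

```bash
# Sketch only: invoke $member-match with a Parameters resource built from the membermatch.http sample.
curl -X POST 'https://<your-fhir-service>.fhir.azurehealthcareapis.com/Patient/$member-match' \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/fhir+json" \
  -d @member-match-parameters.json
```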
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-pdex-test-script-passed.png" alt-text="Da Vinci PDex test script passed."::: ## Touchstone patient by reference
-The next tests we'll review is the [patient by reference](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/02-PatientByReference&activeOnly=false&contentEntry=TEST_SCRIPTS) tests. This set of tests validates that you can find a patient based on various search criteria. The best way to test the patient by reference will be to test against your own data, but we've uploaded a [sample resource file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/PDex_Sample_Data.http) that you can load to use as well.
+The next tests we'll review are the [patient by reference](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/02-PatientByReference&activeOnly=false&contentEntry=TEST_SCRIPTS) tests. This set of tests validates that you can find a patient based on various search criteria. The best way to test the patient by reference is to test against your own data, but we've uploaded a [sample resource file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/PDex_Sample_Data.http) that you can load to use as well.
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-pdex-test-execution-passed.png" alt-text="Da Vinci PDex execution passed."::: ## Touchstone patient/$everything test
-The final test we'll walk through is testing patient-everything. For this test, you'll need to load a patient, and then you'll use that patientΓÇÖs ID to test that you can use the $everything operation to pull all data related to the patient.
+The final test we walk through is testing patient-everything. For this test, you need to load a patient, and then use that patient's ID to test that you can use the $everything operation to pull all data related to the patient.
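For example, once a patient is loaded, a call like the following sketch exercises the operation; the service URL, token, and patient ID are placeholders.

```bash
# Sketch only: pull all data related to a patient with the $everything operation.
curl -X GET 'https://<your-fhir-service>.fhir.azurehealthcareapis.com/Patient/<patient-id>/$everything' \
  -H "Authorization: Bearer $TOKEN"
```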
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-pdex-test-patient-everything.png" alt-text="touchstone patient/$everything test passed.":::
In this tutorial, we walked through how to pass the Payer Exchange tests in Touc
>[!div class="nextstepaction"] >[Da Vinci Plan Net](davinci-plan-net.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-plan-net.md
First, test the FHIR service against the [Da Vinci Plan-Net capability statement
## Define search parameters
-Next, define six [new search parameters](how-to-do-custom-search.md) for the Healthcare Service, Insurance Plan, Practitioner Role, Organization, and Organization Affiliation resources. All six of these parameters are tested in the capability statement:
+Next, define [new search parameters](how-to-do-custom-search.md) for the Healthcare Service, Insurance Plan, Practitioner Role, Organization, and Organization Affiliation resources. All of these parameters are tested in the capability statement:
- [Healthcare Service Coverage Area](http://hl7.org/fhir/us/davinci-pdex-plan-net/STU1/SearchParameter-healthcareservice-coverage-area.html) - [Insurance Plan Coverage Area](http://hl7.org/fhir/us/davinci-pdex-plan-net/STU1/SearchParameter-insuranceplan-coverage-area.html)
Next, define six [new search parameters](how-to-do-custom-search.md) for the Hea
> [!NOTE] > In the raw JSON for these search parameters, the name is set to `Plannet_sp_<Resource Name>_<SearchParameter Name>`. The Touchstone test expects the name to be only the `SearchParameter Name` (coverage-area, plan-type, or network).
-The rest of the search parameters needed for the Da Vinci Plan Net Implementation Guide are defined by the base specification, and are already available in the FHIR service without any other updates.
+The rest of the search parameters needed for the Da Vinci Plan Net Implementation Guide are defined by the base specification, and are already available in the FHIR service without other updates.
## Store profiles
To assist with creation of the search parameters and profiles, there's a sample
## Touchstone error handling test
-The second test is of [error handling](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PlanNet/01-Error-Codes&activeOnly=false&contentEntry=TEST_SCRIPTS). The only step you need to do is delete a `HealthcareService` resource from your database and use the ID of the deleted HealthcareService resource in the test. The sample [DaVinci_PlanNet.http](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciPlanNet/DaVinci_PlanNet.http) file on the open-source site provides an example `HealthcareService` to post and delete for this step.
+The second test evaluates [error handling](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PlanNet/01-Error-Codes&activeOnly=false&contentEntry=TEST_SCRIPTS). The only step you need to do is delete a `HealthcareService` resource from your database and use the ID of the deleted HealthcareService resource in the test. The sample [DaVinci_PlanNet.http](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciPlanNet/DaVinci_PlanNet.http) file on the open-source site provides an example `HealthcareService` to post and delete for this step.
:::image type="content" source="media/davinci-plan-net/davinci-test-script-execution-passed.png" alt-text="Screenshot showing Da Vinci Plan Net touchstone error test execution script passed." lightbox="media/davinci-plan-net/davinci-test-script-execution-passed.png":::
iot-hub-device-update Delta Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/delta-updates.md
Delta updates allow you to generate a small update that represents only the chan
## Requirements for using delta updates in Device Update for IoT Hub - The source and target update files must be SWUpdate (SWU) format.-- Within each SWUpdate file, there must be a raw image that uses the Ext2, Ext3, or Ext4 filesystem. That image can be compressed with gzip or zstd.-- The delta generation process recompresses the target SWU update using zstd compression in order to produce an optimal delta. Import this recompressed target SWU update to the Device Update service along with the generated delta update file.-- Within SWUpdate on the device, zstd decompression must also be enabled.
- - This process requires using [SWUpdate 2019.11](https://github.com/sbabic/swupdate/releases/tag/2019.11) or later.
+- Within each SWUpdate file, there must be a raw image that uses the Ext2, Ext3, or Ext4 filesystem.
+
+- The delta generation process recompresses the target SWU update using gzip compression in order to produce an optimal delta. Import this recompressed target SWU update to the Device Update service along with the generated delta update file.
## Configure a device with Device Update agent and delta processor component
The Device Update agent _orchestrates_ the update process on the device, includi
### Update handler
-An update handler integrates with the Device Update agent to perform the actual update install. For delta updates, start with the [`microsoft/swupdate:2` update handler](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/step_handlers/swupdate_handler_v2/README.md) if you don't already have your own SWUpdate update handler that you want to modify. **If you use your own update handler, be sure to enable zstd decompression in SWUpdate**.
+An update handler integrates with the Device Update agent to perform the actual update install. For delta updates, start with the [`microsoft/swupdate:2` update handler](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/step_handlers/swupdate_handler_v2/README.md) if you don't already have your own SWUpdate update handler that you want to modify.
### Delta processor
-The delta processor re-creates the original SWU image file on your device after the delta file is downloaded, so your update handler can install the SWU file. The delta processor code is available in the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo.
+The delta processor re-creates the original SWU image file on your device after the delta file is downloaded, so your update handler can install the SWU file. The delta processor is available in the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo.
-To add the delta processor component to your device image and configure it for use, follow the README.md instructions to use CMAKE to build the delta processor from source. From there, install the shared object (libadudiffapi.so) directly by copying it to the `/usr/lib` directory:
+To add the delta processor component to your device image and configure it for use, you can download it from the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo, or follow the README.md instructions to use CMAKE to build the delta processor from source instead. From there, install the shared object (libadudiffapi.so) directly by copying it to the `/usr/lib` directory:
```bash sudo cp <path to libadudiffapi.so> /usr/lib/libadudiffapi.so
The following table provides a list of the content needed, where to retrieve the
| Binary Name | Where to acquire | How to install | |--|--|--|
-| DiffGen | [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo | From the root folder, select the _Microsoft.Azure.DeviceUpdate.Diffs.[version].nupkg_ file. [Learn more about NuGet packages](/nuget/).|
-| .NETCore Runtime, version 6.0.0 | Via Terminal / Package Managers | [Instructions for Linux](/dotnet/core/install/linux). Only the Runtime is required. |
-
-### Dependencies
-
-The zstd_compression_tool is used for decompressing an archive's image files and recompressing them with zstd. This process ensures that all archive files used for diff generation have the same compression algorithm for the images inside the archives.
-
-Commands to install required packages/libraries:
-
-```bash
-sudo apt update
-sudo apt-get install -y python3 python3-pip
-sudo pip3 install libconf zstandard
-```
+| DiffGen | [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta/tree/main/preview/2.0.0) GitHub repo | Download the version matching the OS/distro on the machine that will be used to generate delta updates. |
+| .NETCore Runtime, version 8.0.0 | Via Terminal / Package Managers | [Instructions for Linux](/dotnet/core/install/linux). Only the Runtime is required. |
### Create a delta update using DiffGen
The DiffGen tool is run with several arguments. All arguments are required, and
`DiffGenTool [source_archive] [target_archive] [output_path] [log_folder] [working_folder] [recompressed_target_archive]` - The script recompress_tool.py runs to create the file [recompressed_target_archive], which then is used instead of [target_archive] as the target file for creating the diff.-- The image files within [recompressed_target_archive] are compressed with zstd.
+- The image files within [recompressed_target_archive] are compressed with gzip.
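For illustration, an unsigned run of the tool with placeholder file and folder names might look like the following sketch.

```bash
# Sketch only: placeholder paths stand in for the six required arguments, in order:
#   [source_archive] [target_archive] [output_path] [log_folder] [working_folder] [recompressed_target_archive]
./DiffGenTool ./source.swu ./target.swu ./output/target.delta ./logs ./working ./output/target.recompressed.swu
```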
If your SWU files are signed (likely), you need another argument as well:
load-balancer Egress Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/egress-only.md
description: This article provides a step-by-step guide on how to configure an "
Previously updated : 10/24/2023 Last updated : 09/06/2024
In this section, you'll create the internal load balancer.
1. Select **Next: Frontend IP configuration** at the bottom of the page.
-1. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
1. Enter **lb-int-frontend** in **Name**.
+1. Select **lb-vnet** in **Virtual Network**.
+ 1. Select **backend-subnet** in **Subnet**. 1. Select **Dynamic** for **Assignment**.
In this section, you'll add the virtual machine you created previously to the ba
1. Select **Connect**.
-1. Open Internet Explorer.
+1. Open Microsoft Edge browser.
1. Enter **https://whatsmyip.org** in the address bar.
In this section, you'll add the virtual machine you created previously to the ba
1. Select **Connect**.
-1. Open Internet Explorer.
+1. Open Microsoft Edge browser.
1. Enter **https://whatsmyip.org** in the address bar.
load-balancer Load Balancer Nat Pool Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-nat-pool-migration.md
An [inbound NAT rule](inbound-nat-rules.md) is used to forward traffic from a lo
## NAT rule version 1
-[Version 1](inbound-nat-rules.md) is the legacy approach for assigning an Azure Load Balancer's frontend port to each backend instance. Rules are applied to the backend instance's network interface card (NIC). For Azure Virtual Machine Scale Sets instances, inbound NAT rules are automatically created/deleted as new instances are scaled up/down.
+[Version 1](inbound-nat-rules.md) is the legacy approach for assigning an Azure Load Balancer's frontend port to each backend instance. Rules are applied to the backend instance's network interface card (NIC). For Azure Virtual Machine Scale Sets (VMSS) instances, inbound NAT rules are automatically created/deleted as new instances are scaled up/down. For VMSS instances, use the `Inbound NAT Pool` property to manage version 1 inbound NAT rules.
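As a rough illustration of the version 1 model, an inbound NAT pool can be created on an existing load balancer with a command like the following sketch; the resource names and port ranges are placeholders, and the pool is then referenced from the scale set's network profile.

```bash
# Sketch only: create an inbound NAT pool (NAT rule version 1 model) on an existing load balancer.
az network lb inbound-nat-pool create \
  --resource-group <resource-group> \
  --lb-name <load-balancer-name> \
  --name myNatPool \
  --protocol Tcp \
  --frontend-port-range-start 50000 \
  --frontend-port-range-end 50099 \
  --backend-port 22
```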
## NAT rule version 2
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
For more information about connection pooling with Azure App Service, see [Troub
New outbound connections to a destination IP fail when port exhaustion occurs. Connections succeed when a port becomes available. This exhaustion occurs when the 64,000 ports from an IP address are spread thin across many backend instances. For guidance on mitigation of SNAT port exhaustion, see the [troubleshooting guide](./troubleshoot-outbound-connection.md).
-### Port reuse
+## Port reuse
For TCP connections, the load balancer uses a single SNAT port for every destination IP and port. For connections to the same destination IP, a single SNAT port can be reused as long as the destination port differs. Reuse isn't possible when there already exists a connection to the same destination IP and port. For UDP connections, the load balancer uses a **port-restricted cone NAT** algorithm, which consumes one SNAT port per destination IP, regardless of the destination port.
logic-apps Create Run Custom Code Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-run-custom-code-functions.md
ms.suite: integration
Previously updated : 06/17/2024 Last updated : 09/06/2024 # Customer intent: As a logic app workflow developer, I want to write and run my own .NET code to perform custom integration tasks.
For more information about limitations in Azure Logic Apps, see [Limits and conf
## Limitations
-Custom functions authoring currently isn't available in the Azure portal. However, after you deploy your functions from Visual Studio Code to Azure, follow the steps in [Call your code from a workflow](#call-code-from-workflow) for the Azure portal. You can use the built-in action named **Call a local function in this logic app** to select from your deployed custom functions and run your code. Subsequent actions in your workflow can reference the outputs from these functions, as in any other workflow. You can view the built-in action's run history, inputs, and outputs.
+- Custom functions authoring currently isn't available in the Azure portal. However, after you deploy your functions from Visual Studio Code to Azure, follow the steps in [Call your code from a workflow](#call-code-from-workflow) for the Azure portal. You can use the built-in action named **Call a local function in this logic app** to select from your deployed custom functions and run your code. Subsequent actions in your workflow can reference the outputs from these functions, as in any other workflow. You can view the built-in action's run history, inputs, and outputs.
+
+- Custom functions use an isolated worker to invoke the code in your logic app workflow. To avoid package references conflicts between your own function code and the worker, use the same package versions referenced by the worker. For the full package list and versions referenced by the worker, see [Worker and package dependencies](https://github.com/Azure/logicapps/blob/master/articles/worker-packages-dependencies-custom-code.pdf).
## Create a code project
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 08/16/2024 Last updated : 09/05/2024
In Visual Studio Code, at your logic app project's root level, the **local.setti
App settings in Azure Logic Apps work similarly to app settings in Azure Functions or Azure Web Apps. If you've used these other services before, you might already be familiar with app settings. For more information, review [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) and [Work with Azure Functions Core Tools - Local settings file](../azure-functions/functions-develop-local.md#local-settings-file).
-| Setting | Default value | Description |
-|||-|
-| `APP_KIND` | `workflowApp` | Required for setting the app type for the Azure resource. |
-| `AzureWebJobsStorage` | None | Sets the connection string for an Azure storage account. For more information, see [AzureWebJobsStorage](../azure-functions/functions-app-settings.md#azurewebjobsstorage) |
-| `FUNCTIONS_WORKER_RUNTIME` | `dotnet` | Sets the language worker runtime to use with your logic app resource and workflows. However, this setting is no longer necessary due to automatically enabled multi-language support. <br><br>**Note**: Previously, this setting's default value was **`node`**. Now, **`dotnet`** is the default value for all new and existing deployed Standard logic apps, even for apps that had a different different value. This change shouldn't affect your workflow's runtime, and everything should work the same way as before.<br><br>For more information, see [FUNCTIONS_WORKER_RUNTIME](../azure-functions/functions-app-settings.md#functions_worker_runtime). |
-| `ServiceProviders.Sftp.FileUploadBufferTimeForTrigger` | `00:00:20` <br>(20 seconds) | Sets the buffer time to ignore files that have a last modified timestamp that's greater than the current time. This setting is useful when large file writes take a long time and avoids fetching data for a partially written file. |
-| `ServiceProviders.Sftp.OperationTimeout` | `00:02:00` <br>(2 min) | Sets the time to wait before timing out on any operation. |
-| `ServiceProviders.Sftp.ServerAliveInterval` | `00:30:00` <br>(30 min) | Sends a "keep alive" message to keep the SSH connection active if no data exchange with the server happens during the specified period. |
-| `ServiceProviders.Sftp.SftpConnectionPoolSize` | `2` connections | Sets the number of connections that each processor can cache. The total number of connections that you can cache is *ProcessorCount* multiplied by the setting value. |
-| `ServiceProviders.MaximumAllowedTriggerStateSizeInKB` | `10` KB, which is ~1,000 files | Sets the trigger state entity size in kilobytes, which is proportional to the number of files in the monitored folder and is used to detect files. If the number of files exceeds 1,000, increase this value. |
-| `ServiceProviders.Sql.QueryTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
-| `WEBSITE_LOAD_ROOT_CERTIFICATES` | None | Sets the thumbprints for the root certificates to be trusted. |
-| `Workflows.Connection.AuthenticationAudience` | None | Sets the audience for authenticating a managed (Azure-hosted) connection. |
-| `Workflows.CustomHostName` | None | Sets the host name to use for workflow and input-output URLs, for example, "logic.contoso.com". For information to configure a custom DNS name, see [Map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md) and [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../app-service/configure-ssl-bindings.md). |
-| `Workflows.<workflowName>.FlowState` | None | Sets the state for <*workflowName*>. |
-| `Workflows.<workflowName>.RuntimeConfiguration.RetentionInDays` | None | Sets the amount of time in days to keep the run history for <*workflowName*>. |
-| `Workflows.RuntimeConfiguration.RetentionInDays` | `90` days | Sets the amount of time in days to keep workflow run history after a run starts. |
-| `Workflows.WebhookRedirectHostUri` | None | Sets the host name to use for webhook callback URLs. |
+For your workflow to run properly, some app settings are marked as "required".
+
+| Setting | Required | Default value | Description |
+||-||-|
+| `APP_KIND` | Yes | `workflowApp` | Required to set the app type for the Standard logic app resource. |
+| `AzureWebJobsStorage` | Yes | None | Required to set the connection string for an Azure storage account. For more information, see [AzureWebJobsStorage](../azure-functions/functions-app-settings.md#azurewebjobsstorage). |
+| `FUNCTIONS_EXTENSION_VERSION` | Yes | `~4` | Required to set the Azure Functions version. For more information, see [FUNCTIONS_EXTENSION_VERSION](/azure/azure-functions/functions-app-settings#functions_extension_version). |
+| `FUNCTIONS_WORKER_RUNTIME` | Yes | `dotnet` | Required to set the language worker runtime for your logic app resource and workflows. <br><br>**Note**: Previously, this setting's default value was **`node`**. Now, **`dotnet`** is the default value for all new and existing deployed Standard logic apps, even for apps that had a different value. This change shouldn't affect your workflow's runtime, and everything should work the same way as before.<br><br>For more information, see [FUNCTIONS_WORKER_RUNTIME](../azure-functions/functions-app-settings.md#functions_worker_runtime). |
+| `ServiceProviders.Sftp.FileUploadBufferTimeForTrigger` | No | `00:00:20` <br>(20 seconds) | Sets the buffer time to ignore files that have a last modified timestamp that's greater than the current time. This setting is useful when large file writes take a long time and avoids fetching data for a partially written file. |
+| `ServiceProviders.Sftp.OperationTimeout` | No | `00:02:00` <br>(2 min) | Sets the time to wait before timing out on any operation. |
+| `ServiceProviders.Sftp.ServerAliveInterval` | No | `00:30:00` <br>(30 min) | Sends a "keep alive" message to keep the SSH connection active if no data exchange with the server happens during the specified period. |
+| `ServiceProviders.Sftp.SftpConnectionPoolSize` | No | `2` connections | Sets the number of connections that each processor can cache. The total number of connections that you can cache is *ProcessorCount* multiplied by the setting value. |
+| `ServiceProviders.MaximumAllowedTriggerStateSizeInKB` | No | `10` KB, which is ~1,000 files | Sets the trigger state entity size in kilobytes, which is proportional to the number of files in the monitored folder and is used to detect files. If the number of files exceeds 1,000, increase this value. |
+| `ServiceProviders.Sql.QueryTimeout` | No | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
+| `WEBSITE_CONTENTSHARE` | Yes | Dynamic | Required to set the name for the file share that Azure Functions uses to store function app code and configuration files and is used with [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](/azure/azure-functions/functions-app-settings#website_contentazurefileconnectionstring). The default is a unique string generated by the runtime. For more information, see [WEBSITE_CONTENTSHARE](/azure/azure-functions/functions-app-settings#website_contentshare). |
+| `WEBSITE_LOAD_ROOT_CERTIFICATES` | No | None | Sets the thumbprints for the root certificates to be trusted. |
+| `Workflows.Connection.AuthenticationAudience` | No | None | Sets the audience for authenticating a managed (Azure-hosted) connection. |
+| `Workflows.CustomHostName` | No | None | Sets the host name to use for workflow and input-output URLs, for example, "logic.contoso.com". For information to configure a custom DNS name, see [Map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md) and [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../app-service/configure-ssl-bindings.md). |
+| `Workflows.<workflowName>.FlowState` | No | None | Sets the state for <*workflowName*>. |
+| `Workflows.<workflowName>.RuntimeConfiguration.RetentionInDays` | No | None | Sets the amount of time in days to keep the run history for <*workflowName*>. |
+| `Workflows.RuntimeConfiguration.RetentionInDays` | No | `90` days | Sets the amount of time in days to keep workflow run history after a run starts. |
+| `Workflows.WebhookRedirectHostUri` | No | None | Sets the host name to use for webhook callback URLs. |
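As a rough example, a few of the required settings could be applied to a deployed Standard logic app with a command like the following sketch; the resource names are placeholders, and it assumes the `az functionapp` CLI commands, which Standard logic apps (built on the Azure Functions runtime) accept.

```bash
# Sketch only: set a few of the required app settings on a deployed Standard logic app.
az functionapp config appsettings set \
  --name <logic-app-name> \
  --resource-group <resource-group> \
  --settings "APP_KIND=workflowApp" "FUNCTIONS_EXTENSION_VERSION=~4" "FUNCTIONS_WORKER_RUNTIME=dotnet"
```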
<a name="manage-app-settings"></a>
modeling-simulation-workbench How To Guide Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-licenses.md
This article shows you how to upload a license file and activate a license servi
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). + - A FLEXlm license file for a software vendor that requires a license. You need to buy a production environment license from a vendor such as Synopsys, Cadence, Siemens, or Ansys. + ## Upload or update a license for FLEXlm-based tools This section lists the steps to upload a license for a FLEXlm-based tool. First, you get the FLEXlm host ID or the virtual machine (VM) universally unique ID (UUID) from the chamber. Then you provide that value to the license vendor to get the license file. After you get the license file from the vendor, you upload it to the chamber and activate it.
This section lists the steps to upload a license for a FLEXlm-based tool. First,
> [!IMPORTANT] > Loading a new license causes the license server to restart. This could affect actively running jobs.
+>
+> For Siemens license files, validate that:
+>
+> - A `saltd` compatible file is requested; and
+> - The clause `VENDOR saltd` is included in the license file.
## Next steps
modeling-simulation-workbench Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/quickstart-create-portal.md
Creating a client secret allows the Azure Modeling and Simulation Workbench to r
1. In **App registrations**, select your application *QuickstartModSimWorkbenchApp*. 1. Select **Certificates & secrets** > **Client secrets** > **New client secret**. 1. Add a description for your client secret.
-1. Select **6 months** for the **Expires**.
+1. Select **12 months** for the **Expires**.
1. Select **Add**. The application properties displays. 1. Locate the **Client secret value** and document it. You need the client secret value when you create your Key Vault. Make sure you write it down now, as it will never be displayed again after you leave this page.
modeling-simulation-workbench Resources Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/resources-troubleshoot.md
A *not authorized error* while accessing the remote desktop dashboard URL indica
#### Failing for all users -- Review the [Create an application in Microsoft Entra ID](./quickstart-create-portal.md#create-an-application-in-microsoft-entra-id) article to verify your application registration is set up correctly.
+- Review the [Create an application in Microsoft Entra ID](./quickstart-create-portal.md#create-an-application-in-microsoft-entra-id) article and verify your application registration is set up correctly.
- Review the redirect URI registrations for the specific chamber and confirm the connector's redirects match those found with the application. If they don't match, re[register the redirect URIs](./how-to-guide-add-redirect-uris.md). - Review the application registration secrets for Modeling and Simulation Workbench and check to see if your application client secret has expired. Complete the following steps if it's expired. 1. Generate a new secret and make note of the client secret value.
A *not authorized error* while accessing the remote desktop dashboard URL indica
#### Failing for some users 1. Ensure the user is provisioned as a Chamber User or a Chamber Admin on the **chamber** resource. They should be set up as an IAM role directly for that chamber, not as a parent resource with inherited permission.
-1. Ensure the user has a valid email set for their Microsoft Entra profile, and that their Microsoft Entra alias matches their email alias. For example, a Microsoft Entra sign-in alias of *jane.doe* must also have an email alias of *jane.doe*. Jane Doe can't sign in to Microsoft Entra ID with jadoe or any other variation.
+1. Ensure the user has a valid email set for their Microsoft Entra profile, and that their Microsoft Entra alias matches their email alias. For example, a Microsoft Entra sign-in alias of *jane.doe* must also have an email alias of *jane.doe*. Jane Doe can't sign in to Microsoft Entra ID with *jadoe* or any other variation.
1. Validate that your `/mount/sharedhome` folder has available space. The `/mount/sharedhome` directory is set up to store user keys to establish a secure connection. Don't store uploaded tarballs/binaries in this folder or install tools there that use disk capacity, as doing so can cause system connection errors and an outage. Use the /mount/chamberstorages/\<storage name\> directory instead for all your data storage and tool installation needs. 1. Validate that your folder permission settings are correct within your chamber. User provisioning may not work properly if the folder permission settings aren't correct. You can check folder permissions in a terminal session using the *ls -al* command for each /mount/sharedhome/\<useralias\>/.ssh folder; the results should match the following expectations:
A *not authorized error* while accessing the remote desktop dashboard URL indica
### License error
-An *all licenses are in use for the remote desktop error* means that all licenses are already being used for the remote desktop tool. Ask someone on your team to sign out of their session so that you can sign in. Or contact your Microsoft account manager to get more remote desktop licenses.
+An *all licenses are in use for the remote desktop error* means that all licenses are already being used for the remote desktop tool. Ask someone on your team to sign out of their session so that you can sign in. Or contact your Microsoft account manager to get more remote desktop licenses.
## License server troubleshooting
An *all licenses are in use for the remote desktop error* means that all license
### Unable to approve data export request
-1. Confirm you're a Workbench Owner. A Workbench Owner has a Subscription Owner or Subscription Contributor role assigned to them. It's the only role that can approve (or reject) a data export request.
-1. Confirm you didn't request the data export. The user who requests the data export isn't allowed to also approve the data export. For more information about data export, see [Export data from chamber.](./how-to-guide-download-data.md)
+1. Confirm you're a Workbench Owner. A Workbench Owner has a Subscription Owner or Subscription Contributor role assigned to them. It's the only role that can approve (or reject) a data export request.
+1. Confirm you didn't request the data export. The user who requests the data export isn't allowed to also approve the data export. For more information about data export, see [Export data from chamber.](./how-to-guide-download-data.md)
### Data export from chamber not working
Complete the following steps if you're unable to export data from the chamber us
## Quota/capacity troubleshooting
-For storage or computing quota issues, contact your Microsoft account manager. They'll get you more allocation for your workbench subscription, subject to regional capacity limits/constraints.
+For storage or computing quota issues, contact your Microsoft account manager. They'll get you more allocation for your workbench subscription, subject to regional capacity limits/constraints.
+
+## Chamber or connector troubleshooting
+
+### Excessive time starting or stopping
+
When you start a connector or chamber from a stopped state and the power state remains in the 'Starting' status for longer than 25 minutes, perform a Stop, wait for the power state to show Stopped, and then perform a Start again. [Learn how to start and stop a chamber, connector, or VM.](how-to-guide-start-stop-restart.md)
## Issue not covered or addressed
nat-gateway Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-gateway-resource.md
Title: Azure NAT Gateway resource
description: Learn about the NAT gateway resource of the Azure NAT Gateway service. -+ Last updated 04/29/2024
network-watcher Network Watcher Agent Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-agent-update.md
Previously updated : 07/05/2024 Last updated : 09/06/2024
## Latest version ### Identify latest version
networking Azure Network Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-network-latency.md
description: Learn about round-trip latency statistics between Azure regions.
-+ Last updated 06/20/2024
networking Check Usage Against Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/check-usage-against-limits.md
-+ Last updated 06/05/2018 # Check resource usage against limits
networking Connectivity Interoperability Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/connectivity-interoperability-control-plane.md
Title: Interoperability in Azure - Control plane analysis
description: This article provides the control plane analysis of the test setup you can use to analyze interoperability between ExpressRoute, a site-to-site VPN, and virtual network peering in Azure. -+ Last updated 03/24/2023
networking Connectivity Interoperability Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/connectivity-interoperability-data-plane.md
Title: Interoperability in Azure - Data plane analysis
description: This article provides the data plane analysis of the test setup you can use to analyze interoperability between ExpressRoute, a site-to-site VPN, and virtual network peering in Azure. -+ Last updated 03/24/2023
networking Connectivity Interoperability Preface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/connectivity-interoperability-preface.md
Title: Interoperability in Azure - Test setup
description: This article describes a test setup you can use to analyze interoperability between ExpressRoute, a site-to-site VPN, and virtual network peering in Azure. -+ Last updated 03/26/2023
networking Architecture Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/architecture-guides.md
Title: Azure Networking architecture documentation
description: Learn about the reference architecture documentation available for Azure networking services. -+ Last updated 06/13/2023
The following table includes articles that describe how to protect your network
## Next steps
-Learn about [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md).
+Learn about [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md).
networking Lumenisity Patent List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/lumenisity-patent-list.md
Title: Lumenisity University of Southampton Patents
description: List of Lumenisity UoS Patents as of April 19, 2023. -+ Last updated 05/31/2023
The following is a list of patents owned by Lumenisity UoS (University of Southa
## Next Steps
-Learn more about [Microsoft's acquisition of Lumenisity](https://blogs.microsoft.com/blog/2022/12/09/microsoft-acquires-lumenisity-an-innovator-in-hollow-core-fiber-hcf-cable/).
+Learn more about [Microsoft's acquisition of Lumenisity](https://blogs.microsoft.com/blog/2022/12/09/microsoft-acquires-lumenisity-an-innovator-in-hollow-core-fiber-hcf-cable/).
networking Microsoft Global Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/microsoft-global-network.md
Title: 'Microsoft global network - Azure'
description: Learn how Microsoft builds and operates one of the largest backbone networks in the world, and why it's central to delivering a great cloud experience. -+ Last updated 04/06/2023
The exponential growth of Azure and its network has reached a point where we eve
- [Learn about how Microsoft is advancing global network reliability through intelligent software](https://azure.microsoft.com/blog/advancing-global-network-reliability-through-intelligent-software-part-1-of-2/) -- [Learn more about the networking services provided in Azure](https://azure.microsoft.com/product-categories/networking/)
+- [Learn more about the networking services provided in Azure](https://azure.microsoft.com/product-categories/networking/)
networking Network Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/network-monitoring-overview.md
Title: Network Monitoring in Azure Monitor logs
description: Overview of network monitoring solutions, including network performance monitor, to manage networks across cloud, on-premises, and hybrid environments. -+ Last updated 10/30/2023
Related links:
## Miscellaneous
-* [New Pricing](/previous-versions/azure/azure-monitor/insights/network-performance-monitor-pricing-faq)
+* [New Pricing](/previous-versions/azure/azure-monitor/insights/network-performance-monitor-pricing-faq)
networking Secure Application Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/secure-application-delivery.md
Title: Choose a secure application delivery service
description: Learn how you can use a decision tree to help choose a secure application delivery service. -+ Last updated 06/17/2024
Treat this decision tree as a starting point. Every deployment has unique requir
## Next steps - [Choose a secure network topology](secure-network-topology.md)-- [Learn more about Azure network security](security/index.yml)
+- [Learn more about Azure network security](security/index.yml)
networking Secure Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/secure-network-topology.md
Title: Choose a secure network topology
description: Learn how you can use a decision tree to help choose the best topology to secure your network. -+ Last updated 06/17/2024
Treat this decision tree as a starting point. Every deployment has unique requir
## Next steps - [Choose a secure application delivery service](secure-application-delivery.md)-- [Learn more about Azure network security](security/index.yml)
+- [Learn more about Azure network security](security/index.yml)
networking Working Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/working-remotely-support.md
Title: Enable remote work by using Azure networking services
description: Learn how to use Azure networking services to enable remote work and how to mitigate traffic issues that result from an increased number of people who work remotely. -+ Last updated 04/09/2023
operator-service-manager Get Started With Cluster Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/get-started-with-cluster-registry.md
+
+ Title: Get started with Azure Operator Service Manager cluster registry
+description: Azure Operator Service Manager cluster registry provides a locally resilient edge registry service to host Nexus K8s container image artifacts.
++ Last updated : 09/06/2024++++
+# Get started with cluster registry
+* Original Publish Date: July 26, 2024
+
+## Overview
+Improve resiliency for cloud native network functions with Azure Operator Service Manager cluster registry. This feature requires the following minimum environment:
+* AOSM ARM API Version: 2023-09-01
+* AOSM CNF Arc for Kubernetes Extension Build Number: 1.0.2711-7
+
+## Introduction
+Azure Operator Service Manager (AOSM) cluster registry (CR) enables a local copy of container images in the Nexus K8s cluster. When the containerized network function (CNF) is installed with cluster registry enabled, the container images are pulled from the remote AOSM artifact store and saved to a local registry. With cluster registry, CNF access to container images survives loss of connectivity to the remote artifact store.
+
+### Key use cases
+Cloud native network functions (CNF) need access to container images, not only during the initial deployment using AOSM artifact store, but also to keep the network function operational. Some of these scenarios include:
+* Pod restarts: Stopping and starting a pod can result in a cluster node pulling container images from the registry.
+* Kubernetes scheduler operations: During pod to node assignments, according to scheduler profile rules, if the new node does not have the container images locally cached, the node pulls container images from the registry.
+
+In these scenarios, if there's a temporary issue accessing the AOSM artifact store, the cluster registry provides the necessary container images to prevent disruption to the running CNF. The AOSM cluster registry feature also decreases the number of image pull requests on the AOSM artifact store, since each Nexus K8s node pulls container images from the cluster registry instead of the AOSM artifact store.
+
+## How cluster registry works
+AOSM cluster registry is enabled using the Network Function Operator Arc K8s extension. The following CLI shows how cluster registry is enabled on a Nexus K8s cluster.
+```
+az k8s-extension create --name networkfunction-operator --cluster-name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP_NAME> --cluster-type connectedClusters --extension-type Microsoft.Azure.HybridNetwork --scope cluster --release-namespace azurehybridnetwork --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunctionoperator --config global.networkfunctionextension.enableClusterRegistry=true --config global.networkfunctionextension.clusterRegistry.storageSize=100Gi --version 1.0.2711-7 --auto-upgrade-minor-version false --release-train stable
+```
+When the cluster registry feature is enabled in the Network Function Operator Arc K8s extension, any container images deployed from AOSM artifact store are accessible locally in the Nexus K8s cluster. The user can choose the persistent storage size for the cluster registry.
+
+> [!NOTE]
+> If the user doesn't provide any input, a default persistent volume of 100 GB is used.
+
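To confirm the values that were applied, one option is to query the extension's configuration settings, as in this sketch; the cluster and resource group names are the same placeholders used above.

```bash
# Sketch only: inspect the Network Function Operator extension configuration after enabling cluster registry.
az k8s-extension show \
  --name networkfunction-operator \
  --cluster-name <CLUSTER_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --cluster-type connectedClusters \
  --query "configurationSettings"
```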
+## Frequently Asked Questions
+
+### Can I use AOSM cluster registry with a CNF application previously deployed?
+If there's a CNF application already deployed without cluster registry, the container images are not available automatically. The cluster registry must be enabled before deploying the network function with AOSM.
+
+### Which Nexus K8s storage class is used?
+The AOSM cluster registry feature uses the `nexus-volume` storage class to store the container images in the Nexus Kubernetes cluster. By default, a 100-GB persistent volume is created if the user doesn't specify the size of the cluster registry.
+
+### Can I change the storage size after a deployment?
+Storage size can't be modified after the initial deployment. We recommend configuring the volume size at 3 to 4 times the starting size.
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
The following regions currently support availability zones:
| South Central US | UK South | | | East Asia | | US Gov Virginia | West Europe | | | China North 3 | | West US 2 | Sweden Central | | |Korea Central |
-| West US 3 | Switzerland North | | | New Zealand North |
+| West US 3 | Switzerland North | | | *New Zealand North |
| Mexico Central | Poland Central |||| ||Spain Central ||||
security Network Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-best-practices.md
Best practices for logically segmenting subnets include:
**Detail**: Use [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing)-based subnetting principles to create your subnets. **Best practice**: Create network access controls between subnets. Routing between subnets happens automatically, and you don't need to manually configure routing tables. By default, there are no network access controls between the subnets that you create on an Azure virtual network.
-**Detail**: Use a [network security group](../../virtual-network/manage-network-security-group.md) to protect against unsolicited traffic into Azure subnets. Network security groups (NSGs) are simple, stateful packet inspection devices. NSGs use the 5-tuple approach (source IP, source port, destination IP, destination port, and layer 4 protocol) to create allow/deny rules for network traffic. You allow or deny traffic to and from a single IP address, to and from multiple IP addresses, or to and from entire subnets.
+**Detail**: Use a [network security group](../../virtual-network/manage-network-security-group.md) to protect against unsolicited traffic into Azure subnets. Network security groups (NSGs) are simple, stateful packet inspection devices. NSGs use the 5-tuple approach (source IP, source port, destination IP, destination port, and protocol) to create allow/deny rules for network traffic. You allow or deny traffic to and from a single IP address, to and from multiple IP addresses, or to and from entire subnets.
When you use network security groups for network access control between subnets, you can put resources that belong to the same security zone or role in their own subnets.
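As an illustration of the 5-tuple model, a rule like the following sketch (placeholder names and address ranges) allows HTTPS from one subnet's range into another.

```bash
# Sketch only: create an allow rule on an existing network security group using the 5-tuple fields.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowHttpsInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.0.1.0/24 \
  --source-port-ranges '*' \
  --destination-address-prefixes 10.0.2.0/24 \
  --destination-port-ranges 443
```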
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md
description: This document outlines a list of important actions administrators s
Previously updated : 08/17/2022 Last updated : 09/06/2024
If you're reading this document, you're aware of the significance of security. You likely already carry the responsibility for securing your organization. If you need to convince others of the importance of security, send them to read the latest [Microsoft Digital Defense Report](https://www.microsoft.com/security/business/security-intelligence-report).
-This document will help you get a more secure posture using the capabilities of Microsoft Entra ID by using a five-step checklist to improve your organization's protection against cyber-attacks.
+This document helps you get a more secure posture using the capabilities of Microsoft Entra ID by using a five-step checklist to improve your organization's protection against cyber-attacks.
-This checklist will help you quickly deploy critical recommended actions to protect your organization immediately by explaining how to:
+This checklist helps you quickly deploy critical recommended actions to protect your organization immediately by explaining how to:
- Strengthen your credentials - Reduce your attack surface area
This checklist will help you quickly deploy critical recommended actions to prot
> [!NOTE] > Many of the recommendations in this document apply only to applications that are configured to use Microsoft Entra ID as their identity provider. Configuring apps for Single Sign-On assures the benefits of credential policies, threat detection, auditing, logging, and other features add to those applications. [Microsoft Entra Application Management](../../active-directory/manage-apps/what-is-application-management.md) is the foundation on which all these recommendations are based.
-The recommendations in this document are aligned with the [Identity Secure Score](../../active-directory/fundamentals/identity-secure-score.md), an automated assessment of your Microsoft Entra tenantΓÇÖs identity security configuration. Organizations can use the Identity Secure Score page in the Microsoft Entra admin center to find gaps in their current security configuration to ensure they follow current Microsoft best practices for security. Implementing each recommendation in the Secure Score page will increase your score and allow you to track your progress, plus help you compare your implementation against other similar size organizations.
+The recommendations in this document are aligned with the [Identity Secure Score](../../active-directory/fundamentals/identity-secure-score.md), an automated assessment of your Microsoft Entra tenant's identity security configuration. Organizations can use the Identity Secure Score page in the Microsoft Entra admin center to find gaps in their current security configuration to ensure they follow current Microsoft best practices for security. Implementing each recommendation in the Secure Score page increases your score, allows you to track your progress, and helps you compare your implementation against other organizations of a similar size.
:::image type="content" source="media/steps-secure-identity/identity-secure-score-in-azure-portal.png" alt-text="Azure portal window showing Identity Secure Score and some recommendations." lightbox="media/steps-secure-identity/identity-secure-score-in-azure-portal.png":::
All set? Let's get started on the checklist.
## Step 1: Strengthen your credentials
-Although other types of attacks are emerging, including consent phishing and attacks on nonhuman identities, password-based attacks on user identities are still the most prevalent vector of identity compromise. Well-established spear phishing and password spray campaigns by adversaries continue to be successful against organizations that havenΓÇÖt yet implemented multifactor authentication (MFA) or other protections against this common tactic.
+Although other types of attacks are emerging, including consent phishing and attacks on nonhuman identities, password-based attacks on user identities are still the most prevalent vector of identity compromise. Well-established spear phishing and password spray campaigns by adversaries continue to be successful against organizations that don't implement multifactor authentication (MFA) or other protections against this common tactic.
-As an organization you need to make sure that your identities are validated and secured with MFA everywhere. In 2020, the [FBI IC3 Report](https://www.ic3.gov/Medi).
+As an organization you need to make sure that your identities are validated and secured with MFA everywhere. In 2020, the [Federal Bureau of Investigation (FBI) Internet Crime Complaint Center (IC3) Report](https://www.ic3.gov/Medi).
### Make sure your organization uses strong authentication
-To easily enable the basic level of identity security, you can use the one-click enablement with [Microsoft Entra security defaults](../../active-directory/fundamentals/concept-fundamentals-security-defaults.md). Security defaults enforce Microsoft Entra multifactor authentication for all users in a tenant and blocks sign-ins from legacy protocols tenant-wide.
+To easily enable the basic level of identity security, you can use the one-step enablement with [Microsoft Entra security defaults](../../active-directory/fundamentals/concept-fundamentals-security-defaults.md). Security defaults enforce Microsoft Entra multifactor authentication for all users in a tenant and block sign-ins from legacy protocols tenant-wide.
If your organization has Microsoft Entra ID P1 or P2 licenses, then you can also use the [Conditional Access insights and reporting workbook](../../active-directory/conditional-access/howto-conditional-access-insights-reporting.md) to help you discover gaps in your configuration and coverage. From these recommendations, you can close those gaps by creating a policy using the new Conditional Access templates experience. [Conditional Access templates](../../active-directory/conditional-access/concept-conditional-access-policy-common.md) provide an easy method to deploy new policies that align with Microsoft recommended [best practices](identity-management-best-practices.md), making it easy to deploy common policies to protect your identities and devices. ### Start banning commonly attacked passwords, and turn off traditional complexity and expiration rules
-Many organizations use traditional complexity and password expiration rules. [Microsoft's research](https://www.microsoft.com/research/publication/password-guidance/) has shown and [NIST guidance](https://pages.nist.gov/800-63-3/sp800-63b.html) states that these policies cause users to choose passwords that are easier to guess. We recommend you use [Microsoft Entra password protection](../../active-directory/authentication/concept-password-ban-bad.md) a dynamic banned password feature using current attacker behavior to prevent users from setting passwords that can easily be guessed. This capability is always on when users are created in the cloud, but is now also available for hybrid organizations when they deploy [Microsoft Entra password protection for Windows Server Active Directory](../../active-directory/authentication/concept-password-ban-bad-on-premises.md). In addition, we recommend you remove expiration policies. Password change offers no containment benefits as cyber criminals almost always use credentials as soon as they compromise them. Refer to the following article to [Set the password expiration policy for your organization](/microsoft-365/admin/manage/set-password-expiration-policy).
+Many organizations use traditional complexity and password expiration rules. [Microsoft's research](https://www.microsoft.com/research/publication/password-guidance/) shows, and [National Institute of Standards and Technology (NIST) Special Publication 800-63B Digital Identity Guidelines](https://pages.nist.gov/800-63-3/sp800-63b.html) state, that these policies cause users to choose passwords that are easier to guess. We recommend you use [Microsoft Entra password protection](../../active-directory/authentication/concept-password-ban-bad.md), a dynamic banned password feature that uses current attacker behavior to prevent users from setting passwords that can easily be guessed. This capability is always on when users are created in the cloud, but is also available for hybrid organizations when they deploy [Microsoft Entra password protection for Windows Server Active Directory](../../active-directory/authentication/concept-password-ban-bad-on-premises.md). In addition, we recommend that you remove expiration policies. Password change offers no containment benefits because cyber criminals almost always use credentials as soon as they compromise them. To learn more, see [Set the password expiration policy for your organization](/microsoft-365/admin/manage/set-password-expiration-policy).
### Protect against leaked credentials and add resilience against outages The simplest and recommended method for enabling cloud authentication for on-premises directory objects in Microsoft Entra ID is to enable [password hash synchronization (PHS)](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md). If your organization uses a hybrid identity solution with pass-through authentication or federation, then you should enable password hash sync for the following two reasons: -- The [Users with leaked credentials report](../../active-directory/identity-protection/overview-identity-protection.md) in Microsoft Entra ID warns of username and password pairs, which have been exposed publically. An incredible volume of passwords is leaked via phishing, malware, and password reuse on third-party sites that are later breached. Microsoft finds many of these leaked credentials and will tell you, in this report, if they match credentials in your organization ΓÇô but only if you enable [password hash sync](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md) or have cloud-only identities.-- If an on-premises outage happens, like a ransomware attack, you can [switch over to using cloud authentication using password hash sync](../../active-directory/hybrid/choose-ad-authn.md). This backup authentication method will allow you to continue accessing apps configured for authentication with Microsoft Entra ID, including Microsoft 365. In this case, IT staff won't need to resort to shadow IT or personal email accounts to share data until the on-premises outage is resolved.
+- The [Users with leaked credentials report](../../active-directory/identity-protection/overview-identity-protection.md) in Microsoft Entra ID warns of publicly exposed username and password pairs. An incredible volume of passwords is leaked via phishing, malware, and password reuse on third-party sites that are later breached. Microsoft finds many of these leaked credentials and tells you, in this report, if they match credentials in your organization, but only if you enable [password hash sync](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md) or have cloud-only identities.
+- If an on-premises outage happens, like a ransomware attack, you can [switch over to using cloud authentication using password hash sync](../../active-directory/hybrid/choose-ad-authn.md). This backup authentication method allows you to continue accessing apps configured for authentication with Microsoft Entra ID, including Microsoft 365. In this case, IT staff doesn't need to resort to shadow IT or personal email accounts to share data until the on-premises outage is resolved.
Passwords are never stored in clear text or encrypted with a reversible algorithm in Microsoft Entra ID. For more information on the actual process of password hash synchronization, see [Detailed description of how password hash synchronization works](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#detailed-description-of-how-password-hash-synchronization-works).
Smart lockout helps lock out bad actors that try to guess your users' passwords
## Step 2: Reduce your attack surface area
-Given the pervasiveness of password compromise, minimizing the attack surface in your organization is critical. Disabling the use of older, less secure protocols, limiting access entry points, moving to cloud authentication, and exercising more significant control of administrative access to resources and embracing Zero Trust security principles.
+Given the pervasiveness of password compromise, minimizing the attack surface in your organization is critical. Disable the use of older, less secure protocols, limit access entry points, move to cloud authentication, exercise more significant control of administrative access to resources, and embrace Zero Trust security principles.
### Use Cloud Authentication
-Credentials are a primary attack vector. The practices in this blog can reduce the attack surface by using cloud authentication, deploy MFA and use passwordless authentication methods. You can deploy passwordless methods such as Windows Hello for Business, Phone Sign-in with the Microsoft Authenticator App or FIDO.
+Credentials are a primary attack vector. The practices in this blog can reduce the attack surface by using cloud authentication, deploying MFA, and using passwordless authentication methods. You can deploy passwordless methods such as Windows Hello for Business, Phone Sign-in with the Microsoft Authenticator App, or FIDO.
### Block legacy authentication
-Apps using their own legacy methods to authenticate with Microsoft Entra ID and access company data, pose another risk for organizations. Examples of apps using legacy authentication are POP3, IMAP4, or SMTP clients. Legacy authentication apps authenticate on behalf of the user and prevent Microsoft Entra ID from doing advanced security evaluations. The alternative, modern authentication, will reduce your security risk, because it supports multifactor authentication and Conditional Access.
+Apps that use their own legacy methods to authenticate with Microsoft Entra ID and access company data pose another risk for organizations. Examples of apps using legacy authentication are POP3, IMAP4, or SMTP clients. Legacy authentication apps authenticate on behalf of the user and prevent Microsoft Entra ID from doing advanced security evaluations. The alternative, modern authentication, reduces your security risk because it supports multifactor authentication and Conditional Access.
We recommend the following actions:
For more information, see the article [Blocking legacy authentication protocols
### Block invalid authentication entry points
-Using the verify explicitly principle, you should reduce the impact of compromised user credentials when they happen. For each app in your environment, consider the valid use cases: which groups, which networks, which devices and other elements are authorized ΓÇô then block the rest. With Microsoft Entra Conditional Access, you can control how authorized users access their apps and resources based on specific conditions you define.
+Using the *verify explicitly* principle, you should reduce the impact of compromised user credentials when they happen. For each app in your environment, consider the valid use cases: which groups, which networks, which devices, and other elements are authorized, and then block the rest. With Microsoft Entra Conditional Access, you can control how authorized users access their apps and resources based on specific conditions you define.
For more information on how to use Conditional Access for your Cloud Apps and user actions, see [Conditional Access Cloud apps, actions, and authentication context](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md). ### Review and govern admin roles
-Another Zero Trust pillar is the need to minimize the likelihood a compromised account can operate with a privileged role. This control can be accomplished by assigning the least amount of privilege to an identity. If youΓÇÖre new to Microsoft Entra roles, this article will help you understand Microsoft Entra roles.
+Another Zero Trust pillar is the need to minimize the likelihood that a compromised account can operate with a privileged role. This control can be accomplished by assigning the least amount of privilege to an identity. If you're new to Microsoft Entra roles, this article helps you understand Microsoft Entra roles.
Privileged roles in Microsoft Entra ID should be cloud-only accounts to isolate them from on-premises environments. Don't use on-premises password vaults to store the credentials.
For more information, see the article [Plan a Privileged Identity Management dep
It's important to understand the various Microsoft Entra application consent experiences, the types of permissions and consent, and their implications on your organization's security posture. While allowing users to consent by themselves does allow users to easily acquire useful applications that integrate with Microsoft 365, Azure, and other services, it can represent a risk if not used and monitored carefully.
-Microsoft recommends restricting user consent to allow end-user consent only for apps from verified publishers and only for permissions you select. If end-user consent is restricted, previous consent grants will still be honored but all future consent operations must be performed by an administrator. For restricted cases, admin consent can be requested by users through an integrated admin consent request workflow or through your own support processes. Before restricting end-user consent, use our recommendations to plan this change in your organization. For applications you wish to allow all users to access, consider granting consent on behalf of all users, making sure users who havenΓÇÖt yet consented individually will be able to access the app. If you donΓÇÖt want these applications to be available to all users in all scenarios, use application assignment and Conditional Access to restrict user access to specific apps.
+Microsoft recommends restricting user consent to allow end-user consent only for apps from verified publishers and only for permissions you select. If end-user consent is restricted, previous consent grants are still honored, but an administrator must perform all future consent operations. For restricted cases, users can request admin consent through an integrated admin consent request workflow or through your own support processes. Before restricting end-user consent, use our recommendations to plan this change in your organization. For applications you wish to allow all users to access, consider granting consent on behalf of all users, making sure users who haven't yet consented individually can access the app. If you don't want these applications to be available to all users in all scenarios, use application assignment and Conditional Access to restrict user access to specific apps.
Make sure users can request admin approval for new applications to reduce user friction, minimize support volume, and prevent users from signing up for applications using non-Microsoft Entra credentials. Once you regulate your consent operations, administrators should audit app and consent permissions regularly.
For more information, see the article [How To: Configure and enable risk policie
### Implement sign-in risk policy
-A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. A sign-in risk-based policy can be implemented through adding a sign-in risk condition to your Conditional Access policies that evaluates the risk level to a specific user or group. Based on the risk level (high/medium/low), a policy can be configured to block access or force multifactor authentication. We recommend that you force multifactor authentication on Medium or above risky sign-ins.
+A sign-in risk represents the probability that the identity owner didn't authorize a given authentication request. You can implement a sign-in risk-based policy by adding a sign-in risk condition to your Conditional Access policies that evaluates the risk level of a specific user or group. Based on the risk level (high, medium, or low), a policy can be configured to block access or force multifactor authentication. We recommend that you force multifactor authentication on medium or above risky sign-ins.
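For illustration, the following is a minimal sketch of what such a policy could look like when created programmatically through the Microsoft Graph Conditional Access API. The policy name, report-only state, and token-acquisition step are placeholders, and the caller must already hold the `Policy.ReadWrite.ConditionalAccess` permission:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class SignInRiskPolicySketch
{
    // Creates a Conditional Access policy that requires MFA for medium and high sign-in risk.
    static async Task CreatePolicyAsync(string accessToken)
    {
        const string policyJson = @"{
          ""displayName"": ""Require MFA for medium and high sign-in risk"",
          ""state"": ""enabledForReportingButNotEnforced"",
          ""conditions"": {
            ""signInRiskLevels"": [ ""high"", ""medium"" ],
            ""applications"": { ""includeApplications"": [ ""All"" ] },
            ""users"": { ""includeUsers"": [ ""All"" ] }
          },
          ""grantControls"": { ""operator"": ""OR"", ""builtInControls"": [ ""mfa"" ] }
        }";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // POST the policy definition to the Microsoft Graph Conditional Access endpoint.
        HttpResponseMessage response = await http.PostAsync(
            "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
            new StringContent(policyJson, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}
```

Starting the policy in report-only mode (`enabledForReportingButNotEnforced`) lets you review its impact in the sign-in logs before you enforce it.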
:::image type="content" source="media/steps-secure-identity/require-mfa-medium-or-high-risk-sign-in.png" alt-text="Conditional Access policy requiring MFA for medium and high risk sign-ins." lightbox="media/steps-secure-identity/require-mfa-medium-or-high-risk-sign-in.png"::: ### Implement user risk security policy
-User risk indicates the likelihood a user's identity has been compromised and is calculated based on the user risk detections that are associated with a user's identity. A user risk-based policy can be implemented through adding a user risk condition to your Conditional Access policies that evaluates the risk level to a specific user. Based on Low, Medium, High risk-level, a policy can be configured to block access or require a secure password change using multifactor authentication. Microsoft's recommendation is to require a secure password change for users on high risk.
+User risk indicates the likelihood of user identity compromise, and is calculated based on the user risk detections that are associated with a user's identity. You can implement a user risk-based policy by adding a user risk condition to your Conditional Access policies that evaluates the risk level of a specific user. Based on the risk level (low, medium, or high), a policy can be configured to block access or require a secure password change using multifactor authentication. Microsoft's recommendation is to require a secure password change for users at high risk.
:::image type="content" source="media/steps-secure-identity/require-password-change-high-risk-user.png" alt-text="Conditional Access policy requiring password change for high risk users." lightbox="media/steps-secure-identity/require-password-change-high-risk-user.png":::
Monitoring and auditing your logs is important to detect suspicious behavior. Th
## Step 4: Utilize cloud intelligence
-Auditing and logging of security-related events and related alerts are essential components of an efficient protection strategy. Security logs and reports provide you with an electronic record of suspicious activities and help you detect patterns that may indicate attempted or successful external penetration of the network, and internal attacks. You can use auditing to monitor user activity, document regulatory compliance, do forensic analysis, and more. Alerts provide notifications of security events. Make sure you have a log retention policy in place for both your sign-in logs and audit logs for Microsoft Entra ID by exporting into Azure Monitor or a SIEM tool.
+Auditing and logging of security-related events and related alerts are essential components of an efficient protection strategy. Security logs and reports provide you with an electronic record of suspicious activities and help you detect patterns that might indicate attempted or successful external penetration of the network, and internal attacks. You can use auditing to monitor user activity, document regulatory compliance, do forensic analysis, and more. Alerts provide notifications of security events. Make sure you have a log retention policy in place for both your sign-in logs and audit logs for Microsoft Entra ID by exporting into Azure Monitor or a SIEM tool.
<a name='monitor-azure-ad'></a>
Microsoft Azure services and features provide you with configurable security aud
[Microsoft Entra ID Protection](../../active-directory/identity-protection/overview-identity-protection.md) provides two important reports you should monitor daily:
-1. Risky sign-in reports will surface user sign-in activities you should investigate, the legitimate owner may not have performed the sign-in.
-1. Risky user reports will surface user accounts that may have been compromised, such as leaked credential that was detected or the user signed in from different locations causing an impossible travel event.
+1. Risky sign-in reports surface user sign-in activities that you should investigate to determine whether the legitimate owner performed the sign-in.
+1. Risky user reports surface user accounts that might be compromised, such as a detected leaked credential, or a user who signed in from different locations, causing an impossible travel event.
:::image type="content" source="media/steps-secure-identity/identity-protection-overview.png" alt-text="Overview charts of activity in Identity Protection in the Azure portal." lightbox="media/steps-secure-identity/identity-protection-overview.png"::: ### Audit apps and consented permissions
-Users can be tricked into navigating to a compromised web site or apps that will gain access to their profile information and user data, such as their email. A malicious actor can use the consented permissions it received to encrypt their mailbox content and demand a ransom to regain your mailbox data. [Administrators should review and audit](/office365/securitycompliance/detect-and-remediate-illicit-consent-grants) the permissions given by users. In addition to auditing the permissions given by users, you can [locate risky or unwanted OAuth applications](/cloud-app-security/investigate-risky-oauth) in premium environments.
+Users can be tricked into navigating to a compromised website or apps that gain access to their profile information and user data, such as their email. A malicious actor can use the consented permissions to encrypt the user's mailbox content and demand a ransom to regain the mailbox data. [Administrators should review and audit](/office365/securitycompliance/detect-and-remediate-illicit-consent-grants) the permissions given by users. In addition to auditing the permissions given by users, you can [locate risky or unwanted OAuth applications](/cloud-app-security/investigate-risky-oauth) in premium environments.
## Step 5: Enable end-user self-service
-As much as possible you'll want to balance security with productivity. Approaching your journey with the mindset that you're setting a foundation for security, you can remove friction from your organization by empowering your users while remaining vigilant and reducing your operational overheads.
+As much as possible, you want to balance security with productivity. By approaching your journey with the mindset that you're setting a foundation for security, you can remove friction from your organization by empowering your users while remaining vigilant and reducing your operational overhead.
### Implement self-service password reset
-Microsoft Entra ID's [self-service password reset (SSPR)](../../active-directory/authentication/tutorial-enable-sspr.md) offers a simple means for IT administrators to allow users to reset or unlock their passwords or accounts without helpdesk or administrator intervention. The system includes detailed reporting that tracks when users have reset their passwords, along with notifications to alert you to misuse or abuse.
+Microsoft Entra ID's [self-service password reset (SSPR)](../../active-directory/authentication/tutorial-enable-sspr.md) offers a simple means for IT administrators to allow users to reset or unlock their passwords or accounts without helpdesk or administrator intervention. The system includes detailed reporting that tracks when users reset their passwords, along with notifications to alert you to misuse or abuse.
### Implement self-service group and application access
-Microsoft Entra ID can allow non-administrators to manage access to resources, using security groups, Microsoft 365 groups, application roles, and access package catalogs. [Self-service group management](../../active-directory/enterprise-users/groups-self-service-management.md) enables group owners to manage their own groups, without needing to be assigned an administrative role. Users can also create and manage Microsoft 365 groups without relying on administrators to handle their requests, and unused groups expire automatically. [Microsoft Entra entitlement management](../../active-directory/governance/entitlement-management-overview.md) further enables delegation and visibility, with comprehensive access request workflows and automatic expiration. You can delegate to non-administrators the ability to configure their own access packages for groups, Teams, applications, and SharePoint Online sites they own, with custom policies for who is required to approve access, including configuring employee's managers and business partner sponsors as approvers.
+Microsoft Entra ID can allow nonadministrators to manage access to resources, using security groups, Microsoft 365 groups, application roles, and access package catalogs. [Self-service group management](../../active-directory/enterprise-users/groups-self-service-management.md) enables group owners to manage their own groups, without needing to be assigned an administrative role. Users can also create and manage Microsoft 365 groups without relying on administrators to handle their requests, and unused groups expire automatically. [Microsoft Entra entitlement management](../../active-directory/governance/entitlement-management-overview.md) further enables delegation and visibility, with comprehensive access request workflows and automatic expiration. You can delegate to nonadministrators the ability to configure their own access packages for groups, Teams, applications, and SharePoint Online sites they own, with custom policies for who is required to approve access, including configuring employees' managers and business partner sponsors as approvers.
<a name='implement-azure-ad-access-reviews'></a>
With [Microsoft Entra access reviews](../../active-directory/governance/access-r
Provisioning and deprovisioning are the processes that ensure consistency of digital identities across multiple systems. These processes are typically applied as part of [identity lifecycle management](../../active-directory/governance/what-is-identity-lifecycle-management.md).
-Provisioning is the processes of creating an identity in a target system based on certain conditions. De-provisioning is the process of removing the identity from the target system, when conditions are no longer met. Synchronization is the process of keeping the provisioned object, up to date, so that the source object and target object are similar.
+Provisioning is the process of creating an identity in a target system based on certain conditions. Deprovisioning is the process of removing the identity from the target system when conditions are no longer met. Synchronization is the process of keeping the provisioned object up to date so that the source object and target object are similar.
Microsoft Entra ID currently provides three areas of automated provisioning. They are: -- Provisioning from an external non-directory authoritative system of record to Microsoft Entra ID, via [HR-driven provisioning](../../active-directory/governance/what-is-provisioning.md#hr-driven-provisioning)
+- Provisioning from an external nondirectory authoritative system of record to Microsoft Entra ID, via [HR-driven provisioning](../../active-directory/governance/what-is-provisioning.md#hr-driven-provisioning)
- Provisioning from Microsoft Entra ID to applications, via [App provisioning](../../active-directory/governance/what-is-provisioning.md#app-provisioning) - Provisioning between Microsoft Entra ID and Active Directory Domain Services, via [inter-directory provisioning](../../active-directory/governance/what-is-provisioning.md#inter-directory-provisioning)
Find out more here: What is provisioning with Microsoft Entra ID?
## Summary
-There are many aspects to a secure Identity infrastructure, but this five-step checklist will help you quickly accomplish a safer and secure identity infrastructure:
+There are many aspects to a secure identity infrastructure, but this five-step checklist helps you quickly achieve a safer, more secure identity infrastructure:
- Strengthen your credentials - Reduce your attack surface area
We appreciate how seriously you take security and hope this document is a useful
If you need assistance to plan and deploy the recommendations, refer to the [Microsoft Entra ID project deployment plans](../../active-directory/fundamentals/deployment-plans.md) for help.
-If you're confident all these steps are complete, use MicrosoftΓÇÖs [Identity Secure Score](../../active-directory/fundamentals/identity-secure-score.md), which will keep you up to date with the [latest best practices](identity-management-best-practices.md) and security threats.
+If you're confident all these steps are complete, use Microsoft's [Identity Secure Score](../../active-directory/fundamentals/identity-secure-score.md), which keeps you up to date with the [latest best practices](identity-management-best-practices.md) and security threats.
service-bus-messaging Service Bus Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-replication.md
This feature allows promoting any secondary region to primary, at any time. Prom
> - This feature is currently in public preview, and as such shouldn't be used in production scenarios. > - The below regions are currently supported in the public preview. >
-> | US | Europe |
-> |||
-> | Central US EUAP | Italy North |
-> | | Spain Central |
-> | | Norway East |
+> | North America | Europe | APAC |
+> |--||-|
+> | Central US EUAP | Italy North | Australia Central |
+> | Canada Central | Spain Central | Australia East |
+> | Canada East | Norway East | |
> > - This feature is currently available on new namespaces. If a namespace had this feature enabled before, it can be disabled (by removing the secondary regions), and re-enabled. > - The following features currently aren't supported. We're continuously working on bringing more features to the public preview, and will update this list with the latest status.
To learn more about Service Bus messaging, see the following articles:
* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md) * [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md) * [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
-* [REST API](/rest/api/servicebus/)
+* [REST API](/rest/api/servicebus/)
service-health Alerts Activity Log Service Notifications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-portal.md
You also can configure who the alert should be sent to:
- Select an existing action group. - Create a new action group (that can be used for future alerts).
+> [!NOTE]
+> Service Health alerts are only supported in public clouds within the global region. For action groups to function properly in response to a Service Health alert, the region of the action group must be set as "Global".
To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md).
service-health Resource Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-overview.md
Title: Azure Resource Health overview description: Learn how Azure Resource Health helps you diagnose and get support for service problems that affect your Azure resources.-+ Last updated 02/14/2023
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
Previously updated : 07/15/2024 Last updated : 09/06/2024 - engagement-fy23 - linux-related-content
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Replication appliance / Configuration server** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 75](https://support.microsoft.com/topic/update-rollup-75-for-azure-site-recovery-4884b937-8976-454a-9b80-57e0200eb2ec) | 9.62.7172.1 | 9.62.7172.1 | 9.56.6879.1 | 5.24.0814.2 | 2.0.9932.0
[Rollup 74](https://support.microsoft.com/topic/update-rollup-74-for-azure-site-recovery-584e3586-4c55-4cc2-8b1c-63038b6b4464) | 9.62.7096.1 | 9.62.7096.1 | 9.62.7096.1 | 5.24.0614.1 | 2.0.9919.0 [Rollup 73](https://support.microsoft.com/topic/update-rollup-73-for-azure-site-recovery-d3845f1e-2454-4ae8-b058-c1fec6206698) | 9.61.7016.1 | 9.61.7016.1 | 9.61.7016.1 | 5.24.0317.5 | 2.0.9917.0 [Rollup 72](https://support.microsoft.com/topic/update-rollup-72-for-azure-site-recovery-kb5036010-aba602a9-8590-4afe-ac8a-599141ec99a5) | 9.60.6956.1 | NA | 9.60.6956.1 | 5.24.0117.5 | 2.0.9917.0 [Rollup 71](https://support.microsoft.com/topic/update-rollup-71-for-azure-site-recovery-kb5035688-4df258c7-7143-43e7-9aa5-afeef9c26e1a) | 9.59.6930.1 | NA | 9.59.6930.1 | NA | NA [Rollup 70](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | 9.57.6920.1 | 9.57.6911.1 / NA | 9.57.6911.1 | 5.23.1204.5 (VMware) | 2.0.9263.0 (VMware)
-[Rollup 69](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | NA | 9.56.6879.1 / NA | 9.56.6879.1 | 5.23.1101.10 (VMware) | 2.0.9263.0 (VMware)
[Learn more](service-updates-how-to.md) about update installation and support. +
+## Updates (August 2024)
+
+### Update Rollup 75
+
+Update [rollup 75](https://support.microsoft.com/topic/update-rollup-75-for-azure-site-recovery-4884b937-8976-454a-9b80-57e0200eb2ec) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | Certificate renewal for VMware to Azure in Modernized Appliances.
+**Azure VM disaster recovery** | No improvements added.
+**VMware VM/physical disaster recovery to Azure** | No improvements added.
+++ ## Updates (July 2024) ### Update Rollup 74
spring-apps How To Configure Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-planned-maintenance.md
Notifications and messages are sent out before and during the maintenance. The f
Currently, Azure Spring Apps performs one regular planned maintenance to upgrade the underlying infrastructure every three months. For a detailed maintenance timeline, check the notifications on the [Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health) page.
+> [!NOTE]
+> In compliance with Microsoft's security standards, we perform additional security patching for underlying Azure Kubernetes Service (AKS) clusters during the second week of each month. Maintenance occurs within an 8-hour window during non-working hours. We do this work in a rolling fashion to ensure uninterrupted service.
+ ## Best practices - When you configure planned maintenance for multiple service instances in the same region, the maintenance takes place within the same week. For example, if maintenance for cluster A is set on Monday and cluster B on Sunday, then cluster A is maintained before cluster B, in the same week.
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
Previously updated : 04/12/2024 Last updated : 06/06/2024 ms.devlang: python
Azure Data Lake Storage implements an access control model that supports both Az
You can associate a [security principal](../../role-based-access-control/overview.md#security-principal) with an access level for files and directories. Each association is captured as an entry in an *access control list (ACL)*. Each file and directory in your storage account has an access control list. When a security principal attempts an operation on a file or directory, an ACL check determines whether that security principal (user, group, service principal, or managed identity) has the correct permission level to perform the operation. > [!NOTE]
-> ACLs apply only to security principals in the same tenant, and they don't apply to users who use Shared Key or shared access signature (SAS) token authentication. That's because no identity is associated with the caller and therefore security principal permission-based authorization cannot be performed.
+> ACLs apply only to security principals in the same tenant. ACLs don't apply to users who use Shared Key authorization, because no identity is associated with the caller and therefore security principal permission-based authorization can't be performed. The same is true for shared access signature (SAS) tokens, except when a user delegation SAS token is used. In that case, Azure Storage performs a POSIX ACL check against the object ID before it authorizes the operation, as long as the optional `suoid` parameter is used. To learn more, see [Construct a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas#construct-a-user-delegation-sas).
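For context, the following is a minimal sketch (assuming the Azure.Storage.Blobs package and a `BlobServiceClient` authorized with a Microsoft Entra credential) of how a user delegation SAS is typically constructed; setting the optional `suoid` parameter itself is covered in the linked REST reference and isn't shown here:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

class UserDelegationSasSketch
{
    // Builds a read-only user delegation SAS URI for a single blob.
    static async Task<Uri> CreateUserDelegationSasAsync(
        BlobServiceClient serviceClient, BlobClient blobClient)
    {
        // Request a user delegation key using the Entra token backing serviceClient.
        UserDelegationKey delegationKey = await serviceClient.GetUserDelegationKeyAsync(
            DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddHours(1));

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = blobClient.BlobContainerName,
            BlobName = blobClient.Name,
            Resource = "b", // "b" = blob
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);

        // Sign the SAS with the user delegation key instead of the account key.
        var uriBuilder = new BlobUriBuilder(blobClient.Uri)
        {
            Sas = sasBuilder.ToSasQueryParameters(delegationKey, serviceClient.AccountName)
        };
        return uriBuilder.ToUri();
    }
}
```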
<a id="set-access-control-lists"></a>
storage Sas Service Create Dotnet Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet-container.md
- Title: Create a service SAS for a container with .NET-
-description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for .NET.
---- Previously updated : 08/05/2024-----
-# Create a service SAS for a container with .NET
---
-This article shows how to use the storage account key to create a service SAS for a container with the Azure Blob Storage client library for .NET.
-
-## About the service SAS
-
-A service SAS is signed with the account access key. You can use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the service SAS.
-
-You can also use a stored access policy to define the permissions and duration of the SAS. If the name of an existing stored access policy is provided, that policy is associated with the SAS. To learn more about stored access policies, see [Define a stored access policy](#define-a-stored-access-policy). If no stored access policy is provided, the code examples in this article show how to define permissions and duration for the SAS.
-
-## Create a service SAS for a container
-
-The following code example shows how to create a service SAS for a container resource. First, the code verifies that the [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object is authorized with a shared key credential by checking the [CanGenerateSasUri](/dotnet/api/azure.storage.blobs.blobcontainerclient.cangeneratesasuri) property. Then, it generates the service SAS via the [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder) class, and calls [GenerateSasUri](/dotnet/api/azure.storage.blobs.blobcontainerclient.generatesasuri) to create a service SAS URI based on the client and builder objects.
--
-## Use a service SAS to authorize a client object
-
-The following code examples show how to use the service SAS to authorize a [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
-
-First, create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object signed with the account access key:
--
-Then, generate the service SAS as shown in the earlier example and use the SAS to authorize a [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object:
---
-## Resources
-
-To learn more about creating a service SAS using the Azure Blob Storage client library for .NET, see the following resources.
--
-### See also
--- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md)-- [Create a service SAS](/rest/api/storageservices/create-service-sas)-- For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#create-a-service-sas-for-a-blob-container).
storage Sas Service Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet.md
Title: Create a service SAS for a blob with .NET
+ Title: Create a service SAS for a container or blob with .NET
-description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for .NET.
+description: Learn how to create a service shared access signature (SAS) for a container or blob using the Azure Blob Storage client library for .NET.
Previously updated : 08/05/2024 Last updated : 09/06/2024 ms.devlang: csharp
-# Create a service SAS for a blob with .NET
+# Create a service SAS for a container or blob with .NET
[!INCLUDE [storage-dev-guide-selector-service-sas](../../../includes/storage-dev-guides/storage-dev-guide-selector-service-sas.md)] [!INCLUDE [storage-auth-sas-intro-include](../../../includes/storage-auth-sas-intro-include.md)]
-This article shows how to use the storage account key to create a service SAS for a blob with the Azure Blob Storage client library for .NET.
+This article shows how to use the storage account key to create a service SAS for a container or blob with the Azure Blob Storage client library for .NET.
## About the service SAS
A service SAS is signed with the account access key. You can use the [StorageSha
You can also use a stored access policy to define the permissions and duration of the SAS. If the name of an existing stored access policy is provided, that policy is associated with the SAS. To learn more about stored access policies, see [Define a stored access policy](#define-a-stored-access-policy). If no stored access policy is provided, the code examples in this article show how to define permissions and duration for the SAS.
-## Create a service SAS for a blob
+## Create a service SAS
+
+You can create a service SAS for a container or blob, based on the needs of your app.
+
+### [Container](#tab/container)
+
+The following code example shows how to create a service SAS for a container resource. First, the code verifies that the [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object is authorized with a shared key credential by checking the [CanGenerateSasUri](/dotnet/api/azure.storage.blobs.blobcontainerclient.cangeneratesasuri) property. Then, it generates the service SAS via the [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder) class, and calls [GenerateSasUri](/dotnet/api/azure.storage.blobs.blobcontainerclient.generatesasuri) to create a service SAS URI based on the client and builder objects.
++
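For reference, here's a minimal sketch of this pattern, assuming the Azure.Storage.Blobs package; the container name, permissions, and expiry are illustrative rather than the article's exact snippet:

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class ContainerSasSketch
{
    // Generates a service SAS URI for a container using a shared key credential.
    static Uri CreateServiceSasForContainer(BlobContainerClient containerClient)
    {
        // The client can only generate a SAS if it was created with StorageSharedKeyCredential.
        if (!containerClient.CanGenerateSasUri)
            throw new InvalidOperationException("Client must be authorized with a shared key credential.");

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = containerClient.Name,
            Resource = "c", // "c" = container
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
        };
        sasBuilder.SetPermissions(BlobContainerSasPermissions.Read | BlobContainerSasPermissions.List);

        return containerClient.GenerateSasUri(sasBuilder);
    }
}
```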
+### [Blob](#tab/blob)
The following code example shows how to create a service SAS for a blob resource. First, the code verifies that the [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) object is authorized with a shared key credential by checking the [CanGenerateSasUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.cangeneratesasuri#azure-storage-blobs-specialized-blobbaseclient-cangeneratesasuri) property. Then, it generates the service SAS via the [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder) class, and calls [GenerateSasUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.generatesasuri#azure-storage-blobs-specialized-blobbaseclient-generatesasuri(azure-storage-sas-blobsasbuilder)) to create a service SAS URI based on the client and builder objects. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/BlobDevGuideBlobs/CreateSas.cs" id="Snippet_CreateServiceSASBlob"::: ++ ## Use a service SAS to authorize a client object
+You can use a service SAS to authorize a client object to perform operations on a container or blob based on the permissions granted by the SAS.
+
+### [Container](#tab/container)
+
+The following code examples show how to use the service SAS to authorize a [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
+
+First, create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object signed with the account access key:
++
+Then, generate the service SAS as shown in the earlier example and use the SAS to authorize a [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object:
++
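As a rough sketch of those two steps together (the account name, key, and container name are placeholders):

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

// Create a service client signed with the account access key.
var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");
var serviceClient = new BlobServiceClient(
    new Uri("https://<account-name>.blob.core.windows.net"), credential);

// Generate the container SAS as shown in the earlier example.
BlobContainerClient keyedContainerClient = serviceClient.GetBlobContainerClient("sample-container");
Uri sasUri = keyedContainerClient.GenerateSasUri(
    BlobContainerSasPermissions.Read,
    DateTimeOffset.UtcNow.AddHours(1));

// A client constructed from the SAS URI is limited to the permissions granted by the SAS.
var sasContainerClient = new BlobContainerClient(sasUri);
```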
+### [Blob](#tab/blob)
+ The following code example shows how to use the service SAS to authorize a [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) object. This client object can be used to perform operations on the blob resource based on the permissions granted by the SAS. First, create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object signed with the account access key:
Then, generate the service SAS as shown in the earlier example and use the SAS t
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/BlobDevGuideBlobs/CreateSas.cs" id="Snippet_UseServiceSASBlob"::: ++ [!INCLUDE [storage-dev-guide-stored-access-policy](../../../includes/storage-dev-guides/storage-dev-guide-stored-access-policy.md)] ## Resources To learn more about creating a service SAS using the Azure Blob Storage client library for .NET, see the following resources.
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/CreateSAS.cs)
+ [!INCLUDE [storage-dev-guide-resources-dotnet](../../../includes/storage-dev-guides/storage-dev-guide-resources-dotnet.md)] ### See also - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md) - [Create a service SAS](/rest/api/storageservices/create-service-sas)-- For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#create-a-service-sas-for-a-blob-container).
storage Sas Service Create Java Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java-container.md
- Title: Create a service SAS for a container with Java-
-description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Java.
---- Previously updated : 08/05/2024-----
-# Create a service SAS for a container with Java
---
-This article shows how to use the storage account key to create a service SAS for a container with the Blob Storage client library for Java.
-
-## About the service SAS
-
-A service SAS is signed with the account access key. You can use the [StorageSharedKeyCredential](/java/api/com.azure.storage.common.storagesharedkeycredential) class to create the credential that is used to sign the service SAS.
-
-You can also use a stored access policy to define the permissions and duration of the SAS. If the name of an existing stored access policy is provided, that policy is associated with the SAS. To learn more about stored access policies, see [Define a stored access policy](/rest/api/storageservices/define-stored-access-policy). If no stored access policy is provided, the code examples in this article show how to define permissions and duration for the SAS.
-
-## Create a service SAS for a container
-
-You can create a service SAS to delegate limited access to a container resource using the following method:
--- [generateSas](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-details)-
-SAS signature values, such as expiry time and signed permissions, are passed to the method as part of a [BlobServiceSasSignatureValues](/java/api/com.azure.storage.blob.sas.blobservicesassignaturevalues) instance. Permissions are specified as a [BlobContainerSasPermission](/java/api/com.azure.storage.blob.sas.blobcontainersaspermission) instance.
-
-The following code example shows how to create a service SAS with read permissions for a container resource:
--
-## Use a service SAS to authorize a client object
-
-The following code examples show how to use the service SAS to authorize a [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
-
-First, create a [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) object signed with the account access key:
-
-```java
-String accountName = "<account-name>";
-String accountKey = "<account-key>";
-StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
-
-BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
- .endpoint(String.format("https://%s.blob.core.windows.net/", accountName))
- .credential(credential)
- .buildClient();
-```
-
-Then, generate the service SAS as shown in the earlier example and use the SAS to authorize a [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) object:
--
-## Resources
-
-To learn more about using the Azure Blob Storage client library for Java, see the following resources.
--
-### See also
--- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md)-- [Create a service SAS](/rest/api/storageservices/create-service-sas)
storage Sas Service Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java.md
Title: Create a service SAS for a blob with Java
+ Title: Create a service SAS for a container or blob with Java
-description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Java.
+description: Learn how to create a service shared access signature (SAS) for a container or blob using the Azure Blob Storage client library for Java.
Previously updated : 08/05/2024 Last updated : 09/06/2024 ms.devlang: java
-# Create a service SAS for a blob with Java
+# Create a service SAS for a container or blob with Java
[!INCLUDE [storage-dev-guide-selector-service-sas](../../../includes/storage-dev-guides/storage-dev-guide-selector-service-sas.md)] [!INCLUDE [storage-auth-sas-intro-include](../../../includes/storage-auth-sas-intro-include.md)]
-This article shows how to use the storage account key to create a service SAS for a blob with the Blob Storage client library for Java.
+This article shows how to use the storage account key to create a service SAS for a container or blob with the Blob Storage client library for Java.
## About the service SAS
A service SAS is signed with the account access key. You can use the [StorageSha
You can also use a stored access policy to define the permissions and duration of the SAS. If the name of an existing stored access policy is provided, that policy is associated with the SAS. To learn more about stored access policies, see [Define a stored access policy](/rest/api/storageservices/define-stored-access-policy). If no stored access policy is provided, the code examples in this article show how to define permissions and duration for the SAS.
-## Create a service SAS for a blob
+## Create a service SAS
+
+You can create a service SAS for a container or blob, based on the needs of your app.
+
+### [Container](#tab/container)
+
+You can create a service SAS to delegate limited access to a container resource using the following method:
+
+- [generateSas](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-details)
+
+SAS signature values, such as expiry time and signed permissions, are passed to the method as part of a [BlobServiceSasSignatureValues](/java/api/com.azure.storage.blob.sas.blobservicesassignaturevalues) instance. Permissions are specified as a [BlobContainerSasPermission](/java/api/com.azure.storage.blob.sas.blobcontainersaspermission) instance.
+
+The following code example shows how to create a service SAS with read permissions for a container resource:
++
+### [Blob](#tab/blob)
You can create a service SAS to delegate limited access to a blob resource using the following method:
The following code example shows how to create a service SAS with read permissio
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java" id="Snippet_CreateServiceSASBlob"::: ++ ## Use a service SAS to authorize a client object
+You can use a service SAS to authorize a client object to perform operations on a container or blob based on the permissions granted by the SAS.
+
+### [Container](#tab/container)
+
+The following code examples show how to use the service SAS to authorize a [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
+
+First, create a [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) object signed with the account access key:
+
+```java
+String accountName = "<account-name>";
+String accountKey = "<account-key>";
+StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
+
+BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
+ .endpoint(String.format("https://%s.blob.core.windows.net/", accountName))
+ .credential(credential)
+ .buildClient();
+```
+
+Then, generate the service SAS as shown in the earlier example and use the SAS to authorize a [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) object:
++
+### [Blob](#tab/blob)
+ The following code example shows how to use the service SAS created in the earlier example to authorize a [BlobClient](/java/api/com.azure.storage.blob.blobclient) object. This client object can be used to perform operations on the blob resource based on the permissions granted by the SAS. First, create a [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) object signed with the account access key:
Then, generate the service SAS as shown in the earlier example and use the SAS t
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java" id="Snippet_UseServiceSASBlob"::: ++ ## Resources To learn more about using the Azure Blob Storage client library for Java, see the following resources.
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java)
+ [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also
storage Sas Service Create Python Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python-container.md
- Title: Create a service SAS for a container with Python-
-description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Python.
---- Previously updated : 08/05/2024-----
-# Create a service SAS for a container with Python
---
-This article shows how to use the storage account key to create a service SAS for a container with the Blob Storage client library for Python.
-
-## About the service SAS
-
-A service SAS is signed with the storage account access key. A service SAS delegates access to a resource in a single Azure Storage service, such as Blob Storage.
-
-You can also use a stored access policy to define the permissions and duration of the SAS. If the name of an existing stored access policy is provided, that policy is associated with the SAS. To learn more about stored access policies, see [Define a stored access policy](/rest/api/storageservices/define-stored-access-policy). If no stored access policy is provided, the code examples in this article show how to define permissions and duration for the SAS.
-
-## Create a service SAS for a container
-
-You can create a service SAS to delegate limited access to a container resource using the following method:
--- [generate_container_sas](/python/api/azure-storage-blob/azure.storage.blob#azure-storage-blob-generate-blob-sas)-
-The storage account access key used to sign the SAS is passed to the method as the `account_key` argument. Allowed permissions are passed to the method as the `permission` argument, and are defined in the [ContainerSasPermissions](/python/api/azure-storage-blob/azure.storage.blob.containersaspermissions) class.
-
-The following code example shows how to create a service SAS with read permissions for a container resource:
--
-## Use a service SAS to authorize a client object
-
-The following code example shows how to use the service SAS created in the earlier example to authorize a [ContainerClient](/python/api/azure-storage-blob/azure.storage.blob.containerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
--
-## Resources
-
-To learn more about using the Azure Blob Storage client library for Python, see the following resources.
-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py)--
-### See also
--- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md)-- [Create a service SAS](/rest/api/storageservices/create-service-sas)
storage Sas Service Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python.md
Title: Create a service SAS for a blob with Python
+ Title: Create a service SAS for a container or blob with Python
-description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Python.
+description: Learn how to create a service shared access signature (SAS) for a container or blob using the Azure Blob Storage client library for Python.
Previously updated : 08/05/2024 Last updated : 09/06/2024 ms.devlang: python
-# Create a service SAS for a blob with Python
+# Create a service SAS for a container or blob with Python
[!INCLUDE [storage-dev-guide-selector-service-sas](../../../includes/storage-dev-guides/storage-dev-guide-selector-service-sas.md)] [!INCLUDE [storage-auth-sas-intro-include](../../../includes/storage-auth-sas-intro-include.md)]
-This article shows how to use the storage account key to create a service SAS for a blob with the Blob Storage client library for Python.
+This article shows how to use the storage account key to create a service SAS for a container or blob with the Blob Storage client library for Python.
## About the service SAS
A service SAS is signed with the storage account access key. A service SAS deleg
You can also use a stored access policy to define the permissions and duration of the SAS. If the name of an existing stored access policy is provided, that policy is associated with the SAS. To learn more about stored access policies, see [Define a stored access policy](/rest/api/storageservices/define-stored-access-policy). If no stored access policy is provided, the code examples in this article show how to define permissions and duration for the SAS.
-## Create a service SAS for a blob
+## Create a service SAS
+
+You can create a service SAS for a container or blob, based on the needs of your app.
+
+### [Container](#tab/container)
+
+You can create a service SAS to delegate limited access to a container resource using the following method:
+
+- [generate_container_sas](/python/api/azure-storage-blob/azure.storage.blob#azure-storage-blob-generate-container-sas)
+
+The storage account access key used to sign the SAS is passed to the method as the `account_key` argument. Allowed permissions are passed to the method as the `permission` argument, and are defined in the [ContainerSasPermissions](/python/api/azure-storage-blob/azure.storage.blob.containersaspermissions) class.
+
+The following code example shows how to create a service SAS with read permissions for a container resource:
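+
+In outline, such a call might look like the following minimal sketch, using placeholder account, key, and container names and a one-hour expiry:
+
+```python
+from datetime import datetime, timedelta, timezone
+
+from azure.storage.blob import ContainerSasPermissions, generate_container_sas
+
+# Placeholder values
+account_name = "<storage-account-name>"
+account_key = "<account-key>"
+container_name = "<container-name>"
+
+# Create a service SAS that grants read access to the container for one hour
+sas_token = generate_container_sas(
+    account_name=account_name,
+    container_name=container_name,
+    account_key=account_key,
+    permission=ContainerSasPermissions(read=True),
+    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
+)
+```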
++
+### [Blob](#tab/blob)
You can create a service SAS to delegate limited access to a blob resource using the following method:
The following code example shows how to create a service SAS with read permissio
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py" id="Snippet_create_service_sas_blob"::: ++ ## Use a service SAS to authorize a client object
+You can use a service SAS to authorize a client object to perform operations on a container or blob based on the permissions granted by the SAS.
+
+### [Container](#tab/container)
+
+The following code example shows how to use the service SAS created in the earlier example to authorize a [ContainerClient](/python/api/azure-storage-blob/azure.storage.blob.containerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
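+
+As a rough sketch, assuming `sas_token` was generated as shown above and the account URL, container name, and blob name are placeholders:
+
+```python
+from azure.storage.blob import ContainerClient
+
+# Authorize the client object with the SAS token instead of the account key
+container_client = ContainerClient(
+    account_url="https://<storage-account-name>.blob.core.windows.net",
+    container_name="<container-name>",
+    credential=sas_token,
+)
+
+# The client can perform only the operations the SAS permits, for example reading a blob
+downloaded = container_client.download_blob("<blob-name>").readall()
+```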
++
+### [Blob](#tab/blob)
+ The following code example shows how to use the service SAS created in the earlier example to authorize a [BlobClient](/python/api/azure-storage-blob/azure.storage.blob.blobclient) object. This client object can be used to perform operations on the blob resource based on the permissions granted by the SAS. :::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py" id="Snippet_use_service_sas_blob"::: ++ ## Resources To learn more about using the Azure Blob Storage client library for Python, see the following resources.
storage Storage Blob Container User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-dotnet.md
- Title: Create a user delegation SAS for a container with .NET-
-description: Learn how to create a user delegation SAS for a container with Microsoft Entra credentials by using the .NET client library for Blob Storage.
----- Previously updated : 08/05/2024-----
-# Create a user delegation SAS for a container with .NET
---
-This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a container using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
--
-## Assign Azure roles for access to data
-
-When a Microsoft Entra security principal attempts to access blob data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to blob data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
--
-## Create a user delegation SAS for a container
-
-Once you've obtained the user delegation key, you can create a user delegation SAS to delegate limited access to a container resource. The following code example shows how to create a user delegation SAS for a container:
--
-## Use a user delegation SAS to authorize a client object
-
-The following code example shows how to use the user delegation SAS to authorize a [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
--
-## Resources
-
-To learn more about creating a user delegation SAS using the Azure Blob Storage client library for .NET, see the following resources.
-
-### REST API operations
-
-The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library method for getting a user delegation key uses the following REST API operations:
--- [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) (REST API)--
-### See also
--- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md)-- [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)-
storage Storage Blob Container User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-java.md
- Title: Create a user delegation SAS for a container with Java-
-description: Learn how to create a user delegation SAS for a container with Microsoft Entra credentials by using the Java client library for Blob Storage.
----- Previously updated : 08/05/2024-----
-# Create a user delegation SAS for a container with Java
---
-This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a container using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme).
--
-## Assign Azure roles for access to data
-
-When a Microsoft Entra security principal attempts to access blob data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to blob data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
--
-## Create a user delegation SAS for a container
-
-Once you've obtained the user delegation key, you can create a user delegation SAS. You can create a user delegation SAS to delegate limited access to a container resource using the following method from a [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) instance:
--- [generateUserDelegationSas](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-details)-
-The user delegation key to sign the SAS is passed to this method along with specified values for [BlobServiceSasSignatureValues](/java/api/com.azure.storage.blob.sas.blobservicesassignaturevalues). Permissions are specified as a [BlobContainerSasPermission](/java/api/com.azure.storage.blob.sas.blobcontainersaspermission) instance.
-
-The following code example shows how to create a user delegation SAS for a container:
--
-## Use a user delegation SAS to authorize a client object
-
-The following code example shows how to use the user delegation SAS created in the earlier example to authorize a [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
--
-## Resources
-
-To learn more about creating a user delegation SAS using the Azure Blob Storage client library for Java, see the following resources.
-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java)-
-### REST API operations
-
-The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library method for getting a user delegation key uses the following REST API operation:
--- [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) (REST API)--
-### See also
--- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md)-- [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)-
storage Storage Blob Container User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-python.md
- Title: Create a user delegation SAS for a container with Python-
-description: Learn how to create a user delegation SAS for a container with Microsoft Entra credentials by using the Python client library for Blob Storage.
----- Previously updated : 08/05/2024-----
-# Create a user delegation SAS for a container with Python
---
-This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a container using the [Azure Storage client library for Python](/python/api/overview/azure/storage).
--
-## Assign Azure roles for access to data
-
-When a Microsoft Entra security principal attempts to access blob data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to blob data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
--
-## Create a user delegation SAS for a container
-
-Once you've obtained the user delegation key, you can create a user delegation SAS. You can create a user delegation SAS to delegate limited access to a container resource using the following method:
--- [generate_container_sas](/python/api/azure-storage-blob/azure.storage.blob#azure-storage-blob-generate-container-sas)-
-The user delegation key to sign the SAS is passed to the method as the `user_delegation_key` argument. Allowed permissions are passed to the method as the `permission` argument, and are defined in the [ContainerSasPermissions](/python/api/azure-storage-blob/azure.storage.blob.containersaspermissions) class.
-
-The following code example shows how to create a user delegation SAS for a container:
--
-## Use a user delegation SAS to authorize a client object
-
-The following code example shows how to use the user delegation SAS created in the earlier example to authorize a [ContainerClient](/python/api/azure-storage-blob/azure.storage.blob.containerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
--
-## Resources
-
-To learn more about creating a user delegation SAS using the Azure Blob Storage client library for Python, see the following resources.
-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py)-
-### REST API operations
-
-The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library method for getting a user delegation key uses the following REST API operations:
--- [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) (REST API)--
-### See also
--- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md)-- [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)-
storage Storage Blob User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-dotnet.md
Title: Create a user delegation SAS for a blob with .NET
-description: Learn how to create a user delegation SAS for a blob with Microsoft Entra credentials by using the .NET client library for Blob Storage.
+description: Learn how to create a user delegation SAS for a container or blob with Microsoft Entra credentials by using the .NET client library for Blob Storage.
Previously updated : 08/05/2024 Last updated : 09/06/2024 ms.devlang: csharp
-# Create a user delegation SAS for a blob with .NET
+# Create a user delegation SAS for a container or blob with .NET
[!INCLUDE [storage-dev-guide-selector-user-delegation-sas](../../../includes/storage-dev-guides/storage-dev-guide-selector-user-delegation-sas.md)] [!INCLUDE [storage-auth-sas-intro-include](../../../includes/storage-auth-sas-intro-include.md)]
-This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a blob using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
+This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a container or blob using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
[!INCLUDE [storage-auth-user-delegation-include](../../../includes/storage-auth-user-delegation-include.md)] ## Assign Azure roles for access to data
-When a Microsoft Entra security principal attempts to access blob data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to blob data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
+When a Microsoft Entra security principal attempts to access data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
[!INCLUDE [storage-dev-guide-user-delegation-sas-dotnet](../../../includes/storage-dev-guides/storage-dev-guide-user-delegation-sas-dotnet.md)]
-## Create a user delegation SAS for a blob
+## Create a user delegation SAS
-Once you've obtained the user delegation key, you can create a user delegation SAS to delegate limited access to a blob resource. The following code example shows how to create a user delegation SAS for a blob:
+You can create a user delegation SAS for a container or blob, based on the needs of your app.
+
+### [Container](#tab/container)
+
+Once you've obtained the user delegation key, you can create a user delegation SAS to delegate limited access to a container. The following code example shows how to create a user delegation SAS for a container:
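+
+In outline, this step might look like the following minimal sketch. It assumes `containerClient` is an existing `BlobContainerClient` and `userDelegationKey` was requested earlier with `GetUserDelegationKeyAsync`; the read-only permission and one-day expiry are placeholder choices:
+
+```csharp
+using System;
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+using Azure.Storage.Sas;
+
+public static class ContainerUserDelegationSasSketch
+{
+    // Build a SAS URI for the container, signed with the user delegation key.
+    public static Uri CreateContainerSasUri(
+        BlobContainerClient containerClient,
+        UserDelegationKey userDelegationKey,
+        string accountName)
+    {
+        var sasBuilder = new BlobSasBuilder
+        {
+            BlobContainerName = containerClient.Name,
+            Resource = "c", // "c" designates a container-level SAS
+            ExpiresOn = DateTimeOffset.UtcNow.AddDays(1)
+        };
+        sasBuilder.SetPermissions(BlobContainerSasPermissions.Read);
+
+        // Sign with the user delegation key rather than the account key
+        var uriBuilder = new BlobUriBuilder(containerClient.Uri)
+        {
+            Sas = sasBuilder.ToSasQueryParameters(userDelegationKey, accountName)
+        };
+
+        return uriBuilder.ToUri();
+    }
+}
+```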
++
+### [Blob](#tab/blob)
+
+Once you've obtained the user delegation key, you can create a user delegation SAS to delegate limited access to a blob. The following code example shows how to create a user delegation SAS for a blob:
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/BlobDevGuideBlobs/CreateSas.cs" id="Snippet_CreateUserDelegationSASBlob"::: ++ ## Use a user delegation SAS to authorize a client object
+You can use a user delegation SAS to authorize a client object to perform operations on a container or blob based on the permissions granted by the SAS.
+
+### [Container](#tab/container)
+
+The following code example shows how to use the user delegation SAS to authorize a [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
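+
+As a rough sketch, assuming `containerSasUri` is the SAS URI built in the previous step; the SAS in the URI supplies the authorization, so no separate credential is needed:
+
+```csharp
+using System;
+using Azure.Storage.Blobs;
+
+public static class ContainerSasClientSketch
+{
+    // The SAS carried in the URI authorizes the client.
+    public static BlobContainerClient GetAuthorizedContainerClient(Uri containerSasUri)
+    {
+        return new BlobContainerClient(containerSasUri);
+    }
+}
+```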
++
+### [Blob](#tab/blob)
+ The following code example shows how to use the user delegation SAS to authorize a [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) object. This client object can be used to perform operations on the blob resource based on the permissions granted by the SAS. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/BlobDevGuideBlobs/CreateSas.cs" id="Snippet_UseUserDelegationSASBlob"::: ++ ## Resources To learn more about creating a user delegation SAS using the Azure Blob Storage client library for .NET, see the following resources.
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/CreateSAS.cs)
+ ### REST API operations
-The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library method for getting a user delegation key uses the following REST API operations:
+The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library method for getting a user delegation key uses the following REST API operation:
- [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) (REST API)
storage Storage Blob User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md
Title: Create a user delegation SAS for a blob with Java
-description: Learn how to create a user delegation SAS for a blob with Microsoft Entra credentials by using the Azure Storage client library for Java.
+description: Learn how to create a user delegation SAS for a container or blob with Microsoft Entra credentials by using the Azure Storage client library for Java.
Previously updated : 08/05/2024 Last updated : 09/06/2024 ms.devlang: java
-# Create a user delegation SAS for a blob with Java
+# Create a user delegation SAS for a container or blob with Java
[!INCLUDE [storage-dev-guide-selector-user-delegation-sas](../../../includes/storage-dev-guides/storage-dev-guide-selector-user-delegation-sas.md)] [!INCLUDE [storage-auth-sas-intro-include](../../../includes/storage-auth-sas-intro-include.md)]
-This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a blob using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme).
+This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a container or blob using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme).
[!INCLUDE [storage-auth-user-delegation-include](../../../includes/storage-auth-user-delegation-include.md)] ## Assign Azure roles for access to data
-When a Microsoft Entra security principal attempts to access blob data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to blob data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
+When a Microsoft Entra security principal attempts to access data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
[!INCLUDE [storage-dev-guide-user-delegation-sas-java](../../../includes/storage-dev-guides/storage-dev-guide-user-delegation-sas-java.md)]
-## Create a user delegation SAS for a blob
+## Create a user delegation SAS
-Once you've obtained the user delegation key, you can create a user delegation SAS. You can create a user delegation SAS to delegate limited access to a blob resource using the following method from a [BlobClient](/java/api/com.azure.storage.blob.blobclient) instance:
+You can create a user delegation SAS for a container or blob, based on the needs of your app.
+
+### [Container](#tab/container)
+
+Once you've obtained the user delegation key, you can create a user delegation SAS. You can create a user delegation SAS to delegate limited access to a container resource using the following method from a [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) instance:
+
+- [generateUserDelegationSas](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-details)
+
+The user delegation key to sign the SAS is passed to this method along with specified values for [BlobServiceSasSignatureValues](/java/api/com.azure.storage.blob.sas.blobservicesassignaturevalues). Permissions are specified as a [BlobContainerSasPermission](/java/api/com.azure.storage.blob.sas.blobcontainersaspermission) instance.
+
+The following code example shows how to create a user delegation SAS for a container:
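+
+In outline, such a call might look like the following minimal sketch. It assumes `containerClient` is an existing `BlobContainerClient` and `delegationKey` is the `UserDelegationKey` obtained earlier; the read-only permission and one-day expiry are placeholder choices:
+
+```java
+import com.azure.storage.blob.BlobContainerClient;
+import com.azure.storage.blob.models.UserDelegationKey;
+import com.azure.storage.blob.sas.BlobContainerSasPermission;
+import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
+
+import java.time.OffsetDateTime;
+
+public class ContainerUserDelegationSasSketch {
+    // Grant read access to the container for one day, signed with the user delegation key.
+    public static String createUserDelegationSas(
+            BlobContainerClient containerClient, UserDelegationKey delegationKey) {
+        BlobContainerSasPermission permission = new BlobContainerSasPermission()
+                .setReadPermission(true);
+
+        BlobServiceSasSignatureValues sasValues =
+                new BlobServiceSasSignatureValues(OffsetDateTime.now().plusDays(1), permission);
+
+        return containerClient.generateUserDelegationSas(sasValues, delegationKey);
+    }
+}
+```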
++
+### [Blob](#tab/blob)
+
+Once you've obtained the user delegation key, you can create a user delegation SAS. You can create a user delegation SAS to delegate limited access to a blob using the following method from a [BlobClient](/java/api/com.azure.storage.blob.blobclient) instance:
- [generateUserDelegationSas](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-details)
The following code example shows how to create a user delegation SAS for a blob:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java" id="Snippet_CreateUserDelegationSASBlob"::: ++ ## Use a user delegation SAS to authorize a client object
+You can use a user delegation SAS to authorize a client object to perform operations on a container or blob based on the permissions granted by the SAS.
+
+### [Container](#tab/container)
+
+The following code example shows how to use the user delegation SAS created in the earlier example to authorize a [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
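+
+As a rough sketch, assuming `containerUrl` is the container's URL (for example, `https://<account-name>.blob.core.windows.net/<container-name>`) and `userDelegationSas` is the SAS token from the previous step:
+
+```java
+import com.azure.storage.blob.BlobContainerClient;
+import com.azure.storage.blob.BlobContainerClientBuilder;
+
+public class UserDelegationSasClientSketch {
+    // Authorize a new client with the user delegation SAS token.
+    public static BlobContainerClient getAuthorizedContainerClient(
+            String containerUrl, String userDelegationSas) {
+        return new BlobContainerClientBuilder()
+                .endpoint(containerUrl)
+                .sasToken(userDelegationSas)
+                .buildClient();
+    }
+}
+```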
++
+### [Blob](#tab/blob)
+ The following code example shows how to use the user delegation SAS created in the earlier example to authorize a [BlobClient](/java/api/com.azure.storage.blob.blobclient) object. This client object can be used to perform operations on the blob resource based on the permissions granted by the SAS. :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java" id="Snippet_UseUserDelegationSASBlob"::: ++ ## Resources To learn more about creating a user delegation SAS using the Azure Blob Storage client library for Java, see the following resources.
storage Storage Blob User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md
Title: Create a user delegation SAS for a blob with Python
+ Title: Create a user delegation SAS for a container or blob with Python
-description: Learn how to create a user delegation SAS for a blob with Microsoft Entra credentials by using the Python client library for Blob Storage.
+description: Learn how to create a user delegation SAS for a container or blob with Microsoft Entra credentials by using the Python client library for Blob Storage.
Previously updated : 08/05/2024 Last updated : 09/06/2024 ms.devlang: python
-# Create a user delegation SAS for a blob with Python
+# Create a user delegation SAS for a container or blob with Python
[!INCLUDE [storage-dev-guide-selector-user-delegation-sas](../../../includes/storage-dev-guides/storage-dev-guide-selector-user-delegation-sas.md)] [!INCLUDE [storage-auth-sas-intro-include](../../../includes/storage-auth-sas-intro-include.md)]
-This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a blob using the [Azure Storage client library for Python](/python/api/overview/azure/storage).
+This article shows how to use Microsoft Entra credentials to create a user delegation SAS for a container or blob using the [Azure Storage client library for Python](/python/api/overview/azure/storage).
[!INCLUDE [storage-auth-user-delegation-include](../../../includes/storage-auth-user-delegation-include.md)] ## Assign Azure roles for access to data
-When a Microsoft Entra security principal attempts to access blob data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to blob data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
+When a Microsoft Entra security principal attempts to access data, that security principal must have permissions to the resource. Whether the security principal is a managed identity in Azure or a Microsoft Entra user account running code in the development environment, the security principal must be assigned an Azure role that grants access to data. For information about assigning permissions via Azure RBAC, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
[!INCLUDE [storage-dev-guide-user-delegation-sas-python](../../../includes/storage-dev-guides/storage-dev-guide-user-delegation-sas-python.md)]
-## Create a user delegation SAS for a blob
+## Create a user delegation SAS
+
+You can create a user delegation SAS for a container or blob, based on the needs of your app.
+
+### [Container](#tab/container)
+
+Once you've obtained the user delegation key, you can create a user delegation SAS. You can create a user delegation SAS to delegate limited access to a container resource using the following method:
+
+- [generate_container_sas](/python/api/azure-storage-blob/azure.storage.blob#azure-storage-blob-generate-container-sas)
+
+The user delegation key to sign the SAS is passed to the method as the `user_delegation_key` argument. Allowed permissions are passed to the method as the `permission` argument, and are defined in the [ContainerSasPermissions](/python/api/azure-storage-blob/azure.storage.blob.containersaspermissions) class.
+
+The following code example shows how to create a user delegation SAS for a container:
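+
+In outline, the whole flow might look like the following minimal sketch, using placeholder account and container names and `DefaultAzureCredential` (from the `azure-identity` package) to request the user delegation key:
+
+```python
+from datetime import datetime, timedelta, timezone
+
+from azure.identity import DefaultAzureCredential
+from azure.storage.blob import (
+    BlobServiceClient,
+    ContainerSasPermissions,
+    generate_container_sas,
+)
+
+# Placeholder values
+account_url = "https://<storage-account-name>.blob.core.windows.net"
+container_name = "<container-name>"
+
+blob_service_client = BlobServiceClient(account_url, credential=DefaultAzureCredential())
+
+# Request a user delegation key that's valid for one day
+start_time = datetime.now(timezone.utc)
+expiry_time = start_time + timedelta(days=1)
+user_delegation_key = blob_service_client.get_user_delegation_key(start_time, expiry_time)
+
+# Create a user delegation SAS that grants read access to the container
+sas_token = generate_container_sas(
+    account_name=blob_service_client.account_name,
+    container_name=container_name,
+    user_delegation_key=user_delegation_key,
+    permission=ContainerSasPermissions(read=True),
+    expiry=expiry_time,
+)
+```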
++
+### [Blob](#tab/blob)
Once you've obtained the user delegation key, you can create a user delegation SAS. You can create a user delegation SAS to delegate limited access to a blob resource using the following method:
The following code example shows how to create a user delegation SAS for a blob:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py" id="Snippet_create_user_delegation_sas_blob"::: ++ ## Use a user delegation SAS to authorize a client object
+You can use a user delegation SAS to authorize a client object to perform operations on a container or blob based on the permissions granted by the SAS.
+
+### [Container](#tab/container)
+
+The following code example shows how to use the user delegation SAS created in the earlier example to authorize a [ContainerClient](/python/api/azure-storage-blob/azure.storage.blob.containerclient) object. This client object can be used to perform operations on the container resource based on the permissions granted by the SAS.
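+
+As a rough sketch, assuming `sas_token` is the user delegation SAS from the previous step and the account URL and container name are placeholders:
+
+```python
+from azure.storage.blob import ContainerClient
+
+# Authorize the client object with the user delegation SAS
+container_client = ContainerClient(
+    account_url="https://<storage-account-name>.blob.core.windows.net",
+    container_name="<container-name>",
+    credential=sas_token,
+)
+```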
++
+### [Blob](#tab/blob)
+ The following code example shows how to use the user delegation SAS created in the earlier example to authorize a [BlobClient](/python/api/azure-storage-blob/azure.storage.blob.blobclient) object. This client object can be used to perform operations on the blob resource based on the permissions granted by the SAS. :::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py" id="Snippet_use_user_delegation_sas_blob"::: ++ ## Resources To learn more about creating a user delegation SAS using the Azure Blob Storage client library for Python, see the following resources.
storage Use Container Storage With Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-managed-disks.md
Follow these steps to create a dynamic storage pool for Azure Disks.
azureDisk: skuName: PremiumV2_LRS iopsReadWrite: 5000
- MbpsReadWrite: 200
+ mbpsReadWrite: 200
resources: requests: storage: 1Ti
storage Storage Files Configure S2s Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-s2s-vpn.md
Title: Configure a site-to-site VPN for Azure Files
-description: Learn how to configure a site-to-site (S2S) VPN for use with Azure Files so you can mount your Azure file shares from on premises.
+description: Learn how to configure a site-to-site (S2S) VPN for use with Azure Files so you can mount your Azure file shares from on premises. Use the Azure portal, PowerShell, or CLI.
Previously updated : 05/09/2024 Last updated : 09/06/2024 # Configure a site-to-site VPN for use with Azure Files
-You can use a site-to-site (S2S) VPN connection to mount your Azure file shares from your on-premises network, without sending data over the open internet. You can set up a S2S VPN using [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md), which is an Azure resource offering VPN services, and is deployed in a resource group alongside storage accounts or other Azure resources.
+You can use a site-to-site (S2S) VPN connection to mount your Azure file shares from your on-premises network, without sending data over the open internet. You can set up a S2S VPN using [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md), which is an Azure resource offering VPN services. You deploy VPN Gateway in a resource group alongside storage accounts or other Azure resources.
![A topology chart illustrating the topology of an Azure VPN gateway connecting an Azure file share to an on-premises site using a S2S VPN](media/storage-files-configure-s2s-vpn/s2s-topology.png)
The article details the steps to configure a site-to-site VPN to mount Azure fil
- An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage accounts, which are management constructs that represent a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blobs or queues. You can learn more about how to deploy Azure file shares and storage accounts in [Create an Azure file share](storage-how-to-create-file-share.md). -- A private endpoint for the storage account containing the Azure file share you want to mount on-premises. To learn how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-portal).- - A network appliance or server in your on-premises data center that's compatible with Azure VPN Gateway. Azure Files is agnostic of the on-premises network appliance chosen, but Azure VPN Gateway maintains a [list of tested devices](../../vpn-gateway/vpn-gateway-about-vpn-devices.md). Different network appliances offer different features, performance characteristics, and management functionalities, so consider these when selecting a network appliance. If you don't have an existing network appliance, Windows Server contains a built-in Server Role, Routing and Remote Access (RRAS), which can be used as the on-premises network appliance. To learn more about how to configure Routing and Remote Access in Windows Server, see [RAS Gateway](/windows-server/remote/remote-access/ras-gateway/ras-gateway).
If you don't have an existing network appliance, Windows Server contains a built
To add a new or existing virtual network to your storage account, follow these steps.
+# [Portal](#tab/azure-portal)
+ 1. Sign in to the Azure portal and navigate to the storage account containing the Azure file share you would like to mount on-premises.
-1. In the table of contents for the storage account, select **Security + networking > Networking**. Unless you added a virtual network to your storage account when you created it, the resulting pane should have the radio button for **Enabled from all networks** selected under **Public network access**.
+1. In the service menu, under **Security + networking**, select **Networking**. Unless you added a virtual network to your storage account when you created it, the resulting pane should have the radio button for **Enabled from all networks** selected under **Public network access**.
1. To add a virtual network, select the **Enabled from selected virtual networks and IP addresses** radio button. Under the **Virtual networks** subheading, select either **+ Add existing virtual network** or **+ Add new virtual network**. Creating a new virtual network will result in a new Azure resource being created. The new or existing virtual network resource must be in the same region as the storage account, but it doesn't need to be in the same resource group or subscription. However, keep in mind that the resource group, region, and subscription you deploy your virtual network into must match where you deploy your virtual network gateway in the next step.
To add a new or existing virtual network to your storage account, follow these s
If you add an existing virtual network, you must first create a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) on the virtual network. You'll be asked to select one or more subnets of that virtual network. If you create a new virtual network, you'll create a subnet as part of the creation process. You can add more subnets later through the resulting Azure resource for the virtual network.
- If you haven't enabled public network access to the virtual network previously, the Microsoft.Storage service endpoint will need to be added to the virtual network subnet. This can take up to 15 minutes to complete, although in most cases it will complete much faster. Until this operation has completed, you won't be able to access the Azure file shares within that storage account, including via the VPN connection.
+ If you haven't enabled public network access to the virtual network previously, the `Microsoft.Storage` service endpoint will need to be added to the virtual network subnet. This can take up to 15 minutes to complete, although in most cases it will complete much faster. Until this operation has completed, you won't be able to access the Azure file shares within that storage account, including via the VPN connection. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request.
1. Select **Save** at the top of the page.
+# [Azure PowerShell](#tab/azure-powershell)
+
+1. Sign in to your Azure account using the `Connect-AzAccount` cmdlet.
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+
+1. If you want to add a new virtual network and gateway subnet, run the following script. If you have an existing virtual network that you want to use, then skip this step and proceed to step 3. Be sure to replace `<your-subscription-id>`, `<resource-group>`, and `<storage-account-name>` with your own values. If desired, provide your own values for `$location` and `$vnetName`. The `-AddressPrefix` parameter defines the IP address blocks for the virtual network and the subnet, so replace those with your respective values.
+
+ ```azurepowershell-interactive
+ # Select subscription
+ $subscriptionId = "<your-subscription-id>"
+ Select-AzSubscription -SubscriptionId $subscriptionId
+
+ # Define parameters
+ $storageAccount = "<storage-account-name>"
+ $resourceGroup = "<resource-group>"
+ $location = "East US" # Change to desired Azure region
+ $vnetName = "myVNet"
+ # Virtual network gateway can only be created in subnet with name 'GatewaySubnet'.
+ $subnetName = "GatewaySubnet"
+ $vnetAddressPrefix = "10.0.0.0/16" # Update this address as per your requirements
+ $subnetAddressPrefix = "10.0.0.0/24" # Update this address as per your requirements
+
+ # Set current storage account
+ Set-AzCurrentStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount
+
+ # Define subnet configuration
+ $subnetConfig = New-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnetAddressPrefix
+
+ # Create a virtual network
+ New-AzVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup -Location $location -AddressPrefix $vnetAddressPrefix -Subnet $subnetConfig
+ ```
+
+1. If you created a new virtual network and subnet in the previous step, then skip this step. If you have an existing virtual network you want to use, you must first create a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) on the virtual network before you can deploy a virtual network gateway.
+
+ To add a gateway subnet to an existing virtual network, run the following script. Be sure to replace `<your-subscription-id>`, `<resource-group>`, and `<virtual-network-name>` with your own values. The `$subnetAddressPrefix` parameter defines the IP address block for the subnet, so replace the IP address per your requirements.
+
+ ```azurepowershell-interactive
+ # Select subscription
+ $subscriptionId = "<your-subscription-id>"
+ Select-AzSubscription -SubscriptionId $subscriptionId
+
+ # Define parameters
+ $storageAccount = "<storage-account-name>"
+ $resourceGroup = "<resource-group>"
+ $vnetName = "<virtual-network-name>"
+ # Virtual network gateway can only be created in subnet with name 'GatewaySubnet'.
+ $subnetName = "GatewaySubnet"
+ $subnetAddressPrefix = "10.0.0.0/24" # Update this address as per your requirements
+
+ # Set current storage account
+ Set-AzCurrentStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount
+
+ # Get the virtual network
+ $vnet = Get-AzVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup
+
+ # Add the gateway subnet
+ Add-AzVirtualNetworkSubnetConfig -Name $subnetName -VirtualNetwork $vnet -AddressPrefix $subnetAddressPrefix
+
+ # Apply the configuration to the virtual network
+ Set-AzVirtualNetwork -VirtualNetwork $vnet
+ ```
+
+1. To allow traffic only from specific virtual networks, use the `Update-AzStorageAccountNetworkRuleSet` command and set the `-DefaultAction` parameter to Deny.
+
+ ```azurepowershell-interactive
+ Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $resourceGroup -Name $storageAccount -DefaultAction Deny
+ ```
+
+1. Enable a `Microsoft.Storage` service endpoint on the virtual network and subnet. This can take up to 15 minutes to complete, although in most cases it will complete much faster. Until this operation has completed, you won't be able to access the Azure file shares within that storage account, including via the VPN connection. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request.
+
+ ```azurepowershell-interactive
+ Get-AzVirtualNetwork -ResourceGroupName $resourceGroup -Name $vnetName | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnetAddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
+ ```
+
+1. Add a network rule for the virtual network and subnet.
+
+ ```azurepowershell-interactive
+ $subnet = Get-AzVirtualNetwork -ResourceGroupName $resourceGroup -Name $vnetName | Get-AzVirtualNetworkSubnetConfig -Name $subnetName
+ Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroup -Name $storageAccount -VirtualNetworkResourceId $subnet.Id
+ ```
+
+# [Azure CLI](#tab/azure-cli)
+
+1. Sign in to Azure using the `az login` command.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. If you want to add a new virtual network and gateway subnet, run the following script. If you have an existing virtual network that you want to use, then skip this step and proceed to step 3. Be sure to replace `<your-subscription-id>`, `<storage-account-name>`, and `<resource-group>` with your own values. Replace `<virtual-network-name>` with the name of the new virtual network you want to create. The `--address-prefix` and `--subnet-prefixes` parameters define the IP address blocks for the virtual network and the subnet, so replace those with your respective values. The virtual network will be created in the same region as the resource group.
+
+ ```azurecli-interactive
+ # Set your subscription
+ az account set --subscription "<your-subscription-id>"
+
+ # Define parameters
+ storageAccount="<storage-account-name>"
+ resourceGroup="<resource-group>"
+ vnetName="<virtual-network-name>"
+ # Virtual network gateway can only be created in subnet with name 'GatewaySubnet'.
+ subnetName="GatewaySubnet"
+ vnetAddressPrefix="10.0.0.0/16" # Update this address per your requirements
+ subnetAddressPrefix="10.0.0.0/24" # Update this address per your requirements
+
+ # Create a virtual network and subnet
+ az network vnet create \
+ --resource-group $resourceGroup \
+ --name $vnetName \
+ --address-prefix $vnetAddressPrefix \
+ --subnet-name $subnetName \
+ --subnet-prefixes $subnetAddressPrefix
+ ```
+
+1. If you created a new virtual network and subnet in the previous step, then skip this step. If you have an existing virtual network you want to use, you must first create a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) on the virtual network before you can deploy a virtual network gateway.
+
+ To add a gateway subnet to an existing virtual network, run the following script. Be sure to replace `<your-subscription-id>`, `<resource-group>`, and `<virtual-network-name>` with your own values. The `--address-prefixes` parameter defines the IP address block for the subnet, so replace the IP address block as needed.
+
+ ```azurecli-interactive
+ # Set your subscription
+ az account set --subscription "<your-subscription-id>"
+
+ # Define parameters
+ storageAccount="<storage-account-name>"
+ resourceGroup="<resource-group>"
+ vnetName="<virtual-network-name>"
+ # Virtual network gateway can only be created in subnet with name 'GatewaySubnet'.
+ subnetName="GatewaySubnet"
+ subnetAddressPrefix="10.0.0.0/24" # Update this address per your requirements
+
+ # Create the gateway subnet
+ az network vnet subnet create \
+ --resource-group $resourceGroup \
+ --vnet-name $vnetName \
+ --name $subnetName \
+ --address-prefixes $subnetAddressPrefix
+ ```
+
+1. To allow traffic only from specific virtual networks, use the `az storage account update` command and set the `--default-action` parameter to Deny.
+
+ ```azurecli-interactive
+ az storage account update --resource-group $resourceGroup --name $storageAccount --default-action Deny
+ ```
+
+1. Enable a `Microsoft.Storage` service endpoint on the virtual network and subnet. This can take up to 15 minutes to complete, although in most cases it will complete much faster. Until this operation has completed, you won't be able to access the Azure file shares within that storage account, including via the VPN connection. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request.
+
+ ```azurecli-interactive
+ az network vnet subnet update --resource-group $resourceGroup --vnet-name $vnetName --name $subnetName --service-endpoints "Microsoft.Storage.Global"
+ ```
+
+1. Add a network rule for the virtual network and subnet.
+
+ ```azurecli-interactive
+ subnetid=$(az network vnet subnet show --resource-group $resourceGroup --vnet-name $vnetName --name $subnetName --query id --output tsv)
+ az storage account network-rule add --resource-group $resourceGroup --account-name $storageAccount --subnet $subnetid
+ ```
+++ ## Deploy a virtual network gateway To deploy a virtual network gateway, follow these steps.
+# [Portal](#tab/azure-portal)
+ 1. In the search box at the top of the Azure portal, search for and then select *Virtual network gateways*. The **Virtual network gateways** page should appear. At the top of the page, select **+ Create**. 1. On the **Basics** tab, fill in the values for **Project details** and **Instance details**. Your virtual network gateway must be in the same subscription, Azure region, and resource group as the virtual network.
To deploy a virtual network gateway, follow these steps.
1. Select **Review + create** to run validation. Once validation passes, select **Create** to deploy the virtual network gateway. Deployment can take up to 45 minutes to complete.
+# [Azure PowerShell](#tab/azure-powershell)
+
+1. First, request a public IP address. If you have an existing unused IP address that you want to use, you can skip this step. Replace `<resource-group>` with your resource group name, and specify the same Azure region that you used for your virtual network.
+
+ ```azurepowershell-interactive
+ $gwpip = New-AzPublicIpAddress -Name "mypublicip" -ResourceGroupName "<resource-group>" -Location "East US" -AllocationMethod Static -Sku Standard
+ ```
+
+1. Next, create the gateway IP address configuration by defining the subnet and the public IP address to use. The public IP address of the VPN gateway will be exposed to the internet. Replace `<virtual-network-name>` and `<resource-group>` with your own values.
+
+ ```azurepowershell-interactive
+ $vnet = Get-AzVirtualNetwork -Name <virtual-network-name> -ResourceGroupName <resource-group>
+ $subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
+ $gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig -SubnetId $subnet.Id -PublicIpAddressId $gwpip.Id
+ ```
+
+1. Run the following script to create the VPN gateway.
+
+    Replace `<resource-group>` with the same resource group as your virtual network. Specify the [gateway SKU](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsku) that supports the features you want to use. The gateway SKU controls the number of allowed Site-to-Site tunnels and the performance of the VPN. We recommend using a Generation 2 SKU. Don't use the Basic SKU if you want to use IKEv2 authentication (route-based VPN).
+
+ ```azurepowershell-interactive
+ New-AzVirtualNetworkGateway -Name MyVnetGateway -ResourceGroupName <resource-group> -Location "East US" -IpConfigurations $gwipconfig -GatewayType "Vpn" -VpnType RouteBased -GatewaySku VpnGw2 -VpnGatewayGeneration Generation2
+ ```
+
+ You can also choose to include other features like [Border Gateway Protocol (BGP)](../../vpn-gateway/vpn-gateway-bgp-overview.md) and [Active-Active](../../vpn-gateway/vpn-gateway-highlyavailable.md). See the documentation for the [New-AzVirtualNetworkGateway](/powershell/module/az.network/new-azvirtualnetworkgateway) cmdlet. If you do require BGP, the default ASN is 65515, although this value can be changed.
+
+1. Creating a gateway can take 45 minutes or more, depending on the gateway SKU you specified. You can view the VPN gateway using the [Get-AzVirtualNetworkGateway](/powershell/module/az.network/Get-azVirtualNetworkGateway) cmdlet.
+
+ ```azurepowershell-interactive
+    Get-AzVirtualNetworkGateway -Name MyVnetGateway -ResourceGroupName <resource-group>
+ ```
+
+# [Azure CLI](#tab/azure-cli)
+
+1. First, request a public IP address. If you have an existing unused IP address that you want to use, you can skip this step. Replace `<resource-group>` with your resource group name.
+
+ ```azurecli-interactive
+ az network public-ip create -n mypublicip -g <resource-group>
+ ```
+
+1. Run the following script to create the VPN gateway. Creating a gateway can take 45 minutes or more, depending on the gateway SKU you specify.
+
+    Replace `<resource-group>` with the same resource group as your virtual network. Specify the [gateway SKU](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsku) that supports the features you want to use. The gateway SKU controls the number of allowed Site-to-Site tunnels and the performance of the VPN. We recommend using a Generation 2 SKU. Don't use the Basic SKU if you want to use IKEv2 authentication (route-based VPN).
+
+ ```azurecli-interactive
+ az network vnet-gateway create -n MyVnetGateway -l eastus --public-ip-address mypublicip -g <resource-group> --vnet <virtual-network-name> --gateway-type Vpn --sku VpnGw2 --vpn-gateway-generation Generation2 --no-wait
+ ```
+
+ The `--no-wait` parameter allows the gateway to be created in the background. It doesn't mean that the VPN gateway is created immediately.
+
+1. You can view the VPN gateway using the following command. If the VPN gateway isn't fully deployed, you'll receive an error message.
+
+ ```azurecli-interactive
+ az network vnet-gateway show -n MyVnetGateway -g <resource-group>
+ ```
+++ ### Create a local network gateway for your on-premises gateway A local network gateway is an Azure resource that represents your on-premises network appliance. It's deployed alongside your storage account, virtual network, and virtual network gateway, but doesn't need to be in the same resource group or subscription as the storage account. To create a local network gateway, follow these steps.
+# [Portal](#tab/azure-portal)
+ 1. In the search box at the top of the Azure portal, search for and select *local network gateways*. The **Local network gateways** page should appear. At the top of the page, select **+ Create**. 1. On the **Basics** tab, fill in the values for **Project details** and **Instance details**.
A local network gateway is an Azure resource that represents your on-premises ne
1. Select **Review + create** to run validation. Once validation passes, select **Create** to create the local network gateway.
+# [Azure PowerShell](#tab/azure-powershell)
+
+Run the following command to create a new local network gateway. Replace `<resource-group>` with your own value.
+
+The `-AddressPrefix` parameter specifies the address range or ranges for the network this local network gateway represents. If you add multiple address space ranges, make sure that the ranges you specify don't overlap with ranges of other networks that you want to connect to.
+
+```azurepowershell-interactive
+New-AzLocalNetworkGateway -Name MyLocalGateway -Location "East US" -AddressPrefix @('10.101.0.0/24','10.101.1.0/24') -GatewayIpAddress "5.4.3.2" -ResourceGroupName <resource-group>
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Run the following command to create a new local network gateway. Replace `<resource-group>` with your own value.
+
+The `--local-address-prefixes` parameter specifies the address range or ranges for the network this local network gateway represents. If you add multiple address space ranges, make sure that the ranges you specify don't overlap with ranges of other networks that you want to connect to.
+
+```azurecli-interactive
+az network local-gateway create --gateway-ip-address 5.4.3.2 --name MyLocalGateway -g <resource-group> --local-address-prefixes 10.101.0.0/24 10.101.1.0/24
+```
+++ ## Configure on-premises network appliance
-The specific steps to configure your on-premises network appliance depend on the network appliance your organization has selected. Depending on the device your organization has chosen, the [list of tested devices](../../vpn-gateway/vpn-gateway-about-vpn-devices.md) might have a link to your device vendor's instructions for configuring with Azure virtual network gateway.
+The specific steps to configure your on-premises network appliance depend on the device your organization has selected.
+
+When configuring your network appliance, you'll need the following items:
+
+* **A shared key.** This is the same shared key that you specify when creating your site-to-site VPN connection. In our examples, we use a basic shared key such as 'abc123'. We recommend that you generate a more complex key to use that complies with your organization's security requirements.
+* **The public IP address of your virtual network gateway.** To find the public IP address of your virtual network gateway using PowerShell, run the following command. In this example, `mypublicip` is the name of the public IP address resource that you created in an earlier step. (An equivalent Azure CLI lookup is sketched after this list.)
+
+ ```azurepowershell-interactive
+ Get-AzPublicIpAddress -Name mypublicip -ResourceGroupName <resource-group>
+ ```
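+
+If you're working in the Azure CLI instead of PowerShell, the following is a roughly equivalent sketch that returns just the address, using the same `mypublicip` placeholder:
+
+```azurecli-interactive
+az network public-ip show --name mypublicip --resource-group <resource-group> --query ipAddress --output tsv
+```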
+ ## Create the site-to-site connection
To complete the deployment of a S2S VPN, you must create a connection between your on-premises network appliance (represented by the local network gateway resource) and the Azure virtual network gateway. To do this, follow these steps.
+# [Portal](#tab/azure-portal)
+ 1. Navigate to the virtual network gateway you created. In the table of contents for the virtual network gateway, select **Settings > Connections**, and then select **+ Add**.
 1. On the **Basics** tab, fill in the values for **Project details** and **Instance details**.
To complete the deployment of a S2S VPN, you must create a connection between yo
1. Select **Review + create** to run validation. Once validation passes, select **Create** to create the connection. You can verify the connection has been made successfully through the virtual network gateway's **Connections** page.
+# [Azure PowerShell](#tab/azure-powershell)
+
+Run the following commands to create the site-to-site VPN connection between your virtual network gateway and your on-premises device. Be sure to replace the values with your own. The shared key must match the value you used for your VPN device configuration. The `-ConnectionType` for site-to-site VPN is **IPsec**.
+
+For more options, see the documentation for the [New-AzVirtualNetworkGatewayConnection](/powershell/module/az.network/new-azvirtualnetworkgatewayconnection) cmdlet.
+
+1. Set the variables.
+
+ ```azurepowershell-interactive
+ $gateway1 = Get-AzVirtualNetworkGateway -Name MyVnetGateway -ResourceGroupName <resource-group>
+ $local = Get-AzLocalNetworkGateway -Name MyLocalGateway -ResourceGroupName <resource-group>
+ ```
+
+1. Create the VPN connection.
+
+ ```azurepowershell-interactive
+ New-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName <resource-group> `
+ -Location 'East US' -VirtualNetworkGateway1 $gateway1 -LocalNetworkGateway2 $local `
+ -ConnectionType IPsec -SharedKey 'abc123'
+ ```
+
+1. After a short while, the connection will be established. You can verify your VPN connection by running the following command. If prompted, select 'A' in order to run 'All'.
+
+ ```azurepowershell-interactive
+ Get-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName <resource-group>
+ ```
+
+ The connection status should show as "Connected."
++
+# [Azure CLI](#tab/azure-cli)
+
+Run the following commands to create the site-to-site VPN connection between your virtual network gateway and your on-premises device. Be sure to replace the values with your own. The shared key must match the value you used for your VPN device configuration.
+
+For more options, see the documentation for the [az network vpn-connection create](/cli/azure/network/vpn-connection#az-network-vpn-connection-create) command.
+
+```azurecli-interactive
+az network vpn-connection create --name VNet1toSite1 --resource-group <resource-group> --vnet-gateway1 MyVnetGateway -l eastus --shared-key abc123 --local-gateway MyLocalGateway
+```
+
+After a short while, the connection will be established. You can verify your VPN connection by running the following command. When the connection is in the process of being established, its connection status shows 'Connecting'. Once the connection is established, the status changes to 'Connected'.
+
+```azurecli-interactive
+az network vpn-connection show --name VNet1toSite1 --resource-group <resource-group>
+```
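+
+If you only want the status value rather than the full connection object, the following is a small optional sketch that filters the output with a JMESPath query:
+
+```azurecli-interactive
+az network vpn-connection show --name VNet1toSite1 --resource-group <resource-group> --query connectionStatus --output tsv
+```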
+++
## Mount Azure file share
The final step in configuring a S2S VPN is verifying that it works for Azure Files. You can do this by mounting your Azure file share on-premises. See the instructions to mount by OS here:
trusted-signing How To Change Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-change-sku.md
Title: Change SKU selection
-description: How-to change your SKU for Trusted Signing account.
+ Title: Change the account SKU
+description: Learn how to change your SKU or pricing tier for a Trusted Signing account.
Last updated 05/30/2024
-# Select or change Trusted Signing SKU (Pricing tier)
+# Change a Trusted Signing account SKU (pricing tier)
-Trusted Signing provides a choice between two pricing tiers: Basic and Premium. Both tiers are tailored to offer the service at an optimal cost, suitable for any signing scenario. Additional details on Trusted Signing's [Pricing](https://azure.microsoft.com/pricing/details/trusted-signing/) page.
+Trusted Signing gives you a choice between two pricing tiers: Basic and Premium. Both tiers are tailored to offer the service at an optimal cost and to be suitable for any signing scenario.
-## SKU (Pricing tier) overview
+For more information, see [Trusted Signing pricing](https://azure.microsoft.com/pricing/details/trusted-signing/).
-Each pricing tier provides varying options for the number of certificate profile types available and the monthly allocation of signatures.
+## SKU (pricing tier) overview
-| Account level | Basic | Premium |
+The following table describes key account details for the Basic SKU and the Premium SKU:
+
+| Account detail | Basic | Premium |
| :- | :- |:|
-| Price(monthly) | **$9.99 / account** | **$99.99 / account** |
-| Quota (signatures / month) | 5,000 | 100,000 |
-| Price after quota is reached | $0.005 / signature | $0.005 / signature |
-| Certificate Profiles | 1 of each available type | 10 of each available type |
-| Public-Trust Signing | Yes | Yes |
-| Private-Trust Signing | Yes | Yes |
-
-**Note**: The pricing tier is also referred to as the SKU.
+| Price (monthly) | **$9.99 per account** | **$99.99 per account** |
+| Quota (signatures per month) | 5,000 | 100,000 |
+| Price after quota is reached | $0.005 per signature | $0.005 per signature |
+| Certificate profiles | 1 of each available type | 10 of each available type |
+| Public Trust signing | Yes | Yes |
+| Private Trust signing | Yes | Yes |
+
+> [!NOTE]
+> The pricing tier is also called the *account SKU*.
+
+## Change the SKU
+You can change the SKU for a Trusted Signing account at any time by upgrading to Premium or by downgrading to Basic. You can make the change by using either the Azure portal or the Azure CLI.
-## Change SKU
+Considerations:
-You can change the SKU for a Trusted Signing account at any time by upgrading to Premium or downgrading to Basic. This change can be done from both the Azure portal and from Azure CLI.
+- SKU updates are effective beginning in the next billing cycle.
+- The limitations of the new SKU are enforced after the update is successful.
+- After you change the SKU, you must manually refresh the account overview to see the updated SKU under **SKU (Pricing tier)**. (We are actively working to resolve this known limitation.)
+- To upgrade to Premium:
-- SKU updates are effective from next billing cycle.-- SKU limitations for updated SKU are enforced after the update is successful.-- Downgrade to Basic:
- - The Basic SKU allows only one certificate profile of each type. For example, if you have two certificate profiles of type Public Trust, you need to delete any one profile to be eligible to downgrade. Same applies for other certificate profile types as well.
- - In Azure portal on Certificate Profiles page, make sure **Status: All** to view all certificate profiles to help you delete all relevant certificate profiles to meet the criteria to downgrade.
-
- :::image type="content" source="media/trusted-signing-certificate-profile-deletion-changesku.png" alt-text="Screenshot that shows adding a diagnostic setting." lightbox="media/trusted-signing-certificate-profile-deletion-changesku.png":::
+ - No limitations are applied when you upgrade from the Basic SKU to the Premium SKU.
+- To downgrade to Basic:
-- Upgrade to Premium:
- - There are no limitations when you upgrade to the Premium SKU from Basic SKU.
-- After changing the SKU, you're required to manually refresh the Account Overview section to see the updated SKU under SKU (Pricing tier). (This limitation is known, and being actively worked on to resolve).
+  - The Basic SKU allows only one certificate profile of each type. For example, if you have two certificate profiles of the Public Trust type, you must delete one of them to be eligible to downgrade. The same limitation applies to other certificate profile types.
+ - In the Azure portal, on the **Certificate Profiles** pane, make sure that you select **Status: All** to view all certificate profiles. Viewing all certificate profiles can help you delete all relevant certificate profiles to meet the criteria to downgrade.
+
+ :::image type="content" source="media/trusted-signing-certificate-profile-deletion-changesku.png" alt-text="Screenshot that shows selecting all certificate profile statuses to view all certificate profiles." lightbox="media/trusted-signing-certificate-profile-deletion-changesku.png":::
# [Azure portal](#tab/sku-portal)
-To change the SKU (Pricing tier) from the Azure portal, follow these steps:
+To change the SKU (pricing tier) by using the Azure portal:
-1. Sign in to the Azure portal.
-2. Navigate to your Trusting Signing account in the Azure portal.
-3. On the account Overview page, locate the current **SKU (Pricing tier)**.
-4. Select the current SKU selection hyperlink. Your current selection is highlighted in the "choose pricing tier" window.
-5. Select the SKU you want to update to (for example, downgrade to Basic or upgrade to Premium) and select **Update**.
+1. In the Azure portal, go to your Trusted Signing account.
+1. On the account **Overview** pane, find the current value for **SKU (Pricing tier)**.
+1. Select the link for the current SKU. Your current SKU selection is highlighted in the **Choose pricing tier** pane.
+1. Select the SKU to update to (for example, downgrade to Basic or upgrade to Premium), and then select **Update**.
-
# [Azure CLI](#tab/sku-cli)
-To change the SKU with Azure CLI, use the following command:
+To change the SKU by using the Azure CLI, run this command:
-```
+```azurecli
az trustedsigning update -n MyAccount -g MyResourceGroup --sku Premium
```
+
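+
+To confirm the change, you can query the account afterward and check the SKU in the output. This is a sketch that assumes the `trustedsigning` Azure CLI extension also provides a `show` command; if it isn't available in your extension version, use the Azure portal instead.
+
+```azurecli
+# Assumption: the trustedsigning extension exposes 'show'; run 'az trustedsigning --help' to confirm.
+az trustedsigning show -n MyAccount -g MyResourceGroup
+```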
-## Cost Management and Billing
+## Cost management and billing
+
+View details about cost management and billing for your Trusted Signing resource by viewing your Azure subscription.
+
+### Cost management
+
+To view and estimate the cost of your Trusted Signing resource usage:
-**Cost Management**
+1. In the Azure portal, search for **Subscriptions**.
+1. Select the subscription you used to create your Trusted Signing resource.
+1. On the left menu, select **Cost Management**. Learn more about [cost management](../cost-management-billing/costs/overview-cost-management.md).
+1. Under **Trusted Signing**, verify that you can see the costs that are associated with your Trusted Signing account.
-View and estimate the cost of your Trusted Signing resource usage.
-1. In the Azure portal, search **Subscriptions**.
-2. Select the **Subscription**, where you have created Trusted Signing resources.
-3. Select Cost Management from the menu on the left. Learn more about using [Cost Management](https://learn.microsoft.com/azure/cost-management-billing/costs/).
-4. For Trusted Signing, you can see costs associated to your Trusted Signing account.
+### Billing
-**Billing**
+To view invoices for your Trusted Signing account:
-View Invoice for Trusted Signing service.
-1. In the Azure portal, search **Subscriptions**.
-2. Select the **Subscription**, where you have created Trusted Signing resources.
-3. Select Billing from the menu on the left. Learn more about [Billing](https://learn.microsoft.com/azure/cost-management-billing/manage/).
+1. In the Azure portal, search for **Subscriptions**.
+1. Select the subscription you used to create your Trusted Signing resource.
+1. On the left menu, select **Billing**. Learn more about [billing](../cost-management-billing/cost-management-billing-overview.md).
update-manager Assessment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/assessment-options.md
Title: Assessment options in Update Manager.
description: The article describes the assessment options available in Update Manager. Last updated 02/03/2024-+
update-manager Configure Wu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/configure-wu-agent.md
Title: Configure Windows Update settings in Azure Update Manager description: This article tells how to configure Windows update settings to work with Azure Update Manager. Previously updated : 01/19/2024- Last updated : 09/06/2024+
update-manager Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/deploy-updates.md
Title: Deploy updates and track results in Azure Update Manager
description: This article details how to use Azure Update Manager in the Azure portal to deploy updates and view results for supported machines. Last updated 02/26/2024-+
update-manager Dynamic Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/dynamic-scope-overview.md
Title: An overview of Dynamic Scoping description: This article provides information about Dynamic Scoping, its purpose and advantages. Previously updated : 02/03/2024- Last updated : 09/06/2024+
update-manager Guidance Migration Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-azure.md
Title: Patching guidance overview for Microsoft Configuration Manager to Azure
description: Patching guidance overview for Microsoft Configuration Manager to Azure. View on how to get started with Azure Update Manager, mapping capabilities of MCM software and FAQs. - Previously updated : 07/31/2024+ Last updated : 09/06/2024
update-manager Guidance Patching Sql Server Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-patching-sql-server-azure-vm.md
Title: Guidance on patching for SQL Server on Azure VMs using Azure Update Manag
description: An overview on patching guidance for SQL Server on Azure VMs using Azure Update Manager -+ Last updated 07/06/2024
Azure Update Manager designed as a standalone Azure service to provide SaaS expe
Using Azure Update Manager you can manage and govern updates for all your SQL Server instances at scale. Unlike with [Automated Patching](/azure/azure-sql/virtual-machines/windows/automated-patching), Update Manager installs cumulative updates for SQL server. -- ## Next steps - [An overview on Azure Update Manager](overview.md)
update-manager Manage Arc Enabled Servers Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-arc-enabled-servers-programmatically.md
Last updated 05/13/2024-+ # How to programmatically manage updates for Azure Arc-enabled servers
update-manager Manage Multiple Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-multiple-machines.md
Title: Manage multiple machines in Azure Update Manager
description: This article explains how to use Azure Update Manager in Azure to manage multiple supported machines and view their compliance state in the Azure portal. Last updated 08/22/2024-+
update-manager Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-update-settings.md
Last updated 03/07/2024-+ # Manage update configuration settings
update-manager Manage Updates Customized Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-updates-customized-images.md
Last updated 08/22/2024-+ # Manage updates for customized images
update-manager Manage Vms Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-vms-programmatically.md
Last updated 05/13/2024-+ # How to programmatically manage updates for Azure VMs
update-manager Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-overview.md
Title: An overview on how to move virtual machines from Automation Update Manage
description: A guidance overview on migration from Automation Update Management to Azure Update Manager -+ Last updated 08/01/2024
update-manager Periodic Assessment At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/periodic-assessment-at-scale.md
Previously updated : 04/03/2024- Last updated : 09/06/2024+ # Automate assessment at scale by using Azure Policy
update-manager Pre Post Scripts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-scripts-overview.md
Title: An overview of pre and post events in your Azure Update Manager description: This article provides an overview on pre and post events and its requirements. Previously updated : 07/24/2024- Last updated : 09/06/2024+
update-manager Prerequsite For Schedule Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/prerequsite-for-schedule-patching.md
Title: Configure schedule patching on Azure VMs for business continuity
description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager. Previously updated : 02/20/2024- Last updated : 09/06/2024+
update-manager Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/query-logs.md
Last updated 08/27/2024-+ # Access Azure Update Manager operations data using Azure Resource Graph
update-manager Sample Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/sample-query-logs.md
Last updated 07/29/2024-+ # Sample Azure Resource Graph queries to access Azure Update Manager operations data
update-manager Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/scheduled-patching.md
Title: Scheduling recurring updates in Azure Update Manager
description: This article details how to use Azure Update Manager to set update schedules that install recurring updates on your machines. Last updated 06/24/2024-+
update-manager Security Awareness Ubuntu Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/security-awareness-ubuntu-support.md
Title: Security awareness and Ubuntu Pro support in Azure Update Manager
description: Guidance on security awareness and Ubuntu Pro support in Azure Update Manager. - Previously updated : 08/22/2024+ Last updated : 09/06/2024
update-manager Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md
Title: Troubleshoot known issues with Azure Update Manager description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. Previously updated : 07/04/2024- Last updated : 09/06/2024+
update-manager Update Manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/update-manager-faq.md
Title: Azure Update Manager FAQ description: This article gives answers to frequently asked questions about Azure Update Manager - Previously updated : 07/08/2024+ Last updated : 09/06/2024 #Customer intent: As an implementer, I want answers to various questions.
update-manager Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/updates-maintenance-schedules.md
Title: Updates and maintenance in Azure Update Manager description: This article describes the updates and maintenance options available in Azure Update Manager. Previously updated : 06/19/2024- Last updated : 09/06/2024+
update-manager View Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/view-updates.md
Title: Check update compliance in Azure Update Manager
description: This article explains how to use Azure Update Manager in the Azure portal to assess update compliance for supported machines. Last updated 08/22/2024-+
update-manager Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/workbooks.md
Title: An overview of workbooks description: This article provides information on how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports. Previously updated : 08/22/2024- Last updated : 09/06/2024+
virtual-desktop Custom Image Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/custom-image-templates.md
Here are some examples of the built-in scripts you can add to a custom image tem
- Enable FSLogix with Kerberos. - Enable [RDP Shortpath for managed networks](rdp-shortpath.md?tabs=managed-networks). - Enable [screen capture protection](screen-capture-protection.md).-- Configure [Teams optimizations](teams-on-avd.md).
+- Configure [Teams optimizations](teams-on-avd.md). Optimizations include the WebRTC redirector service and the Visual C++ Redistributable.
- Configure session timeouts. - Disable automatic updates for [MSIX applications](app-attach-setup.md#disable-automatic-updates). - Add or remove Microsoft Office applications.
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
# What's new in the Azure Virtual Desktop Agent?
+The Azure Virtual Desktop agent links your session hosts with the Azure Virtual Desktop service. It acts as the intermediate communicator between the service and the virtual machines, enabling connectivity.
+
+The Azure Virtual Desktop Agent is updated regularly. New versions of the Azure Virtual Desktop Agent are installed automatically. When new versions are released, they're rolled out progressively to session hosts. This process is called *flighting* and it enables Microsoft to monitor the rollout in [validation environments](create-validation-host-pool.md) first.
+
+A rollout might take several weeks before the agent is available in all environments. Some agent versions might not reach nonvalidation environments, so you might see multiple versions of the agent deployed across your environments.
The Azure Virtual Desktop Agent updates regularly. This article is where you'll find out about: - The latest updates
Make sure to check back here often to keep up with new updates.
## Latest available versions
-New versions of the Azure Virtual Desktop Agent are installed automatically. When new versions are released, they're rolled out progressively to session hosts. This process is called *flighting* and it enables Microsoft to monitor the rollout in [validation environments](create-validation-host-pool.md) first.
-
-A rollout might take several weeks before the agent is available in all environments. Some agent versions might not reach nonvalidation environments, so you may see multiple versions of the agent deployed across your environments.
+Here's information about the Azure Virtual Desktop Agent.
| Release | Latest version | |--|--|
virtual-desktop Whats New Sxs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-sxs.md
+
+ Title: What's new in the Azure Virtual Desktop SxS Network Stack? - Azure
+description: New features and product updates for the Azure Virtual Desktop SxS Network Stack.
++ Last updated : 08/13/2024++++
+# What's new in the Azure Virtual Desktop SxS Network Stack?
+
+The Azure Virtual Desktop agent links your session hosts with the Azure Virtual Desktop service. It also includes a component called the SxS Network Stack. The Azure Virtual Desktop agent acts as the intermediate communicator between the service and the virtual machines, enabling connectivity. The SxS Network Stack component is required for users to securely establish reverse server-to-client connections.
+
+The Azure Virtual Desktop SxS Network Stack is updated regularly. New versions of the Azure Virtual Desktop SxS Network Stack are installed automatically. When new versions are released, they're rolled out progressively to session hosts. This process is called *flighting* and it enables Microsoft to monitor the rollout in [validation environments](create-validation-host-pool.md) first.
+
+A rollout might take several weeks before the agent is available in all environments. Some agent versions might not reach nonvalidation environments, so you might see multiple versions of the agent deployed across your environments.
+
+This article is where you'll find out about:
+
+- The latest updates
+- New features
+- Improvements to existing features
+- Bug fixes
+
+Make sure to check back here often to keep up with new updates.
+
+## Latest available versions
+
+Here's information about the SxS Network Stack.
+
+| Release | Latest version |
+|--|--|
+| Production | 1.0.2404.16760 |
+| Validation | 1.0.2404.16760 |
+
+## Version 1.0.2404.16760
+
+*Published: July 2024*
+
+In this release, we've made the following changes:
+
+- General improvements and bug fixes mainly around `rdpshell` and RemoteApp.
+
+## Version 1.0.2402.09880
+
+*Published: July 2024*
+
+In this release, we've made the following changes:
+
+- General improvements and bug fixes mainly around `rdpshell` and RemoteApp.
+- The default chroma value has been changed from 4:4:4 to 4:2:0.
+- Reduced the chance of a progressive update blocking real updates from the driver.
+- Improved the user experience when bad credentials are saved.
+- Improved session switching to avoid hangs.
+- Updated Intune version numbers for the granular clipboard feature.
+- Bug fixes for the RemoteApp V2 decoder.
+- Bug fixes for RemoteApp.
+- Fixed an issue with the caps lock state when using the on-screen keyboard.
+
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
description: Find answers to frequently asked questions about Azure Virtual Netw
-+ Last updated 01/30/2024
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
The following diagram illustrates how two VMs communicate with and without Accel
Accelerated Networking has the following benefits: -- **Lower latency and higher packets per second (pps)**. Removing the virtual switch from the data path eliminates the time that packets spend in the host for policy processing. It also increases the number of packets that the VM can process.
+- **Lower latency and higher packets per second**. Removing the virtual switch from the data path eliminates the time that packets spend in the host for policy processing. It also increases the number of packets that the VM can process.
- **Reduced jitter**. Processing time for virtual switches depends on the amount of policy to apply and the workload of the CPU that does the processing. Offloading policy enforcement to the hardware removes that variability by delivering packets directly to the VM. Offloading also removes the host-to-VM communication, all software interrupts, and all context switches.
virtual-network Virtual Machine Network Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-machine-network-throughput.md
Today, the Azure networking stack supports 1M total flows (500k inbound and 500k
- VMs that belong to a virtual network can handle 500k ***active connections*** for all VM sizes with 500k ***active flows in each direction***. -- VMs with network virtual appliances (NVAs) such as gateway, proxy, firewall can handle 250k ***active connections*** with 500k ***active flows in each direction*** due to the forwarding and more new flow creation on new connection setup to the next hop as shown in the above diagram.
+- VMs with NVAs, such as gateways, proxies, and firewalls, can handle 250k ***active connections*** with 500k ***active flows in each direction*** because of the forwarding and additional new flow creation on new connection setup to the next hop, as shown in the preceding diagram.
Once this limit is hit, other connections are dropped. Connection establishment and termination rates can also affect network performance as connection establishment and termination shares CPU with packet processing routines. We recommend that you benchmark workloads against expected traffic patterns and scale out workloads appropriately to match your performance needs. Metrics are available in [Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachines) to track the number of network flows and the flow creation rate on your VM or Virtual Machine Scale Sets instances. ## Next steps
virtual-network Virtual Network Bandwidth Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md
This article describes how to use the free NTTTCP tool from Microsoft to test ne
- Note the number of VM cores and the receiver VM IP address to use in the commands. Both the sender and receiver commands use the receiver's IP address. >[!NOTE]
->Testing by using a virtual IP (VIP) is possible, but is beyond the scope of this article.
+>Testing by using a virtual IP is possible, but is beyond the scope of this article.
**Examples used in this article**
vpn-gateway Troubleshoot Ad Vpn Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/troubleshoot-ad-vpn-client.md
- Title: 'Troubleshoot Point-to-Site VPN clients - Microsoft Entra authentication'-
-description: Learn how to troubleshoot VPN Gateway Point-to-Site clients that use Microsoft Entra authentication.
--- Previously updated : 04/29/2021---
-# Troubleshoot a Microsoft Entra authentication VPN client
-
-This article helps you troubleshoot a VPN client to connect to a virtual network using Point-to-Site VPN and Microsoft Entra authentication.
-
-## <a name="status"></a>View Status Log
-
-View the status log for error messages.
-
-![logs](./media/troubleshoot-ad-vpn-client/1.png)
-
-1. Click the arrows icon at the bottom-right corner of the client window to show the **Status Logs**.
-2. Check the logs for errors that may indicate the problem.
-3. Error messages are displayed in red.
-
-## <a name="clear"></a>Clear sign-in information
-
-Clear the sign-in information.
-
-![sign in](./media/troubleshoot-ad-vpn-client/2.png)
-
-1. Select the … next to the profile that you want to troubleshoot. Select **Configure -> Clear Saved Account**.
-2. Select **Save**.
-3. Try to connect.
-4. If the connection still fails, continue to the next section.
-
-## <a name="diagnostics"></a>Run diagnostics
-
-Run diagnostics on the VPN client.
-
-![diagnostics](./media/troubleshoot-ad-vpn-client/3.png)
-
-1. Click the **…** next to the profile that you want to run diagnostics on. Select **Diagnose -> Run Diagnosis**.
-2. The client will run a series of tests and display the result of the test
-
-    * Internet Access – Checks to see if the client has Internet connectivity
-    * Client Credentials – Check to see if the Microsoft Entra authentication endpoint is reachable
-    * Server Resolvable – Contacts the DNS server to resolve the IP address of the configured VPN server
-    * Server Reachable – Checks to see if the VPN server is responding or not
-3. If any of the tests fail, contact your network administrator to resolve the issue.
-4. The next section shows you how to collect the logs, if needed.
-
-## <a name="logfiles"></a>Collect client log files
-
-Collect the VPN client log files. The log files can be sent to support/administrator via a method of your choosing. For example, e-mail.
-
-1. Click the “…” next to the profile that you want to run diagnostics on. Select **Diagnose -> Show Logs Directory**.
-
- ![show logs](./media/troubleshoot-ad-vpn-client/4.png)
-2. Windows Explorer opens to the folder that contains the log files.
-
- ![view file](./media/troubleshoot-ad-vpn-client/5.png)
-
-## Next steps
-
-For more information, see [Create a Microsoft Entra tenant for P2S Open VPN connections that use Microsoft Entra authentication](openvpn-azure-ad-tenant.md).
vpn-gateway Troubleshoot Azure Vpn Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/troubleshoot-azure-vpn-client.md
+
+ Title: Troubleshoot Azure VPN Client
+
+description: Learn how to troubleshoot VPN Gateway point-to-site connections that use the Azure VPN Client.
+++ Last updated : 09/05/2024+++
+# Troubleshoot Azure VPN Client
+
+This article helps you troubleshoot Azure VPN Client connection and configuration issues.
+
+## <a name="status"></a>View Status Logs
+
+View the status log for error messages.
+
+1. Click the arrows icon at the bottom-right corner of the Azure VPN Client window to show the **Status Logs**.
+1. Check the logs for errors that might indicate the problem.
+1. Error messages are displayed in red.
+
+ :::image type="content" source="./media/troubleshoot-azure-vpn-client/status-logs.png" alt-text="Screenshot shows status logs." lightbox="./media/troubleshoot-azure-vpn-client/status-logs.png":::
+
+## <a name="clear"></a>Clear sign-in information
+
+This step applies to Microsoft Entra ID authentication. If you're using certificate authentication, this step isn't applicable.
+
+Clear the sign-in information.
+
+1. Select the … next to the profile that you want to troubleshoot. Select **Configure**.
+1. Select **Clear Saved Account**.
+1. Select **Save**.
+1. Try to connect.
+1. If the connection still fails, continue to the next section.
+
+ :::image type="content" source="./media/troubleshoot-azure-vpn-client/clear-sign-in.png" alt-text="Screenshot shows how to clear the saved account." lightbox="./media/troubleshoot-azure-vpn-client/clear-sign-in.png":::
+
+## <a name="diagnostics"></a>Run diagnostics
+
+Run diagnostics on the VPN client.
+
+1. Click the **…** next to the profile on which you want to run diagnostics.
+1. Select **Diagnose -> Run Diagnosis**.
+1. The client runs a series of tests and displays the results of the tests. The tests include:
+
+   * Internet Access – Checks to see if the client has Internet connectivity.
+   * Client Credentials – Checks to see if the Microsoft Entra ID authentication endpoint is reachable.
+   * Server Resolvable – Contacts the DNS server to resolve the IP address of the configured VPN server.
+   * Server Reachable – Checks to see if the VPN server is responding.
+1. If any of the tests fail, contact your network administrator to resolve the issue. To collect logs, see [Collect client log files](#logfiles).
+
+ :::image type="content" source="./media/troubleshoot-azure-vpn-client/diagnostics.png" alt-text="Screenshot shows how to run diagnostics." lightbox="./media/troubleshoot-azure-vpn-client/diagnostics.png":::
+
+## <a name="logfiles"></a>Collect client log files
+
+Collect the VPN client log files. The log files can be sent to support/administrator via a method of your choosing. For example, e-mail.
+
+1. Click the "…" next to the profile that you want to run diagnostics on. Select **Diagnose -> Show Logs Directory**.
+1. Windows Explorer opens to the folder that contains the log files.
+
+ :::image type="content" source="./media/troubleshoot-azure-vpn-client/show-logs-directory.png" alt-text="Screenshot shows how to show log directory." lightbox="./media/troubleshoot-azure-vpn-client/show-logs-directory.png":::
+
+## Next steps
+
+To report an Azure VPN Client problem, see [Use Feedback Hub - Azure VPN Client](feedback-hub-azure-vpn-client.md).
vpn-gateway Vpn Gateway About Compliance Crypto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-compliance-crypto.md
description: Learn how to configure Azure VPN gateways to satisfy cryptographic requirements for both cross-premises S2S VPN tunnels, and Azure VNet-to-VNet connections. -+ Last updated 01/26/2024
vpn-gateway Vpn Gateway About Forced Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-forced-tunneling.md
description: Learn how to configure forced tunneling for virtual networks create
-+ Last updated 06/09/2023
vpn-gateway Vpn Gateway About Point To Site Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-point-to-site-routing.md
description: Learn about Azure Point-to-Site VPN routing for different operating systems, remote access protocols, and virtual network configurations. -+ Last updated 07/28/2023
vpn-gateway Vpn Gateway About Skus Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-skus-legacy.md
Title: VPN Gateway legacy SKUs
description: How to work with the old virtual network gateway SKUs; Standard, and High Performance. -+ Last updated 08/06/2024
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
description: Learn about VPN devices and IPsec parameters for Site-to-Site cross-premises connections. Links are provided to configuration instructions and samples. -+ Last updated 10/06/2023
vpn-gateway Vpn Gateway Bgp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-bgp-overview.md
description: Learn about Border Gateway Protocol (BGP) in Azure VPN, the standard internet protocol to exchange routing and reachability information between networks. -+ Last updated 05/02/2023
See the VPN Gateway [BGP FAQ](vpn-gateway-vpn-faq.md#bgp) for frequently asked q
## Next steps
-See [How to configure BGP for Azure VPN Gateway](bgp-howto.md) for steps to configure BGP for your cross-premises and VNet-to-VNet connections.
+See [How to configure BGP for Azure VPN Gateway](bgp-howto.md) for steps to configure BGP for your cross-premises and VNet-to-VNet connections.
vpn-gateway Vpn Gateway Highlyavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-highlyavailable.md
description: Learn about highly available configuration options for VPN Gateway. -+ Last updated 07/24/2024
web-application-firewall Waf Front Door Configure Custom Response Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-configure-custom-response-code.md
description: Learn how to configure a custom response code and message when Azur
-+ Last updated 08/16/2022
web-application-firewall Waf Front Door Configure Ip Restriction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-configure-ip-restriction.md
description: Learn how to configure an Azure Web Application Firewall rule to re
-+ Last updated 05/29/2024
web-application-firewall Waf Front Door Custom Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules-powershell.md
description: Learn how to configure a web application firewall (WAF) policy that
-+ Last updated 09/05/2019
web-application-firewall Waf Front Door Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules.md
Title: Web application firewall custom rule for Azure Front Door
description: Learn how to use web application firewall (WAF) custom rules to protect your web applications from malicious attacks. -+ Last updated 05/31/2024
web-application-firewall Waf Front Door Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-monitor.md
description: Learn about Azure Web Application Firewall in Azure Front Door moni
-+ Last updated 05/23/2024
web-application-firewall Waf Front Door Policy Configure Bot Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md
description: Learn how to configure bot protection rule in Azure Web Application
-+ Last updated 11/10/2022
web-application-firewall Waf Front Door Policy Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-policy-settings.md
Title: Policy settings for Web Application Firewall in Azure Front Door
description: Learn about policy settings for Azure Web Application Firewall in Azure Front Door. -+ Last updated 10/12/2023
web-application-firewall Waf Front Door Rate Limit Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit-configure.md
Title: Configure a WAF rate-limit rule for Azure Front Door
description: Learn how to configure a rate-limit rule for an existing Azure Front Door endpoint. -+ Last updated 05/19/2023
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
CRS 3.0 includes 13 rule groups, as shown in the following table. Each group con
CRS 2.2.9 includes 10 rule groups, as shown in the following table. Each group contains multiple rules, which can be disabled. > [!NOTE]
-> CRS 2.2.9 is no longer supported for new WAF policies. We recommend you upgrade to the latest CRS version. CRS 2.2.9 can't be used along with CRS 3.2/DRS 2.1 and greater versions.
+> CRS 2.2.9 is no longer supported for new WAF policies. We recommend you upgrade to the latest CRS 3.2/DRS 2.1 and greater versions.
|Rule group name|Description| |||
The following rule groups and rules are available when using Web Application Fir
|942100|SQL Injection Attack Detected via libinjection| |942110|SQL Injection Attack: Common Injection Testing Detected| |942120|SQL Injection Attack: SQL Operator Detected|
+|942130|SQL Injection Attack: SQL Tautology Detected.|
|942140|SQL Injection Attack: Common DB Names Detected| |942150|SQL Injection Attack| |942160|Detects blind sqli tests using sleep() or benchmark().|
web-application-firewall Application Gateway Customize Waf Rules Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-customize-waf-rules-cli.md
Last updated 08/25/2023 -+ # Customize Web Application Firewall rules using the Azure CLI
web-application-firewall Application Gateway Customize Waf Rules Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-customize-waf-rules-portal.md
Last updated 11/07/2022 -+
web-application-firewall Application Gateway Customize Waf Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-customize-waf-rules-powershell.md
Last updated 11/14/2019 -+
web-application-firewall Associate Waf Policy Existing Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/associate-waf-policy-existing-gateway.md
Title: Associate a Web Application Firewall policy with an existing Azure Application Gateway description: Learn how to associate a Web Application Firewall policy with an existing Azure Application Gateway. -+
web-application-firewall Bot Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/bot-protection.md
Title: Configure bot protection for Azure Web Application Firewall (WAF) description: Learn how to configure bot protection for Web Application Firewall (WAF) on Azure Application Gateway. -+ Last updated 06/01/2023
Create a WAF policy for Application Gateway by following the instructions descri
## Next steps
-For more information about the Bot Manager rule set, see [Web Application Firewall CRS rule groups and rules](application-gateway-crs-rulegroups-rules.md?tabs=bot).
+For more information about the Bot Manager rule set, see [Web Application Firewall CRS rule groups and rules](application-gateway-crs-rulegroups-rules.md?tabs=bot).
web-application-firewall Configure Waf Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/configure-waf-custom-rules.md
description: Learn how to configure Web Application Firewall (WAF) v2 custom rul
-+ Last updated 05/21/2020
web-application-firewall Create Custom Waf Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/create-custom-waf-rules.md
Title: Create and use v2 custom rules
description: This article provides information on how to create Web Application Firewall (WAF) v2 custom rules in Azure Application Gateway. -+ Last updated 04/06/2023
web-application-firewall Custom Waf Rules Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/custom-waf-rules-overview.md
Title: Azure Web Application Firewall (WAF) v2 custom rules on Application Gateway description: This article provides an overview of Web Application Firewall (WAF) v2 custom rules on Azure Application Gateway. -+ Last updated 01/30/2024
web-application-firewall Geomatch Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/geomatch-custom-rules.md
Title: Azure Web Application Firewall (WAF) Geomatch custom rules description: This article is an overview of Web Application Firewall (WAF) geomatch custom rules on Azure Application Gateway. -+ Last updated 09/05/2023
web-application-firewall Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/log-analytics.md
description: This article shows you how you can use Azure Log Analytics to exami
-+ Last updated 08/14/2024
Once you create a query, you can add it to your dashboard. Select the **Pin to
## Next steps
-[Back-end health, resource logs, and metrics for Application Gateway](../../application-gateway/application-gateway-diagnostics.md)
+[Back-end health, resource logs, and metrics for Application Gateway](../../application-gateway/application-gateway-diagnostics.md)
web-application-firewall Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/policy-overview.md
Title: Azure Web Application Firewall (WAF) policy overview description: This article is an overview of Web Application Firewall (WAF) global, per-site, and per-URI policies. -+ Last updated 10/06/2023
web-application-firewall Web Application Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-logs.md
description: Learn how to enable and manage logs and for Azure Web Application F
-+ Last updated 08/24/2023