Updates from: 09/26/2024 01:07:39
Service Microsoft Docs article Related commit history on GitHub Change details
app-service Overview Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-private-endpoint.md
Using a private endpoint for your app enables you to:
A private endpoint is a special network interface (NIC) for your App Service app in a subnet in your virtual network. When you create a private endpoint for your app, it provides secure connectivity between clients on your private network and your app. The private endpoint is assigned an IP address from the IP address range of your virtual network.
-The connection between the private endpoint and the app uses a secure [Private Link](../private-link/private-link-overview.md). Private endpoint is only used for incoming traffic to your app. Outgoing traffic won't use this private endpoint. You can inject outgoing traffic to your network in a different subnet through the [virtual network integration feature](./overview-vnet-integration.md).
+The connection between the private endpoint and the app uses a secure [Private Link](../private-link/private-link-overview.md). Private endpoint is only used for incoming traffic to your app. Outgoing traffic doesn't use this private endpoint. You can inject outgoing traffic to your network in a different subnet through the [virtual network integration feature](./overview-vnet-integration.md).
-Each slot of an app is configured separately. You can plug up to 100 private endpoints per slot. You can't share a private endpoint between slots. The sub-resource name of a slot is `sites-<slot-name>`.
+Each slot of an app is configured separately. You can plug up to 100 private endpoints per slot. You can't share a private endpoint between slots. The subresource name of a slot is `sites-<slot-name>`.
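As an illustration (not from the source article), a private endpoint that targets a slot can be created with the Azure CLI by passing the slot subresource as the group ID. The resource names, slot name, and network values here are assumed:

```azurecli
# Hypothetical names throughout; the key detail is --group-id sites-<slot-name>.
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myapp-staging-pe \
  --vnet-name myVNet \
  --subnet mySubnet \
  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/myapp" \
  --group-id sites-staging \
  --connection-name myapp-staging-connection
```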
The subnet where you plug the private endpoint can have other resources in it; you don't need a dedicated empty subnet. You can also deploy the private endpoint in a different region than your app.
From a security perspective:

-- Private endpoint and public access can co-exist on an app. For more information, see [overview of access restrictions](./overview-access-restrictions.md#how-it-works)
+- Private endpoint and public access can coexist on an app. For more information, see [overview of access restrictions](./overview-access-restrictions.md#how-it-works)
- When you enable private endpoints to your app, ensure that public network access is disabled to ensure isolation.
- You can enable multiple private endpoints in other virtual networks and subnets, including virtual networks in other regions.
- The access restrictions rules of your app aren't evaluated for traffic through the private endpoint.
-- You can eliminate the data exfiltration risk from the virtual network by removing all NSG rules where destination is tag Internet or Azure services.
+- You can eliminate the data exfiltration risk from the virtual network by removing all Network Security Group (NSG) rules where the destination is the tag Internet or Azure services.
In the Web HTTP logs of your app, you can find the client source IP. This feature is implemented using the TCP Proxy protocol, which forwards the client IP property up to the app. For more information, see [Getting connection Information using TCP Proxy v2](../private-link/private-link-service-overview.md#getting-connection-information-using-tcp-proxy-v2).
## DNS
-When you use private endpoint for App Service apps, the requested URL must match the name of your app. By default mywebappname.azurewebsites.net (see [note at top](#dnl-note)).
+When you use private endpoint for App Service apps, the requested URL must match the name of your app. By default, this is `<app-name>.azurewebsites.net`. When you're using a [unique default hostname](#dnl-note), your app name has the format `<app-name>-<random-hash>.<region>.azurewebsites.net`. In the examples below, _mywebapp_ could also represent the full regionalized unique hostname.
-By default, without private endpoint, the public name of your web app is a canonical name to the cluster.
-For example, the name resolution is:
+By default, without private endpoint, the public name of your web app is a canonical name to the cluster. For example, the name resolution is:
|Name |Type |Value |
|--|--|--|
|mywebapp.azurewebsites.net|CNAME|mywebapp.privatelink.azurewebsites.net|<--Azure creates this CNAME entry in Azure Public DNS to point the app address to the private endpoint address|
|mywebapp.privatelink.azurewebsites.net|A|10.10.10.8|<--You manage this entry in your DNS system to point to your private endpoint IP address|
-After this DNS configuration, you can reach your app privately with the default name mywebappname.azurewebsites.net. You must use this name, because the default certificate is issued for *.azurewebsites.net.
+After this DNS configuration, you can reach your app privately with the default name mywebapp.azurewebsites.net. You must use this name, because the default certificate is issued for `*.azurewebsites.net`.
If you need to use a custom DNS name, you must add the custom name in your app, and you must validate it like any custom name, using public DNS resolution. For more information, see [custom DNS validation](./app-service-web-tutorial-custom-domain.md).
-For the Kudu console, or Kudu REST API (deployment with Azure DevOps self-hosted agents for example), you must create two records pointing to the private endpoint IP in your Azure DNS private zone or your custom DNS server. The first is for your app, the second is for the SCM of your app.
+For the Kudu console, or Kudu REST API (deployment with Azure DevOps Services self-hosted agents for example) you must create two records pointing to the private endpoint IP in your Azure DNS private zone or your custom DNS server. The first is for your app, the second is for the SCM of your app.
| Name | Type | Value |
|--|--|--|
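For the app and SCM records, here's a minimal Azure CLI sketch, assuming a linked Azure Private DNS zone and the example IP from the earlier table (the zone, record names, and address are assumptions):

```azurecli
# A record for the app itself.
az network private-dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name privatelink.azurewebsites.net \
  --record-set-name mywebapp \
  --ipv4-address 10.10.10.8

# A record for the SCM (Kudu) endpoint of the app.
az network private-dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name privatelink.azurewebsites.net \
  --record-set-name mywebapp.scm \
  --ipv4-address 10.10.10.8
```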
az appservice ase update --name myasename --allow-new-private-endpoint-connectio
## Specific requirements
-If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but you also automatically register the provider when you create the first web app in a subscription.
+If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) but you also automatically register the provider when you create the first web app in a subscription.
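For example, registration and a quick verification can be done with the Azure CLI, run against the subscription that contains the virtual network:

```azurecli
# Register the Microsoft.Web resource provider.
az provider register --namespace Microsoft.Web

# Verify the registration state; expected output is "Registered".
az provider show --namespace Microsoft.Web --query registrationState --output tsv
```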
## Pricing
For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
## Limitations
-* When you use Azure Function in Elastic Premium plan with private endpoint, to run or execute the function in Azure portal, you must have direct network access or you receive an HTTP 403 error. In other words, your browser must be able to reach the private endpoint to execute the function from the Azure portal.
+* When you use Azure Function in Elastic Premium plan with private endpoint, to run or execute the function in Azure portal you must have direct network access or you receive an HTTP 403 error. In other words, your browser must be able to reach the private endpoint to execute the function from the Azure portal.
* You can connect up to 100 private endpoints to a particular app.
* Remote Debugging functionality isn't available through the private endpoint. The recommendation is to deploy the code to a slot and remote debug it there.
* FTP access is provided through the inbound public IP address. Private endpoint doesn't support FTP access to the app.
* IP-Based SSL isn't supported with private endpoints.
-* Apps that you configure with private endpoints cannot receive public traffic coming from subnets with `Microsoft.Web` service endpoint enabled and cannot use [service endpoint-based access restriction rules](./overview-access-restrictions.md#access-restriction-rules-based-on-service-endpoints).
+* Apps that you configure with private endpoints can't receive public traffic coming from subnets with `Microsoft.Web` service endpoint enabled and can't use [service endpoint-based access restriction rules](./overview-access-restrictions.md#access-restriction-rules-based-on-service-endpoints).
* Private endpoint naming must follow the rules defined for resources of type `Microsoft.Network/privateEndpoints`. Naming rules can be found [here](../azure-resource-manager/management/resource-name-rules.md#microsoftnetwork).

We're improving the Azure Private Link feature and private endpoints regularly; check [this article](../private-link/private-endpoint-overview.md#limitations) for up-to-date information about limitations.
application-gateway Application Gateway Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-metrics.md
Application Gateway provides several built-in timing metrics related to the re
> > If there is more than one listener in the Application Gateway, then always filter by the *Listener* dimension while comparing different latency metrics in order to draw meaningful inferences.
+> [!NOTE]
+>
+> Latency might be observed in the metric data, as all metrics are aggregated at one-minute intervals. This latency may vary for different application gateway instances based on the metric start time.
+ You can use timing metrics to determine whether the observed slowdown is due to the client network, Application Gateway performance, the backend network and backend server TCP stack saturation, backend application performance, or large file size. For more information, see [Timing metrics](monitor-application-gateway-reference.md#timing-metrics-for-application-gateway-v2-sku). For example, if there's a spike in the *Backend first byte response time* trend but the *Backend connect time* trend is stable, you can infer that the application gateway to backend latency and the time taken to establish the connection are stable. The spike is caused by an increase in the response time of the backend application. On the other hand, if the spike in *Backend first byte response time* is associated with a corresponding spike in *Backend connect time*, you can deduce that either the network between Application Gateway and backend server or the backend server TCP stack is saturated.
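As a minimal sketch for pulling one of these timing metrics with the Azure CLI (the resource ID, metric name, and filter are assumptions to adapt to your environment):

```azurecli
# Assumed resource ID; replace with your Application Gateway's full resource ID.
APPGW_ID="/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway"

# Retrieve Backend connect time at one-minute grain, split by the Listener dimension
# as the note above recommends when comparing latency metrics.
az monitor metrics list \
  --resource "$APPGW_ID" \
  --metric "BackendConnectTime" \
  --interval PT1M \
  --aggregation Average \
  --filter "Listener eq '*'"
```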
application-gateway Proxy Buffers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/proxy-buffers.md
Previously updated : 08/03/2022 Last updated : 09/25/2024 #Customer intent: As a user, I want to know how can I disable/enable proxy buffers. # Configure Request and Response Proxy Buffers
-Azure Application Gateway Standard v2 SKU supports buffering Requests (from clients) or Responses (from the backend servers). Based on the processing capabilities of the clients that interact with your Application Gateway, you can use these buffers to configure the speed of packet delivery.
+Azure Application Gateway Standard v2 SKU supports buffering Requests (from clients) or Responses (from the backend servers). Based on the processing capabilities of the clients that interact with your application gateway, you can use these buffers to configure the speed of packet delivery.
## Response Buffer
-Application Gateway's Response buffer can collect all or parts of the response packets sent by the backend server, before delivering them to the clients. By default, the Response buffering is enabled on Application Gateway which is useful to accommodate slow clients. This setting allows you to conserve the backend TCP connections as they can be closed once Application Gateway receives complete response and work according to the client's processing speed. This way, your Application Gateway will continue to deliver the response as per client's pace.
+Application Gateway's response buffer can collect all or parts of the response packets sent by the backend server, before delivering them to the clients. By default, Response buffering is enabled on Application Gateway, which is useful to accommodate slow clients. This setting allows you to conserve backend TCP connections, because they can be closed once Application Gateway receives the complete response; delivery then proceeds according to the client's processing speed. This way, your Application Gateway continues to deliver the response at the client's pace.
## Request Buffer
-In a similar way, Application Gateway's Request buffer can temporarily store the entire or parts of the request body, and then forward a larger upload request at once to the backend server. By default, Request buffering setting is enabled on Application Gateway and is useful to offload the processing function of re-assembling the smaller packets of data on the backend server.
+In a similar way, Application Gateway's Request buffer can temporarily store the entire or parts of the request body, and then forward a larger upload request at once to the backend server. By default, Request buffering setting is enabled on Application Gateway and is useful to offload the processing function of reassembling the smaller packets of data on the backend server.
>[!NOTE]
->By default, both Request and Response buffers are enabled on your Application Gateway resource but you can choose to configure them separately. Further, the settings are applied at a resource level and cannot be managed separately for each listener.
+>By default, both Request and Response buffers are enabled on your Application Gateway resource but you can choose to configure them separately. Further, the settings are applied at a resource level and can't be managed separately for each listener.
-You can keep either the Request or Response buffer, enabled or disable, based on your requirements and/or the observed performance of the client systems that communicate with your Application Gateway.
+You can keep either the Request or Response buffer, enabled or disabled, based on your requirements and the observed performance of the client systems that communicate with your Application Gateway.
> [!WARNING]
->We strongly recommend that you test and evaluate the performance before rolling this out on the production gateways.
+> We strongly recommend that you test and evaluate the performance before rolling this out on the production gateways.
## How to change the buffer settings?
az network application-gateway update --name <gw-name> --resource-group <rg-name
az network application-gateway update --name <gw-name> --resource-group <rg-name> --set globalConfiguration.enableRequestBuffering=false
```
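To confirm the resulting configuration after either update, a quick check is possible (a sketch; the gateway and resource group names are placeholders):

```azurecli
# Show the current request/response buffer configuration of the gateway.
az network application-gateway show \
  --name <gw-name> \
  --resource-group <rg-name> \
  --query globalConfiguration
```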
+### PowerShell method
+
+**New application gateway**
+```PowerShell
+$AppGw02 = New-AzApplicationGateway -Name "ApplicationGateway02" -ResourceGroupName "ResourceGroup02" -Location $location -BackendAddressPools $pool -BackendHttpSettingsCollection $poolSetting01 -FrontendIpConfigurations $fipconfig -GatewayIpConfigurations $gipconfig -FrontendPorts $fp01 -HttpListeners $listener01 -RequestRoutingRules $rule01 -Sku $sku -EnableRequestBuffering:$false -EnableResponseBuffering:$false
+```
+**Update an existing application gateway**
+```PowerShell
+$appgw = Get-AzApplicationGateway -Name $appgwName -ResourceGroupName $rgname
+$appgw.EnableRequestBuffering = $false
+$appgw.EnableResponseBuffering = $false
+Set-AzApplicationGateway -ApplicationGateway $appgw
+```
+ ### ARM template method

```json
For reference, visit [Azure SDK for .NET](/dotnet/api/microsoft.azure.management
## Limitations

- API version 2020-01-01 or later should be used to configure buffers.
-- Currently, these changes are not supported through Portal and PowerShell.
-- Request buffering cannot be disabled if you are running the WAF SKU of Application Gateway. The WAF requires the full request to buffer as part of processing, therefore, even if you disable request buffering within Application Gateway the WAF will still buffer the request. Response buffering is not impacted by the WAF.
+- Currently, these changes aren't supported through the Azure portal.
+- Request buffering can't be disabled if you're running the WAF SKU of Application Gateway. The WAF requires the full request to buffer as part of processing; therefore, even if you disable request buffering within Application Gateway, the WAF still buffers the request. Response buffering isn't impacted by the WAF.
azure-app-configuration Concept Experimentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-experimentation.md
Benefits:
### For intelligent applications (for example, AI-based features)
-Objective: Accelerate General AI (Gen AI) adoption and optimize AI models and use cases through rapid experimentation.
+Objective: Accelerate Generative AI (Gen AI) adoption and optimize AI models and use cases through rapid experimentation.
Approach: Use experimentation to iterate quickly on AI models, test different scenarios, and determine effective approaches.
azure-functions Flex Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md
In Flex Consumption, many of the standard application settings and site configur
Keep these other considerations in mind when using Flex Consumption plan during the current preview:
-+ ** Host ** There is a 30 seconds timeout for the app initialization. If your function app takes longer than 30 seconds to start you will see gRPC related System.TimeoutException entries. This will be configurable and a more clear exception will be implemented as part of [this host work item](https://github.com/Azure/azure-functions-host/issues/10482).
-+ ** Durable Functions Performance ** Due to the per function scaling nature of Flex Consumption, to ensure the best performance for Durable Functions we recommend setting the [Always Ready instance count](./flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. Also, with the Azure Storage provider, consider reducing the [queue polling interval](./durable/durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less.
++ **Host**: There is a 30-second timeout for app initialization. If your function app takes longer than 30 seconds to start, you will see gRPC-related System.TimeoutException entries. This timeout will be configurable, and a clearer exception will be implemented as part of [this host work item](https://github.com/Azure/azure-functions-host/issues/10482).
++ **Durable Functions Performance**: Due to the per-function scaling nature of Flex Consumption, to ensure the best performance for Durable Functions we recommend setting the [Always Ready instance count](./flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. Also, with the Azure Storage provider, consider reducing the [queue polling interval](./durable/durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less.
+ **VNet Integration**: Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider). The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments` (see the sketch after this list).
+ **Triggers**: All triggers are fully supported except for Kafka and Azure SQL triggers. The Blob storage trigger only supports the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version.
-+ **Regions**:
- + Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions).
- + There is a temporary limitation where App Service quota limits for creating new apps are also being applied to Flex Consumption apps. If you see the following error "This region has quota of 0 instances for your subscription. Try selecting different region or SKU." please raise a support ticket so that your app creation can be unblocked.
-+ **Deployments**: These deployment-related features aren't currently supported:
- + Deployment slots
- + Continuous deployment using Azure DevOps Tasks (`AzureFunctionApp@2`)
- + Continuous deployment using GitHub Actions (`functions-action@v1`)
-+ **Scale**: The lowest maximum scale in preview is `40`. The highest currently supported value is `1000`.
++ **Regions**: Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions).
++ **Deployments**: Deployment slots are not currently supported.
++ **Scale**: The lowest maximum scale in preview is `40`. The highest currently supported value is `1000`.
+ **Managed dependencies**: [Managed dependencies in PowerShell](functions-reference-powershell.md#dependency-management) aren't supported by Flex Consumption. You must instead [define your own custom modules](functions-reference-powershell.md#custom-modules).
+ **Diagnostic settings**: Diagnostic settings are not currently supported.
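The VNet Integration item in the list above involves two concrete steps. As a hedged sketch with assumed resource names, they look like this in the Azure CLI:

```azurecli
# Enable the Microsoft.App resource provider for the subscription.
az provider register --namespace Microsoft.App

# Delegate the subnet (names assumed) to Microsoft.App/environments,
# the delegation Flex Consumption apps require.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myFlexSubnet \
  --delegations Microsoft.App/environments
```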
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
zone_pivot_groups: facility-ontology-schema
# Facility Ontology
+> [!NOTE]
+>
+> **Azure Maps Creator retirement**
+>
+> The Azure Maps Creator indoor map service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Creator](https://aka.ms/AzureMapsCreatorDeprecation).
+
Facility ontology defines how Azure Maps Creator internally stores facility data in a Creator dataset. In addition to defining internal facility data structure, facility ontology is also exposed externally through the WFS API. When WFS API is used to query facility data in a dataset, the response format is defined by the ontology supplied to that dataset.

## Changes and Revisions
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md
# Drawing conversion errors and warnings
+> [!NOTE]
+>
+> **Azure Maps Creator retirement**
+>
+> The Azure Maps Creator indoor map service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Creator](https://aka.ms/AzureMapsCreatorDeprecation).
+ The Azure Maps [Conversion service] lets you convert uploaded drawing packages into map data. Drawing packages must adhere to the [Drawing package requirements]. If one or more requirements aren't met, then the Conversion service returns errors or warnings. This article lists the conversion error and warning codes, with recommendations on how to resolve them. It also provides some examples of drawings that can cause the Conversion service to return these codes. The Conversion service succeeds even if there are conversion warnings. However, it's recommended that you review and resolve all warnings. A warning means part of the conversion was ignored or automatically fixed. Failing to resolve the warnings could result in errors in later processes.
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
# Using the Azure Maps Drawing Error Visualizer with Creator
+> [!NOTE]
+>
+> **Azure Maps Creator retirement**
+>
+> The Azure Maps Creator indoor map service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Creator](https://aka.ms/AzureMapsCreatorDeprecation).
+
The *Drawing Error Visualizer* is a stand-alone web application that displays [Drawing package warnings and errors] detected during the conversion process. The Error Visualizer web application consists of a static page that you can use without connecting to the internet. You can use the Error Visualizer to fix errors and warnings in accordance with [Drawing package requirements]. The [Azure Maps Conversion API] returns a response with a link to the Error Visualizer only when an error is detected.

## Prerequisites
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
zone_pivot_groups: drawing-package-version
# Drawing package requirements
+> [!NOTE]
+>
+> **Azure Maps Creator retirement**
+>
+> The Azure Maps Creator indoor map service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Creator](https://aka.ms/AzureMapsCreatorDeprecation).
+
:::zone pivot="drawing-package-v1"

You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the sample [Drawing package].
azure-maps Release Notes Spatial Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-spatial-module.md
# Spatial IO Module release notes
-> [!NOTE]
->
-> **Azure Maps Spatial service retirement**
->
-> The Azure Maps Spatial service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Spatial](https://aka.ms/AzureMapsSpatialDeprecation).
-
This document contains information about new features and other changes to the Azure Maps Spatial IO Module.

## [0.1.8] (February 22, 2024)
azure-maps Spatial Io Add Simple Data Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md
# Add a simple data layer
-> [!NOTE]
->
-> **Azure Maps Spatial service retirement**
->
-> The Azure Maps Spatial service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Spatial](https://aka.ms/AzureMapsSpatialDeprecation).
-
The spatial IO module provides a `SimpleDataLayer` class. This class makes it easy to render styled features on the map. It can even render data sets that have style properties and data sets that contain mixed geometry types. The simple data layer achieves this functionality by wrapping multiple rendering layers and using style expressions. The style expressions search for common style properties of the features inside these wrapped layers. The `atlas.io.read` function and the `atlas.io.write` function use these properties to read and write styles into a supported file format. After adding the properties to a supported file format, the file can be used for various purposes. For example, the file can be used to display the styled features on the map.

In addition to styling features, the `SimpleDataLayer` provides a built-in popup feature with a popup template. The popup displays when a feature is selected. The default popup feature can be disabled, if desired. This layer also supports clustered data. When a cluster is clicked, the map zooms into the cluster and expands it into individual points and subclusters.
azure-maps Spatial Io Connect Wfs Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md
# Connect to a WFS service
-> [!NOTE]
->
-> **Azure Maps Spatial service retirement**
->
-> The Azure Maps Spatial service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Spatial](https://aka.ms/AzureMapsSpatialDeprecation).
- A Web Feature Service (WFS) is a web service for querying spatial data that has a standardized API defined by the Open Geospatial Consortium (OGC). The `WfsClient` class in the spatial IO module lets developers connect to a WFS service and query data from the service. The `WfsClient` class supports the following features:
azure-maps Spatial Io Core Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-core-operations.md
# Core IO operations
-> [!NOTE]
->
-> **Azure Maps Spatial service retirement**
->
-> The Azure Maps Spatial service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Spatial](https://aka.ms/AzureMapsSpatialDeprecation).
- In addition to providing tools to read spatial data files, the spatial IO module exposes core underlying libraries to read and write XML and delimited data fast and efficiently. The `atlas.io.core` namespace contains two low-level classes that can quickly read and write CSV and XML data. These base classes power the spatial data readers and writers in the Spatial IO module. Feel free to use them to add more reading and writing support for CSV or XML files.
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md
# Read and write spatial data
-> [!NOTE]
->
-> **Azure Maps Spatial service retirement**
->
-> The Azure Maps Spatial service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Spatial](https://aka.ms/AzureMapsSpatialDeprecation).
-
The following table lists the spatial file formats that are supported for reading and writing operations with the Spatial IO module.

| Data Format | Read | Write |
azure-maps Spatial Io Supported Data Format Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-supported-data-format-details.md
# Supported data format details
-> [!NOTE]
->
-> **Azure Maps Spatial service retirement**
->
-> The Azure Maps Spatial service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Spatial](https://aka.ms/AzureMapsSpatialDeprecation).
-
This article provides specifics on the read and write support for all XML tags and Well-Known Text geometry types. It also details how the delimited spatial data is parsed in the spatial IO module.

## Supported XML namespaces
azure-netapp-files Configure Customer Managed Keys Hardware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys-hardware.md
na Previously updated : 09/05/2024 Last updated : 09/25/2024 # Configure customer-managed keys with managed Hardware Security Module for Azure NetApp Files volume encryption
Azure NetApp Files volume encryption with customer-managed keys with the managed
* Australia East
* Brazil South
* Canada Central
+* Central India
* Central US
* East Asia
* East US
Azure NetApp Files volume encryption with customer-managed keys with the managed
* North Europe
* Norway East
* Norway West
+* Qatar Central
* South Africa North
* South Central US
+* South India
* Southeast Asia
* Spain Central
* Sweden Central
Azure NetApp Files volume encryption with customer-managed keys with the managed
* UAE Central
* UAE North
* UK South
+* West Europe
* West US
* West US 2
* West US 3
azure-resource-manager Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/variables.md
Title: Variables in Bicep
description: Describes how to define variables in Bicep Previously updated : 08/20/2024 Last updated : 09/25/2024 # Variables in Bicep
This article describes how to define and use variables in your Bicep file. You u
Resource Manager resolves variables before starting the deployment operations. Wherever the variable is used in the Bicep file, Resource Manager replaces it with the resolved value.
-You're limited to 256 variables in a Bicep file. For more information, see [Template limits](../templates/best-practices.md#template-limits).
+You're limited to 512 variables in a Bicep file. For more information, see [Template limits](../templates/best-practices.md#template-limits).
## Define variables
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/best-practices.md
Title: Best practices for templates
description: Describes recommended approaches for authoring Azure Resource Manager templates (ARM templates). Offers suggestions to avoid common problems when using templates. Previously updated : 09/22/2023 Last updated : 09/25/2024 # ARM template best practices
Limit the size of your template to 4 MB, and each resource definition to 1 MB. T
You're also limited to:

* 256 parameters
-* 256 variables
+* 512 variables
* 800 resources (including [copy count](copy-resources.md))
* 64 output values
* 10 unique locations per subscription/tenant/management group scope
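A template that exceeds these limits typically fails preflight validation, so you can catch it before deploying. A minimal sketch with assumed names:

```azurecli
# Validate the deployment (including template limits) without actually deploying it.
az deployment group validate \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json
```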
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md
Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 09/11/2024 Last updated : 09/25/2024
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
## Support for multistreaming data backups

-- **Supported HANA versions**: SAP HANA 2.0 SP05 and prior.
- **Parameters to enable SAP HANA settings for multistreaming**:
  - *parallel_data_backup_backint_channels*
  - *data_backup_buffer_size (optional)*
backup Geo Code List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/geo-code-list.md
This sample XML provides you an insight about the geo-codes mapped with the resp
<GeoCodeRegionNameMap GeoCode="gec" RegionName="Germany Central" /> <GeoCodeRegionNameMap GeoCode="gne" RegionName="Germany Northeast" /> <GeoCodeRegionNameMap GeoCode="krc" RegionName="Korea Central" />
- <GeoCodeRegionNameMap GeoCode="frc" RegionName="France Central" />
+ <GeoCodeRegionNameMap GeoCode="fc" RegionName="France Central" />
<GeoCodeRegionNameMap GeoCode="frs" RegionName="France South" /> <GeoCodeRegionNameMap GeoCode="krs" RegionName="Korea South" /> <GeoCodeRegionNameMap GeoCode="ugt" RegionName="USGov Texas" />
This sample XML provides you an insight about the geo-codes mapped with the resp
## Next steps
-[Learn](../private-endpoints.md) to create and add DNS zones for Azure Backup private endpoints using geo-codes.
+[Learn](../private-endpoints.md) to create and add DNS zones for Azure Backup private endpoints using geo-codes.
cdn Classic Cdn Retirement Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/classic-cdn-retirement-faq.md
+
+ Title: Azure CDN Standard from Microsoft (classic) retirement FAQ
+
+description: Common questions about the retirement of Azure CDN Standard from Microsoft (classic).
++++ Last updated : 09/26/2024++++
+# Azure CDN Standard from Microsoft (classic) retirement FAQ
+
+Azure Front Door introduced two new tiers named Standard and Premium on March 29, 2022. These tiers offer improvements over the current product offerings of Azure CDN Standard from Microsoft (classic), incorporating capabilities such as Azure Private Link integration, advanced Web Application Firewall (WAF) enhancements with DRS 2.1, anomaly scoring-based detection and bot management, out-of-the-box reports and enhanced diagnostic logs, a simplified pricing model, and much more.
+
+In our ongoing efforts to provide the best product experience and streamline our portfolio of products and tiers, we're announcing the retirement of the Azure CDN Standard from Microsoft (classic) tier. This retirement will affect the public cloud and the Azure Government regions of Arizona and Texas, effective September 30, 2027. We strongly recommend all users of Azure CDN Standard from Microsoft (classic) to transition to Azure Front Door Standard and Premium.
+
+## Frequently asked questions
+
+### When is the retirement for Azure CDN Standard from Microsoft (classic)?
+
+Azure CDN Standard from Microsoft (classic) will be retired on September 30, 2027.
+
+### Why is Azure CDN Standard from Microsoft (classic) being retired?
+
+Azure CDN Standard from Microsoft (classic) is a legacy Content Delivery Network service that provides static caching capabilities. In March 2022, we announced the general availability of Azure Front Door Standard and Premium. These new tiers serve as a modern Content Delivery Network platform that supports both dynamic and static scenarios with enhanced Web Application Firewall capabilities, Private Link integration, simplified pricing model and many more enhancements. As part of our plans to offer the best product experience and simplify our product portfolio, we're announcing the retirement of Azure CDN Standard from Microsoft (classic) tier.
+
+### What advantages does migrating to Azure Front Door Standard or Premium tier offer?
+
+Azure Front Door Standard and Premium tiers represent the enhanced versions of Azure CDN Standard from Microsoft (classic). They maintain the same Service Level Agreement (SLA) and offer more benefits, including:
+
+* A unified static and dynamic delivery platform, with simplified cost model.
+* Enhanced security features, such as [Private Link integration](../frontdoor/private-link.md), advanced WAF enhancements with DRS 2.1, anomaly scoring based detection and bot management, and many more to come.
+* Deep integration with Azure services to deliver secure, accelerated, and user friendly end-to-end cloud solutions. These integrations include:
+ * DNS deterministic name library integrations to prevent subdomain takeover
+ * [Prevalidated domain integration with PaaS service with one-time domain validation](../frontdoor/standard-premium/how-to-add-custom-domain.md#associate-the-custom-domain-with-your-azure-front-door-endpoint).
+ * [One-click enablement on Static Web Apps](../static-web-apps/front-door-manual.md)
+ * Use [managed identities](../frontdoor/managed-identity.md) to access Azure Key Vault certificates
+ * Azure Advisor integration to provide best practice recommendations
+* Improved capabilities such as simplified, more flexible [rules engine](../frontdoor/front-door-rules-engine.md) with regular expressions and server variables, enhanced and richer [analytics](../frontdoor/standard-premium/how-to-reports.md) and [logging](../frontdoor/front-door-diagnostics.md) capabilities, and more.
+* The ability to update separate resources without updating the whole Azure Front Door instance through DevOps tools.
+* Access to all future features and updates on Azure Front Door Standard and Premium tier.
+
+For more information about supported features, see [comparison between Azure Front Door and Azure CDN services](../frontdoor/front-door-cdn-comparison.md).
+
+### How does the performance of the Azure Front Door Standard or Premium tier compare to that of Azure CDN Standard from Microsoft (classic)?
+
+The Azure Front Door Standard and Premium tiers have the same Service Level Agreement (SLA). Our goal is to ensure Azure Front Door Standard and Premium delivers optimal performance and reliability.
+
+### What will happen after September 30, 2027 when the service is retired?
+
+After the service is retired, you'll lose the ability to:
+* Create or manage Azure CDN Standard from Microsoft (classic) resources.
+* Access the data through the Azure portal or the APIs/SDKs/client tools.
+* Receive service updates to Azure CDN Standard from Microsoft (classic) or APIs/SDKs/Client tools.
+* Receive support for issues on Azure CDN Standard from Microsoft (classic) through phone, email, or web.
+
+### How can the migration be completed without causing downtime to my applications? Where can I learn more about the migration to Azure Front Door Standard or Premium?
+
+We offer a zero-downtime migration tool. The following resources are available to assist you in understanding and performing the migration process:
+
+* Familiarize yourself with the [zero-downtime migration tool](tier-migration.md). It's important to pay attention to the sections **Breaking changes when migrating to Standard or Premium tier** and **resource mapping**.
+* Learn the process of migrating from Azure CDN Standard from Microsoft (classic) to Standard or Premium tier using the [Azure portal](migrate-tier.md).
+
+### How will migrating to Azure Front Door Standard or Premium affect the Total Cost Ownership (TCO)?
+
+For more information, see the [pricing comparison](../frontdoor/compare-cdn-front-door-price.md) between Azure Front Door tiers.
+
+### Which clouds does Azure CDN Standard from Microsoft (classic) retirement apply to?
+
+Currently, Azure CDN Standard from Microsoft (classic) retirement affects the public cloud and Azure Government in the regions of Arizona and Texas.
+
+### Can I make updates to Azure CDN Standard from Microsoft (classic) resources?
+
+You can still update your existing Azure CDN Standard from Microsoft (classic) resources using the Azure portal, Terraform, and all command line tools until September 30, 2027. However, you won't be able to create new Azure CDN Standard from Microsoft (classic) resources starting October 1, 2025. We strongly recommend you migrate to Azure Front Door Standard or Premium tier as soon as possible.
+
+### Can I roll back to Azure CDN Standard from Microsoft (classic) after migration?
+
+No, once migration is completed successfully, it can't be rolled back to classic. If you encounter any issues, you can raise a support ticket for assistance.
+
+### How will the Azure CDN Standard from Microsoft (classic) resources be handled after migration?
+
+We recommend you delete the Azure CDN Standard from Microsoft (classic) resource once migration successfully completes. Azure Front Door sends notifications through Azure Advisor to remind users to delete the migrated classic resources.
+
+### What are the available resources for support and feedback?
+
+If you have a support plan and you need technical assistance, you can create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) with the following information:
+
+* *Issue type*, select **Technical**.
+* *Subscription*, select the subscription you need assistance with.
+* *Service*, select **My services**, then select **Azure CDN**.
+* *Resource*, select the **Azure CDN resource**.
+* *Summary*, describe the problem you're experiencing with the migration.
+* *Problem type*, select **Migrating Microsoft CDN to Front Door Standard or Premium**
+
+## Next steps
+
+- Migrate from Azure CDN Standard from Microsoft (classic) to Standard or Premium tier using the [Azure portal](migrate-tier.md)
cdn Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/migrate-tier.md
Azure Front Door Standard and Premium tier bring the latest cloud delivery netwo
1. Select **Validate** to see if your Azure CDN from Microsoft (classic) profile is compatible for migration. Validation can take up to two minutes depending on the complexity of your CDN profile.
- :::image type="content" source="./media/migrate-tier/validate.png" alt-text="Screenshot of the validated compatibility section of the migration page.":::
+ :::image type="content" source="./media/migrate-tier/validate-cdn-profile.png" alt-text="Screenshot of the validated compatibility section of the migration page.":::
If the migration isn't compatible, you can select **View errors** to see the list of errors, and recommendations to resolve them.
Azure Front Door Standard and Premium tier bring the latest cloud delivery netwo
> [!NOTE] > If your Azure CDN from Microsoft (classic) profile can be migrated to the Standard tier but the number of resources exceeds the Standard tier limits, you'll be migrated to the Premium tier.
- :::image type="content" source="./media/migrate-tier/prepare-tier.png" alt-text="Screenshot of the selected tier for the new Front Door profile.":::
+ :::image type="content" source="./media/migrate-tier/prepare-for-migration.png" alt-text="Screenshot of the selected tier for the new Front Door profile.":::
1. You need to change the endpoint name if the CDN endpoint name length exceeds the maximum of 46 characters. This isn't required if the endpoint name is within the character limit. For more information, see [Azure Front Door endpoints](../frontdoor/endpoint.md). Since the maximum endpoint length for Azure Front Door is 64 characters, Azure adds a 16 character hash to the end of the endpoint name to ensure uniqueness and to prevent subdomain takeovers.
Azure Front Door Standard and Premium tier bring the latest cloud delivery netwo
1. Select the link that appears to view the configuration of the new Front Door profile. At this time, you can review each of the settings for the new profile to ensure all settings are correct. Once you're done reviewing the read-only profile, select the **X** in the top right corner of the page to go back to the migration screen.
- :::image type="content" source="./media/migrate-tier/verify-new-profile.png" alt-text="Screenshot of the link to view the new read-only Front Door profile.":::
+ :::image type="content" source="./media/migrate-tier/preparation-success.png" alt-text="Screenshot of the link to view the new read-only Front Door profile.":::
## Enable managed identities
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
[!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)]
-You can use the Azure Communication Services Calling SDK to add Enhanced Emergency dialing and Public Safety Answering Point (PSAP) callback support to your applications in the United States (US), Puerto Rico (PR), the United Kingdom (GB), Canada (CA), and Denmark (DK). The capability to dial 911 (in US, PR, and CA), to dial 112 (in DK), and to dial 999 or 112 (in GB) and receive a callback might be a requirement for your application. Verify the emergency calling requirements with your legal counsel.
+You can use the Azure Communication Services Calling SDK to add Enhanced Emergency dialing and Public Safety Answering Point (PSAP) callback support to your applications in the United States (US), Puerto Rico (PR), the United Kingdom (GB), Canada (CA), Denmark (DK), and Australia (AU). The capability to dial 911 (in US, PR, and CA), to dial 112 (in DK), to dial 000 (in AU), and to dial 999 or 112 (in GB) and receive a callback might be a requirement for your application. Verify the emergency calling requirements with your legal counsel.
-Calls to an emergency number are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when a user places an emergency call from US, PR, GB, CA, or DK. Microsoft temporarily maintains a mapping of the phone number to the caller's identity.
+Calls to an emergency number are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when a user places an emergency call from US, PR, GB, CA, DK, or AU. Microsoft temporarily maintains a mapping of the phone number to the caller's identity.
If there's a callback from the PSAP, Microsoft routes the call directly to the originating caller. The caller can accept the incoming PSAP call even if inbound calling is disabled.
The service is available for Microsoft phone numbers. It requires the Azure reso
## Call flow
-1. An Azure Communication Services user identity dials an emergency number by using the Calling SDK from US or PR.
+1. An Azure Communication Services user identity dials an emergency number by using the Calling SDK.
1. Microsoft validates that the Azure resource has a Microsoft phone number enabled for outbound dialing.
1. The Microsoft Azure Communication Services emergency service replaces the user's phone number (the `alternateCallerId` value) with a temporary unique phone number. This number allocation remains in place for at least 60 minutes from the time that the emergency number is dialed.
1. Microsoft maintains a temporary record (for about 60 minutes) that maps the unique phone number to the user identity.
Emergency calling is automatically enabled for all users of the Azure Communicat
- Microsoft uses the ISO 3166-1 alpha-2 standard for country/region codes.
- - Microsoft supports US, PR, GB, CA, and DK country/region codes for emergency number dialing.
   - Microsoft supports US, PR, GB, CA, DK, and AU country/region codes for emergency number dialing.
- If you don't provide the country/region code to the SDK, Microsoft uses the IP address to determine the country or region of the caller.
There's also an option to use a purchased number as a caller ID for direct routi
Try these quickstarts:

- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
-- [Add emergency calling to your app](../../quickstarts/telephony/emergency-calling.md)
+- [Add emergency calling to your app](../../quickstarts/telephony/emergency-calling.md)
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
For more information, review the following documentation:
[azure-event-grid-publisher-doc]: /azure/logic-apps/connectors/built-in/reference/eventgridpublisher/ "Connect to Azure Event Grid for event-based programming using pub-sub semantics"
[azure-event-hubs-doc]: /azure/logic-apps/connectors/built-in/reference/eventhub/ "Connect to Azure Event Hubs so that you can receive and send events between logic app workflows and Event Hubs"
[azure-file-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azurefile/ "Connect to Azure File Storage so you can create and manage files in your Azure storage account"
-[azure-functions-doc]: ../logic-apps/logic-apps-azure-functions.md "Integrate logic app workflows with Azure Functions"
+[azure-functions-doc]: ../logic-apps/call-azure-functions-from-workflows.md "Integrate logic app workflows with Azure Functions"
[azure-key-vault-doc]: /azure/logic-apps/connectors/built-in/reference/keyvault/ "Connect to Azure Key Vault to securely store, access, and manage secrets"
[azure-openai-doc]: https://techcommunity.microsoft.com/t5/azure-integration-services-blog/public-preview-of-azure-openai-and-ai-search-in-app-connectors/ba-p/4049584 "Connect to Azure OpenAI to perform operations on large language models"
[azure-queue-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azurequeues/ "Connect to Azure Storage so you can create and manage queue entries and queues"
connectors File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/file-system.md
The File System connector has different versions, based on [logic app type and h
The following example shows the connection information for the File System managed connector trigger:
- ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/file-system-connection-consumption.png)
+ ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/file-system/file-system-connection-consumption.png)
1. When you're done, select **Create**.
The File System connector has different versions, based on [logic app type and h
For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
- ![Screenshot showing Consumption workflow designer and the trigger named When a file is created.](media/connect-file-systems/trigger-file-system-when-file-created-consumption.png)
+ ![Screenshot showing Consumption workflow designer and the trigger named When a file is created.](media/file-system/trigger-file-system-when-file-created-consumption.png)
1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
- ![Screenshot showing Consumption workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/connect-file-systems/trigger-file-system-send-email-consumption.png)
+ ![Screenshot showing Consumption workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/file-system/trigger-file-system-send-email-consumption.png)
> [!TIP] >
The following steps apply only to Standard logic app workflows in an App Service
The following example shows the connection information for the File System built-in connector trigger:
- ![Screenshot showing Standard workflow designer and connection information for File System built-in connector trigger.](media/connect-file-systems/trigger-file-system-connection-built-in-standard.png)
+ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector trigger.](media/file-system/trigger-file-system-connection-built-in-standard.png)
1. When you're done, select **Create**.
The following steps apply only to Standard logic app workflows in an App Service
For this example, select the folder path on your file system server to check for a newly added file. Specify how often you want to check.
- ![Screenshot showing Standard workflow designer and information for the trigger named When a file is added.](media/connect-file-systems/trigger-when-file-added-built-in-standard.png)
+ ![Screenshot showing Standard workflow designer and information for the trigger named When a file is added.](media/file-system/trigger-when-file-added-built-in-standard.png)
1. To test your workflow, add an Outlook action that sends you an email when a file is added to the file system in specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
- ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is added, and action named Send an email.](media/connect-file-systems/trigger-send-email-built-in-standard.png)
+ ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is added, and action named Send an email.](media/file-system/trigger-send-email-built-in-standard.png)
> [!TIP] >
If successful, your workflow sends an email about the new file.
The following example shows the connection information for the File System managed connector trigger:
- ![Screenshot showing Standard workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/trigger-file-system-connection-managed-standard.png)
+ ![Screenshot showing Standard workflow designer and connection information for File System managed connector trigger.](media/file-system/trigger-file-system-connection-managed-standard.png)
1. When you're done, select **Create**.
If successful, your workflow sends an email about the new file.
For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
- ![Screenshot showing Standard workflow designer and managed connector trigger named When a file is created.](media/connect-file-systems/trigger-when-file-created-managed-standard.png)
+ ![Screenshot showing Standard workflow designer and managed connector trigger named When a file is created.](media/file-system/trigger-when-file-created-managed-standard.png)
1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
- ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/connect-file-systems/trigger-send-email-managed-standard.png)
+ ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/file-system/trigger-send-email-managed-standard.png)
> [!TIP] >
The example logic app workflow starts with the [Dropbox trigger](/connectors/dro
The following example shows the connection information for the File System managed connector action:
- ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/file-system-connection-consumption.png)
+ ![Screenshot showing connection information for File System managed connector action.](media/file-system/file-system-connection-consumption.png)
1. When you're done, select **Create**.
The example logic app workflow starts with the [Dropbox trigger](/connectors/dro
For this example, select the folder path on your file system server to use, which is the root folder here. Enter the file name and content, based on the file uploaded to Dropbox.
- ![Screenshot showing Consumption workflow designer and the File System managed connector action named Create file.](media/connect-file-systems/action-file-system-create-file-consumption.png)
+ ![Screenshot showing Consumption workflow designer and the File System managed connector action named Create file.](media/file-system/action-file-system-create-file-consumption.png)
> [!TIP] >
The example logic app workflow starts with the [Dropbox trigger](/connectors/dro
1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
- ![Screenshot showing Consumption workflow designer, managed connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-consumption.png)
+ ![Screenshot showing Consumption workflow designer, managed connector "Create file" action, and "Send an email" action.](media/file-system/action-file-system-send-email-consumption.png)
1. When you're done, save your workflow.
These steps apply only to Standard logic apps in an App Service Environment v3 w
The following example shows the connection information for the File System built-in connector action:
- ![Screenshot showing Standard workflow designer and connection information for File System built-in connector action.](media/connect-file-systems/action-file-system-connection-built-in-standard.png)
+ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector action.](media/file-system/action-file-system-connection-built-in-standard.png)
Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
These steps apply only to Standard logic apps in an App Service Environment v3 w
When you're done, the **File Content** trigger output appears in the **File content** parameter:
- ![Screenshot showing Standard workflow designer and the File System built-in connector "Create file" action.](media/connect-file-systems/action-file-system-create-file-built-in-standard.png)
+ ![Screenshot showing Standard workflow designer and the File System built-in connector "Create file" action.](media/file-system/action-file-system-create-file-built-in-standard.png)
1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
- ![Screenshot showing Standard workflow designer, built-in connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-built-in-standard.png)
+ ![Screenshot showing Standard workflow designer, built-in connector "Create file" action, and "Send an email" action.](media/file-system/action-file-system-send-email-built-in-standard.png)
1. When you're done, save your workflow.
If successful, your workflow creates a file on your file system server, based on
The following example shows the connection information for the File System managed connector action:
- ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/action-file-system-connection-managed-standard.png)
+ ![Screenshot showing connection information for File System managed connector action.](media/file-system/action-file-system-connection-managed-standard.png)
Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
If successful, your workflow creates a file on your file system server, based on
When you're done, the **File Content** trigger output appears in the **File content** parameter:
- ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/connect-file-systems/action-file-system-create-file-managed-standard.png)
+ ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/file-system/action-file-system-create-file-managed-standard.png)
1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
- ![Screenshot showing Standard workflow designer, managed connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-managed-standard.png)
+ ![Screenshot showing Standard workflow designer, managed connector "Create file" action, and "Send an email" action.](media/file-system/action-file-system-send-email-managed-standard.png)
1. When you're done, save your workflow.
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
For more information, review the following documentation:
[azure-sql-data-warehouse-doc]: /connectors/sqldw/ "Connect to Azure Synapse Analytics so that you can view your data"
[azure-table-storage-doc]: /connectors/azuretables/ "Connect to your Azure Storage account so that you can create, update, and query tables and more"
[biztalk-server-doc]: /connectors/biztalk/ "Connect to your BizTalk Server so that you can run BizTalk-based applications side by side with Azure Logic Apps"
-[file-system-doc]: ../logic-apps/logic-apps-using-file-connector.md "Connect to an on-premises file system"
+[file-system-doc]: file-system.md "Connect to an on-premises file system"
[ftp-doc]: ./connectors-create-api-ftp.md "Connect to an FTP / FTPS server for FTP tasks, like uploading, getting, deleting files, and more"
[github-doc]: ./connectors-create-api-github.md "Connect to GitHub and track issues"
[google-calendar-doc]: ./connectors-create-api-googlecalendar.md "Connects to Google Calendar and can manage calendar"
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
The following example creates a subscription named *Dev Team subscription* for t
### [REST](#tab/rest)

```json
-PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01
+PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
```

### Request body
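
For context, here's a sketch of the request body for this alias call. The property names follow the Microsoft.Subscription aliases API; the billing scope path and values are placeholders to replace with your own:

```json
{
  "properties": {
    "displayName": "Dev Team subscription",
    "billingScope": "/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/billingProfiles/<billingProfileName>/invoiceSections/<invoiceSectionName>",
    "workload": "Production"
  }
}
```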
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md
Previously updated : 08/13/2024 Last updated : 09/04/2024 # Copy and transform data in Azure Cosmos DB for NoSQL by using Azure Data Factory
You can also store the service principal key in Azure Key Vault.
### <a name="managed-identity"></a> System-assigned managed identity authentication

>[!NOTE]
->Currently, the system-assigned managed identity authentication is not supported in data flow.
+>Currently, the system-assigned managed identity authentication is supported in data flows through the use of advanced properties in JSON format.
A data factory or Synapse pipeline can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity), which represents this specific service instance. You can directly use this managed identity for Azure Cosmos DB authentication, similar to using your own service principal. It allows this designated resource to access and copy data to or from your Azure Cosmos DB instance.
These properties are supported for the linked service:
| accountEndpoint | Specify the account endpoint URL for the Azure Cosmos DB instance. | Yes |
| database | Specify the name of the database. | Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. | No |
+| subscriptionId | Specify the subscription ID for the Azure Cosmos DB instance. | No for Copy Activity, Yes for Mapping Data Flow |
+| tenantId | Specify the tenant ID for the Azure Cosmos DB instance. | No for Copy Activity, Yes for Mapping Data Flow |
+| resourceGroup | Specify the resource group name for the Azure Cosmos DB instance. | No for Copy Activity, Yes for Mapping Data Flow |
**Example:**
These properties are supported for the linked service:
"type": "CosmosDb", "typeProperties": { "accountEndpoint": "<account endpoint>",
- "database": "<database name>"
+ "database": "<database name>",
+ "subscriptionId": "<subscription id>",
+ "tenantId": "<tenant id>",
+ "resourceGroup": "<resource group>"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
data-factory Connector Microsoft Fabric Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-warehouse.md
Previously updated : 02/23/2024 Last updated : 09/04/2024 # Copy and transform data in Microsoft Fabric Warehouse using Azure Data Factory or Azure Synapse Analytics
This Microsoft Fabric Warehouse connector is supported for the following capabil
| Supported capabilities | IR | Managed private endpoint |
| --- | --- | --- |
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ |
|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ |
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ |
|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|✓ |
To use this feature, create an [Azure Blob Storage linked service](connector-azu
] ```
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read and write to tables from Microsoft Fabric Warehouse.
+For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
+
+### Microsoft Fabric Warehouse as the source
+Settings specific to Microsoft Fabric Warehouse are available in the Source Options tab of the source transformation.
+
+| Name | Description | Required | Allowed Values | Data flow script property |
+| : | :-- | :- |:-- |:- |
+| Input | Select whether you point your source at a table (equivalent of `Select * from tablename`), enter a custom SQL query, or retrieve data from a stored procedure. **Query**: If you select Query in the input field, enter a SQL query for your source. This setting overrides any table that you've chosen in the dataset. **Order By** clauses aren't supported here, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table. This query produces a source table that you can use in your data flow. Using queries is also a great way to reduce rows for testing or for lookups. SQL example: `Select * from MyTable where customerId > 1000 and customerId < 2000` | Yes | Table or Query or Stored Procedure | format: 'table' |
+| Batch size | Enter a batch size to chunk large data into reads. In data flows, this setting is used to set Spark columnar caching. This field is optional; if left blank, Spark defaults are used. | No | Numeric values | batchSize: 1234|
+| Isolation Level | The default for SQL sources in mapping data flow is read uncommitted. You can change the isolation level here to one of these values: Read Committed, Read Uncommitted, Repeatable Read, Serializable, or None (ignore isolation level). | Yes | Read Committed, Read Uncommitted, Repeatable Read, Serializable, None (ignore isolation level) | isolationLevel |
+
+>[!NOTE]
+>Read via staging is not supported. CDC support for Microsoft Fabric Warehouse source is currently not available.
+
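To see how these options map to the script behind the designer, here's a minimal sketch of a source transformation that reads with a custom query. This sketch isn't taken from the article; the stream name, table, and isolation level value are illustrative:

```
source(allowSchemaDrift: true,
    validateSchema: false,
    query: 'select * from MyTable where customerId > 1000 and customerId < 2000',
    isolationLevel: 'READ_UNCOMMITTED',
    format: 'query') ~> FabricWarehouseSource
```
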
+### Microsoft Fabric Warehouse as the sink
+Settings specific to Microsoft Fabric Warehouse are available in the Settings tab of the sink transformation.
+
+| Name | Description | Required | Allowed Values | Data flow script property |
+| : | :-- | :- |:-- |:- |
+| Update method | Determines what operations are allowed on your database destination. The default is to only allow inserts. To update, upsert, or delete rows, an alter-row transformation is required to tag rows for those actions. For updates, upserts, and deletes, a key column or columns must be set to determine which row to alter. | Yes | true or false | insertable deletable upsertable updateable |
+| Table action | Determines whether to recreate or remove all rows from the destination table prior to writing. None: No action is done to the table. Recreate: The table gets dropped and recreated. Required if creating a new table dynamically. Truncate: All rows from the target table get removed. | No | None or recreate or truncate | recreate: true truncate: true |
+| Enable staging | The staging storage is configured in [Execute Data Flow activity](control-flow-execute-data-flow-activity.md). When you use managed identity authentication for your storage linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively. If your Azure Storage is configured with VNet service endpoint, you must use managed identity authentication with "allow trusted Microsoft service" enabled on the storage account; refer to [Impact of using VNet Service Endpoints with Azure storage](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#impact-of-using-virtual-network-service-endpoints-with-azure-storage). | No | true or false | staged: true |
+| Batch size | Controls how many rows are being written in each bucket. Larger batch sizes improve compression and memory optimization, but risk out-of-memory exceptions when caching data. | No | Numeric values | batchSize: 1234|
+| Use sink schema | By default, a temporary table is created under the sink schema as staging. You can alternatively uncheck the **Use sink schema** option and instead, in **Select user DB schema**, specify a schema name under which Data Factory will create a staging table to load upstream data and automatically clean it up upon completion. Make sure you have create table permission in the database and alter permission on the schema. | No | true or false | stagingSchemaName |
+| Pre and Post SQL scripts | Enter multi-line SQL scripts that execute before (pre-processing) and after (post-processing) data is written to your sink database. | No | SQL scripts | preSQLs:['set IDENTITY_INSERT mytable ON'] postSQLs:['set IDENTITY_INSERT mytable OFF'] |
+
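As a minimal sketch of how these settings appear in data flow script (stream names and flag values are illustrative, not taken from the article), a sink that allows inserts only, recreates the table, and stages the load might look like this:

```
IncomingStream sink(allowSchemaDrift: true,
    validateSchema: false,
    insertable: true,
    deletable: false,
    updateable: false,
    upsertable: false,
    recreate: true,
    staged: true) ~> FabricWarehouseSink
```
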
+### Error row handling
+By default, a data flow run fails on the first error it encounters. You can choose **Continue on error**, which allows your data flow to complete even if individual rows have errors. The service provides different options for you to handle these error rows.
+
+Transaction Commit: Choose whether your data gets written in a single transaction or in batches. A single transaction provides better performance, and no data written is visible to others until the transaction completes. Batch transactions have worse performance but can work for large datasets.
+
+Output rejected data: If enabled, you can output the error rows into a CSV file in Azure Blob Storage or an Azure Data Lake Storage Gen2 account of your choosing. This writes the error rows with three additional columns: the SQL operation like INSERT or UPDATE, the data flow error code, and the error message on the row.
+
+Report success on error: If enabled, the data flow is marked as a success even if error rows are found.
+
+>[!NOTE]
+>For Microsoft Fabric Warehouse Linked Service, the supported authentication type for Service Principal is 'Key'; 'Certificate' authentication is not supported.
+
## Lookup activity properties

To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
When you copy data from Microsoft Fabric Warehouse, the following mappings are u
## Next steps
-For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Tutorial Copy Data Portal Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal-private.md
Last updated 05/15/2024
+ai-usage: ai-assisted
# Copy data securely from Azure Blob storage to a SQL database by using private endpoints [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-In this tutorial, you create a data factory by using the Azure Data Factory user interface (UI). *The pipeline in this data factory copies data securely from Azure Blob storage to an Azure SQL database (both allowing access to only selected networks) by using private endpoints in [Azure Data Factory Managed Virtual Network](managed-virtual-network-private-endpoint.md).* The configuration pattern in this tutorial applies to copying from a file-based data store to a relational data store. For a list of data stores supported as sources and sinks, see the [Supported data stores and formats](./copy-activity-overview.md) table.
+In this tutorial, you create a data factory by using the Azure Data Factory user interface (UI). *The pipeline in this data factory copies data securely from Azure Blob storage to an Azure SQL database (both allowing access to only selected networks) by using private endpoints in [Azure Data Factory Managed Virtual Network](managed-virtual-network-private-endpoint.md).* The configuration pattern in this tutorial applies to copying from a file-based data store to a relational data store. For a list of data stores supported as sources and sinks, see the [Supported data stores and formats](./copy-activity-overview.md) table. The private endpoints feature is available across all tiers of Azure Data Factory, so no specific tier is required to use it. For more details on pricing and tiers, see the [Azure Data Factory pricing page](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/).
> [!NOTE] > If you're new to Data Factory, see [Introduction to Azure Data Factory](./introduction.md).
In this tutorial, you do the following steps:
* Create a data factory.
* Create a pipeline with a copy activity.

## Prerequisites

* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
* **Azure storage account**. You use Blob storage as a *source* data store. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) for steps to create one. *Ensure the storage account allows access only from selected networks.*
databox Data Box Portal Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-portal-admin.md
You can only delete orders that are completed or canceled. Perform the following
3. Enter the name of the order when prompted to confirm the order deletion. Click **Delete**.
-## Download shipping label
-
-You may need to download the shipping label if the E-ink display of your Data Box isn't working and doesn't display the return shipping label. There's no E-ink display on Data Box Heavy, so this workflow doesn't apply to Data Box Heavy.
-
-Perform the following steps to download a shipping label.
-
-1. Go to **Overview > Download shipping label**. This option is available only after the device has shipped.
-
- ![Download shipping label](media/data-box-portal-admin/portal-admin-download-shipping-label.png)
-
-2. This downloads the following return shipping label. Save the label and print it out. Fold and insert the label into the clear sleeve on the device. Ensure that the label is visible. Remove any stickers that are on the device from previous shipping.
-
- ![Example shipping label](media/data-box-portal-admin/portal-admin-example-shipping-label.png)
## Edit shipping address

You may need to edit the shipping address once the order is placed. This option is available only until the device is dispatched; once the device is dispatched, it's no longer available.
You can find out the device password by viewing your order in the Azure portal.
## Next steps -- Learn how to [Troubleshoot Data Box and Data Box Heavy issues](data-box-troubleshoot.md).
+- Learn how to [Troubleshoot Data Box and Data Box Heavy issues](data-box-troubleshoot.md).
deployment-environments How To Configure Extensibility Terraform Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-extensibility-terraform-container-image.md
Creating a custom container image allows you to customize your deployments to fi
After you complete the image customization, you must build the image and push it to your container registry.
+Alternatively, you can fork the repo [Leveraging ADE's Extensibility Model With Terraform](https://github.com/Azure/ade-extensibility-model-terraform) to build and push the Terraform image to a provided ACR.
+
## Create and customize a container image with Docker

In this example, you learn how to build a Docker image to utilize ADE deployments and access the ADE CLI, basing your image on one of the ADE authored images.
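
As a rough sketch (not the article's own sample), a customized Terraform image typically starts from an ADE-authored runner image and layers the Terraform CLI on top. The base image tag, package manager, and Terraform version here are assumptions to adapt:

```dockerfile
# Base image: ADE-authored core runner image (tag is illustrative).
FROM mcr.microsoft.com/deployment-environments/runners/core:latest

# Install the Terraform CLI; pin a version that suits your environment.
ARG TF_VERSION=1.7.5
RUN apk add --no-cache curl unzip \
    && curl -fsSL -o /tmp/terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip" \
    && unzip /tmp/terraform.zip -d /usr/local/bin \
    && rm /tmp/terraform.zip
```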
event-hubs Send And Receive Events Using Data Generator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/send-and-receive-events-using-data-generator.md
- Title: Send and receive events by using Data Generator
-description: This quickstart shows you how to send and receive events to an Azure event hub by using Data Generator in the Azure portal.
--- Previously updated : 06/07/2024
-#customer intent: As a developer, I want to send test events to an event hub in Azure Event Hubs and receive or view them.
--
-# Quickstart: Send and receive events by using Azure Event Hubs Data Generator
-
-In this quickstart, you learn how to send and receive events by using Azure Event Hubs Data Generator.
-
-> [!IMPORTANT]
-> The Data generator preview feature is deprecated and has been **replaced with the [Event Hubs Data Explorer](event-hubs-data-explorer.md)**. Please leverage the Event Hubs Data Explorer to send events to and receive events from an Event Hubs namespace using the portal.
->
-
-## Prerequisites
-
-If you're new to Event Hubs, see the [Event Hubs overview](event-hubs-about.md) before you go through this quickstart.
-
-To complete this quickstart, you need the following prerequisites:
--- Have an Azure subscription. To use Azure services, including Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Create an Event Hubs namespace and an event hub. Follow the instructions in [Quickstart: Create an event hub by using the Azure portal](event-hubs-create.md).-- If the event hub is in a virtual network, you need to access the portal from a virtual machine (VM) in the same virtual network. Data Generator doesn't work with private endpoints with public access blocked unless you access the portal from the subnet of the virtual network for which the private endpoint is configured.-
-> [!NOTE]
-> Data Generator for Event Hubs is in preview.
-
-## Send events by using Event Hubs Data Generator
-
-To send events to an event hub by using Event Hubs Data Generator:
-
-1. On the **Event Hubs Namespace** page, select **Generate data** in the **Overview** section on the leftmost menu.
-
- :::image type="content" source="media/send-and-receive-events-using-data-generator/generate-data-menu.png" alt-text="Screenshot that shows the Generate data (preview) menu on an Event Hubs Namespace page." lightbox="media/send-and-receive-events-using-data-generator/generate-data-menu.png":::
-
-1. On the **Generate Data** page, follow these steps:
- 1. For the **Select event hub** field, use the dropdown list to send the data to an event hub in the namespace.
- 1. For the **Select dataset** field, select a precanned dataset, such as **Weather data** and **Clickstream data**. You can also select the **Custom payload** option and specify your own payload.
- 1. If you select **Custom payload**, for the **Select Content-Type** field, choose the type of the content in the event data. Currently, Data Generator supports sending data in the JSON, XML, Text, and Binary content types.
- 1. For the **Repeat Send** field, enter the number of times you want to send the sample dataset to the event hub. The maximum allowed value is 100.
-
- :::image type="content" source="media/send-and-receive-events-using-data-generator/highlighted-data-generator-landing.png" alt-text="Screenshot that shows the landing page for Data Generator." lightbox="media/send-and-receive-events-using-data-generator/highlighted-data-generator-landing.png":::
-
-> [!TIP]
-> For custom payload, the content in the **Enter payload** section is treated as a single event. The number of events sent is equal to the value of **Repeat Send**.
->
-> Precanned datasets are collections of events. For precanned datasets, each event in the dataset is sent separately. For example, if the dataset has 50 events and the value of **Repeat Send** is 10, then 500 events are sent to the event hub.
-
-### Maximum message size support with different tiers
-
-The following table shows the maximum payload size that you can send to an event hub by using Data Generator.
-
-| Tier | Basic | Standard | Premium | Dedicated |
-|--|--|--|--|--|
-| Maximum payload size| 256 Kb | 1 MB | 1 MB | 1 MB |
-
-## View events by using Event Hubs Data Generator
-
-> [!IMPORTANT]
-> Viewing events is meant to act like a magnifying glass to the stream of events that you sent. The tabular section in the **View Events** section lets you glance at the last 15 events that were sent to the event hub. If the event content is in a format that can't be loaded, the **View Events** section shows metadata for the event.
-
-When you select **Send**, Data Generator sends events to the selected event hub and the collapsible **View events** section loads automatically. Expand any tabular row to review the event content sent to event hubs.
--
-## Frequently asked questions
-
-This section answers common questions.
-
-#### I am getting the error "Oops! We couldn't read events from the event hub - `<your event hub name>`. Please make sure that there is no active consumer reading events from $Default Consumer group."
-
- Data Generator makes use of the `$Default` [consumer group](event-hubs-features.md) to view events that were sent to the event hub. To start receiving events from event hubs, a receiver needs to connect to the consumer group and take ownership of the underlying partition. If there's already a consumer reading from the `$Default` consumer group, Data Generator can't establish a connection and view events. If you have an active consumer silently listening to the events and checkpointing them, Data Generator can't find any events in the event hub. Disconnect any active consumer reading from the `$Default` consumer group and try again.
-
-#### I see more events in the View Events section than the ones I sent by using Data Generator. Where are those events coming from?
-
- Multiple applications can connect to event hubs at the same time. If there are multiple applications sending data to event hubs alongside Data Generator, the **View Events** section also shows events sent by other clients. At any instance, the **View Events** section lets you view the last 15 events that were sent to Event Hubs.
-
-## Related content
--- [Send and receive events by using Event Hubs SDKs (AMQP)](event-hubs-dotnet-standard-getstarted-send.md)-- [Send and receive events by using Apache Kafka](event-hubs-quickstart-kafka-enabled-event-hubs.md)
expressroute Expressroute Howto Circuit Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-arm.md
description: This quickstart shows you how to create, provision, verify, update,
Previously updated : 12/28/2023 Last updated : 09/25/2024
# Quickstart: Create and modify an ExpressRoute circuit using Azure PowerShell
-This quickstart shows you how to create an ExpressRoute circuit using PowerShell cmdlets and the Azure Resource Manager deployment model. You can also check the status, update, delete, or deprovision a circuit.
+This quickstart shows you how to create an ExpressRoute circuit with three different resiliency types: **Maximum Resiliency**, **High Resiliency**, and **Standard Resiliency**, by using Azure PowerShell. You also learn how to check the status, update, delete, or deprovision a circuit by using PowerShell cmdlets.
:::image type="content" source="media/expressroute-howto-circuit-portal-resource-manager/environment-diagram.png" alt-text="Diagram of ExpressRoute circuit deployment environment using Azure PowerShell." lightbox="media/expressroute-howto-circuit-portal-resource-manager/environment-diagram.png":::
Check to see if your connectivity provider is listed there. Make a note of the f
You're now ready to create an ExpressRoute circuit.
+### Get the list of resilient locations
+
+If you're creating an ExpressRoute circuit with a resiliency type of **Maximum Resiliency**, you need to know the list of resilient locations. Here are the steps to retrieve this information:
+
+#### Clone the script
+
+```azurepowershell-interactive
+# Clone the setup script from GitHub.
+git clone https://github.com/Azure-Samples/azure-docs-powershell-samples/
+# Change to the directory where the script is located.
+cd azure-docs-powershell-samples/expressroute/
+```
+
+#### Run resilient locations script
+
+Run the **Get-AzExpressRouteResilientLocations.ps1** script to get the list of resilient locations. The following example shows how to get the resilient locations for a specific subscription sorted by distance from Silicon Valley:
+
+```azurepowershell-interactive
+$SubscriptionId = (Get-AzSubscription -SubscriptionName "<SubscriptionName>").Id
+highAvailabilitySetup/Get-AzExpressRouteResilientLocations.ps1 -SubscriptionId $SubscriptionId -RelativeLocation "silicon valley"
+```
+If you don't specify the location, you get a list of all resilient locations.
+ ### Create an ExpressRoute circuit
-If you don't already have a resource group, you must create one before you create your ExpressRoute circuit. You can do so by running the following command:
+If you don't already have a resource group, you must create one before you create your ExpressRoute circuit. You can do so by running the **New-AzResourceGroup** cmdlet:
```azurepowershell-interactive
-New-AzResourceGroup -Name "ExpressRouteResourceGroup" -Location "West US"
+$resourceGroupName = (New-AzResourceGroup -Name "ExpressRouteResourceGroup" -Location "West US").ResourceGroupName
```
-The following example shows how to create a 200-Mbps ExpressRoute circuit through Equinix in Silicon Valley. If you're using a different provider and different settings, replace that information when you make your request. Use the following example to request a new service key:
+If you already have a resource group, you can use **Get-AzResourceGroup** to get the resource group name into a variable:
+
+```azurepowershell-interactive
+$resourceGroupName = (Get-AzResourceGroup -Name "<ResourceGroupName>").ResourceGroupName
+```
+
+# [**Maximum Resiliency**](#tab/maximum)
+
+**Maximum Resiliency** (Recommended) provides the highest level of resiliency for your ExpressRoute connection. It provides two ExpressRoute circuits with local redundancy in two different ExpressRoute edge locations.
+
+The following example shows how to create two ExpressRoute circuits through Equinix with local redundancy in Silicon Valley and Washington DC. If you're using a different provider and different settings, replace that information when you make your request.
+
+> [!NOTE]
+> This example uses the **New-AzHighAvailabilityExpressRouteCircuits.ps1** script. You must clone the script from GitHub to create the circuits. For more information, see [Clone the script](#clone-the-script).
+
+```azurepowershell-interactive
+$SubscriptionId = (Get-AzSubscription -SubscriptionName "<SubscriptionName>").Id
+highAvailabilitySetup/New-AzHighAvailabilityExpressRouteCircuits.ps1 -SubscriptionId $SubscriptionId -ResourceGroupName $resourceGroupName -Location "westus" -Name1 $circuit1Name -Name2 $circuit2Name -SkuFamily1 "MeteredData" -SkuFamily2 "MeteredData" -SkuTier1 "Standard" -SkuTier2 "Standard" -ServiceProviderName1 "Equinix" -ServiceProviderName2 "Equinix" -PeeringLocation1 "Silicon Valley" -PeeringLocation2 "Washington DC" -BandwidthInMbps 1000
+```
++
+> [!NOTE]
+> Maximum Resiliency provides maximum protection against location-wide outages and connectivity failures in an ExpressRoute location. This option is strongly recommended for all critical and production workloads.
+
+# [**High Resiliency**](#tab/high)
+
+**High Resiliency** provides resiliency against location-wide outages through a single ExpressRoute circuit across two locations in a metropolitan area.
+
+The following example shows how to create an ExpressRoute circuit through Equinix in Amsterdam Metro. If you're using a different provider and different settings, replace that information when you make your request. Use the following example to request a new service key.
+
+```azurepowershell-interactive
+New-AzExpressRouteCircuit -Name "ExpressRouteARMCircuit" -ResourceGroupName "ExpressRouteResourceGroup" -Location "West Europe" -SkuTier Standard -SkuFamily MeteredData -ServiceProviderName "Equinix" -PeeringLocation "Amsterdam Metro" -BandwidthInMbps 200
+```
+
+# [**Standard Resiliency**](#tab/standard)
+
+**Standard Resiliency** provides a single ExpressRoute circuit with local redundancy at a single ExpressRoute location.
+
+The following example shows how to create an ExpressRoute circuit through Equinix in Silicon Valley. If you're using a different provider and different settings, replace that information when you make your request. Use the following example to request a new service key.
```azurepowershell-interactive
New-AzExpressRouteCircuit -Name "ExpressRouteARMCircuit" -ResourceGroupName "ExpressRouteResourceGroup" -Location "West US" -SkuTier Standard -SkuFamily MeteredData -ServiceProviderName "Equinix" -PeeringLocation "Silicon Valley" -BandwidthInMbps 200
```
Make sure that you specify the correct SKU tier and SKU family:
> [!IMPORTANT] > Your ExpressRoute circuit is billed from the moment a service key is issued. Ensure that you perform this operation when the connectivity provider is ready to provision the circuit.
->
The response contains the service key. You can get detailed descriptions of all the parameters by running the following command:
frontdoor How To Enable Private Link Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-enable-private-link-application-gateway.md
This article guides you through the steps to configure an Azure Front Door Premi
[!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)]

- Have a functioning Azure Front Door Premium profile and an endpoint. For more information on how to create an Azure Front Door profile, see [Create a Front Door - PowerShell](create-front-door-powershell.md).
- Have a functioning Azure Application Gateway. For more information on how to create an Application Gateway, see [Direct web traffic with Azure Application Gateway using Azure PowerShell](../application-gateway/quick-create-powershell.md).
Follow the instructions in [Configure Azure Application Gateway Private Link](..
Get-AzPrivateEndpointConnection -ResourceGroupName myResourceGroup -ServiceName myAppGateway -PrivateLinkResourceType Microsoft.Network/applicationgateways
```
-2. Run [Get-AzPrivateEndpointConnection](/powershell/module/az.network/get-azprivateendpointconnection) to retrieve the private endpoint connection details. Use the *Name* value from the output in the next step for approving the connection.
+2. Run [Approve-AzPrivateEndpointConnection](/powershell/module/az.network/approve-azprivateendpointconnection) to approve the private endpoint connection details. Use the *Name* value from the output in the previous step for approving the connection.
```azurepowershell-interactive
Approve-AzPrivateEndpointConnection -Name aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb.bbbbbbbb-1111-2222-3333-cccccccccccc -ResourceGroupName myResourceGroup -ServiceName myAppGateway -PrivateLinkResourceType Microsoft.Network/applicationgateways
```
The following are common mistakes when configuring an Azure Application Gateway
## Next steps
-Learn about [Private Link service with storage account](../storage/common/storage-private-endpoints.md).
+Learn about [Private Link service with storage account](../storage/common/storage-private-endpoints.md).
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
The Azure Front Door Private Link feature is region agnostic but for the best la
* Learn how to [connect Azure Front Door Premium to a storage account origin with Private Link](standard-premium/how-to-enable-private-link-storage-account.md). * Learn how to [connect Azure Front Door Premium to an internal load balancer origin with Private Link](standard-premium/how-to-enable-private-link-internal-load-balancer.md). * Learn how to [connect Azure Front Door Premium to a storage static website origin with Private Link](how-to-enable-private-link-storage-static-website.md).-
governance Migrate From Automanage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/migrate-from-automanage-best-practices.md
Title: Azure Automanage Best Practices to Azure Policy migration planning
-description: This article provides process and technical guidance for customers interested in moving from Automanage Best Practices to Azure Policy.
+description: This article provides process and technical guidance for customers interested in moving from Azure Automanage Best Practices to Azure Policy.
Last updated 08/21/2024
-# Overview
+# Automanage Best Practices to Azure Policy migration planning
> [!CAUTION]
-> On September 30, 2027, the Automanage Best Practices product will be retired. Migrate to Azure Policy before that date. [Migrate here](https://ms.portal.azure.com/).
+> On September 30, 2027, the Azure Automanage Best Practices service will be retired. Migrate to Azure Policy before that date. For more information on migration, see the [Azure portal](https://ms.portal.azure.com/).
-Azure Policy is a more robust cloud resource governance, enforcement and compliance offering with full parity with the Automanage Best Practices service. When possible, you should plan to move your content and machines to the new service. This
-article provides guidance on developing a migration strategy from Azure Automation to machine
-configuration. Azure Policy implements a robust array of features including:
+Azure Policy is a more robust cloud resource governance, enforcement, and compliance offering with full parity with the Azure Automanage Best Practices service. When possible, you should plan to move your content and machines to the new service. This article provides guidance on developing a migration strategy from Automanage Best Practices to Azure Policy. Azure Policy implements a robust array of features, including:
-- *Granular Control and Flexibility:* Azure Policy allows for highly granular control over resources. You can create custom policies tailored to your specific regulatory and organizational compliance needs, ensuring that every aspect of your infrastructure meets the required standards. This level of customization may not be as easily achievable with the predefined configurations in Automanage.
+- **Granular control and flexibility:** Azure Policy allows for highly granular control over resources. You can create custom policies tailored to your specific regulatory and organizational compliance needs to ensure that every aspect of your infrastructure meets the required standards. This level of customization might not be as easy to achieve with the predefined configurations in Automanage.
+- **Comprehensive compliance management:** Azure Policy offers comprehensive compliance management by continuously assessing and auditing your resources. Detailed reports and dashboards help you to track compliance status. These features help you to quickly detect and rectify noncompliance issues across your environment.
+- **Scalability:** Azure Policy is built to manage large-scale environments efficiently. You can apply policies at different scopes, such as management group, subscription, and resource group levels. This capability helps you to enforce compliance across multiple resources and regions systematically.
+- **Integration with Azure Security Center:** Azure Policy integrates seamlessly with Azure Security Center. You have the ability to manage security policies and ensure that your servers adhere to best practices. This integration provides more insights and recommendations, which further strengthen your security posture.
-- *Comprehensive Compliance Management:* Azure Policy offers comprehensive compliance management by continuously assessing and auditing your resources. It provides detailed reports and dashboards to track compliance status, helping you to quickly detect and rectify non-compliance issues across your environment.
+Before you begin, read the conceptual overview information on the [Azure Policy][01] webpage.
-- *Scalability:* Azure Policy is built to manage large-scale environments efficiently. It allows you to apply policies at different scopes (for example, Management Group, Subscription, Resource Group levels), making it easier to enforce compliance across multiple resources and regions systematically.
+## Understand migration
-- *Integration with Azure Security Center:* Azure Policy integrates seamlessly with Azure Security Center, enhancing your ability to manage security policies and ensuring your servers adhere to best practices. This integration provides more insights and recommendations, further strengthening your security posture.
+The best approach to migration is to identify how to map services in an Automanage configuration profile to the respective Azure Policy content first. Then offboard your subscriptions from Automanage. This section outlines the expected steps for migration.
-Before you begin, it's a good idea to read the conceptual overview information at the page
-[Azure Policy][01].
+Automanage designers created an experience for Azure customers to onboard new and existing virtual machines (VMs) to a recommended set of Azure services to ensure compliance with Azure best practices. The capabilities include a configuration profile, a reusable template of management, monitoring, security, and resiliency services that customers can opt into. The profile is assigned to a set of VMs that are onboarded to those services, and customers then receive reports on the state of their machines.
-## Understand migration
+This functionality is available in Azure Policy as an initiative with various configurable parameters, Azure services, regional availability, compliance states, and remediation actions. Configuration profiles are the main onboarding vehicle for Automanage customers. Just like Azure Policy initiatives, Automanage configuration profiles apply to VMs at the subscription and resource group level. They enable further specification of the zone of
+applicability. The following Automanage feature parities are available in Azure Policy.
-The best approach to migration is to identify how to map services in an Automanage configuration profile to respective Azure Policy content first, and then offboard your subscriptions from Automanage. This section outlines the expected steps for migration. AutomanageΓÇÖs capabilities involved creating a deploy-and-forget experience for Azure customers to onboard new and existing virtual machines to a recommended set of Azure Services to ensure compliance with AzureΓÇÖs best practices. These capabilities were achieved by the creation of a configuration profile, a reusable template of management, monitoring,
-security and resiliency services that customers could opt into. The profile is then assigned to a set of VMs that are onboarded to those services and receive reports on the state of their machines.
+### Azure Monitor agent
+The Azure Monitor agent collects monitoring data from the guest operating system of Azure and hybrid VMs. The agent delivers the data to Azure Monitor for use by features, insights, and other services, such as Microsoft Sentinel and Microsoft Defender for Cloud. The Azure Monitor agent replaces all of the Azure Monitor legacy monitoring agents.
-This functionality is available in Azure Policy as an initiative with a variety of configurable parameters, Azure services, regional availability, compliance states, and remediation actions. Configuration Profiles are the main onboarding vehicle for Automanage customers. Just like Azure Policy Initiatives, Automanage configuration profiles are applicable to VMs at the
-subscription and resource group level and enables further specification of the zone of
-applicability. The following Automanage feature parities are available in Azure Policy:
+Deploy this extension by using the following policies:
-### Azure Monitoring Agent
+- Configure Linux VMs to run the Azure Monitor agent with user-assigned managed-identity-based authentication.
+- Configure Windows machines to associate with a data collection rule or a data collection endpoint.
+- Configure Windows VMs to run the Azure Monitor agent with user-assigned managed-identity-based authentication.
+- Configure Linux machines to associate with a data collection rule or a data collection endpoint.
+- Deploy a dependency agent for Linux VMs with Azure Monitor agent settings.
+- Deploy a dependency agent that you can enable on Windows VMs with Azure Monitor agent settings.
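
As an illustration of how one of these definitions can be assigned at scale, the following Azure PowerShell sketch assigns a built-in definition at subscription scope. The display-name filter, assignment name, scope, and location are placeholders; depending on your Az.Resources version, the display name may surface as `$_.DisplayName` instead of `$_.Properties.DisplayName`:

```azurepowershell-interactive
# Look up the built-in policy definition by display name (filter is illustrative).
$definition = Get-AzPolicyDefinition -Builtin | Where-Object {
    $_.Properties.DisplayName -like 'Configure Linux virtual machines to run Azure Monitor Agent*'
}

# Assign at subscription scope. DeployIfNotExists policies need a managed
# identity (and a location for it) so that remediation tasks can run.
New-AzPolicyAssignment -Name 'deploy-ama-linux' `
    -PolicyDefinition $definition `
    -Scope '/subscriptions/<subscription-id>' `
    -IdentityType 'SystemAssigned' `
    -Location 'eastus'
```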
-Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of
-Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features,
-insights, and other services, such as Microsoft Sentinel and Microsoft Defender for Cloud.
-Azure Monitor Agent replaces all of Azure Monitor's legacy monitoring agents. This
-extension is deployable using the following Azure Policies:
+### Azure Backup
-- Configure Linux virtual machines to run Azure Monitor Agent with user-assigned
-managed identity-based authentication
-- Configure Windows Machines to be associated with a Data Collection Rule or a
-Data Collection Endpoint
-- Configure Windows virtual machines to run Azure Monitor Agent with user-assigned
-managed identity-based authentication
-- Configure Linux Machines to be associated with a Data Collection Rule or a Data
-Collection Endpoint
-- Deploy Dependency agent for Linux virtual machines with Azure Monitoring Agent
-settings
-- Deploy Dependency agent to be enabled on Windows virtual machines with Azure
-Monitoring Agent settings
+Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Backups are stored in a Recovery Services vault with built-in management of recovery points. To back up Azure VMs, Backup installs an extension on the VM agent running on the machine.
-### Azure Backup
+Configure Backup by using the following policies:
+
+- Configure backup on VMs with a specific tag to an existing Recovery Services vault in the same location.
+- Enable Backup for VMs.
+
+To configure Backup time and duration, create a custom Azure policy based on the properties of the Backup policy resource or by a REST API call. For more information, see [Create Recovery Services backup policies by using the REST API][02].
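
For orientation, a sketch of such a REST call follows. The schedule and retention values are examples; take the exact api-version and property names from the linked REST API article:

```json
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupPolicies/{policyName}?api-version=2019-05-13

{
  "properties": {
    "backupManagementType": "AzureIaasVM",
    "timeZone": "UTC",
    "schedulePolicy": {
      "schedulePolicyType": "SimpleSchedulePolicy",
      "scheduleRunFrequency": "Daily",
      "scheduleRunTimes": [ "2024-01-01T02:00:00Z" ]
    },
    "retentionPolicy": {
      "retentionPolicyType": "LongTermRetentionPolicy",
      "dailySchedule": {
        "retentionTimes": [ "2024-01-01T02:00:00Z" ],
        "retentionDuration": { "count": 30, "durationType": "Days" }
      }
    }
  }
}
```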
+
+### Microsoft Antimalware for Azure
+
+Microsoft Antimalware for Azure Cloud Services and Virtual Machines offers free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. The Azure Guest agent (or the Microsoft Fabric agent) opens the Microsoft Antimalware for Azure extension and applies the antimalware configuration settings that were supplied as input. This step enables the antimalware service with either default or custom configuration settings.
+
+Deploy the following Microsoft Antimalware for Azure policies in Azure Policy:
+
+- Configure Microsoft Antimalware for Azure to automatically update protection signatures.
+- Deploy the Microsoft `IaaSAntimalware` extension on Windows servers.
+- Deploy the default Microsoft `IaaSAntimalware` extension for Windows Server.
+
+You can create a custom Azure policy based on the properties of the Azure `IaaSAntimalware` policy resource or by using an Azure Resource Manager template (ARM template). You can use the custom policy to:
+
+- Configure excluded files, locations, file extensions, and processes.
+- Enable real-time protection.
+- Schedule a scan, and scan type, day, and time.
+
+For more information, see the [Microsoft Antimalware for Azure documentation][03].
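
For reference, the extension's public settings take a JSON shape like the following sketch. The values are examples: `day` 7 means Saturday, `time` is minutes after midnight (120 = 2:00 AM), and exclusion lists are semicolon-separated:

```json
{
  "AntimalwareEnabled": true,
  "RealtimeProtectionEnabled": "true",
  "ScheduledScanSettings": {
    "isEnabled": "true",
    "day": "7",
    "time": "120",
    "scanType": "Quick"
  },
  "Exclusions": {
    "Extensions": ".log;.ldf",
    "Paths": "D:\\IISlogs;D:\\DatabaseLogs",
    "Processes": "mssence.svc"
  }
}
```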
+
+### Azure Monitor Insights and analytics
+
+Azure Monitor Insights is a suite of tools within Azure Monitor designed to enhance the performance, reliability, and quality of your applications. It offers features like application performance management, monitoring alerts, metrics analysis, diagnostic settings, and logs. With Azure Monitor Insights, you can gain valuable insights into your application's behavior, troubleshoot issues, and optimize performance.
+
+The following policies provide the same capabilities as Automanage:
-Azure Backup provides independent and isolated backups to guard against unintended
-destruction of the data on your VMs. Backups are stored in a Recovery Services vault with
-built-in management of recovery points. To back up Azure VMs, Azure Backup installs an
-extension on the VM agent running on the machine. Azure Backup can be configured using
-the following policies:
--- Configure backup on virtual machines with a given tag to an existing recovery
-services vault in the same location
-- Azure Backup should be enabled for Virtual Machines-
-To configure backup time and duration, you can create a custom Azure policy based on the
-properties of the Azure backup policy resource or by a REST API call. Learn more [here][02].
-
-### Azure Antimalware
-
-Microsoft Antimalware for Azure is a free real-time protection that helps identify and
-remove viruses, spyware, and other malicious software. It generates alerts when known
-malicious or unwanted software tries to install itself or run on your Azure systems. The
-Azure Guest Agent (or the Fabric Agent) launches the Antimalware Extension, applying the
-Antimalware configuration settings supplied as input. This step enables the Antimalware
-service with either default or custom configuration settings.
-The following Azure Antimalware policies are deployable in Azure Policy:
--- Microsoft Antimalware for Azure should be configured to automatically update
-protection signatures
-- Microsoft IaaSAntimalware extension should be deployed on Windows servers-- Deploy default Microsoft IaaSAntimalware extension for Windows Server-
-To configure excluded files, locations, file extensions and processes, enable real-time
-protection and schedule scan and scan type, day and time, you can create a custom Azure
-policy based on the properties of the Azure IaaSAntimalware policy resource or by an ARM
-Template. Learn more [here][03].
-
-### Azure Insights and Analytics
-
-Azure Insights is a suite of tools within Azure Monitor designed to enhance the
-performance, reliability, and quality of your applications. It offers features like application
-performance management (APM), monitoring alerts, metrics analysis, diagnostic settings,
-logs, and more. With Azure Insights, you can gain valuable insights into your applicationΓÇÖs
-behavior, troubleshoot issues, and optimize performance. The following policies provide
-the same capabilities as Automanage:
--- Assign Built-In User-Assigned Managed Identity to Virtual Machines-- Configure Linux virtual machines to run Azure Monitor Agent with user-assigned
-managed identity-based authentication
-- Configure Windows virtual machines to run Azure Monitor Agent with user-assigned managed identity-based authentication-- Deploy Dependency agent to be enabled on Windows virtual machines with
-Azure Monitoring Agent settings
-- Deploy Dependency agent for Linux virtual machines with Azure Monitoring
-Agent settings
-- Configure Linux Machines to be associated with a Data Collection Rule or a Data
-Collection Endpoint
-- Configure Windows Machines to be associated with a Data Collection Rule or a
-Data Collection Endpoint
-
-All the previous options are configurable by deploying the Enable Azure Monitor for VMs with Azure
-Monitoring Agent (AMA) Policy initiative.
+- Assign a built-in user-assigned managed identity to VMs.
+- Configure Linux VMs to run the Azure Monitor agent with user-assigned authentication based on managed identity.
+- Configure Windows VMs to run the Azure Monitor agent with user-assigned authentication based on managed identity.
+- Deploy a dependency agent that you can enable on Windows VMs with Azure Monitor agent settings.
+- Deploy a dependency agent for Linux VMs with Azure Monitor agent settings.
+- Configure Linux machines to associate with a data collection rule or a data collection endpoint.
+- Configure Windows machines to associate with a data collection rule or a data collection endpoint.
+
+To configure all the previous options, deploy the **Enable Azure Monitor for VMs with Azure
+Monitoring Agent (AMA)** policy initiative.
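
A sketch of assigning that initiative with Azure PowerShell, analogous to the earlier single-definition example (the display-name filter and placeholders are illustrative):

```azurepowershell-interactive
# Built-in initiatives are policy set definitions.
$initiative = Get-AzPolicySetDefinition -Builtin | Where-Object {
    $_.Properties.DisplayName -like 'Enable Azure Monitor for VMs*'
}

New-AzPolicyAssignment -Name 'enable-azmon-ama' `
    -PolicySetDefinition $initiative `
    -Scope '/subscriptions/<subscription-id>' `
    -IdentityType 'SystemAssigned' `
    -Location 'eastus'
```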
### Change Tracking and Inventory
-Change Tracking and Inventory is a feature within Azure Automation that monitors changes
-in virtual machines across Azure, on-premises, and other cloud environments. It tracks
-modifications to installed software, files, registry keys, and services on both Windows and
-Linux systems. By using the Log Analytics agent, the Change Tracking service collects data and forwards it to
-Azure Monitor Logs for analysis. Additionally, it integrates with Microsoft Defender for
-Cloud File Integrity Monitoring (FIM) to enhance security and operational insights. The
-following policies enable change tracking on VMs:
+Change Tracking and Inventory is a feature within Automation that monitors changes in VMs across Azure, on-premises, and in other cloud environments. It tracks modifications to installed software, files, registry keys, and services on both Windows and Linux systems. Change Tracking and Inventory uses the Log Analytics agent to collect data and then forwards it to Azure Monitor Logs for analysis. It also integrates with Microsoft Defender for Cloud File Integrity Monitoring to enhance security and operational insights.
+
+Enable change tracking on VMs by using the following policies:
-- Assign Built-In User-Assigned Managed Identity to Virtual Machines
-- Configure Windows VMs to install AMA for ChangeTracking and Inventory with user-assigned managed identity
-- Configure Linux VMs to install AMA for ChangeTracking and Inventory with user-assigned managed identity
-- Configure ChangeTracking Extension for Windows virtual machines
-- Configure ChangeTracking Extension for Linux virtual machines
-- Configure Windows Virtual Machines to be associated with a Data Collection Rule
-for ChangeTracking and Inventory
+- Assign built-in user-assigned managed identity to VMs.
+- Configure Windows VMs to install the Azure Monitor agent for Change Tracking and Inventory with a user-assigned managed identity.
+- Configure Linux VMs to install the Azure Monitor agent for Change Tracking and Inventory with a user-assigned managed identity.
+- Configure the Change Tracking and Inventory extension for Windows VMs.
+- Configure the Change Tracking and Inventory extension for Linux VMs.
+- Configure Windows VMs to associate with a data collection rule for Change Tracking and Inventory.
-The above Azure policies are configurable in bulk using the following Policy initiatives:
+Configure the preceding Azure policies in bulk by using the following Azure Policy initiatives:
-- [Preview]: Enable ChangeTracking and Inventory for virtual machine scale sets
-- [Preview]: Enable ChangeTracking and Inventory for virtual machines
-- [Preview]: Enable ChangeTracking and Inventory for Arc-enabled virtual machines
+- [Preview]: Enable Change Tracking and Inventory for virtual machine scale sets.
+- [Preview]: Enable Change Tracking and Inventory for VMs.
+- [Preview]: Enable Change Tracking and Inventory for Azure Arc-enabled VMs.
### Microsoft Defender for Cloud
-Microsoft Defender for Cloud provides unified security management and advanced threat
-protection across hybrid cloud workloads. MDC is configurable in Policy through the
-following policy initiatives:
--- Configure multiple Microsoft Defender for Endpoint integration settings with
-Microsoft Defender for Cloud
-- Microsoft cloud security benchmark
-- Configure Microsoft Defender for Cloud plans
-
-### Update Management
-
-Azure Update Management is a service included as part of your Azure Subscription that
-enables you to assess your update status across your environment and manage your
-Windows and Linux server patching from a single pane of glass, both for on-premises and
-Azure. It provides a unified solution to help you keep your systems up to date by overseeing
-update compliance, deploying critical updates, and offering flexible patching options.
-Azure Update Management is configurable in Azure Policy through the following policies:
--- Configure periodic checking for missing system updates on Azure Arc-enabled
-servers
-- Machines should be configured to periodically check for missing system updates
-- Schedule recurring updates using Azure Update Manager
-- [Preview]: Set prerequisite for Scheduling recurring updates on Azure virtual
-machines.
-- Configure periodic checking for missing system updates on Azure virtual machines
-
-### Azure Automation Account
-
-Azure Automation is a cloud-based service that provides consistent management across
-both your Azure and non-Azure environments. It allows you to automate repetitive tasks,
-enforce configuration consistency, and manage updates for virtual machines. By
-leveraging runbooks and shared assets, you can streamline operations and reduce
-operational costs. Azure Automation is configurable in Azure Policy through the following
-policies:
-- Automation Account should have Managed Identity
-- Configure private endpoint connections on Azure Automation accounts
-- Automation accounts should disable public network access
-- Configure Azure Automation accounts with private DNS zones
-- Azure Automation accounts should use customer-managed keys to encrypt data at
-rest
-- Azure Automation account should have local authentication method disabled
-- Automation account variables should be encrypted
-- Configure Azure Automation account to disable local authentication
-- Configure Azure Automation accounts to disable public network access
-- Private endpoint connections on Automation Accounts should be enabled
-
-### Boot Diagnostics
-
-Azure Boot Diagnostics is a debugging feature for Azure virtual machines (VM) that allows
-diagnosis of VM boot failures. It enables a user to observe the state of their VM as it is
-booting up by collecting serial log information and screenshots. Enabling Boot Diagnostics
-feature allows Microsoft Azure cloud platform to inspect the virtual machine operating
-system (OS) for provisioning errors, helping to provide deeper information on the root
-causes of the startup failures. Boot diagnostics is enabled by default when we create a VM
-and is enforced by the _Boot Diagnostics should be enabled on virtual machines_ policy.
+Microsoft Defender for Cloud provides unified security management and advanced threat protection across hybrid cloud workloads.
+
+Configure Defender for Cloud in Azure Policy through the following policy initiatives:
+
+- Configure multiple Microsoft Defender for Endpoint integration settings with Defender for Cloud.
+- Assign the Microsoft cloud security benchmark initiative.
+- Configure Defender for Cloud plans.
+
+### Azure Update Manager
+
+Azure Update Manager is a service included as part of your Azure subscription. Use it to assess your update status across your environment and manage your Windows and Linux server patching from a single pane of glass, both for on-premises and Azure. It provides a unified solution to help you keep your systems up to date. Update Manager oversees update compliance, deploys critical updates, and offers flexible patching options.
+
+Configure Update Manager in Azure Policy through the following policies:
+
+- Configure periodic checking for missing system updates on servers enabled by Azure Arc.
+- Configure machines to periodically check for missing system updates.
+- Schedule recurring updates by using Update Manager.
+- [Preview]: Set prerequisites for scheduling recurring updates on Azure VMs.
+- Configure periodic checking for missing system updates on Azure VMs.
+
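As a minimal illustration of what the periodic-checking policies configure, the following sketch enables periodic assessment on a single Linux VM with the Azure CLI; the resource names are placeholders.

```azurecli
# Enable periodic assessment for missing updates on one Linux VM (names hypothetical);
# for Windows, set osProfile.windowsConfiguration.patchSettings.assessmentMode instead
az vm update \
  --resource-group "my-hpc-rg" \
  --name "my-linux-vm" \
  --set osProfile.linuxConfiguration.patchSettings.assessmentMode=AutomaticByPlatform
```

The policies listed above apply this same setting at scale instead of per machine.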
+### Azure Automation account
+
+Automation is a cloud-based service that provides consistent management across your Azure and non-Azure environments. Use it to automate repetitive tasks, enforce configuration consistency, and manage updates for VMs. By using runbooks and shared assets, you can streamline operations and reduce operational costs.
+
+Configure Automation in Azure Policy through the following policies:
+
+- Use managed identity for Automation accounts.
+- Configure private endpoint connections on Automation accounts.
+- Disable public network access for Automation accounts.
+- Configure Automation accounts with private DNS zones.
+- Use customer-managed keys to encrypt data at rest for Automation accounts.
+- Disable the local authentication method for the Automation account.
+- Encrypt Automation account variables.
+- Configure Automation accounts to disable local authentication.
+- Configure Automation accounts to disable public network access.
+- Enable private endpoint connections on Automation accounts.
+
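For reference, a minimal sketch for creating an Automation account from the Azure CLI follows; it assumes the `automation` CLI extension and placeholder names.

```azurecli
# The Automation commands live in a CLI extension (assumption: not yet installed)
az extension add --name automation

az automation account create \
  --automation-account-name "my-automation-account" \
  --resource-group "my-hpc-rg" \
  --location "eastus"
```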
+### Boot diagnostics
+
+Boot diagnostics is a debugging feature for Azure VMs that you can use to diagnose VM boot failures. The feature collects serial log information and screenshots so that you can observe the state of your VM as it boots up. After you enable the boot diagnostics feature, the Azure cloud platform can inspect the VM operating system for provisioning errors. The feature helps to provide deeper information on the root causes of startup failures. Boot diagnostics is enabled by default when you create a VM and is enforced by the **Boot Diagnostics should be enabled on virtual machines** policy.
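As a quick sketch with placeholder names, you can also enable the feature and pull the serial log from the Azure CLI:

```azurecli
# Enable boot diagnostics with the platform-managed storage account
az vm boot-diagnostics enable --resource-group "my-hpc-rg" --name "my-vm"

# Fetch the serial console log to investigate a boot failure
az vm boot-diagnostics get-boot-log --resource-group "my-hpc-rg" --name "my-vm"
```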
### Windows Admin Center
-Azure Boot Diagnostics is a debugging feature for Azure virtual machines (VM) that allows
-diagnosis of VM boot failures by collecting serial log information and screenshots during
-the boot process. It's configurable either through an ARM template or a custom Azure Policy. Learn more [here][04].
+You can now use Windows Admin Center in the Azure portal to manage the Windows operating system inside an Azure VM. You can also manage operating system functions from the Azure portal and work with files in the VM without using Remote Desktop or PowerShell. You can use an ARM template or a custom Azure Policy for configuration. For more information, see [Manage a Windows VM by using Windows Admin Center in Azure][04].
-### Log Analytics Workspace
+### Log Analytics workspace
-Azure Log Analytics is a service that monitors your cloud and on-premises resources and
-applications. It allows you to collect and analyze data generated by resources in your
-cloud and on-premises environments. With Azure Log Analytics, you can search, analyze,
-and visualize data to identify trends, troubleshoot issues, and monitor your systems. On August 31, 2024, both Automation Update Management and the Log Analytics agent it uses
-will be retired. Migrate to Azure Update Manager before that. Refer to guidance on
-migrating to Azure Update Manager [here][05]. We advise you to migrate [now][06] as this feature will
-no longer be supported in Automanage.
+Log Analytics is an Azure Monitor feature that monitors your cloud and on-premises resources and applications. Use it to collect and analyze data generated by resources in your cloud and on-premises environments. With Log Analytics, you can search, analyze, and visualize data to identify trends, troubleshoot issues, and monitor your systems.
+
+On August 31, 2024, both Automation Update Management and the Log Analytics agent that it used were retired. You should have migrated to Azure Update Manager before that date. For guidance on how to migrate to Azure Update Manager, see [Overview of migration from Automation Update Management to Azure Update Manager][05]. We advise you to migrate [now][06] because this feature is no longer supported in Automanage.
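For illustration of the search capability, a Log Analytics query can be run from the Azure CLI as in the following sketch; the workspace GUID and the query itself are placeholders.

```azurecli
# Query a workspace for recent agent heartbeats (workspace GUID is a placeholder)
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Heartbeat | summarize count() by Computer | top 10 by count_" \
  --timespan "P1D"
```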
## Pricing
-As you migrate, it's worthwhile to note that Automanage Best Practices is a cost-free service. As such, you won't receive a bill from the Automanage service.
-However, if you used Automanage to enable paid services like Azure Insights, there may be usage charges incurred that are billed directly by those services.
-Read more about Automanage and pricing [here][09].
+Automanage Best Practices is a free service, so you don't receive a bill from Automanage. If you used Automanage to enable paid services like Azure Monitor Insights, you might incur usage charges. Those services bill you directly.
+
+Read more about Automanage and pricing on the [Azure Automanage pricing webpage][09].
## Next steps
-Now that you have an overview of Azure Policy and some of the key concepts, here are the suggested
-next steps:
+Now that you have an overview of Azure Policy and some of the key concepts, here are the suggested next steps:
-- [Review the policy definition structure][07].-- [Assign a policy definition using the portal][08].
+- [Review the policy definition structure][07]
+- [Assign a policy definition by using the portal][08]
<!-- Reference link definitions --> [01]: ../overview.md
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md
# Configure Azure RBAC for FHIR
-In this article, you'll learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred methods for assigning data plane access when data plane users are managed in the Microsoft Entra tenant associated with your Azure subscription. If you're using an external Microsoft Entra tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
+In this article, you learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR&reg; data plane. Azure RBAC is the preferred method for assigning data plane access when data plane users are managed in the Microsoft Entra tenant associated with your Azure subscription. If you're using an external Microsoft Entra tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
## Confirm Azure RBAC mode
-To use Azure RBAC, your Azure API for FHIR must be configured to use your Azure subscription tenant for data plane and there should be no assigned identity object IDs. You can verify your settings by inspecting the **Authentication** blade of your Azure API for FHIR:
+To use Azure RBAC, your Azure API for FHIR must be configured to use your Azure subscription tenant for the data plane, and there should be no assigned identity object IDs. You can verify your settings by inspecting the **Authentication** page of your Azure API for FHIR:
:::image type="content" source="media/rbac/confirm-azure-rbac-mode.png" alt-text="Confirm Azure RBAC mode":::
-The **Authority** should be set to the Microsoft Entra tenant associated with your subscription and there should be no GUIDs in the box labeled **Allowed object IDs**. You'll also notice that the box is disabled and a label indicates that Azure RBAC should be used to assign data plane roles.
+The **Authority** should be set to the Microsoft Entra tenant associated with your subscription and there should be no GUIDs in the box labeled **Allowed object IDs**. Notice the box is disabled and a label indicates that Azure RBAC should be used to assign data plane roles.
## Assign roles
-To grant users, service principals or groups access to the FHIR data plane, select **Access control (IAM)**, then select **Role assignments** and select **+ Add**:
+To grant users, service principals, or groups access to the FHIR data plane, select **Access control (IAM)**, then select **Role assignments** and select **+ Add**.
:::image type="content" source="media/rbac/add-azure-rbac-role-assignment.png" alt-text="Add Azure role assignment":::
-In the **Role** selection, search for one of the built-in roles for the FHIR data plane:
+In the **Role** selection, search for one of the built-in roles for the FHIR data plane.
:::image type="content" source="media/rbac/built-in-fhir-data-roles.png" alt-text="Built-in FHIR data roles":::
-You can choose between:
+You can choose from the following roles.
-* FHIR Data Reader: Can read (and search) FHIR data.
-* FHIR Data Writer: Can read, write, and soft delete FHIR data.
-* FHIR Data Exporter: Can read and export (`$export` operator) data.
-* FHIR Data Contributor: Can perform all data plane operations.
+* FHIR Data Reader: Can read (and search) FHIR data
+* FHIR Data Writer: Can read, write, and soft delete FHIR data
+* FHIR Data Exporter: Can read and export (`$export` operator) data
+* FHIR Data Contributor: Can perform all data plane operations
In the **Select** box, search for a user, service principal, or group that you wish to assign the role to.
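
If you script assignments instead of using the portal, a sketch like the following works from the Azure CLI; the assignee and scope values are placeholders.

```azurecli
# Grant read access to the FHIR data plane (all identifiers are placeholders)
az role assignment create \
  --assignee "user@contoso.com" \
  --role "FHIR Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/services/<fhir-service-name>"
```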
In the **Select** box, search for a user, service principal, or group that you w
## Caching behavior
-The Azure API for FHIR will cache decisions for up to 5 minutes. If you grant a user access to the FHIR server by adding them to the list of allowed object IDs, or you remove them from the list, you should expect it to take up to five minutes for changes in permissions to propagate.
+The Azure API for FHIR caches decisions for up to 5 minutes. If you grant a user access to the FHIR server by adding them to the list of allowed object IDs, or you remove them from the list, you should expect it to take up to five minutes for changes in permissions to propagate.
## Next steps
In this article, you learned how to assign Azure roles for the FHIR data plane.
>[!div class="nextstepaction"]
>[Configure Private Link](configure-private-link.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-cross-origin-resource-sharing.md
# Configure cross-origin resource sharing in Azure API for FHIR
-Azure API for FHIR supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
+Azure API for FHIR&reg; supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
CORS is often used in a single-page app that must call a RESTful API to a different domain.

## Configure CORS settings
-To configure a CORS setting in the Azure API for FHIR, specify the following settings:
+To configure a CORS setting in the Azure API for FHIR, specify the following settings.
- **Origins (Access-Control-Allow-Origin)**. A list of domains allowed to make cross-origin requests to the Azure API for FHIR. Each domain (origin) must be entered in a separate line. You can enter an asterisk (*) to allow calls from any domain, but we don't recommend it because it's a security risk.
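
The same settings can be scripted. The following sketch assumes the `healthcareapis` CLI extension, placeholder names, and autogenerated CORS flag spellings; verify them with `az healthcareapis service update --help` before relying on them.

```azurecli
az extension add --name healthcareapis

# Allow cross-origin calls from a single origin (all values are placeholders)
az healthcareapis service update \
  --resource-group "my-rg" \
  --resource-name "my-fhir-service" \
  --cors-configuration-origins "https://www.contoso.com" \
  --cors-configuration-headers "*" \
  --cors-configuration-methods "GET" "POST" "PUT" \
  --cors-configuration-max-age 600
```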
In this article, you learned how to configure cross-origin resource sharing in A
>[!div class="nextstepaction"]
>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Last updated 09/27/2023
-# Configure database settings
+# Configure database settings
-Azure API for FHIR uses database to store its data. Performance of the underlying database depends on the number of Request Units (RU) selected during service provisioning or in database settings after the service has been provisioned.
-Azure API for FHIR borrows the concept of [Request Units (RUs) in Azure Cosmos DB](/azure/cosmos-db/request-units)) when setting the performance of underlying database.
+Azure API for FHIR&reg; uses a database to store its data. Performance of the underlying database depends on the number of Request Units (RU) selected during service provisioning or in database settings after the service has been provisioned.
-Throughput must be provisioned to ensure that sufficient system resources are available for your database at all times. How many RUs you need for your application depends on operations you perform. Operations can range from simple read and writes to more complex queries.
+Azure API for FHIR borrows the concept of [Request Units (RUs) in Azure Cosmos DB](/azure/cosmos-db/request-units) when setting the performance of underlying database.
+
+Throughput must be provisioned to ensure that sufficient system resources are always available for your database. How many RUs you need for your application depends on operations you perform. Operations can range from simple read and writes to more complex queries.
> [!NOTE]
-> As different operations consume different number of RU, we return the actual number of RUs consumed in every API call in response header. This way you can profile the number of RUs consumed by your application.
+> As different operations consume a different number of RUs, we return the actual number of RUs consumed in every API call in the response header. This way you can profile the number of RUs consumed by your application.
## Update throughput
To change this setting in the Azure portal, navigate to your Azure API for FHIR
If the database throughput is greater than 10,000 RU/s or if the data stored in the database is more than 50 GB, your client application must be capable of handling continuation tokens. A new partition is created in the database for every throughput increase of 10,000 RU/s or if the amount of data stored is more than 50 GB. Multiple partitions create a multi-page response in which pagination is implemented by using continuation tokens.

> [!NOTE]
-> Higher value means higher Azure API for FHIR throughput and higher cost of the service.
+> A higher RU value means higher Azure API for FHIR throughput and higher cost of the service.
![Configure Azure Cosmos DB](media/database/database-settings.png)
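
To inspect the currently provisioned throughput outside the portal, one option is a read against the ARM API. This sketch uses `az rest` with placeholder identifiers; the `cosmosDbConfiguration` property comes from the Microsoft.HealthcareApis resource schema.

```azurecli
# Read the database settings of an Azure API for FHIR instance (IDs are placeholders)
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/services/<fhir-service-name>?api-version=2021-11-01" \
  --query "properties.cosmosDbConfiguration"
```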
Or you can deploy a fully managed Azure API for FHIR:
>[!div class="nextstepaction"]
>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-export-data.md
# Configure export settings in Azure API for FHIR
-Azure API for FHIR supports the $export command, which allows you to export the data out of an Azure API for FHIR instance to a storage account.
+Azure API for FHIR&reg; supports the `$export` command, which allows you to export the data out of an Azure API for FHIR instance to a storage account.
The steps are:
It's here that you add the role [Storage Blob Data Contributor](../../role-based
:::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png" alt-text="Screenshot showing RBAC assignment page." lightbox="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png":::
-Next, select the storage account in Azure API for FHIR as a default storage account for $export.
+Next, select the storage account in Azure API for FHIR as a default storage account for `$export`.
## Select the storage account for $export
The final step is to assign the Azure storage account to export the data to. Go
:::image type="content" source="media/export-data/fhir-export-storage.png" alt-text="Screenshot showing selection of the storage account for export." lightbox="media/export-data/fhir-export-storage.png":::
+After you complete this final step, you're ready to export the data by using the `$export` command.
+After you complete this final step, youΓÇÖre ready to export the data by using the `$export` command.
> [!Note]
-> Only storage accounts in the same subscription as Azure API for FHIR can be registered as the destination for $export operations.
+> Only storage accounts in the same subscription as Azure API for FHIR can be registered as the destination for `$export` operations.
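
Once configured, kicking off an export looks roughly like the following sketch; the account name is a placeholder, and the headers follow the FHIR bulk-export convention.

```azurecli
# Get a token for the FHIR service audience, then call $export asynchronously
token=$(az account get-access-token \
  --resource "https://<fhir-name>.azurehealthcareapis.com" \
  --query accessToken --output tsv)

curl "https://<fhir-name>.azurehealthcareapis.com/\$export" \
  -H "Authorization: Bearer $token" \
  -H "Accept: application/fhir+json" \
  -H "Prefer: respond-async"
```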
## Next steps

[Additional settings](azure-api-for-fhir-additional-settings.md)
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
ms.devlang: azurecli
# Configure local RBAC for FHIR
-This article explains how to configure the Azure API for FHIR to use a secondary Microsoft Entra tenant for data access. Use this mode only if it isn't possible for you to use the Microsoft Entra tenant associated with your subscription.
+This article explains how to configure the Azure API for FHIR&reg; to use a secondary Microsoft Entra tenant for data access. Use this mode only if it isn't possible for you to use the Microsoft Entra tenant associated with your subscription.
> [!NOTE]
> If your FHIR service is configured to use your primary Microsoft Entra tenant associated with your subscription, [use Azure RBAC to assign data plane roles](configure-azure-rbac.md).

## Add a new service principal or use an existing one
-Local RBAC allows you to use a service principal in the secondary Microsoft Entra tenant with your FHIR server. You can create a new service principal through the Azure portal, PowerShell or CLI commands, or use an existing service principal. The process is also known as [application registration](../register-application.md). You can review and modify the service principals through Microsoft Entra ID from the portal or using scripts.
+Local role-based access control (RBAC) allows you to use a service principal in the secondary Microsoft Entra tenant with your FHIR server. You can create a new service principal through the Azure portal, PowerShell or CLI commands, or use an existing service principal. The process is also known as [application registration](../register-application.md). You can review and modify the service principals through Microsoft Entra ID from the portal or using scripts.
-The PowerShell and CLI scripts below, which are tested and validated in Visual Studio Code, create a new service principal (or client application), and add a client secret. The service principal ID is used for local RBAC and the application ID and client secret will be used to access the FHIR service later.
+The following PowerShell and CLI scripts, which are tested and validated in Visual Studio Code, create a new service principal (or client application), and add a client secret. The service principal ID is used for local RBAC, and the application ID and client secret are used to access the FHIR service later.
You can use the `Az` PowerShell module:
clientsecret=$(az ad app credential reset --id $appid --append --credential-desc
## Configure local RBAC
-You can configure the Azure API for FHIR to use a secondary Microsoft Entra tenant in the **Authentication** blade:
+You can configure the Azure API for FHIR to use a secondary Microsoft Entra tenant in the **Authentication** blade.
![Local RBAC assignments](media/rbac/local-rbac-guids.png)
-In the authority box, enter a valid secondary Microsoft Entra tenant. Once the tenant has been validated, the **Allowed object IDs** box should be activated and you can enter one or a list of Microsoft Entra service principal object IDs. These IDs can be the identity object IDs of:
+In the authority box, enter a valid secondary Microsoft Entra tenant. Once the tenant is validated, the **Allowed object IDs** box should be activated and you can enter one or more Microsoft Entra service principal object IDs. These IDs can be the identity object IDs of:
* A Microsoft Entra user.
* A Microsoft Entra service principal.
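
If you need to look up those object IDs, a CLI sketch like the following can help; the identifiers are placeholders.

```azurecli
# Object ID of a user (user principal name is a placeholder)
az ad user show --id "user@contoso.com" --query id --output tsv

# Object ID of a service principal, found by display name (placeholder)
az ad sp list --display-name "my-fhir-client" --query "[].id" --output tsv
```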
The local RBAC setting is only visible from the authentication blade; it isn't v
## Caching behavior
-The Azure API for FHIR will cache decisions for up to 5 minutes. If you grant a user access to the FHIR server by adding them to the list of allowed object IDs, or you remove them from the list, you should expect it to take up to five minutes for changes in permissions to propagate.
+The Azure API for FHIR caches decisions for up to 5 minutes. If you grant a user access to the FHIR server by adding them to the list of allowed object IDs, or you remove them from the list, you should expect it to take up to five minutes for changes in permissions to propagate.
## Next steps
-In this article, you learned how to assign FHIR data plane access using an external (secondary) Microsoft Entra tenant. Next learn about additional settings for the Azure API for FHIR:
+In this article, you learned how to assign FHIR data plane access using an external (secondary) Microsoft Entra tenant. Next, learn about additional settings for the Azure API for FHIR.
>[!div class="nextstepaction"]
>[Configure CORS](configure-cross-origin-resource-sharing.md)
In this article, you learned how to assign FHIR data plane access using an exter
>[!div class="nextstepaction"]
>[Configure Private Link](configure-private-link.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
# Configure private link
-Private link enables you to access Azure API for FHIR over a private endpoint, which is a network interface that connects you privately and securely using a private IP address from your virtual network. With private link, you can access our services securely from your VNet as a first party service without having to go through a public Domain Name System (DNS). This article describes how to create, test, and manage your private endpoint for Azure API for FHIR.
+Private link enables you to access Azure API for FHIR&reg; over a private endpoint, which is a network interface that connects you privately and securely using a private IP address from your virtual network. With private link, you can access our services securely from your virtual network as a first party service without having to go through a public Domain Name System (DNS). This article describes how to create, test, and manage your private endpoint for Azure API for FHIR.
>[!Note]
>Neither Private Link nor Azure API for FHIR can be moved from one resource group or subscription to another once Private Link is enabled. To make a move, delete the Private Link first, then move Azure API for FHIR. Create a new Private Link once the move is complete. Assess potential security ramifications before deleting Private Link.
Private link enables you to access Azure API for FHIR over a private endpoint, w
## Prerequisites
-Before creating a private endpoint, there are some Azure resources that you'll need to create first:
+Before creating a private endpoint, you need to create some Azure resources first.
-- Resource Group – The Azure resource group that will contain the virtual network and private endpoint.
+- Resource Group – The Azure resource group that contains the virtual network and private endpoint.
- Azure API for FHIR – The FHIR resource you would like to put behind a private endpoint.
-- Virtual Network – The VNet to which your client services and Private Endpoint will be connected.
+- Virtual Network (VNet) – The VNet to which your client services and Private Endpoint will be connected.
For more information, see [Private Link Documentation](../../private-link/index.yml).

## Create private endpoint
-To create a private endpoint, a developer with Role-based access control (RBAC) permissions on the FHIR resource can use the Azure portal, [Azure PowerShell](../../private-link/create-private-endpoint-powershell.md), or [Azure CLI](../../private-link/create-private-endpoint-cli.md). This article will guide you through the steps on using Azure portal. Using the Azure portal is recommended as it automates the creation and configuration of the Private DNS Zone. For more information, see [Private Link Quick Start Guides](../../private-link/create-private-endpoint-portal.md).
+To create a private endpoint, a developer with role-based access control (RBAC) permissions on the FHIR resource can use the Azure portal, [Azure PowerShell](../../private-link/create-private-endpoint-powershell.md), or [Azure CLI](../../private-link/create-private-endpoint-cli.md). This article guides you through the steps for using the Azure portal. The Azure portal is recommended because it automates the creation and configuration of the Private DNS Zone. For more information, see [Private Link Quick Start Guides](../../private-link/create-private-endpoint-portal.md).
There are two ways to create a private endpoint. Auto Approval flow allows a user that has RBAC permissions on the FHIR resource to create a private endpoint without a need for approval. Manual Approval flow allows a user without permissions on the FHIR resource to request a private endpoint to be approved by owners of the FHIR resource.
After the deployment is complete, you can go back to "Private endpoint connectio
## VNet Peering
-With Private Link configured, you can access the FHIR server in the same VNet or a different VNet that is peered to the VNet for the FHIR server. Follow the steps below to configure VNet peering and Private Link DNS zone configuration.
+With Private Link configured, you can access the FHIR server in the same VNet or a different VNet that is peered to the VNet for the FHIR server. Use the following steps to configure VNet peering and Private Link DNS zone configuration.
### Configure VNet Peering
-You can configure VNet peering from the portal or using PowerShell, CLI scripts, and Azure Resource Manager (ARM) template. The second VNet can be in the same or different subscriptions, and in the same or different regions. Make sure that you grant the **Network contributor** role. For more information on VNet Peering, see [Create a virtual network peering](../../virtual-network/create-peering-different-subscriptions.md).
+You can configure VNet peering from the portal or using PowerShell, CLI scripts, and an Azure Resource Manager (ARM) template. The second VNet can be in the same or different subscriptions, and in the same or different regions. Make sure that you grant the **Network contributor** role. For more information on VNet Peering, see [Create a virtual network peering](../../virtual-network/create-peering-different-subscriptions.md).
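
As a scripted alternative to the portal, a peering sketch with placeholder names and IDs looks like the following; remember that peering must be created in both directions.

```azurecli
# Peer the FHIR server VNet with a second VNet (all names and IDs are placeholders)
az network vnet peering create --resource-group "rg-fhir" --vnet-name "vnet-fhir" \
  --name "fhir-to-app" \
  --remote-vnet "/subscriptions/<subscription-id>/resourceGroups/rg-app/providers/Microsoft.Network/virtualNetworks/vnet-app" \
  --allow-vnet-access

az network vnet peering create --resource-group "rg-app" --vnet-name "vnet-app" \
  --name "app-to-fhir" \
  --remote-vnet "/subscriptions/<subscription-id>/resourceGroups/rg-fhir/providers/Microsoft.Network/virtualNetworks/vnet-fhir" \
  --allow-vnet-access
```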
### Add VNet link to the private link zone

In the Azure portal, select the resource group of the FHIR server. Select and open the Private DNS zone, **privatelink.azurehealthcareapis.com**. Select **Virtual network links** under the *settings* section. Select the **Add** button to add your second VNet to the private DNS zone. Enter the link name of your choice, select the subscription and the VNet you created. Optionally, you can enter the resource ID for the second VNet. Select **Enable auto registration**, which automatically adds a DNS record for your VM connected to the second VNet. When you delete a VNet link, the DNS record for the VM is also deleted.
-For more information on how private link DNS zone resolves the private endpoint IP address to the fully qualified domain name (FQDN) of the resource such as the FHIR server, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md).
+For more information on how a private link DNS zone resolves the private endpoint IP address to the fully qualified domain name (FQDN) of the resource, such as the FHIR server, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md).
:::image type="content" source="media/private-link/private-link-add-vnet-link.png" alt-text="Add VNet link." lightbox="media/private-link/private-link-add-vnet-link.png":::
-You can add more VNet links if needed, and view all VNet links you've added from the portal.
+You can add more VNet links if needed, and view all VNet links you added from the portal.
:::image type="content" source="media/private-link/private-link-vnet-links.png" alt-text="Private Link VNet links." lightbox="media/private-link/private-link-vnet-links.png":::
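
The same link can be scripted; this sketch uses placeholder names for the resource group and the second VNet.

```azurecli
# Link a second VNet to the private DNS zone and auto-register VM records
az network private-dns link vnet create \
  --resource-group "rg-fhir" \
  --zone-name "privatelink.azurehealthcareapis.com" \
  --name "second-vnet-link" \
  --virtual-network "/subscriptions/<subscription-id>/resourceGroups/rg-app/providers/Microsoft.Network/virtualNetworks/vnet-app" \
  --registration-enabled true
```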
Private endpoints and the associated network interface controller (NIC) are visi
### Delete
-Private endpoints can only be deleted from the Azure portal from the **Overview** blade or by selecting the **Remove** option under the **Networking Private endpoint connections** tab. Selecting **Remove** will delete the private endpoint and the associated NIC. If you delete all private endpoints to the FHIR resource and the public network, access is disabled and no request will make it to your FHIR server.
+Private endpoints can only be deleted from the Azure portal from the **Overview** blade or by selecting the **Remove** option under the **Networking Private endpoint connections** tab. Selecting **Remove** deletes the private endpoint and the associated NIC. If you delete all private endpoints to the FHIR resource and the public network, access is disabled and no request will make it to your FHIR server.
![Delete Private Endpoint](media/private-link/private-link-delete.png)
To ensure your private endpoint can send traffic to your server:
### Use nslookup
-You can use the **nslookup** tool to verify connectivity. If the private link is configured properly, you should see the FHIR server URL resolves to the valid private IP address, as shown below. Note that the IP address **168.63.129.16** is a virtual public IP address used in Azure. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md)
+You can use the **nslookup** tool to verify connectivity. If the private link is configured properly, you should see the FHIR server URL resolves to the valid private IP address, as follows. Note that the IP address **168.63.129.16** is a virtual public IP address used in Azure. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
```
C:\Users\testuser>nslookup fhirserverxxx.azurehealthcareapis.com
Address: 172.21.0.4
Aliases: fhirserverxxx.azurehealthcareapis.com
```
-If the private link isn't configured properly, you may see the public IP address instead and a few aliases including the Traffic Manager endpoint. This indicates that the private link DNS zone canΓÇÖt resolve to the valid private IP address of the FHIR server. When VNet peering is configured, one possible reason is that the second peered VNet hasn't been added to the private link DNS zone. As a result, you'll see the HTTP error 403, "Access to xxx was denied", when trying to access the /metadata endpoint of the FHIR server.
+If the private link isn't configured properly, you may instead see the public IP address and a few aliases including the Traffic Manager endpoint. This indicates that the private link DNS zone can't resolve to the valid private IP address of the FHIR server. When VNet peering is configured, one possible reason is that the second peered VNet hasn't been added to the private link DNS zone. As a result, you see the HTTP error 403, "Access to xxx was denied", when trying to access the /metadata endpoint of the FHIR server.
```
C:\Users\testuser>nslookup fhirserverxxx.azurehealthcareapis.com
For more information, see [Troubleshoot Azure Private Link connectivity problems
## Next steps
-In this article, you've learned how to configure the private link and VNet peering. You also learned how to troubleshoot the private link and VNet configurations.
+In this article, you learned how to configure the private link and VNet peering. You also learned how to troubleshoot the private link and VNet configurations.
-Based on your private link setup and for more information about registering your applications, see
+Based on your private link setup, and for more information about registering your applications, refer to the following.
* [Register a resource application](register-resource-azure-ad-client-app.md)
* [Register a confidential client application](register-confidential-azure-ad-client-app.md)
* [Register a public client application](register-public-azure-ad-client-app.md)
* [Register a service application](register-service-azure-ad-client-app.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
-
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Refer to the table for details about resolution dates or possible workarounds.
|Issue | Date discovered | Workaround | Date resolved |
| :- | :- | :- | :- |
-|Changes in private link configuration at the workspace level don't propagate to the child services.|September 4,2024 9:00 am PST| To fix this issue a service reprovisioning is required. To reprovision the service, reach out to FHIR service team|--|
-|Customers accessing the FHIR Service via a private endpoint are experiencing difficulties, specifically receiving a 403 error when making API calls from within the vNet. This problem affects FHIR instances provisioned after August 19th that utilize private link.|August 22,2024 11:00 am PST|-- | September 3.2024 9:00 am PST|
-|FHIR Applications were down in EUS2 region|January 8, 2024 2 pm PST|--|January 8, 2024 4:15 pm PST|
-|API queries to FHIR service returned Internal Server error in UK south region |August 10, 2023 9:53 am PST|--|August 10, 2023 10:43 am PST|
+|For FHIR instances created after August 15, 2024, diagnostic logs aren't available in the Log Analytics workspace. |September 19, 2024 9:00 am PST| -- | -- |
+|For FHIR instances created after August 15, 2024, changes in private link configuration at the workspace level cause the FHIR service to be stuck in an 'Updating' state. |September 24, 2024 9:00 am PST| To fix this issue, create a support ticket with the FHIR service team.| -- |
+|Changes in private link configuration at the workspace level don't propagate to the child services.|September 4, 2024 9:00 am PST| To fix this issue, a service reprovisioning is required. To reprovision the service, reach out to the FHIR service team.| September 17, 2024 9:00 am PST|
+|Customers accessing the FHIR service via a private endpoint are experiencing difficulties, specifically receiving a 403 error when making API calls from within the VNet. This problem affects FHIR instances provisioned after August 19 that utilize private link.|August 22, 2024 11:00 am PST|-- | September 3, 2024 9:00 am PST|
+|FHIR applications were down in the EUS2 region.|January 8, 2024 2 pm PST|--|January 8, 2024 4:15 pm PST|
+|API queries to the FHIR service returned an internal server error in the UK South region. |August 10, 2023 9:53 am PST|--|August 10, 2023 10:43 am PST|
## Related content
high-performance-computing Lift And Shift Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-overview.md
+
+ Title: "End-to-end high-performance computing (HPC) lift and shift architecture overview"
+description: Learn about how to conduct a lift and shift migration of HPC infrastructure and workloads from an on-premises environment to the cloud.
+Last updated: 08/30/2024
+# End-to-end HPC lift and shift architecture overview
+
+"Lift and shift" in the context of High-Performance Computing (HPC) mostly refers to the process of migrating an on-premises environment and workload to the cloud. Ideally, modifications are kept to a minimum (for example, applications, job schedulers, and their configurations should remain mostly the same). Adjustments on storage and hardware are natural to happen because resources are different from on-premises to cloud platforms. With the lift and shift approach, organizations can start benefiting from the cloud more quickly.
+
+The following figure represents a typical on-premises HPC cluster in a production environment, which the hardware manufacturer often delivers. Such an on-premises environment comprises a set of compute nodes, which may or may not work with virtual machine images and containers. These nodes execute workloads managed by a job scheduler, typically Slurm, PBS, or LSF. The workloads come from multiple users that have identity management associated with them. Usually there are home directories, scratch disks, and long-term storage. Some form of monitoring to check the performance of jobs and the health of compute nodes is also available. Users can access the environment via command line, browsers, or some kind of remote visualization technology. The entire environment is hosted in a private network, so users have some mechanism to access the computing facility, either via VPN or via portal.
+
+As we see throughout this document, an environment in the cloud that follows the infrastructure-as-a-service model isn't, conceptually speaking, so different. Some technologies need updates, and some steps are necessary during the migration from on-premises to the cloud.
+
+This document therefore:
+
+- Goes through the options for the migration process
+- Provides pointers to products and best practices for each component
+- Provides recommendations to avoid pitfalls in the process
+
+Before jumping into the architecture description, it's relevant to understand
+the different personas in this context, their needs, and expectations.
+
+## Personas and user experience
+
+There are different people who need to access the HPC environment. Their activities and how they interact with the environment vary quite a bit.
+
+### End-user (engineer / scientist / researcher)
+
+This persona represents the subject matter expert (for example, a biologist, physicist, or engineer) who wants to run experiments (that is, submit jobs) and analyze results. End-users interact with system administrators to fine-tune the computing environment whenever needed. They may have some experience using CLI-based tools, but some of them may rely only on web portals or graphical user interfaces via VDI to submit their jobs and interact with the generated results.
+
+**New responsibilities in cloud HPC environment:**
+
+- End-users shouldn't have any new responsibilities, given the work done by both the HPC administrator and the cloud administrator. Depending on the on-premises environment, end-users may have access to a larger capacity and variety of computing resources, helping them become more productive.
+
+### HPC administrator
+
+This persona represents the one who has HPC expertise and is responsible for deploying the initial computing infrastructure and adapting it according to business and end-user needs. This persona is also responsible for verifying the health of the system and performing troubleshooting. HPC administrators are comfortable accessing the architecture and its components via CLI, SDKs, and web portals. They're also the first point of contact when end-users face any challenge with the computing environment.
+
+**New responsibilities in cloud HPC environment:**
+
+- Managing cloud resources and services (for example, virtual machines, storage, networking) via cloud management platforms.
+- Implementing and managing clusters and resources via new resource orchestration tools (for example, CycleCloud).
+- Optimizing application deployment by understanding infrastructure details (that is, VM types, storage, and network options).
+- Optimizing resource utilization and costs by using cloud-specific features such as autoscaling and spot instances.
+
+### Cloud administrator
+
+This persona works with the HPC administrator to help deploy and maintain the computing infrastructure. This persona isn't (necessarily) an HPC expert, but a Cloud expert with deep knowledge of the overall company IT infrastructure, including network configurations/policies, user access rights, and user devices. Depending on the case, the HPC administrator and Cloud administrator may be the same person.
+
+**New responsibilities in cloud HPC environment:**
+
+- Collaborating with HPC administrators to ensure seamless integration of HPC workloads with cloud infrastructure.
+- Monitoring and managing cloud infrastructure performance, security, and compliance.
+- Helping with the configuration of cloud-based networking and storage solutions to support HPC workloads.
+
+### Business manager / owner
+
+This persona represents the one who is responsible for the business, which includes taking care of budget and projects to meet organizational goals. For this persona, the accounting component of the architecture is relevant to understand costs for each project. This persona works with HPC admins and end-users to understand platform needs, including storage, network, computing resources. They also plan for future workloads.
+
+**New responsibilities in cloud HPC environment:**
+
+- Analyzing detailed cost reports and usage metrics provided by cloud service providers to manage budgets and forecast expenses.
+- Making strategic decisions based on cloud resource usage and cost optimization opportunities.
+- Planning and approving cloud infrastructure investments to support future HPC workloads and business objectives.
+
+## Lift and shift architecture overview
+
+A production HPC environment in the cloud comprises several components. Some core components stand up the environment, such as a job scheduler, a resource provider, an entry point for the user to access the environment, and compute and storage devices. As the environment gets into production, monitoring, observability, health checks, security, identity management, accounting, and different storage options, among other components, start to play a critical role.
+
+There are also extensions that could be in place, such as sign-in nodes, data movers, use of containers, and license managers, among others, depending on the installation.
+
+This production-level environment may have various components to be set up. Therefore, environment deployers and managers become key to automate its initial deployment and upgrade it along the way, respectively. More advanced installations can also have environment templates (or specifications) with software versions and configurations that are more optimal and tested properly. Once the environment is in production with all the required components in place, over time, adjustments may be required to meet user demands, including changes in VM types or storage options/capabilities.
+
+## Instantiating the lift and shift HPC cloud architecture
+
+Here we provide more details for each architecture component, including pointers to official Azure products, tech blogs with some best practices, git repositories, and links to non-product solutions.
+
+**Quick start.** For a quick start solution to create an HPC environment in the cloud with basic building blocks, we recommend using [Azure CycleCloud Slurm workspace](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/introducing-azure-cyclecloud-slurm-workspace-preview/ba-p/4158433).
high-performance-computing Lift And Shift Production Level Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-production-level-overview.md
+
+ Title: "Production-level environment migration guide overview"
+description: Learn about what a production-level environment migration entails.
+Last updated: 08/30/2024
+# Production-level environment migration guide overview
+
+When you move an HPC infrastructure from the on-premises environment to the cloud, there are various aspects to take into account. This document provides guidance on how to create such an HPC environment in the cloud. We recommend
+a two-phase approach: first a proof-of-concept, and then a production-level environment. Once the production environment is up and running, only certain components should be modified over time, including changes to VM types and storage capabilities to best meet the varying requirements of users, projects, and the business.
+
+In this article and the following articles, we guide you through a production-level environment migration.
+
+## Prerequisites
+
+You need an Azure subscription to provision cloud resources.
+
+## Migrating from on-premises to the cloud: production level
+
+After the proof-of-concept phase, planning is required to get ready for creating a production-level HPC environment. This new environment can represent part of the on-premises infrastructure (for example, an HPC cluster from a group of clusters or queue/partition from an existing cluster), or the entire computing capability.
+
+Due to component dependencies, the deployment of this HPC cloud environment is based on a sequence of deployments, which consists of:
+
+1. Basic infrastructure, which includes creation of a resource group, network access and
+ network security rules;
+1. Base services, which include identity management, job scheduler, and resource
+   provisioner, along with their respective configurations;
+1. Storage;
+1. Compute nodes' specifications;
+1. End user entry point.
+
+In the following articles, we cover each deployment step and the components involved. In the descriptions of the components, we highlight their relevant dependencies in more detail. It's also worth noting that the component deployment steps can be executed in several ways. We provide a few tips to help get started with the deployment components via the Azure portal. But at a production level, we recommend the creation of an environment deployer that uses infrastructure as code (for example, via Bicep, Terraform, or the Azure CLI). By doing so, one can create an environment in an automated and replicable fashion, as the sketch after this paragraph illustrates.
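
As a minimal sketch of what such a deployer could script for the first step in the sequence above, using the Azure CLI (all names and address ranges are hypothetical):

```azurecli
# Resource group and virtual network for the HPC environment
az group create --name "hpc-prod-rg" --location "eastus"

az network vnet create --resource-group "hpc-prod-rg" --name "hpc-vnet" \
  --address-prefix "10.10.0.0/16" --subnet-name "compute" --subnet-prefix "10.10.1.0/24"

# Network security group to hold the network security rules
az network nsg create --resource-group "hpc-prod-rg" --name "hpc-nsg"

# Attach the NSG to the compute subnet so the rules apply to cluster nodes
az network vnet subnet update --resource-group "hpc-prod-rg" --vnet-name "hpc-vnet" \
  --name "compute" --network-security-group "hpc-nsg"
```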
+
+For each step, certain topics need to be assessed before starting the migration process.
high-performance-computing Lift And Shift Proof Of Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-proof-of-concept.md
+
+ Title: "Proof-of-concept migration overview"
+description: Learn about what a proof-of-concept migration entails and follow the guide through one.
+Last updated: 08/30/2024
+# Proof-of-concept migration overview
+
+When you move an HPC infrastructure from the on-premises environment to the cloud, there are various aspects to take into account. This document provides guidance on how to create such an HPC environment in the cloud. We recommend
+a two-phase approach: first a proof-of-concept, and then a production-level environment. Once the production environment is up and running, only certain components should be modified over time, including changes to VM types and storage capabilities to best meet the varying requirements of users, projects, and the business.
+
+In this article, we guide you through a proof-of-concept migration.
+
+## Prerequisites
+
+You need an Azure subscription to provision cloud resources.
+
+## Migrating from on-premises to the cloud: proof-of-concept (PoC)
+
+We recommend starting with a proof-of-concept (PoC) by provisioning a simple cluster in Azure, using Azure CycleCloud as a resource orchestrator, with one well-known scheduler, such as Slurm, PBS, or LSF. This approach allows one to start understanding Azure technology, assess the functionality of user applications, and investigate performance/cost trade-offs in comparison to the on-premises environment.
+
+If one is flexible with the job scheduler, or already uses the Slurm scheduler, we recommend using Azure CycleCloud Slurm workspace, an offering that helps create a CycleCloud-based cluster with the Slurm scheduler and a basic setup of the available networking and storage options. Some details on this process are available in the Resource Orchestrator section of this document.
high-performance-computing Lift And Shift Step 1 Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-1-networking.md
+
+ Title: "Deployment step 1: basic infrastructure - network access component"
+description: Learn about the configuration of network access during migration deployment step one.
+Last updated: 08/30/2024
+# Deployment step 1: basic infrastructure - network access component
+
+This component provides the mechanism that allows users to access the cloud environment in a secure way. It's a common practice in production environments to have resources with private IP addresses, along with rules that define how resources should be accessed.
+
+This component should:
+
+- Allow users to access the private network hosting the high-performance computing (HPC) environment.
+- Refine network security rules such as source and target ports and IP addresses that can access resources.
+
+## Define network needs
+
+* **Estimate cluster size for proper network setup:**
+ - Different subnets have different ranges of IP addresses.
+
+* **Security rules:**
+   - Understand how users access the HPC environment and the security rules to be in place (for example, ports and IPs open/closed).
+
+## Tools and Services
+
+* **Private network access:**
+   - In Azure, the two major components that help you access a private network are Azure Bastion and Azure VPN Gateway.
+
+* **Network rules:**
+   - Another key component for network setup is Azure network security groups, which are used to filter network traffic between Azure resources in an Azure virtual network (see the example rule after this list).
+
+* **DNS:**
+   - Azure DNS Private Resolver allows you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM-based DNS servers.
+
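For instance, a single NSG rule restricting SSH access to a corporate address range might look like the following sketch (all names and address values are hypothetical):

```azurecli
# Allow SSH to the HPC subnet only from a corporate IP range
az network nsg rule create --resource-group "hpc-rg" --nsg-name "hpc-nsg" \
  --name "allow-ssh-corp" --priority 1000 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes "203.0.113.0/24" --destination-port-ranges 22
```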
+## Best practices for network in HPC lift and shift architecture
+
+* **Have a good understanding of cluster sizes and the services to be used:**
+ - Different cluster sizes require different IP ranges, and proper planning helps avoid major changes in parts of the infrastructure. Also, some services may need exclusive subnets, and having clarity on those subnets is essential.
+
+## Example steps for setup and deployment
+
+Networking is a vast topic in itself. In a production-level environment, it's good practice not to use public IP addresses. You could start testing this functionality by provisioning a VM and using Bastion.
+
+For instance:
+
+1. **Provision a VM via the portal with no public IP address:**
+   - Follow the standard steps to provision a VM (that is, set up the resource group, network, VM image, disk, and so on)
+   - During VM creation, a virtual network needs to be created if one isn't already available
+   - Make sure the VM doesn't have a public IP address
+
+2. **Use Bastion:**
+   - Once the VM is provisioned, go to the VM via the Azure portal.
+   - Select the option "Bastion" from the "Connect" section.
+   - Select the option "Deploy Bastion".
+   - Once Bastion is provisioned, the VM can be accessed through it.
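+
+Step 1 can also be scripted. A minimal Azure CLI sketch, assuming hypothetical resource names; passing an empty string to `--public-ip-address` skips the public IP creation:
+
+```bash
+# Create a VM with no public IP address (all names are placeholders).
+az vm create \
+  --resource-group my-hpc-rg \
+  --name poc-vm \
+  --image Ubuntu2204 \
+  --public-ip-address "" \
+  --admin-username azureuser \
+  --generate-ssh-keys
+```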
+
+## Resources
+
+- VPN Gateway documentation: [product website](/azure/vpn-gateway/)
+- Azure Bastion documentation: [product website](/azure/bastion/)
+- Network Security groups: [product website](/azure/virtual-network/network-security-groups-overview)
+- Azure DNS Private Resolver: [product website](/azure/dns/dns-private-resolver-overview)
high-performance-computing Lift And Shift Step 1 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-1-overview.md
+
+ Title: "Deployment step 1: basic infrastructure - overview"
+description: Learn about production-level environment migration deployment step one.
++ Last updated : 08/30/2024+++++
+# Deployment step 1: basic infrastructure - overview
+
+The critical foundational components required to establish a landing zone in the cloud for an HPC environment are outlined here. The focus is on setting up resource groups, networking, and basic storage, which serve as the backbone of a successful HPC lift-and-shift deployment.
+
+This section provides a clear understanding of the purpose and requirements of these components, along with available tools and best practices tailored to HPC workloads. A quick start guide is also included to help users efficiently deploy and manage these core components, with the expectation that more advanced automation will be implemented as the HPC environment evolves.
+
+## Resource group
+
+Resource groups in Azure serve as containers that hold related resources for an Azure solution. In an HPC environment, organizing resources into appropriate resource groups is essential for effective management, access control, and cost tracking.
+
+## Networking
+
+When provisioning resources in the cloud, it's important to have an understanding of the virtual networks, subnets, and security rules, among other networking-related configurations (for example, DNS). It's important to make sure public IP addresses are avoided, and that technologies such as Azure Bastion and VPNs are used.
+
+## Storage
+
+In any Azure subscription, setting up basic storage is essential for managing data, applications, and resources effectively. While more advanced and HPC-specific storage configurations are addressed separately, a solid foundation of basic storage is crucial for general resource management and initial deployment needs.
+
+For details, check the descriptions of the following components:
+
+- [Resource group](lift-and-shift-step-1-resource-group.md)
+- [Network access](lift-and-shift-step-1-networking.md)
+- [Basic Storage](lift-and-shift-step-1-storage.md)
+
+Here we describe each component. Each section includes:
+
+- An overview description of what the component is
+- What the requirements for the component are (that is, what do we need from the component)
+- Tools and services available
+- Best practices for the component in the context of HPC lift & shift
+- An example of a quick start setup
+
+The goal of the quick start is to give a sense of how to start using the component. As the HPC cloud deployment matures, one is expected to automate the usage of the component by using, for instance, Infrastructure as Code tools such as Terraform or Bicep.
high-performance-computing Lift And Shift Step 1 Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-1-resource-group.md
+
+ Title: "Deployment step 1: basic infrastructure - resource group component"
+description: Learn about the configuration of resource groups during migration deployment step one.
++ Last updated : 08/30/2024+++++
+# Deployment step 1: basic infrastructure - resource group component
+
+Resource groups in Azure serve as containers that hold related resources for an Azure solution. In an HPC environment, organizing resources into appropriate resource groups is essential for effective management, access control, and cost tracking.
+
+## Define resource group needs
+
+* **Project-based grouping:**
+ - Organize resources by project or workload to simplify management and cost tracking.
+
+* **Environment-based grouping:**
+ - Separate resources into different groups based on environments (for example, development, testing, production) to apply different policies and controls.
+
+### This component should
+
+* **Organize resources:**
+ - Group related HPC resources (for example, VMs, storage accounts, and network components) into resource groups based on project, department, or environment (for example, development, testing, production).
+
+* **Simplify management:**
+ - Use resource groups to apply access controls, manage resource lifecycles, and monitor costs efficiently.
+
+## Best practices for resource groups in HPC lift and shift architecture
+
+* **Consistency in naming conventions:**
+ - Establish and follow consistent naming conventions for resource groups to facilitate easy identification and management.
+
+* **Resource group policies:**
+ - Apply Azure Policy to resource groups to enforce organizational standards and compliance requirements.
+
+## Example steps for resource group setup
+
+1. **Create a resource group:**
+
+ - Navigate to the Azure portal.
+ - Select "Resource groups" and select "Create."
+ - Provide a name for the resource group and select a subscription and region.
+ - Select "Review + create" and then "Create."
+
+2. **Add resources to the resource group:**
+
+ - When creating resources (for example, VMs, storage accounts), assign them to the appropriate resource group.
+ - Use tags to further organize resources within the group for better cost management and reporting.
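+
+A minimal Azure CLI sketch of the same setup, assuming hypothetical names and tag values:
+
+```bash
+# Create a resource group and tag it for cost tracking (placeholder values).
+az group create \
+  --name hpc-project-dev-rg \
+  --location eastus \
+  --tags project=hpc-poc environment=dev department=research
+```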
+
+## Resources
+
+- Resource Groups Documentation: [product website](/azure/azure-resource-manager/management/manage-resource-groups-portal)
+- Azure Policy Documentation: [product website](/azure/governance/policy/overview)
+- Azure Tags Documentation: [product website](/azure/azure-resource-manager/management/tag-resources)
high-performance-computing Lift And Shift Step 1 Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-1-storage.md
+
+ Title: "Deployment step 1: basic infrastructure - storage component"
+description: Learn about the configuration of basic storage during migration deployment step one.
++ Last updated : 08/30/2024+++++
+# Deployment step 1: basic infrastructure - storage component
+
+In any Azure subscription, setting up basic storage is essential for managing data, applications, and resources effectively. While more advanced and HPC-specific storage configurations are addressed separately, a solid foundation of basic storage is crucial for general resource management and initial deployment needs.
+
+## Define basic storage needs
+
+* **General-purpose storage:**
+ - Create storage accounts to handle non-HPC-specific data such as logs, diagnostic data, and backups.
+
+* **Security and access control:**
+ - Implement access controls using Azure Active Directory (AD) and role-based access control (RBAC) to manage permissions for different users and services.
+ - Enable encryption for data at rest to ensure compliance with organizational security policies.
+
+> [!NOTE]
+> For information about HPC specific storage needs, visit the [Storage Overview](lift-and-shift-step-3-overview.md) page.
+
+### This component should
+
+* **Establish foundational storage resources:**
+ - Set up basic storage accounts that can be used for general-purpose data storage, such as logs, backups, and configuration files.
+
+* **Ensure accessibility and security:**
+ - Configure access policies and encryption to ensure that the storage resources are secure and accessible to authorized users and services only.
+
+## Tools and services
+
+* **Azure storage accounts:**
+ - Use Azure Storage Accounts to create scalable, secure, and durable storage for general-purpose data.
+ - Configure storage account types based on the type of data being stored (for example, Standard, Premium, Blob, File).
+
+* **Access control and security:**
+ - Implement RBAC to manage who has access to storage accounts and what they can do with the data.
+ - Enable Azure Storage encryption by default to protect data at rest.
+
+## Best practices for basic storage in Azure Landing Zone
+
+* **Consistency in naming conventions:**
+ - Establish and adhere to consistent naming conventions for storage accounts to simplify management and ensure clarity.
+
+* **Resource tagging:**
+ - Use tags to organize storage accounts by department, project, or purpose to aid in cost management and reporting.
+
+* **Data redundancy and availability:**
+ - Choose the appropriate redundancy option (for example, LRS, GRS) based on the criticality of the data to ensure high availability and durability.
+
+* **Cost management:**
+ - Monitor and analyze storage costs regularly using Microsoft Cost Management tools to optimize and control expenses.
+
+## Example steps for basic storage setup
+
+1. **Create a storage account:**
+
+ - Navigate to the Azure portal.
+ - Select "Storage accounts" and select "Create."
+ - Provide a name for the storage account, select a subscription, resource group, and region.
+ - Choose the performance tier (Standard or Premium) and redundancy option (LRS, GRS, etc.).
+ - Select "Review + create" and then "Create."
+
+2. **Configure access control:**
+
+ - Once the storage account is created, navigate to the "Access control (IAM)" section.
+ - Assign roles to users or groups to manage permissions (for example, Storage Blob Data Contributor, Storage Account Contributor).
+
+3. **Enable data encryption:**
+
+ - By default, Azure Storage accounts have encryption enabled. Verify this setting under "Settings" -> "Encryption" to ensure that data at rest is encrypted.
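+
+Steps 1 and 2 can also be scripted. A minimal Azure CLI sketch, assuming hypothetical account, group, and assignee names:
+
+```bash
+# Create a general-purpose v2 storage account (placeholder names).
+az storage account create \
+  --name hpcpocstorage001 \
+  --resource-group hpc-project-dev-rg \
+  --location eastus \
+  --sku Standard_LRS \
+  --kind StorageV2
+
+# Grant a placeholder user access to blob data in the account.
+az role assignment create \
+  --assignee user@example.com \
+  --role "Storage Blob Data Contributor" \
+  --scope $(az storage account show --name hpcpocstorage001 --resource-group hpc-project-dev-rg --query id --output tsv)
+```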
+
+## Resources
+
+- Azure Storage Accounts Documentation: [product website](/azure/storage/common/storage-account-overview)
+- Azure Storage Security Guide: [product website](/azure/storage/common/storage-security-guide)
+- Microsoft Cost Management: [product website](/azure/cost-management-billing/costs/)
high-performance-computing Lift And Shift Step 2 Accounting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-2-accounting.md
+
+ Title: "Deployment step 2: base services - accounting component"
+description: Learn about the configuration of accounting during migration deployment step two.
++ Last updated : 08/30/2024+++++
+# Deployment step 2: base services - accounting component
+
+Accounting in HPC environments involves tracking and managing resource usage to ensure efficient utilization, cost management, and compliance. Slurm Accounting is a powerful tool that helps administrators monitor and report on job and resource usage, providing insights into workload performance and user activity.
+
+## Define accounting needs
+
+* **Resource usage tracking:**
+ - To ensure efficient utilization, monitor compute node usage, job execution times, and resource allocation.
+ - Track user and group activities to understand workload patterns and resource demands.
+
+* **Cost management:**
+ - Implement policies to manage and optimize costs by tracking resource consumption.
+ - Use accounting data to allocate costs to different departments, projects, or users based on resource usage.
+
+* **Compliance and reporting:**
+ - Generate detailed reports on resource usage for compliance with organizational policies and external regulations.
+ - Maintain historical records of job execution and resource consumption for auditing and analysis.
+
+## Tools and services
+
+**Slurm accounting:**
+ - Use Slurm Accounting to track and manage job and resource usage in HPC environments.
+ - To collect and store accounting data, configure Slurm Accounting with the necessary settings.
+ - Generate reports and analyze accounting data to optimize resource utilization and cost management.
+
+## Best practices
+
+* **Accurate data collection:**
+ - Ensure that Slurm Accounting is properly configured to collect comprehensive data on job and resource usage.
+ - To maintain reliable records, regularly verify the accuracy and completeness of the accounting data.
+
+* **Effective cost management:**
+ - Use accounting data to identify cost-saving opportunities, such as optimizing job scheduling and resource allocation.
+ - Implement chargeback policies to allocate costs to departments or projects based on actual resource usage.
+
+* **Compliance and auditing:**
+ - Generate regular reports to comply with organizational policies and external regulations.
+ - To ensure accountability and transparency, maintain historical records and perform periodic audits.
+
+* **Data analysis and reporting:**
+ - Use accounting data to analyze workload performance and identify trends in resource usage.
+ - Generate custom reports to provide insights to stakeholders and support decision-making.
+
+## Example Slurm accounting commands
+
+**Query job accounting data:**
+
+```bash
+#!/bin/bash
+
+# Query job accounting data for a specific user and time period
+sacct -S 2023-01-01 -E 2023-01-31 -u john_doe -o JobID,JobName,Account,User,State,Elapsed,TotalCPU
+```
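+
+To aggregate what `sacct` records into chargeback or utilization reports, Slurm's `sreport` utility can be used. A minimal sketch, assuming the accounting database is already configured:
+
+```bash
+#!/bin/bash
+
+# Report cluster utilization per account and user, in hours, for January 2023
+sreport cluster AccountUtilizationByUser start=2023-01-01 end=2023-01-31 -t hours
+```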
+
+## Resources
+
+- Setting up Slurm Job Accounting with Azure CycleCloud and Azure Database for MySQL Flexible Server: [blog post](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/setting-up-slurm-job-accounting-with-azure-cyclecloud-and-azure/ba-p/4083685)
+- Slurm accounting: [external](https://slurm.schedmd.com/accounting.html)
high-performance-computing Lift And Shift Step 2 Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-2-identity.md
+
+ Title: "Deployment step 2: base services - identity management component"
+description: Learn about the configuration of identity management during migration deployment step two.
++ Last updated : 08/30/2024+++++
+# Deployment step 2: base services - identity management component
+
+Component to handle user identity and access levels. The identity management system should:
+
+- Allow creation/deletion of users and groups;
+- Allow update/reset of password;
+- Support single sign-on.
+
+## Define identity management needs
+
+**User IDs, passwords, home directories:**
+  - Users require IDs, passwords, and the location of their home directories for all the resources they need to access.
+  - Understand where these definitions should be located, which can be on-premises, in the cloud, or a combination of those options.
+
+## Tools and services
+
+* **Active Directory Domain Services:**
+ - Some enterprises already use Windows-based Active Directory Domain Services, which could be applied in the new HPC cloud environment.
+
+* **Microsoft Entra ID**:
+ - Some services can also make use of Microsoft Entra ID, a cloud-based identity and access management service, especially when single sign-on is required.
+
+## Best practices
+
+* **Resources to be accessed:**
+ - Have clarity on all resources users need to have access to, including cluster nodes and storage devices.
+
+* **Performance:**
+  - You can start addressing performance by using the on-premises identity and access service. It's important to verify the performance of such solutions and the trade-offs between authentication speed and the complexity of the service.
+
+## Example identity management setup
+
+The following Azure HPC blog post describes in detail the steps to authenticate users in an Azure CycleCloud HPC cluster via Active Directory:
+
+Authenticating Active Directory users to an Azure CycleCloud HPC cluster: [blog post](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/authenticating-active-directory-users-to-an-azure-cyclecloud-hpc/ba-p/3757085)
+
+## Resources
+
+- Active Directory Domain Services
+- Microsoft Entra ID: [product website](/entra/identity/)
+- Authenticating Active Directory users to an Azure CycleCloud HPC cluster: [blog post](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/authenticating-active-directory-users-to-an-azure-cyclecloud-hpc/ba-p/3757085)
high-performance-computing Lift And Shift Step 2 Job Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-2-job-scheduler.md
+
+ Title: "Deployment step 2: base services - job scheduler component"
+description: Learn about the configuration of the job scheduler during migration deployment step two.
++ Last updated : 08/30/2024+++++
+# Deployment step 2: base services - job scheduler component
+
+Job schedulers are responsible for scheduling user jobs, that is, determining where and when jobs should be executed. In the context of the cloud, job schedulers interact with resource orchestrators to acquire/release resources on-demand, which is different from an on-premises environment where resources are fixed and fully available all the time. The most common HPC job schedulers are Slurm, OpenPBS, PBSPro, and LSF.
+
+## Define job scheduler needs
+
+* **Scheduler deployment:**
+ - Migrate existing job scheduler configurations to the cloud environment.
+ - Utilize the same scheduler available within CycleCloud or migrate to a different scheduler if necessary.
+
+* **Configuration management:**
+ - Configure job schedulers to define partitions/queues, Azure SKUs, compute node hostnames, and other parameters.
+ - Automatically update scheduler configurations on-the-fly based on changes in resource availability and job requirements.
+
+* **Job submission and management:**
+ - Allow end-users to submit jobs for execution according to scheduling and resource access policy rules.
+ - Monitor and manage job queues, resource utilization, and job statuses.
+
+## Tools and services
+
+**Job scheduler via CycleCloud:**
+ - Use CycleCloud to deploy and manage HPC job schedulers in the cloud.
+ - Configure job schedulers like Slurm, OpenPBS, PBSPro, and LSF within the CycleCloud environment.
+ - Manage job submissions, queues, and resource allocations through the CycleCloud portal or CLI.
+
+## Best practices for job schedulers in HPC lift and shift architecture
+
+* **Efficient scheduler deployment:**
+ - Plan and test the migration of existing job scheduler configurations to the cloud environment to ensure compatibility, performance, and user experience.
+ - Use CycleCloud's built-in support for schedulers like Slurm, OpenPBS, PBSPro, and LSF for a smoother deployment process.
+
+* **Optimized configuration management:**
+ - To align with changing resource availability and job requirements, regularly update scheduler configurations (for example, scheduler queues/partitions).
+ - Automate configuration changes using scripts and tools to minimize manual intervention and reduce the risk of errors.
+
+* **Robust job submission and management:**
+ - Implement a user-friendly interface for job submission and management to facilitate end-user interaction with the scheduler.
+ - To identify and address potential issues promptly, continuously monitor job queues, resource utilization, and job statuses.
+
+* **Scalability and performance:**
+ - Configure dynamic scaling policies to automatically adjust the number of compute nodes based on job demand, optimizing resource utilization and cost.
+ - Use performance metrics and monitoring tools to continuously assess and improve the performance of the job scheduler and the overall HPC environment.
+
+These best practices help ensure a smooth transition to cloud-based job scheduling, maintaining efficiency, scalability, and performance for HPC workloads.
+
+## Example steps for setup and deployment
+
+This section provides an overview of deploying and configuring a job scheduler using Azure CycleCloud. It includes steps for selecting and deploying the scheduler, configuring its settings, and migrating an existing on-premises scheduler to the cloud environment.
+
+1. **Using CycleCloud to deploy a job scheduler:**
+
+ - **Deploy job scheduler:**
+ - Navigate to the Azure CycleCloud portal and select the desired job scheduler from the available options (for example, Slurm, PBSPro).
+ - Follow the prompts to deploy the job scheduler, specifying the required parameters such as resource group, location, and virtual network.
+ - Example command for deploying a Slurm scheduler:
+
+ ```bash
+ cyclecloud create_cluster -n slurm-cluster -c slurm
+ ```
+
+ - **Configure job scheduler:**
+ - Once the job scheduler is deployed, configure the scheduler settings through the CycleCloud portal.
+ - Define partitions/queues, Azure SKUs, compute node hostnames, and other parameters.
+
+2. **Migrating existing job scheduler settings to CycleCloud:**
+
+ **Slurm**
+ - **Export existing configuration:**
+ - Export the configuration of the existing on-premises job scheduler.
+ - Example command for exporting Slurm configuration:
+
+ ```bash
+ scontrol show config > slurm_config.txt
+ ```
+
+ - **Evaluate and adjust Slurm configuration:**
+
+ - Open the exported configuration file and evaluate each setting to determine which ones are necessary and relevant for the cloud environment.
+
+ - Common settings to consider include:
+
+ - ControlMachine
+ - SlurmdPort
+ - StateSaveLocation
+ - SlurmdSpoolDir
+ - SlurmctldPort
+ - ProctrackType
+ - AuthType
+ - SchedulerType
+ - SelectType
+ - SelectTypeParameters
+ - AccountingStorageType
+ - JobCompType
+
+ - **Prepare the CycleCloud Slurm template:**
+
+ - Open the CycleCloud Slurm template configuration file. You can access this file through the CycleCloud UI under the cluster's configuration settings.
+ - Locate the section where Slurm configurations are specified.
+
+ - **Add adjusted Slurm settings to CycleCloud Slurm configuration:**
+
+ - For each relevant setting from your on-premises configuration, add it to the Slurm configuration textbox within the CycleCloud Slurm template. Adjust the values as needed to reflect the cloud environment specifics.
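+
+As an illustration, here's a hypothetical fragment of adjusted settings for the template. The values are placeholders rather than a tested configuration; settings such as `ControlMachine` and node definitions are typically managed by CycleCloud itself, so not every on-premises setting should be copied over:
+
+```
+ControlMachine=slurm-scheduler
+StateSaveLocation=/var/spool/slurmctld
+SlurmdSpoolDir=/var/spool/slurmd
+ProctrackType=proctrack/cgroup
+AuthType=auth/munge
+SchedulerType=sched/backfill
+SelectType=select/cons_tres
+SelectTypeParameters=CR_Core_Memory
+AccountingStorageType=accounting_storage/slurmdbd
+JobCompType=jobcomp/none
+```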
+
+## Example job scheduler submission
+
+**Submit Slurm interactive job using srun:**
+
+```bash
+#!/bin/bash
+
+# Submit a job using srun
+srun --partition=debug --ntasks=1 --time=00:10:00 --job-name=test_job --output=output.txt my_application
+
+```
+
+**Submit Slurm batch script using sbatch:**
+
+```bash
+#!/bin/bash
+
+# Create a Slurm batch script
+echo "#!/bin/bash
+#SBATCH --partition=debug
+#SBATCH --ntasks=1
+#SBATCH --time=00:10:00
+#SBATCH --job-name=test_job
+#SBATCH --output=output.txt
+
+# Run the application
+my_application" > job_script.sh
+
+# Submit the batch job
+sbatch job_script.sh
+
+```
+
+## Resources
+
+- Azure CycleCloud Scheduling and Autoscaling: [product website](/azure/cyclecloud/concepts/scheduling?view=cyclecloud-8&preserve-view=true)
+- IBM Spectrum LSF: [external](https://www.ibm.com/docs/en/spectrum-lsf/10.1.0)
+- OpenPBS: [external](https://www.openpbs.org/)
+- PBSPro: [external](https://altair.com/pbs-professional)
+- Slurm: [external](https://slurm.schedmd.com/)
high-performance-computing Lift And Shift Step 2 Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-2-monitor.md
+
+ Title: "Deployment step 2: base services - monitoring component"
+description: Learn about the configuration of monitoring during migration deployment step two.
++ Last updated : 08/30/2024+++++
+# Deployment step 2: base services - monitoring component
+
+Monitoring is a crucial aspect of managing an HPC environment in the cloud, ensuring optimal performance, reliability, and security. Effective monitoring allows administrators to gain real-time insights into system performance, detect and address issues promptly, and make informed decisions to optimize resource utilization. Key metrics such as CPU and memory usage, job execution times, and network throughput provide valuable information about the health and efficiency of the infrastructure.
+
+By using tools like Azure Monitor, Azure Managed Grafana, and Azure Managed Prometheus, administrators can visualize these metrics, set up alerts for critical events, and analyze logs for troubleshooting. Implementing robust monitoring practices helps maintain high availability, enhances user satisfaction, and ensures that the cloud environment meets the dynamic needs of HPC workloads.
+
+When migrating HPC workloads to Azure, it's important to replicate and enhance the monitoring capabilities you had on-premises. This process includes tracking the same metrics and possibly adding new ones that are relevant to the cloud environment. Using Azure-specific monitoring tools can provide deeper insights into cloud resources, which are crucial for managing and optimizing cloud infrastructure effectively. For example, in a cloud environment, a valuable new metric to track is cost, which isn't typically monitored in on-premises setups.
+
+## Define monitoring key metric needs
+
+* **Common HPC metrics:**
+ - **Infrastructure metrics:** CPU, memory usage, disk I/O, network throughput.
+ - **Application metrics:** Job queue lengths, job failure rates, execution times.
+ - **User metrics:** Active users, job submission rates.
+
+* **Cloud-specific HPC metrics:**
+ - **Cost metrics:** Cost Per Resource, Monthly Cost, Budget Alerts.
+ - **Scalability metrics:** Autoscaling Events, resource utilization.
+ - **Provisioning metrics:** Provisioning time, provisioning success rate.
+
+## Tools and services
+
+* **Azure Monitor:**
+ - Configure Azure Monitor to collect metrics and logs from all resources.
+ - Set up alerts for critical thresholds (for example, CPU usage > 80%).
+ - Use Log Analytics to query and analyze logs.
+
+* **Azure Managed Grafana:**
+ - Integrate Grafana with Azure Monitor for dashboard visualizations.
+ - Create custom dashboards for different personas (for example, HPC administrators, business managers).
+
+* **Azure Managed Prometheus:**
+ - Deploy and manage Prometheus instances in Azure.
+ - Configure Prometheus to scrape metrics from your HPC nodes and applications.
+ - Integrate Prometheus with Grafana for advanced dashboards.
+
+* **Azure Moneo:**
+ - Configure Moneo to collect metrics across multi-GPU systems.
+  - Information regarding Moneo can be found on [GitHub](https://github.com/Azure/Moneo).
+
+## Best practices
+
+Implementing best practices for monitoring ensures that your HPC environment remains efficient, secure, and resilient. Here are some key best practices to follow:
+
+* **Regularly review and update monitoring configurations:**
+  - To ensure your monitoring configurations remain aligned with your infrastructure and business needs, schedule periodic reviews.
+ - Update thresholds and alert settings based on historical data and changing performance requirements.
+
+* **Implement comprehensive logging:**
+ - To aggregate and analyze log data, use centralized logging solutions like Azure Log Analytics.
+ - Regularly review log data to identify patterns and potential issues before they escalate.
+
+* **Set up redundancy and failover mechanisms:**
+ - Implement redundancy for critical monitoring components to ensure continuous availability.
+ - Set up failover mechanisms to automatically switch to backup systems if there's a primary system failure.
+
+* **Automate responses to common issues:**
+ - To create automated responses for common issues, use automation tools like Azure Automation and Logic Apps.
+ - Develop runbooks and workflows that can automatically remediate known problems, such as restarting services or scaling resources.
+
+* **Monitor security metrics:**
+ - Include security-related metrics in your monitoring setup, such as unauthorized access attempts, configuration changes, and compliance status.
+ - Set up alerts for critical security events to ensure prompt response and mitigation.
+
+* **Set up health checks:**
+ - Implement automated health checks using scripts or Azure Automation.
+ - Monitor health checks and trigger automated responses or alerts for issues.
+ - Set up alerts in Azure Monitor to notify you when autoscaling events occur.
+
+## Example steps for setup and deployment
+
+This section provides a comprehensive guide for setting up Azure Monitor and configuring Grafana dashboards to effectively monitor your HPC environment. It includes detailed steps for creating an Azure Monitor workspace, linking it to resources, configuring data collection, deploying Azure Managed Grafana, and setting up alerts and automated health checks.
+
+### Setting up Azure Monitor
+
+1. **Navigate to Azure Monitor:**
+
+ - Go to the [Azure portal](https://portal.azure.com).
+ - In the left-hand navigation pane, select **Monitor**.
+
+2. **Create an Azure Monitor workspace:**
+
+   - Select "Workspaces" under the "Monitoring" section.
+ - Select "Create" to set up a new Azure Monitor workspace.
+ - Provide a name, select a subscription, resource group, and location.
+ - Select "Review + create" and then "Create" to deploy the workspace.
+
+3. **Link Azure Monitor workspace to resources:**
+
+ - Go to the resource you want to monitor (for example, a Virtual Machine).
+ - Under the "Monitoring" section, select "Diagnostics settings."
+ - Select "Add diagnostic setting" and configure the logs and metrics to be sent to the Azure Monitor workspace.
+
+4. **Configure Data Collection:**
+
+ - In the Azure Monitor section, select "Data Collection Rules" to set up and manage the rules for collecting logs and metrics from various Azure resources.
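+
+Steps 3 and 4 can also be scripted. A minimal Azure CLI sketch, assuming placeholder resource and Log Analytics workspace IDs:
+
+```bash
+# Send platform metrics from a resource to a Log Analytics workspace.
+# Both IDs are hypothetical placeholders.
+az monitor diagnostic-settings create \
+  --name hpc-vm-diagnostics \
+  --resource "<vm-resource-id>" \
+  --workspace "<log-analytics-workspace-id>" \
+  --metrics '[{"category": "AllMetrics", "enabled": true}]'
+```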
+
+> [!NOTE]
+> For detailed information about Azure Monitor, visit the [Azure Monitor Metrics Overview](/azure/azure-monitor/overview) page.
+
+### Configuring Grafana dashboards
+
+1. **Deploy Azure Managed Grafana:**
+
+ - Navigate to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=azure-managed-grafana).
+ - Search for "Azure Managed Grafana" and choose **Create**.
+ - Fill in the required details such as subscription, resource group, and instance details.
+ - Select **Review + create** and then **Create** to deploy the Grafana instance.
+
+2. **Connect Grafana to Azure Monitor:**
+
+ - Once Grafana is deployed, access it through the Azure portal or directly via its public endpoint.
+ - In Grafana, go to "Configuration" -> "Data Sources" -> "Add data source."
+ - Select "Azure Monitor" from the list of available data sources.
+ - Provide the necessary details such as subscription ID, tenant ID, and client ID, and authenticate using Azure credentials.
+
+3. **Create custom dashboards:**
+
+ - After the data source is added, go to "Dashboards" -> "Manage" -> "New Dashboard."
+ - Use the panel editor to add visualizations (for example, graphs, charts) based on the metrics collected by Azure Monitor.
+ - Customize the dashboard to display key metrics such as CPU usage, memory usage, disk I/O, network throughput, job queue lengths, and job execution times.
+ - Save the dashboard and share it with relevant stakeholders.
+
+> [!NOTE]
+> For detailed information about Azure Managed Grafana, visit the [Azure Managed Grafana](/azure/managed-grafana/overview) page.
+
+### Configuring Prometheus
+
+**Deploy Azure Managed Prometheus:**
+ - Navigate to the Azure Marketplace
+ - Search for "Azure Managed Prometheus" and select "Create."
+ - Fill in the required details:
+ - Provide necessary information such as subscription, resource group, and instance details.
+ - Review the settings and select "Create" to deploy the instance.
+
+> [!NOTE]
+> For detailed information about Azure Managed Prometheus, visit the [Azure Monitor managed service for Prometheus](/azure/azure-monitor/essentials/prometheus-metrics-overview) page.
+
+### Integrate Prometheus with Grafana
+
+1. **Add Prometheus as a data source in Grafana:**
+ - In Grafana, go to "Configuration" -> "Data Sources" -> "Add data source."
+ - Select "Prometheus" and provide the Prometheus endpoint URL.
+2. **Create Custom Dashboards:**
+ - Go to "Dashboards" -> "Manage" -> "New Dashboard."
+ - Add visualizations based on metrics collected by Prometheus.
+ - Customize and save the dashboard for key metrics display.
+
+### Creating alerts
+
+1. Navigate to Azure Monitor and select **Alerts**.
+2. Select **New alert rule** to create a new alert.
+3. Define the scope by selecting the resource you want to monitor (for example, a VM, storage account, or network interface).
+4. Set conditions based on metrics or log data. For example, you might set an alert for CPU usage exceeding 80%, disk space usage above 90%, or a VM being unresponsive.
+
+ - **Defining Action Groups:**
+ - Specify actions to take when an alert is triggered, such as sending an email, triggering an Azure Function, or executing a webhook.
+ - Create action groups to manage and organize these responses efficiently.
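+
+A minimal Azure CLI sketch of such an alert rule, assuming a placeholder VM resource ID and an existing action group:
+
+```bash
+# Alert when average CPU exceeds 80% (IDs and names are placeholders).
+az monitor metrics alert create \
+  --name high-cpu-alert \
+  --resource-group my-hpc-rg \
+  --scopes "<vm-resource-id>" \
+  --condition "avg Percentage CPU > 80" \
+  --action "<action-group-id>" \
+  --description "CPU usage exceeded 80%"
+```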
+
+### Further steps for enhanced monitoring
+
+1. **Set up alerts:**
+ - In Azure Monitor, go to "Alerts" -> "New alert rule."
+ - Define the scope by selecting the resource you want to monitor.
+ - Set conditions for the alert (for example, CPU usage > 80%).
+ - Configure actions such as sending email notifications or triggering an Azure Function.
+
+2. **Implement automated health checks:**
+ - Use Azure Automation to create and schedule runbooks that perform health checks on your HPC environment.
+ - Ensure these runbooks check the status of critical services, resource availability, and system performance.
+ - Set up alerts to notify administrators if any health checks fail or indicate issues.
+
+3. **Regularly review and update monitoring configurations:**
+ - Periodically review the metrics and alerts configured in Azure Monitor and Grafana.
+ - Adjust thresholds, add new metrics, or modify visualizations based on changes in the HPC environment or business requirements.
+ - Train staff on interpreting monitoring dashboards, responding to alerts, and using monitoring tools effectively.
+
+### Example implementation
+
+Automated health check script:
+
+```bash
+#!/bin/bash
+
+# Check if a VM is running
+vm_status=$(az vm get-instance-view --name <vm-name> --resource-group <resource-group> --query "instanceView.statuses[1].code" --output tsv)
+
+if [[ "$vm_status" != "PowerState/running" ]]; then
+  # Restart VM if not running
+  az vm start --name <vm-name> --resource-group <resource-group>
+fi
+```
+
+## Resources
+
+- Azure Managed Grafana documentation: [product](/azure/managed-grafana/)
+- AzureHPC Node Health Check: [git](https://github.com/Azure/azurehpc-health-checks)
+- Slurm Job Accounting with Azure CycleCloud: [blog](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/setting-up-slurm-job-accounting-with-azure-cyclecloud-and-azure/ba-p/4083685)
+- Azure CycleCloud log files: [product](/azure/cyclecloud/log_locations?view=cyclecloud-8&preserve-view=true)
+- Monitoring CycleCloud Clusters: [product](/azure/cyclecloud/how-to/monitor-clusters?view=cyclecloud-8&preserve-view=true)
+- Azure Moneo: [git](https://github.com/Azure/Moneo)
high-performance-computing Lift And Shift Step 2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-2-overview.md
+
+ Title: "Deployment step 2: base services - overview"
+description: Learn about production-level environment migration deployment step two.
++ Last updated : 08/30/2024+++++
+# Deployment step 2: base services - overview
+
+One of the key components users interact with in an on-premises environment is the job scheduler (for example, Slurm, PBS, and LSF). During a lift-and-shift process, users should retain the same level of interaction with these schedulers. However, the difference is that resources are no longer static; they're provisioned on-demand.
+
+This section covers the core components related to the job scheduler, including the resource orchestrator for provisioning and setting up resources, identity management for user authentication, monitoring (including node health checks), and accounting to better understand the status and usage of resources. Each component plays a crucial role in ensuring the performance, scalability, and security of the HPC environment. By utilizing familiar on-premises technologies like Active Directory and established application runtimes, organizations can transition to the cloud more smoothly while maintaining continuity. A comprehensive overview of tools, best practices, and quick-start setups is provided, with the goal of progressively automating these services as the cloud environment evolves.
+
+## User identity
+
+Using technologies such as Active Directory Services and LDAP, user accounts and properties in use on-premises could be reused in the cloud environment. We recommend you apply the existing on-premises user identity technologies as much as possible.
+
+## Monitoring
+
+Monitoring is a vast area, as not only jobs need to be monitored, but the entire infrastructure. Our major recommendation for this service is to consider not only the existing metrics from on-premises environments, but also the new cloud-specific ones, which relate to costs and to the state of the infrastructure. In the cloud, resources are provisioned and deprovisioned depending on usage demand, which is different from an on-premises environment. For instance, it may be interesting to create alerts for cost-related thresholds, which could be per user, department, or project.
+
+## Node health checks
+
+Related to monitoring, node health checks verify whether the provisioned cluster nodes pass all health-related tests. We recommend using the node health checks Azure offers for HPC instances, adding new tests if necessary.
+
+## Autoscaling rules
+
+Autoscaling is a key differentiator compared to an on-premises environment. Autoscaling rules determine when nodes join or leave a cluster. Always having all expected nodes on may bring efficiency for starting jobs, as the nodes are readily available. However, when idle, they may become a considerable waste of money. Our recommendation is to keep nodes off when not in use. If the business demands quicker start times, a buffer with some nodes on may be interesting, but this option has to be properly defined to assess the trade-off between quick job start times and costs.
+
+## Applications and runtimes
+
+Here, we recommend using the existing on-premises technology as much as possible as well. Technologies such as Spack, EasyBuild, EESSI, or even a repository of compiled applications can be reused. However, it's worth noting that the hardware in the cloud may be different from what is available in the on-premises environment. Therefore, recompilation and adjustment of scripts may be necessary and can bring performance benefits.
+
+For details, check the descriptions of the following components:
+
+- [Job scheduler](lift-and-shift-step-2-job-scheduler.md)
+- [Resource orchestrator](lift-and-shift-step-2-resource-orchestrator.md)
+- [Identity management](lift-and-shift-step-2-identity.md)
+- [Accounting](lift-and-shift-step-2-accounting.md)
+- [Monitoring](lift-and-shift-step-2-monitor.md)
+
+Here we describe each component. Each section includes:
+
+- An overview description of what the component is
+- What the requirements for the component are (that is, what do we need from the component)
+- Tools and services available
+- Best practices for the component in the context of HPC lift & shift
+- An example of a quick start setup
+
+The goal of the quick start is to give a sense of how to start using the component. As the HPC cloud deployment matures, one is expected to automate the usage of the component by using, for instance, Infrastructure as Code tools such as Terraform or Bicep.
high-performance-computing Lift And Shift Step 2 Resource Orchestrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-2-resource-orchestrator.md
+
+ Title: "Deployment step 2: base services - resource orchestrator component"
+description: Learn about the configuration of the resource orchestrator during migration deployment step two.
++ Last updated : 08/30/2024+++++
+# Deployment step 2: base services - resource orchestrator component
+
+Typically, resources in an on-premises environment are fully available for usage. When you migrate to the cloud, resources need to be provisioned (that is, set up and configured). This requirement is a core difference between on-premises and cloud environments. A resource orchestrator provisions the compute nodes and other components (for example, storage and network), **on demand**, to allow the execution of user jobs. In the context of a lift and shift architecture, this component would:
+
+- Provision resources and install the software for job execution based on end-user job submission requests to the job scheduler.
+- Verify all resources are healthy for job execution.
+
+When working with lift and shift scenarios, Azure CycleCloud can be used to provision a traditional HPC job scheduler in a cloud environment. Azure CycleCloud offers several features that make the transition from the on-premises to the cloud environment smoother.
+
+## Define resource needs
+
+* **Compute nodes:**
+ - Provision high-performance compute nodes based on job requirements. Configure node types, sizes, and scaling policies to optimize performance and cost.
+
+* **Job scheduler:**
+ - Integrate with HPC job schedulers like Slurm, PBS Pro, or LSF. Manage job submissions, monitor job status, and optimize job execution.
+
+* **Login nodes:**
+  - Provide access for users to submit and manage jobs. Configure login nodes to handle user authentication and secure shell (SSH) access to the HPC environment.
+
+* **Storage:**
+ - Set up storage solutions for job data, results, and logs. Use Azure Managed Lustre, Azure NetApp Files, or Azure Blob Storage based on performance and capacity requirements.
+
+* **Network:**
+ - Configure network settings for secure and high-performance communication between compute nodes, storage, and other resources. Use Azure Virtual Networks and Network Security Groups (NSG) to manage network traffic.
+
+## Tools and services
+
+* **Azure CycleCloud:**
+ - Use Azure CycleCloud for managing and optimizing HPC environments in the cloud.
+ - Deploy and configure HPC clusters through the Azure CycleCloud portal.
+ - Set up and manage compute nodes, job schedulers, and storage resources for efficient HPC workloads.
+
+* **Dynamic scaling:**
+ - Automatically scale compute resources up or down based on job demand.
+ - Configure scaling policies to specify the minimum and maximum number of nodes.
+ - Set scaling triggers and cooldown periods.
+
+* **Template-based deployments:**
+ - Use predefined templates to deploy various HPC cluster configurations quickly.
+ - Define compute node types, network configurations, storage options, and installed software in templates.
+ - Customize templates to meet specific requirements, such as including specialized software or configuring specific network settings.
+
+* **Support for multiple schedulers:**
+ - Integrate CycleCloud with popular HPC job schedulers like Slurm, PBS Pro, and LSF.
+  - Use CycleCloud's built-in scheduler support or configure custom integrations based on the existing on-premises setup.
+
+* **Unified job management:**
+ - Manage jobs across hybrid environments from a single interface.
+ - Submit, monitor, and control jobs running both on-premises and in the cloud.
+ - Use job arrays, dependencies, and other advanced scheduling features to optimize job execution and resource utilization.
+
+## Best practices
+
+* **Plan and test:**
+ - Carefully plan your cluster configurations, including node types, storage options, and network settings.
+ - Perform test deployments and workloads to ensure everything is set up correctly before scaling up.
+
+* **Automate configuration:**
+ - Utilize CycleCloud templates and automation scripts for consistent and repeatable cluster deployments.
+ - Automate updates to cluster configurations to respond quickly to changing requirements or new software versions.
+
+* **Monitor and optimize:**
+ - Continuously monitor resource utilization and job performance through the CycleCloud portal.
+ - To improve performance and reduce costs, optimize cluster configurations based on monitoring data.
+
+* **Secure access:**
+  - Implement robust access controls using Azure Active Directory and SSH keys for login nodes.
+ - Ensure that only authorized users can access compute and storage resources.
+
+* **Documentation and training:**
+ - Maintain detailed documentation of cluster configurations, deployment processes, and operational procedures.
+ - Provide training for HPC administrators and users to ensure effective and efficient use of CycleCloud-managed resources.
+
+## Example steps for setup and deployment
+
+This section outlines the steps for installing and configuring Azure CycleCloud, specifically using the CycleCloud Slurm Workspace. It includes instructions for setting up the environment, configuring basic settings, and deploying an HPC cluster with a predefined template.
+
+1. **Install and configure Azure CycleCloud:**
+
+ - **Install CycleCloud Slurm workspace:**
+
+ - Navigate to the Azure Marketplace and search for "Azure CycleCloud Slurm Workspace."
+ - Follow the prompts to deploy the CycleCloud Slurm Workspace, specifying the required parameters such as resource group, location, and virtual network.
+ - After deployment, configure the environment through the CycleCloud portal.
+ - Ensure the Slurm scheduler is set up and ready for job submissions.
+
+ > [!NOTE]
+ > For detailed information about Azure CycleCloud Slurm Workspace, visit this [blog post](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/introducing-azure-cyclecloud-slurm-workspace-preview/ba-p/4158433).
+
+ - **Configure the environment:**
+ - Use the CycleCloud CLI or web portal to configure the basic settings, such as cloud provider credentials, default regions, and network configurations.
+     - Storage accounts and other necessary resources for CycleCloud to use for cluster deployments were already deployed by the preceding CycleCloud Slurm Workspace marketplace solution.
+
+2. **Create and deploy an HPC cluster:**
+
+ - **Define Cluster Template:**
+ - Create a cluster template that specifies the compute node types, job scheduler, software packages, and other configuration details.
+
+ > [!NOTE]
+ > An existing Slurm template will have already been created by the Slurm Workspace deployment setup.
+
+ - **Deploy the Cluster:**
+ - Use the CycleCloud CLI or web portal to deploy the cluster based on the defined template. Monitor the deployment process to ensure all resources are provisioned and configured correctly.
+ - Example command to deploy a cluster:
+
+ ```bash
+ cyclecloud create_cluster -f hpc-cluster-template.txt
+ ```
+
+## Resources
+
+- Azure CycleCloud Documentation: [product website](/azure/cyclecloud/?view=cyclecloud-8&preserve-view=true)
+- Azure CycleCloud Overview: [product website](/azure/cyclecloud/overview?view=cyclecloud-8&preserve-view=true)
+- Azure CycleCloud Slurm Workspace: [blog post](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/introducing-azure-cyclecloud-slurm-workspace-preview/ba-p/4158433)
high-performance-computing Lift And Shift Step 3 Data Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-3-data-migration.md
+
+ Title: "Deployment step 3: storage - data migration component"
+description: Learn about data migration during migration deployment step three.
++ Last updated : 08/30/2024+++++
+# Deployment step 3: storage - data migration component
+
+Process to allow users to migrate data from an on-premises environment to the cloud environment in a secure and reliable way. Moving data closer to the cloud environment's computing nodes is essential to meet the needs of throughput and IOPS.
+
+This component should:
+
+- Retain all existing file and directory structure from source to target.
+- Retain all metadata related to the files, including user and group ownership, permissions, modification time, and access time.
+- Report on the results of the data migration or copy tool.
+- Implement a data migration restart process.
+
+## Define data migration needs
+
+* **Data integrity:**
+ - Ensure that all files and directories retain their original structure and metadata during the migration process.
+
+* **Security:**
+ - Maintain data security throughout the migration process by using encrypted transfer methods and secure access controls.
+
+* **Performance:**
+ - Optimize the data migration process to handle large volumes of data efficiently, minimizing downtime and disruption.
+
+## Tools and services
+
+* **Azure Data Box:**
+ - Use Azure Data Box for large-scale offline data transfers.
+ - Deploy the Data Box appliance to transfer large amounts of data to Azure quickly and safely.
+ - Set up and manage data transfers through the Azure portal.
+
+* **AzCopy:**
+ - Use AzCopy for command-line data transfer.
+ - Perform high-performance, reliable data transfer between on-premises storage and Azure Blob Storage, Azure Files, and Azure Table Storage.
+ - Support both synchronous and asynchronous transfer modes.
+
+* **Rsync:**
+ - Use rsync for efficient and secure data transfer between on-premises storage and Azure storage.
+ - Retain file and directory structure and file metadata during the transfer.
+ - Utilize rsync options to ensure data integrity and transfer efficiency.
+
+## Best practices for data migration
+
+* **Plan and test:**
+ - Thoroughly plan your data migration strategy, including the selection of tools (AzCopy, rsync) and target storage (Blob Storage, Azure NetApp Files, Azure Managed Lustre).
+ - Perform test migrations with a subset of data to validate the process and ensure that the tools and configurations work as expected.
+
+* **Maintain data integrity:**
+ - Use options in AzCopy and rsync that preserve file metadata (permissions, timestamps, ownership).
+  - Verify the integrity of the migrated data by comparing checksums or using built-in verification tools (see the sketch after this list).
+
+* **Optimize performance:**
+  - Compress data during transfer (using rsync's `-z` option) to reduce bandwidth usage.
+ - Use parallel transfers in AzCopy to increase throughput and reduce migration time.
+
+* **Secure data transfers:**
+ - Encrypt data during transfer to protect it from unauthorized access. Use secure transfer options in AzCopy and rsync.
+ - Ensure that access controls and permissions are correctly set up on both the source and target environments.
+
+* **Monitor and report:**
+ - Continuously monitor the data migration process to detect any issues early.
+ - Generate and review detailed reports from AzCopy and rsync to ensure that all data migrated successfully and to identify any errors or discrepancies.
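+
+As a minimal integrity-check sketch, assuming placeholder paths and names: AzCopy can store MD5 hashes at upload time, and a checksum-based rsync dry run reports any remaining differences:
+
+```bash
+# Upload with MD5 hashes stored in blob properties (placeholder paths).
+azcopy copy '<local_path>' 'https://<storage_account>.blob.core.windows.net/<container>' --recursive --put-md5
+
+# Re-run rsync with checksums in dry-run mode; empty output means no differences.
+rsync -avzc --dry-run /path/to/local/data/ user@remote:/path/to/azure/data/
+```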
+
+## Example steps for data migration
+
+This section outlines the steps for using Azure Data Box, AzCopy, and rsync to transfer data from on-premises storage to Azure. It includes detailed instructions for deploying and configuring Azure Data Box, installing and using AzCopy for data transfer, and setting up and using rsync to ensure secure and efficient data migration.
+
+1. **Using Azure Data Box:**
+
+ - **Deploy Azure Data Box:**
+ - Navigate to the Azure portal and order an Azure Data Box.
+ - Follow the instructions to set up the Data Box appliance at your on-premises location.
+ - Copy the data to the Data Box and ship it back to Azure.
+
+ - **Configure data transfer:**
+ - Once the Data Box arrives at the Azure data center, the data is uploaded to your specified storage account.
+ - Verify the data transfer status and integrity through the Azure portal.
+
+2. **Using AzCopy:**
+
+ - **Install AzCopy:**
+ - Download and install AzCopy on your on-premises server.
+ - Configure AzCopy with the necessary permissions to access your Azure storage account.
+
+ - **Perform data transfer:**
+ - Use AzCopy commands to transfer data from on-premises storage to Azure Blob Storage.
+ - Example command for data transfer:
+
+ ```bash
+     azcopy copy '<local_path>' 'https://<storage_account>.blob.core.windows.net/<container>/<path>' --recursive
+ ```
+
+ > [!NOTE]
+ > For detailed information about AzCopy, visit [Get started with AzCopy](/azure/storage/common/storage-use-azcopy-v10).
+
+3. **Using rsync:**
+
+ - **Install rsync:**
+ - Ensure rsync is installed on your on-premises server. Most Linux distributions include `rsync` by default.
+ - Install rsync on your server if it isn't already installed:
+
+ ```bash
+ sudo apt-get install rsync # For Debian-based systems
+ sudo yum install rsync # For Red Hat-based systems
+ ```
+
+ - **Perform data transfer:**
+ - Use rsync to transfer data from on-premises storage to Azure storage.
+ - Example command for data transfer:
+
+ ```bash
+ rsync -avz /path/to/local/data/ user@remote:/path/to/azure/data/
+ ```
+
+ - Options explained:
+ - `-a`: Archive mode: preserves permissions, timestamps, symbolic links, and other metadata.
+ - `-v`: Verbose mode: provides detailed output of the transfer process.
+ - `-z`: Compresses data during transfer to reduce bandwidth usage.
+
+ > [!NOTE]
+    > For examples using rsync, visit [rsync examples](https://rsync.samba.org/examples.html).
+
+### Example data migration implementation
+
+**Data migration script using AzCopy:**
+
+```bash
+#!/bin/bash
+
+# Define storage account and container
+storage_account="<storage_account_name>"
+container_name="<container_name>"
+local_path="<local_path>"
+
+# Perform data transfer using AzCopy (upload from the local path to Azure)
+azcopy copy "$local_path" "https://$storage_account.blob.core.windows.net/$container_name" --recursive
+
+# Review the status of recent transfers and save a report
+azcopy jobs list > migration_report.txt
+```
+
+**Data Migration Script using rsync:**
+
+```bash
+#!/bin/bash
+
+# Define variables
+local_path="/path/to/local/data"
+remote_user="user"
+remote_host="remote"
+remote_path="/path/to/azure/data/"
+
+# Perform data transfer using rsync
+rsync -avz "$local_path" "$remote_user@$remote_host:$remote_path"
+
+# Verify the transfer: a second dry run should list no pending file changes
+rsync -avz --dry-run "$local_path" "$remote_user@$remote_host:$remote_path" > migration_report.txt
+```
+
+## Resources
+
+- [AzCopy](/azure/storage/common/storage-use-azcopy-v10)
+- [Migrate to NFS Azure file shares (rsync, fpsync)](/azure/storage/files/storage-files-migration-nfs?tabs=ubuntu)
+- [Azure Data Box](/azure/databox/)
+- [rsync](https://rsync.samba.org/)
high-performance-computing Lift And Shift Step 3 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-3-overview.md
+
+ Title: "Deployment step 3: storage - overview"
+description: Learn about production-level environment migration deployment step three.
++ Last updated : 08/30/2024+++++
+# Deployment step 3: storage - overview
+
+With the cloud offering a broader range of storage solutions compared to on-premises systems, it's essential to define where different types of data, such as user home directories, project data, and scratch disks, should be stored. The section also discusses data migration strategies, whether it involves a one-time transfer or continuous synchronization between on-premises systems and the cloud. Organizations can optimize costs and performance by carefully selecting storage options and utilizing tools for efficient data movement.
+
+This section highlights the critical considerations for managing storage in an HPC cloud environment, focusing on the variety of cloud storage options and the processes for migrating data. Also, it offers practical guidance for setting up storage and managing data migration, with an emphasis on scalability and automation as the HPC environment evolves.
+
+## Storage options in the cloud
+
+Compared to an on-premises environment, the cloud offers greater variety and capacity in storage options. A good practice is to define the major places to put data, such as user home directories, project data, scratch disks, and long-term storage. Because a key benefit of the cloud is obtaining resources on demand, it's more important at the beginning to define the storage options than to size them exactly. As the environment evolves, the amount of data required for each storage option becomes clearer.
+
+## Data migration
+
+To move data between an on-premises system and an HPC environment in Azure, several methods and tools can be employed. Depending on the scenario, data migration might be a one-time copy or involve regular synchronization to keep data up to date. Access to on-premises data from Azure jobs can be managed by using appropriate protocols such as NFS or SMB, considering the effect on networking infrastructure. Additionally, tiering mechanisms can be used to optimize costs by automatically moving data between storage tiers based on access patterns and data lifecycle policies.
+
+For details, check the descriptions of the following components:
+
+- [Storage](lift-and-shift-step-3-storage.md)
+- [Data migration](lift-and-shift-step-3-data-migration.md)
+
+Here we describe each component. Each section includes:
+
+- An overview description of what the component is
+- What the requirements for the component are (that is, what do we need from the component)
+- Tools and services available
+- Best practices for the component in the context of HPC lift & shift
+- An example of a quick start setup
+
+The goal of the quick start is to give you a sense of how to start using the component. As the HPC cloud deployment matures, you're expected to automate the use of the component by using, for instance, infrastructure-as-code tools such as Terraform or Bicep.
high-performance-computing Lift And Shift Step 3 Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-3-storage.md
+
+ Title: "Deployment step 3: storage - storage component"
+description: Learn about the configuration of storage during migration deployment step three.
+ Last updated : 08/30/2024
+# Deployment step 3: storage - storage component
+
+When migrating HPC environments to the cloud, it's essential to define and implement an effective storage strategy that meets your performance, scalability, and cost requirements. An effective storage strategy ensures that your HPC workloads can access and process data efficiently, securely, and reliably. This approach includes considering different types of storage solutions for various needs such as long-term data archiving, high-performance scratch space, and shared storage for collaborative work.
+
+Proper data management practices, such as lifecycle policies and access controls, help maintain the integrity and security of your data. Additionally, efficient data movement techniques are necessary to handle large-scale data transfers and automate ETL processes to streamline workflows. Here are the key steps and considerations for setting up storage in the cloud:
+
+## Define storage needs
+
+* **Storage types:**
+ - **Long-term storage:** Use Azure Blob Storage for data archiving. Azure Blob Storage provides a cost-effective solution for storing large volumes of data that are infrequently accessed but must be retained for compliance or historical purposes. It offers various access tiers (Hot, Cool, and Archive) to optimize costs based on how frequently the data is accessed.
+ - **High-performance storage:** Use Azure Managed Lustre or Azure NetApp Files for scratch space and high IOPS requirements. Azure Managed Lustre is ideal for HPC workloads that require high throughput and low latency, making it suitable for applications that process large datasets quickly.
+
+      Azure NetApp Files provides enterprise-grade performance and features such as snapshots, cloning, and data replication, which are essential for critical HPC applications.
+ - **Shared storage:** Use Azure Files or NFS on Blob for user home directories and shared data. Azure Files offers fully managed file shares in the cloud that can be accessed via the industry-standard SMB protocol, making it easy for multiple users and applications to share data.
+
+ NFS on Blob allows for POSIX-compliant shared access to Azure Blob Storage, enabling seamless integration with existing HPC workflows and applications.
+
+* **Data management:**
+  - **Implement data lifecycle policies:** To manage data movement between hot, cool, and archive tiers, implement data lifecycle policies that automatically move data to the most appropriate storage tier based on usage patterns. This approach helps optimize storage costs by ensuring that frequently accessed data is kept in high-performance storage, while rarely accessed data is moved to more cost-effective archival storage. A sample policy definition follows this list.
+ - **Set up access controls:** Use Azure Active Directory (AD) and role-based access control (RBAC) to set up granular access controls for your storage resources. Azure AD provides identity and access management capabilities, while RBAC allows you to assign specific permissions to users and groups based on their roles. This strategy ensures that only authorized users can access sensitive data, enhancing security and compliance.
+
+* **Data movement:**
+ - **Azure Data Box:** Use Azure Data Box for large-scale offline data transfers. Azure Data Box is a secure, ruggedized appliance that allows you to transfer large amounts of data to Azure quickly and safely, minimizing the time and cost associated with network-based data transfer.
+ - **Azure Data Factory:** Use Azure Data Factory for orchestrating and automating data movement and transformation. Azure Data Factory provides a fully managed ETL service that allows you to move data between on-premises and cloud storage solutions, schedule data workflows, and transform data as needed.
+  - **AzCopy:** Use AzCopy for command-line data transfer. AzCopy is a command-line utility that provides high-performance, reliable data transfer between on-premises storage and Azure Blob Storage or Azure Files. It supports both synchronous and asynchronous transfer modes, making it suitable for various data movement scenarios.
+
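+As a concrete illustration of a lifecycle policy, the following sketch uses the Azure CLI to apply a rule that tiers blobs to Cool after 30 days and to Archive after 90 days without modification. The account and resource group names are placeholders, and the day thresholds are assumptions to adjust to your access patterns.
+
+```bash
+# Write the lifecycle policy definition to a local file
+cat > policy.json <<'EOF'
+{
+  "rules": [
+    {
+      "enabled": true,
+      "name": "tier-cool-then-archive",
+      "type": "Lifecycle",
+      "definition": {
+        "actions": {
+          "baseBlob": {
+            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
+            "tierToArchive": { "daysAfterModificationGreaterThan": 90 }
+          }
+        },
+        "filters": { "blobTypes": [ "blockBlob" ] }
+      }
+    }
+  ]
+}
+EOF
+
+# Apply the policy to the storage account
+az storage account management-policy create \
+  --account-name <storage_account_name> \
+  --resource-group <resource_group> \
+  --policy @policy.json
+```
+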
+## Tools and services
+
+* **Azure Managed Lustre:**
+ - Use Azure Managed Lustre for high-performance storage needs in HPC workloads.
+ - Deploy and configure Lustre file systems through the Azure Marketplace.
+ - Set up and manage mount points on HPC nodes to access the Lustre file system.
+
+* **Azure NetApp Files:**
+ - Utilize Azure NetApp Files for enterprise-grade performance and data management features.
+ - Configure snapshots, cloning, and data replication for critical HPC applications.
+ - Integrate with Azure services and manage multiple protocols (NFS, SMB) for versatile usage.
+
+* **Azure Blob Storage:**
+ - Use Azure Blob Storage for cost-effective long-term data archiving.
+ - Implement data lifecycle policies to automatically move data between access tiers (Hot, Cool, Archive).
+ - Set up access controls and integrate with data analytics services for efficient data management.
+
+* **Azure Files:**
+ - Use Azure Files for fully managed file shares accessible via SMB protocol.
+ - Configure Azure AD and RBAC for secure access management and compliance.
+ - Ensure high availability with options for geo-redundant storage to protect against regional failures.
+
+## Best practices for HPC storage
+
+* **Define clear storage requirements:**
+ - Identify the specific storage needs for different workloads, such as high-performance scratch space, long-term archiving, and shared storage.
+ - Choose the appropriate storage solutions (for example, Azure Managed Lustre, Azure Blob Storage, Azure NetApp Files) based on performance, scalability, and cost requirements.
+
+* **Implement data lifecycle management:**
+ - Set up automated lifecycle policies to manage data movement between different storage tiers (Hot, Cool, Archive) to optimize costs and performance.
+ - Regularly review and adjust lifecycle policies to ensure data is stored in the most cost-effective and performance-appropriate tier.
+
+* **Ensure data security and compliance:**
+ - Use Azure Active Directory (AD) and role-based access control (RBAC) to enforce granular access controls on storage resources.
+ - Implement encryption for data at rest and in transit to meet security and compliance requirements.
+
+* **Optimize data movement:**
+ - Utilize tools like Azure Data Box for large-scale offline data transfers and AzCopy or rsync for efficient online data transfers.
+ - Monitor and optimize data transfer processes to minimize downtime and ensure data integrity during migration.
+
+* **Monitor and manage storage performance:**
+ - Continuously monitor storage performance and usage metrics to identify and address bottlenecks.
+  - Use Azure Monitor and its metrics to gain insights into storage performance and capacity utilization, and make necessary adjustments to meet workload demands.
+
+These best practices ensure that your HPC storage strategy is effective, cost-efficient, and capable of meeting the performance and scalability requirements of your workloads.
+
+## Example steps for storage setup and deployment
+
+This section provides detailed instructions for setting up various storage solutions for HPC in the cloud. It covers the deployment and configuration of Azure Managed Lustre, Azure NetApp Files, NFS on Azure Blob, and Azure Files, including how to deploy these services and configure mount points on HPC nodes.
+
+1. **Setting up Azure Managed Lustre:**
+ - **Deploy a Lustre filesystem:**
+ - Navigate to the Azure Marketplace and search for "Azure Managed Lustre."
+ - Follow the prompts to deploy the Lustre filesystem, specifying the required parameters such as resource group, location, and storage size.
+ - Confirm the deployment and wait for the Lustre filesystem to be provisioned.
+ - **Configure mount points:**
+ - Once the Lustre filesystem is deployed, obtain the necessary mount information from the Azure portal.
+ - On each HPC node, install the Lustre client packages if not already present.
+ - Use the mount information to configure the mount points by adding entries to the `/etc/fstab` file or using the `mount` command directly.
+ - Example:
+ ```bash
+ sudo mount -t lustre <LUSTRE_FILESYSTEM_URL> /mnt/lustre
+ ```
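+      - Optionally, add an entry to `/etc/fstab` so the mount persists across reboots. A sketch; the mount options are assumptions, so check your deployment's mount instructions:
+
+      ```bash
+      # _netdev defers mounting until the network is up
+      <LUSTRE_FILESYSTEM_URL> /mnt/lustre lustre defaults,_netdev 0 0
+      ```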
+
+2. **Setting up Azure NetApp Files:**
+
+ - **Deploy an Azure NetApp Files volume:**
+ - Navigate to the Azure portal and search for "Azure NetApp Files."
+    - Create a NetApp account if one doesn't already exist.
+ - Create a capacity pool by specifying the required parameters such as resource group, location, and pool size.
+ - Create a new volume within the capacity pool by providing details like volume size, protocol type (NFS or SMB), and virtual network.
+
+ - **Configure mount points:**
+ - Once the NetApp volume is created, obtain the necessary mount information from the Azure portal.
+ - On each HPC node, install the necessary client packages for the protocol used (NFS or SMB) if not already present.
+ - Use the mount information to configure the mount points by adding entries to the `/etc/fstab` file or using the `mount` command directly.
+ - Example for NFS:
+ ```bash
+ sudo mount -t nfs <NETAPP_VOLUME_URL>:/<VOLUME_NAME> /mnt/netapp
+ ```
+
+3. **Implementing NFS on Azure Blob:**
+ - **Create an Azure Storage account:**
+ - Navigate to the Azure portal and create a new storage account.
+    - Enable NFS v3 support during the creation process by selecting the corresponding option on the **Advanced** tab.
+ - **Configure NFS client:**
+ - On each HPC node, install NFS client packages if not already present.
+ - Configure the NFS client by adding entries to the `/etc/fstab` file or using the `mount` command to mount the Azure Blob storage.
+ - Example:
+
+   ```bash
+   sudo mount -t nfs -o sec=sys,vers=3,nolock,proto=tcp <STORAGE_ACCOUNT_NAME>.blob.core.windows.net:/<STORAGE_ACCOUNT_NAME>/<CONTAINER_NAME> /mnt/blob
+   ```
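+   - The NFS client package referenced above varies by distribution; a typical install looks like this:
+
+   ```bash
+   sudo apt-get install nfs-common   # For Debian-based systems
+   sudo yum install nfs-utils        # For Red Hat-based systems
+   ```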
+
+4. **Setting up Azure Files:**
+
+ - **Deploy an Azure File Share:**
+ - Navigate to the Azure portal and search for "Azure Storage accounts."
+    - Create a new storage account if one doesn't already exist, specifying parameters such as resource group, location, and performance tier (Standard or Premium).
+ - Within the storage account, navigate to the "File shares" section and create a new file share by specifying the name and quota (size).
+
+ - **Configure mount points:**
+ - Once the file share is created, obtain the necessary mount information from the Azure portal.
+ - On each HPC node, install the necessary client packages for the protocol used (SMB) if not already present.
+ - Use the mount information to configure the mount points by adding entries to the `/etc/fstab` file or using the `mount` command directly.
+ - Example for SMB:
+
+ ```bash
+ sudo mount -t cifs //<STORAGE_ACCOUNT_NAME>.file.core.windows.net/<FILE_SHARE_NAME> /mnt/azurefiles -o vers=3.0,username=<STORAGE_ACCOUNT_NAME>,password=<STORAGE_ACCOUNT_KEY>,dir_mode=0777,file_mode=0777,sec=ntlmssp
+ ```
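+
+   If you prefer scripting the setup, here's a minimal Azure CLI sketch; the resource names are placeholders, and the SKU shown creates a premium file storage account:
+
+   ```bash
+   # Create a premium file storage account and a 1-TiB file share
+   az storage account create --name <STORAGE_ACCOUNT_NAME> --resource-group <RESOURCE_GROUP> \
+     --location <REGION> --kind FileStorage --sku Premium_LRS
+   az storage share-rm create --storage-account <STORAGE_ACCOUNT_NAME> \
+     --name <FILE_SHARE_NAME> --quota 1024
+   ```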
+
+## Resources
+
+- [Lustre - Azure Managed Lustre File System documentation](/azure/azure-managed-lustre)
+- [Lustre - Robinhood for Azure Managed Lustre File System](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/azure-managed-lustre-with-automatic-synchronisation-to-azure/ba-p/3997202)
+- [Lustre - AzureHPC Lustre On Marketplace](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/azurehpc-lustre-marketplace-offer/ba-p/3272689)
+- [Lustre - Lustre File System Template on CycleCloud](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/lustre-on-azure/ba-p/1052536)
+- [NFS on Blob](/azure/storage/blobs/network-file-system-protocol-support)
+- [NFS on Azure Files](/azure/storage/files/files-nfs-protocol)
+- [Azure NetApp Files](https://azure.microsoft.com/products/netapp)
high-performance-computing Lift And Shift Step 4 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-4-overview.md
+
+ Title: "Deployment step 4: compute nodes - overview"
+description: Learn about production-level environment migration deployment step four.
+ Last updated : 08/30/2024
+# Deployment step 4: compute nodes - overview
+
+Managing compute nodes in an HPC cloud environment involves careful consideration of virtual machine (VM) types, images, and quota limits. Testing key on-premises workloads in the cloud helps assess the cost-benefit of different VM SKUs, allowing for more informed hardware decisions over time. Azure provides preconfigured HPC images for Ubuntu and AlmaLinux, which include necessary drivers and libraries, simplifying the deployment process. Custom images can also be created using available resources from the Azure HPC image repository. Additionally, it's important to plan resource usage carefully and consult with Azure to avoid quota limitations, especially when scaling across multiple regions.
+
+This section provides guidance on selecting and managing compute resources efficiently for HPC workloads in the cloud.
+
+## Virtual machine (VM) types (SKUs)
+
+We recommend that you test a few key on-premises workloads in the cloud to develop an understanding of the cost-benefit of different SKUs. In the cloud, the hardware options allow decisions to be refined over time.
+
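+To explore what's available, you can list HPC-oriented SKUs in a region with the Azure CLI; the region and size prefix are placeholders:
+
+```bash
+# List HB-series SKUs (a common HPC family) available in the chosen region
+az vm list-skus --location <region> --size Standard_HB --output table
+```
+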
+## VM images
+
+Azure offers HPC images for Ubuntu and AlmaLinux, containing various drivers, libraries, and some HPC-related configurations. We recommend that you use these images as much as possible. However, if custom images are required, you can see how these images were built in the Azure HPC images GitHub repository and reuse the scripts there.
+
+## Quota
+
+If large amounts of resources are required, it's beneficial to plan ahead and discuss with the Azure team to minimize the chances of reaching quota limits. Depending on the case, it can also be beneficial to explore multiple regions whenever possible.
+
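+You can check current usage against quota limits per VM family with the Azure CLI; the region is a placeholder:
+
+```bash
+az vm list-usage --location <region> --output table
+```
+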
+For details, check the description of the following component:
+
+- [VM images](lift-and-shift-step-4-vm-images.md)
+
+Here we describe each component. Each section includes:
+
+- An overview description of what the component is
+- What the requirements for the component are (that is, what do we need from the component)
+- Tools and services available
+- Best practices for the component in the context of HPC lift & shift
+- An example of a quick start setup
+
+The goal of the quick start is to give you a sense of how to start using the component. As the HPC cloud deployment matures, you're expected to automate the use of the component by using, for instance, infrastructure-as-code tools such as Terraform or Bicep.
high-performance-computing Lift And Shift Step 4 Vm Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-4-vm-images.md
+
+ Title: "Deployment step 4: compute nodes - VM images component"
+description: Learn about the configuration of virtual machine (VM) images during migration deployment step four.
+ Last updated : 08/30/2024
+# Deployment step 4: compute nodes - VM images component
+
+A Virtual Machine (VM) image is a snapshot of a virtual machine's operating system, software, configurations, and data stored at a specific point in time. It's a valuable asset that encapsulates most of what is required to enable virtual machines to run end-user jobs.
+
+In the context of HPC environments, VM images should, or at least could, include support for drivers (for example, InfiniBand, GPUs), MPI libraries (for example, MPICH, Intel MPI, PMIx), and other HPC-relevant software (for example, CUDA, NCCL, compilers, health checkers).
+
+## Define VM image needs
+
+* **Libraries, middleware, drivers:**
+  - Understand the major libraries (for example, MPI flavors) and, eventually, middleware (for example, Slurm, PBS, LSF) needed for the HPC applications. Drivers to support GPUs, for instance, can also be placed in the image.
+
+* **Utilities and configurations:**
+ - Small utilities (for example, healthchecks) or configurations (for example, ulimit) used by most users.
+
+## Tools and services
+
+**Azure HPC images:**
+  - Azure HPC images are available and contain several packages relevant for HPC settings.
+  - Azure HPC images are available for both the Ubuntu and AlmaLinux distributions.
+
+## Best practices for HPC images in HPC lift and shift architecture
+
+* **Make use of Azure HPC images:**
+ - These images are extensively tested to run in Azure SKUs and Azure HPC systems, such as CycleCloud.
+
+* **Custom images and other Linux distributions:**
+ - If a custom image needs to be created, we recommend using the Azure HPC image GitHub repo as much as possible. It contains all scripts used to create the Azure HPC images.
+
+## Example steps for setup and deployment
+
+This section provides an overview of deploying a VM by using an Azure HPC image via the Azure portal.
+
+1. **Go to the Azure portal and select an HPC VM image to create a VM:**
+
+ - **Select VM image:**
+ - Navigate in Azure portal to create the VM following the standard VM provisioning steps.
+    - When selecting the VM image from the Marketplace, look for either "AlmaLinux HPC" or "Ubuntu-based HPC and AI".
+    - Fill in all fields (including networking, disk, and management).
+    - Provision the VM.
+ - Define partitions/queues, Azure SKUs, compute node hostnames, and other parameters.
+
+2. **Test the VM:**
+ - **SSH into the VM:**
+    - You can use your operating system's SSH tool, the Azure portal Cloud Shell, or the VM's access tab to select, for instance, access via Bastion (depending on the network setup).
+ - **See some HPC related tools:**
+    - You can see the Azure HPC image in action with the following command, which lists the modules available, including various MPI implementations:
+
+ ```bash
+ module av
+ ```
+
+ - To load `openmpi`:
+
+ ```bash
+ module load mpi/openmpi
+ which mpirun
+ ```
+
+    - HPC tools and libraries can be found in the `/opt/` directory.
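+    - As a quick sanity check, assuming the OpenMPI module loaded above, you can run a trivial MPI command:
+
+    ```bash
+    mpirun -np 2 hostname
+    ```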
+
+## Resources
+
+- Azure HPC SKUs and supported images: [product website](/azure/virtual-machines/configure#vm-images)
+- Azure HPC image overview: [product website](/azure/virtual-machines/configure#centos-hpc-vm-images)
+- Azure HPC image release notes (with software + software versions): [GitHub](https://github.com/Azure/azhpc-images/releases)
+- Azure HPC image installation scripts: [GitHub](https://github.com/Azure/azhpc-images)
+- Image creation (general purpose): [product website](/azure/virtual-machines/image-version)
high-performance-computing Lift And Shift Step 5 End User Entry Point https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-5-end-user-entry-point.md
+
+ Title: "Deployment step 5: end-user entry point - end-user entry point component"
+description: Learn about the configuration of end-user entry points during migration deployment step five.
+ Last updated : 08/30/2024
+# Deployment step 5: end-user entry point - end-user entry point component
+
+Users access the computing environment in different ways. A common access point is a terminal with SSH via the command-line interface (CLI). Other mechanisms are a graphical user interface via VDI, or web portals such as Open OnDemand, JupyterLab, or RStudio. Some users may also rely on SSH via Visual Studio Code (VS Code). An end-user entry point component is key to shaping the user experience of accessing HPC cloud resources, and it's highly dependent on the user workflow and application.
+
+Once all the basic infrastructure is deployed, the end-user entry point would:
+
+- Allow end users to sign in to the machine and submit jobs.
+- Allow end users to request a remote desktop session.
+- Allow end users to request web browser-based sessions to run applications such as JupyterLab or RStudio.
+
+## Define user entry point needs
+
+* **SSH access:**
+  - Enable users to sign in to the HPC environment via SSH for job submission and management.
+ - Ensure secure authentication and connection protocols are in place.
+
+* **Remote desktop access:**
+ - Allow users to request and establish remote desktop sessions for graphical applications.
+ - Provide VDI solutions that support various operating systems and applications.
+
+* **Web browser-based access:**
+  - Support web browser-based sessions for running applications such as JupyterLab or RStudio.
+ - Ensure seamless integration with the HPC environment and resource management.
+
+## Tools and services
+
+* **SSH access:**
+ - Use standard SSH protocols to provide secure command-line access to HPC resources.
+ - Configure SSH keys and user permissions to ensure secure and efficient access.
+
+* **Remote desktop access:**
+ - Utilize VDI solutions such as Windows Virtual Desktop or non-Microsoft VDI providers.
+ - Configure remote desktop protocols (RDP, VNC) and ensure compatibility with user applications.
+
+* **Web browser-based access:**
+ - Deploy web-based platforms like JupyterHub or RStudio Server for interactive sessions.
+ - To allow seamless access to compute resources, integrate these platforms with the HPC environment.
+
+## Best practices
+
+* **Secure authentication and access control:**
+ - Implement multifactor authentication (MFA) and SSH key-based authentication for secure access.
+ - Use role-based access control (RBAC) to manage user permissions and ensure compliance with security policies.
+
+* **Optimize user experience:**
+ - Provide clear documentation and training for users on how to access and utilize different entry points.
+ - To ensure a smooth user experience, continuously monitor and optimize the performance of access points.
+
+* **Ensure compatibility and integration:**
+ - Test and validate the compatibility of remote desktop and web-based access solutions with HPC applications.
+ - To provide seamless resource management, integrate access solutions with the existing HPC infrastructure.
+
+* **Scalability and performance:**
+ - Configure access points to scale based on user demand, ensuring availability and performance during peak usage.
+ - Use performance metrics to monitor and optimize the entry point infrastructure regularly.
+
+## Example steps for setup and deployment
+
+**Setting up SSH access:**
+
+1. **Configure SSH server:**
+
+ - Install and configure an SSH server on the sign-in nodes.
+ - Generate and distribute SSH keys to users and configure user permissions.
+
+ ```bash
+ sudo apt-get install openssh-server
+ sudo systemctl enable ssh
+ sudo systemctl start ssh
+ ```
+
+2. **User authentication:**
+
+ - Set up SSH key-based authentication and configure the SSH server to disable password authentication for added security.
+
+ ```bash
+ ssh-keygen -t rsa -b 4096
+ ssh-copy-id user@hpc-login-node
+ ```
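+
+   To disable password logins so that only key-based authentication is accepted, a minimal sketch follows; it assumes a Debian-based sign-in node where the SSH service is named `ssh`:
+
+   ```bash
+   # Turn off password authentication in the SSH server configuration
+   sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
+   sudo systemctl restart ssh
+   ```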
+
+**Setting up remote desktop access:**
+
+1. **Deploy VDI solution:**
+
+ - Choose and deploy a VDI solution that fits your HPC environment (for example, Windows Virtual Desktop, VNC).
+ - Configure remote desktop protocols and ensure they're compatible with user applications.
+2. **Configure remote desktop access:**
+
+ - Set up remote desktop services on the HPC sign-in nodes and configure user permissions.
+
+ ```bash
+ sudo apt-get install xrdp
+ sudo systemctl enable xrdp
+ sudo systemctl start xrdp
+ ```
+
+**Setting up web browser-based access:**
+
+1. **Deploy JupyterHub or RStudio Server:**
+
+ - Install and configure JupyterHub or RStudio Server on the HPC environment.
+
+   ```bash
+   # If your distribution doesn't package JupyterHub, a common alternative
+   # is: python3 -m pip install jupyterhub
+   sudo apt-get install jupyterhub
+   sudo systemctl enable jupyterhub
+   sudo systemctl start jupyterhub
+   ```
+
+2. **Integrate with HPC resources:**
+
+ - Configure the web-based platforms to integrate with the HPC scheduler and compute resources.
+
+ ```bash
+ jupyterhub --no-ssl --port 8000
+ ```
+
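+A sketch of pointing JupyterHub at an HPC scheduler, assuming Slurm and the third-party `batchspawner` package; the configuration file path is also an assumption:
+
+```bash
+# Append a spawner setting to the JupyterHub configuration
+cat >> /etc/jupyterhub/jupyterhub_config.py <<'EOF'
+c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'
+EOF
+```
+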
+## Resources
+
+- Azure CycleCloud CLI installation guide: [product website](/azure/cyclecloud/how-to/install-cyclecloud-cli?view=cyclecloud-8&preserve-view=true)
+- Azure CycleCloud CLI reference guide: [product website](/azure/cyclecloud/cli?view=cyclecloud-8&preserve-view=true)
+- Azure CycleCloud REST API reference guide: [product website](/azure/cyclecloud/api?view=cyclecloud-8&preserve-view=true)
+- Azure CycleCloud Python API reference guide: [product website](/azure/cyclecloud/python-api?view=cyclecloud-8&preserve-view=true)
+- Remote visualization via OnDemand and AzHop: [blog post](https://techcommunity.microsoft.com/t5/azure-high-performance-computing/azure-hpc-ondemand-platform-cloud-hpc-made-easy/ba-p/2537338)
+- LSF Scheduler CLI commands: [external](https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=reference-command)
+- PBS Scheduler CLI commands: [external](https://2021.help.altair.com/2021.1.2/PBS%20Professional/PBSUserGuide2021.1.2.pdf)
+- Slurm Scheduler CLI commands: [external](https://slurm.schedmd.com/pdfs/summary.pdf)
+- Open OnDemand: [external](https://openondemand.org/)
high-performance-computing Lift And Shift Step 5 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/high-performance-computing/lift-and-shift-step-5-overview.md
+
+ Title: "Deployment step 5: end-user entry point - overview"
+description: Learn about production-level environment migration deployment step five.
+ Last updated : 08/30/2024
+# Deployment step 5: end-user entry point - overview
+
+Providing a consistent end-user entry point is crucial for ensuring a smooth transition from on-premises to the cloud in an HPC environment. Whether users access resources through an SSH sign-in node or a web portal, maintaining a familiar experience helps minimize disruptions.
+
+This section explores the options for user interaction, emphasizing the importance of addressing potential latency issues that may arise when moving to the cloud. It also provides guidance on tools, services, and best practices to optimize the user entry point for HPC lift-and-shift deployments. A quick start setup is included to help establish this component efficiently, with the goal of automating it as the cloud infrastructure matures.
+
+## Options for user interaction
+
+End users may benefit from having a similar experience accessing resources on-premises and in the cloud. Whether users go to a sign-in node via SSH or a web portal to submit jobs, we recommend that you keep the same user experience and assess any latency issues that users may face compared to the on-premises environment.
+
+For details, check the description of the following component:
+
+- [End-user entry point](lift-and-shift-step-5-end-user-entry-point.md)
+
+Here we describe each component. Each section includes:
+
+- An overview description of what the component is
+- What the requirements for the component are (that is, what do we need from the component)
+- Tools and services available
+- Best practices for the component in the context of HPC lift & shift
+- An example of a quick start setup
+
+The goal of the quick start is to give you a sense of how to start using the component. As the HPC cloud deployment matures, you're expected to automate the use of the component by using, for instance, infrastructure-as-code tools such as Terraform or Bicep.
iot-edge Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-networking.md
It's possible to configure the EFLOW virtual machine to use a specific DNS serve
To check the DNS servers assigned to the EFLOW VM, from inside the EFLOW VM, use the command: `resolvectl status`. The command's output shows a list of the DNS servers configured for each interface. In particular, it's important to check the *eth0* interface status, which is the default interface for the EFLOW VM communication. Also, make sure to check the IP addresses of the **Current DNS Server**s and **DNS Servers** fields of the list. If there's no IP address, or the IP address isn't a valid DNS server IP address, then the DNS service won't work.
-![Screenshot of console showing sample output from resolvectl command.](./media/iot-edge-for-linux-on-windows-networking/resolvctl-status.png)
- ### Static MAC Address Hyper-V allows you to create virtual machines with a **static** or **dynamic** MAC address. During EFLOW virtual machine creation, the MAC address is randomly generated and stored locally to keep the same MAC address across virtual machine or Windows host reboots. To query the EFLOW virtual machine MAC address, you can use the following command.
load-balancer Distribution Mode Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/distribution-mode-concepts.md
Previously updated : 06/26/2024 Last updated : 09/25/2024 #Customer intent: As an administrator, I want to learn about the different distribution modes of Azure Load Balancer so that I can configure the distribution mode for my application.
The five-tuple consists of:
* **Protocol type** The hash is used to route traffic to healthy backend instances within the backend pool. The algorithm provides stickiness only within a transport session. When the client starts a new session from the same source IP, the source port changes and causes the traffic to go to a different backend instance.
-In order to configure hash based distribution, you must select session persistence to be **None** in the Azure portal. This specifies that successive requests from the same client can be handled by any virtual machine.
-![Hash-based distribution](./media/load-balancer-overview/load-balancer-distribution.png)
+In order to configure hash based distribution, you must select session persistence to be **None** in the Azure portal. This specifies that successive requests from the same client can be handled by any virtual machine.
-*Figure: Default five-tuple hash based distribution*
## Session persistence
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
In the diagrams, you see how IP address mapping works before and after enabling
You configure Floating IP on a Load Balancer rule via the Azure portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS in order to use Floating IP. For this scenario, every VM in the backend pool has three network interfaces:
load-balancer Load Balancer Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-virtual-machine-scale-sets.md
When Virtual Machine Scale Sets with [public IPs per instance](/azure/virtual-ma
To create an outbound rule for a backend pool that's already referenced by a load-balancing rule, select **No** under **Create implicit outbound rules** in the Azure portal when the inbound load-balancing rule is created.
- :::image type="content" source="./media/vm-scale-sets/load-balancer-and-vm-scale-sets.png" alt-text="Screenshot that shows load-balancing rule creation." border="true":::
+ :::image type="content" source="./media/load-balancer-standard-virtual-machine-scale-sets/load-balancer-and-vm-scale-sets.png" alt-text="Screenshot that shows load-balancing rule creation." border="true":::
Use the following methods to deploy a Virtual Machine Scale Sets with an existing instance of Load Balancer:
logic-apps Add Run Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/add-run-javascript.md
To perform custom integration tasks inline with your workflow in Azure Logic App
|--|-||--|--|-| | **Execute JavaScript Code** | JavaScript | **Standard**: <br>Node.js 16.x.x <br><br>**Consumption**: <br>Node.js 8.11.1 <br><br>For more information, review [Standard built-in objects](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects). | Finishes in 5 seconds or fewer. | Handles data up to 50 MB. | - Doesn't require working with the [**Variables** actions](logic-apps-create-variables-store-values.md), which are unsupported by the action. <br><br>- Doesn't support the `require()` function for running JavaScript. |
-To run code that doesn't fit these attributes, you can [create and call a function using Azure Functions](logic-apps-azure-functions.md).
+To run code that doesn't fit these attributes, you can [create and call a function using Azure Functions](call-azure-functions-from-workflows.md).
This guide shows how the action works in an example workflow that starts with an Office 365 Outlook trigger. The workflow runs when a new email arrives in the associated Outlook email account. The sample code snippet extracts any email addresses that exist in the email body and returns those addresses as output that you can use in a subsequent action.
logic-apps Authenticate With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/authenticate-with-managed-identity.md
After you [enable the managed identity for your logic app resource](#azure-porta
> [!IMPORTANT] > > If you have an Azure function where you want to use the system-assigned identity,
-> first [enable authentication for Azure Functions](logic-apps-azure-functions.md#enable-authentication-functions).
+> first [enable authentication for Azure Functions](call-azure-functions-from-workflows.md#enable-authentication-functions).
The following steps show how to use the managed identity with a trigger or action using the Azure portal. To specify the managed identity in a trigger or action's underlying JSON definition, see [Managed identity authentication](logic-apps-securing-a-logic-app.md#managed-identity-authentication).
logic-apps Call Azure Functions From Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/call-azure-functions-from-workflows.md
For a richer experience when you work with function parameters in the workflow d
1. Under **Allowed Origins**, add the asterisk (**`*`**) wildcard character, but remove all the other origins in the list, and select **Save**.
- :::image type="content" source="media/logic-apps-azure-functions/function-cors-origins.png" alt-text="Screenshot shows Azure portal, CORS pane, and wildcard character * entered under Allowed Origins." lightbox="media/logic-apps-azure-functions/function-cors-origins.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/function-cors-origins.png" alt-text="Screenshot shows Azure portal, CORS pane, and wildcard character * entered under Allowed Origins." lightbox="media/call-azure-functions-from-workflows/function-cors-origins.png":::
### Access property values inside HTTP requests
To call an Azure function from your workflow, you can add that functions like an
1. From the functions list, select the function, and then select **Add Action**, for example:
- :::image type="content" source="media/logic-apps-azure-functions/select-function-app-function-consumption.png" alt-text="Screenshot shows Consumption workflow with a selected function app and function." lightbox="media/logic-apps-azure-functions/select-function-app-function-consumption.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/select-function-app-function-consumption.png" alt-text="Screenshot shows Consumption workflow with a selected function app and function." lightbox="media/call-azure-functions-from-workflows/select-function-app-function-consumption.png":::
1. In the selected function's action box, follow these steps:
To call an Azure function from your workflow, you can add that functions like an
The following example specifies a JSON object with the **`content`** attribute and a token representing the **From** output from the email trigger as the **Request Body** value:
- :::image type="content" source="media/logic-apps-azure-functions/function-request-body-example-consumption.png" alt-text="Screenshot shows Consumption workflow and a function with a Request Body example for the context object payload." lightbox="media/logic-apps-azure-functions/function-request-body-example-consumption.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/function-request-body-example-consumption.png" alt-text="Screenshot shows Consumption workflow and a function with a Request Body example for the context object payload." lightbox="media/call-azure-functions-from-workflows/function-request-body-example-consumption.png":::
Here, the context object isn't cast as a string, so the object's content gets added directly to the JSON payload. Here's the complete example:
- :::image type="content" source="media/logic-apps-azure-functions/request-body-example-complete.png" alt-text="Screenshot shows Consumption workflow and a function with a complete Request Body example for the context object payload." lightbox="media/logic-apps-azure-functions/request-body-example-complete.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/request-body-example-complete.png" alt-text="Screenshot shows Consumption workflow and a function with a complete Request Body example for the context object payload." lightbox="media/call-azure-functions-from-workflows/request-body-example-complete.png":::
If you provide a context object other than a JSON token that passes a string, a JSON object, or a JSON array, you get an error. However, you can cast the context object as a string by enclosing the token in quotation marks (**""**), for example, if you wanted to use the **Received Time** token:
- :::image type="content" source="media/logic-apps-azure-functions/function-request-body-string-cast-example.png" alt-text="Screenshot shows Consumption workflow and a Request Body example that casts context object as a string." lightbox="media/logic-apps-azure-functions/function-request-body-string-cast-example.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/function-request-body-string-cast-example.png" alt-text="Screenshot shows Consumption workflow and a Request Body example that casts context object as a string." lightbox="media/call-azure-functions-from-workflows/function-request-body-string-cast-example.png":::
1. To specify other details such as the method to use, request headers, query parameters, or authentication, open the **Advanced parameters** list, and select the parameters that you want. For authentication, your options differ based on your selected function. For more information, review [Enable authentication for functions](#enable-authentication-functions).
To call an Azure function from your workflow, you can add that functions like an
1. From the functions list, select the function, and then select **Create New**, for example:
- :::image type="content" source="media/logic-apps-azure-functions/select-function-app-function-standard.png" alt-text="Screenshot shows Standard workflow designer with selected function app and function." lightbox="media/logic-apps-azure-functions/select-function-app-function-standard.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/select-function-app-function-standard.png" alt-text="Screenshot shows Standard workflow designer with selected function app and function." lightbox="media/call-azure-functions-from-workflows/select-function-app-function-standard.png":::
1. In the **Call an Azure function** action box, follow these steps:
To call an Azure function from your workflow, you can add that functions like an
- **Method**: **GET** - **Request Body**: A JSON object with the **`content`** attribute and a token representing the **From** output from the email trigger.
- :::image type="content" source="media/logic-apps-azure-functions/function-request-body-example-standard.png" alt-text="Screenshot shows Standard workflow and a function with a Request Body example for the context object payload." lightbox="media/logic-apps-azure-functions/function-request-body-example-standard.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/function-request-body-example-standard.png" alt-text="Screenshot shows Standard workflow and a function with a Request Body example for the context object payload." lightbox="media/call-azure-functions-from-workflows/function-request-body-example-standard.png":::
Here, the context object isn't cast as a string, so the object's content gets added directly to the JSON payload. Here's the complete example:
- :::image type="content" source="media/logic-apps-azure-functions/request-body-example-complete.png" alt-text="Screenshot shows Standard workflow and a function with a complete Request Body example for the context object payload." lightbox="media/logic-apps-azure-functions/request-body-example-complete.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/request-body-example-complete.png" alt-text="Screenshot shows Standard workflow and a function with a complete Request Body example for the context object payload." lightbox="media/call-azure-functions-from-workflows/request-body-example-complete.png":::
If you provide a context object other than a JSON token that passes a string, a JSON object, or a JSON array, you get an error. However, you can cast the context object as a string by enclosing the token in quotation marks (**""**), for example, if you wanted to use the **Received Time** token:
- :::image type="content" source="media/logic-apps-azure-functions/function-request-body-string-cast-example.png" alt-text="Screenshot shows Standard workflow and a Request Body example that casts context object as a string." lightbox="media/logic-apps-azure-functions/function-request-body-string-cast-example.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/function-request-body-string-cast-example.png" alt-text="Screenshot shows Standard workflow and a Request Body example that casts context object as a string." lightbox="media/call-azure-functions-from-workflows/function-request-body-string-cast-example.png":::
1. To specify other details such as the method to use, request headers, query parameters, or authentication, open the **Advanced parameters** list, and select the parameters that you want. For authentication, your options differ based on your selected function. For more information, review [Enable authentication for functions](#enable-authentication-functions).
For your function to use your Consumption logic app's managed identity, you must
1. On the function app resource menu, under **Development tools**, select **Advanced Tools** > **Go**.
- :::image type="content" source="media/logic-apps-azure-functions/open-advanced-tools-kudu.png" alt-text="Screenshot shows function app menu with selected options for Advanced Tools and Go." lightbox="media/logic-apps-azure-functions/open-advanced-tools-kudu.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/open-advanced-tools-kudu.png" alt-text="Screenshot shows function app menu with selected options for Advanced Tools and Go." lightbox="media/call-azure-functions-from-workflows/open-advanced-tools-kudu.png":::
1. After the **Kudu Plus** page opens, on the Kudu website's title bar, from the **Debug Console** menu, select **CMD**.
- :::image type="content" source="medi." lightbox="media/logic-apps-azure-functions/open-debug-console-kudu.png":::
+ :::image type="content" source="medi." lightbox="media/call-azure-functions-from-workflows/open-debug-console-kudu.png":::
1. After the next page appears, from the folder list, select **site** > **wwwroot** > *your-function*. The following steps use an example function named **FabrikamAzureFunction**.
- :::image type="content" source="media/logic-apps-azure-functions/select-site-wwwroot-function-folder.png" alt-text="Screenshot shows folder list with the opened folders for the site, wwwroot, and your function." lightbox="media/logic-apps-azure-functions/select-site-wwwroot-function-folder.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/select-site-wwwroot-function-folder.png" alt-text="Screenshot shows folder list with the opened folders for the site, wwwroot, and your function." lightbox="media/call-azure-functions-from-workflows/select-site-wwwroot-function-folder.png":::
1. Open the **function.json** file for editing.
- :::image type="content" source="media/logic-apps-azure-functions/edit-function-json-file.png" alt-text="Screenshot shows the function.json file with selected edit command." lightbox="media/logic-apps-azure-functions/edit-function-json-file.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/edit-function-json-file.png" alt-text="Screenshot shows the function.json file with selected edit command." lightbox="media/call-azure-functions-from-workflows/edit-function-json-file.png":::
1. In the **bindings** object, check whether the **authLevel** property exists. If the property exists, set the property value to **`anonymous`**. Otherwise, add that property, and set the value.
- :::image type="content" source="media/logic-apps-azure-functions/set-authentication-level-function-app.png" alt-text="Screenshot shows bindings object with authLevel property set to anonymous." lightbox="media/logic-apps-azure-functions/set-authentication-level-function-app.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/set-authentication-level-function-app.png" alt-text="Screenshot shows bindings object with authLevel property set to anonymous." lightbox="media/call-azure-functions-from-workflows/set-authentication-level-function-app.png":::
1. When you're done, save your settings. Continue to the next section.
Either run the PowerShell command named [**Get-AzureAccount**](/powershell/modul
1. Copy and save your tenant ID for later use, for example:
- :::image type="content" source="media/logic-apps-azure-functions/tenant-id.png" alt-text="Screenshot shows Microsoft Entra ID Properties page with tenant ID's copy button selected." lightbox="media/logic-apps-azure-functions/tenant-id.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/tenant-id.png" alt-text="Screenshot shows Microsoft Entra ID Properties page with tenant ID's copy button selected." lightbox="media/call-azure-functions-from-workflows/tenant-id.png":::
<a name="find-object-id"></a>
After you enable the managed identity for your Consumption logic app resource, f
Copy the identity's **Object (principal) ID**:
- :::image type="content" source="media/logic-apps-azure-functions/system-identity-consumption.png" alt-text="Screenshot shows Consumption logic app's Identity page with selected tab named System assigned." lightbox="media/logic-apps-azure-functions/system-identity-consumption.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/system-identity-consumption.png" alt-text="Screenshot shows Consumption logic app's Identity page with selected tab named System assigned." lightbox="media/call-azure-functions-from-workflows/system-identity-consumption.png":::
- **User assigned** 1. Select the identity:
- :::image type="content" source="media/logic-apps-azure-functions/user-identity-consumption.png" alt-text="Screenshot shows Consumption logic app's Identity page with selected tab named User assigned." lightbox="media/logic-apps-azure-functions/user-identity-consumption.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/user-identity-consumption.png" alt-text="Screenshot shows Consumption logic app's Identity page with selected tab named User assigned." lightbox="media/call-azure-functions-from-workflows/user-identity-consumption.png":::
1. Copy the identity's **Object (principal) ID**:
- :::image type="content" source="media/logic-apps-azure-functions/user-identity-object-id.png" alt-text="Screenshot shows Consumption logic app's user-assigned identity Overview page with the object (principal) ID selected." lightbox="media/logic-apps-azure-functions/user-identity-object-id.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/user-identity-object-id.png" alt-text="Screenshot shows Consumption logic app's user-assigned identity Overview page with the object (principal) ID selected." lightbox="media/call-azure-functions-from-workflows/user-identity-object-id.png":::
<a name="find-enterprise-application-id"></a>
When you enable a managed identity on your logic app resource, Azure automatical
1. On the **All applications** page, in the search box, enter the object ID for your managed identity. From the results, find the matching enterprise application, and copy the **Application ID**:
- :::image type="content" source="media/logic-apps-azure-functions/find-enterprise-application-id.png" alt-text="Screenshot shows Entra tenant page named All applications, with enterprise application object ID in search box, and selected matching application ID." lightbox="media/logic-apps-azure-functions/find-enterprise-application-id.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/find-enterprise-application-id.png" alt-text="Screenshot shows Entra tenant page named All applications, with enterprise application object ID in search box, and selected matching application ID." lightbox="media/call-azure-functions-from-workflows/find-enterprise-application-id.png":::
1. Now, use the copied application ID to [add an identity provider to your function app](#create-app-registration).
Now that you have the tenant ID and the application ID, you can set up your func
1. On the function app menu, under **Settings**, select **Authentication**, and then select **Add identity provider**.
- :::image type="content" source="media/logic-apps-azure-functions/add-identity-provider.png" alt-text="Screenshot shows function app menu with Authentication page and selected option named Add identity provider." lightbox="media/logic-apps-azure-functions/add-identity-provider.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/add-identity-provider.png" alt-text="Screenshot shows function app menu with Authentication page and selected option named Add identity provider." lightbox="media/call-azure-functions-from-workflows/add-identity-provider.png":::
1. On the **Add an identity provider** pane, under **Basics**, from the **Identity provider** list, select **Microsoft**.
Now that you have the tenant ID and the application ID, you can set up your func
| Property | Required | Value | Description | |-|-|-|-| | **Application (client) ID** | Yes | <*application-ID*> | The unique identifier to use for this app registration. For this example, use the application ID that you copied for the Enterprise application associated with your managed identity. |
- | **Client secret** | Optional, but recommended | <*client-secret*> | The secret value that the app uses to prove its identity when requesting a token. The client secret is created and stored in your app's configuration as a slot-sticky [application setting](../app-service/configure-common.md#configure-app-settings) named **MICROSOFT_PROVIDER_AUTHENTICATION_SECRET**. <br><br>- Make sure to regularly rotate secrets and store them securely. For example, manage your secrets in Azure Key Vault where you can use a managed identity to retrieve the key without exposing the value to an unauthorized user. You can update this setting to use [Key Vault references](../app-service/app-service-key-vault-references.md). <br><br>- If you provide a client secret value, sign-in operations use the hybrid flow, returning both access and refresh tokens. <br><br>- If you don't provide a client secret, sign-in operations use the [OAuth 2.0 implicit grant flow](/entra/identity-platform/v2-oauth2-implicit-grant-flow). This method directly returns only an ID token or access token. These tokens are sent by the provider and stored in the EasyAuth token store. <br><br>**Important**: Due to security risks, the implict grant flow is [no longer a suitable authentication method](/entra/identity-platform/v2-oauth2-implicit-grant-flow#prefer-the-auth-code-flow). Instead, use either [authorization code flow with Proof Key for Code Exchange (PKCE)](/entra/msal/dotnet/advanced/spa-authorization-code) or [single-page application (SPA) authorization codes](/entra/msal/dotnet/advanced/spa-authorization-code). |
+ | **Client secret** | Optional, but recommended | <*client-secret*> | The secret value that the app uses to prove its identity when requesting a token. The client secret is created and stored in your app's configuration as a slot-sticky [application setting](../app-service/configure-common.md#configure-app-settings) named **MICROSOFT_PROVIDER_AUTHENTICATION_SECRET**. <br><br>- Make sure to regularly rotate secrets and store them securely. For example, manage your secrets in Azure Key Vault where you can use a managed identity to retrieve the key without exposing the value to an unauthorized user. You can update this setting to use [Key Vault references](../app-service/app-service-key-vault-references.md). <br><br>- If you provide a client secret value, sign-in operations use the hybrid flow, returning both access and refresh tokens. <br><br>- If you don't provide a client secret, sign-in operations use the [OAuth 2.0 implicit grant flow](/entra/identity-platform/v2-oauth2-implicit-grant-flow). This method directly returns only an ID token or access token. These tokens are sent by the provider and stored in the EasyAuth token store. <br><br>**Important**: Due to security risks, the implicit grant flow is [no longer a suitable authentication method](/entra/identity-platform/v2-oauth2-implicit-grant-flow#prefer-the-auth-code-flow). Instead, use either [authorization code flow with Proof Key for Code Exchange (PKCE)](/entra/msal/dotnet/advanced/spa-authorization-code) or [single-page application (SPA) authorization codes](/entra/msal/dotnet/advanced/spa-authorization-code). |
| **Issuer URL** | No | **<*authentication-endpoint-URL*>/<*Entra-tenant-ID*>/v2.0** | This URL redirects users to the correct Microsoft Entra tenant and downloads the appropriate metadata to determine the appropriate token signing keys and token issuer claim value. For apps that use Azure AD v1, omit **/v2.0** from the URL. <br><br>For this scenario, use the following URL: **`https://sts.windows.net/`<*Entra-tenant-ID*>** | | **Allowed token audiences** | No | <*application-ID-URI*> | The application ID URI (resource ID) for the function app. For a cloud or server app where you want to allow authentication tokens from a web app, add the application ID URI for the web app. The configured client ID is always implicitly considered as an allowed audience. <br><br>For this scenario, the value is **`https://management.azure.com`**. Later, you can use the same URI in the **Audience** property when you [set up your function action in your workflow to use the managed identity](create-managed-service-identity.md#authenticate-access-with-identity). <br><br>**Important**: The application ID URI (resource ID) must exactly match the value that Microsoft Entra ID expects, including any required trailing slashes. | At this point, your version looks similar to this example:
- :::image type="content" source="media/logic-apps-azure-functions/identity-provider-authentication-settings.png" alt-text="Screenshot shows app registration for your logic app and identity provider for your function app." lightbox="media/logic-apps-azure-functions/identity-provider-authentication-settings.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/identity-provider-authentication-settings.png" alt-text="Screenshot shows app registration for your logic app and identity provider for your function app." lightbox="media/call-azure-functions-from-workflows/identity-provider-authentication-settings.png":::
If you're setting up your function app with an identity provider for the first time, the **App Service authentication settings** section also appears. These options determine how your function app responds to unauthenticated requests. The default selection redirects all requests to log in with the new identity provider. You can customize this behavior now or adjust these settings later from the main **Authentication** page by selecting **Edit** next to **Authentication settings**. To learn more about these options, review [Authentication flow - Authentication and authorization in Azure App Service and Azure Functions](../app-service/overview-authentication-authorization.md#authentication-flow).
Now that you have the tenant ID and the application ID, you can set up your func
1. Copy the app registration's **App (client) ID** to use later in the Azure Functions action's **Audience** property for your workflow.
- :::image type="content" source="media/logic-apps-azure-functions/identity-provider-application-id.png" alt-text="Screenshot shows new identity provider for function app." lightbox="media/logic-apps-azure-functions/identity-provider-application-id.png":::
+ :::image type="content" source="media/call-azure-functions-from-workflows/identity-provider-application-id.png" alt-text="Screenshot shows new identity provider for function app." lightbox="media/call-azure-functions-from-workflows/identity-provider-application-id.png":::
1. Return to the designer and follow the [steps to authenticate access with the managed identity](create-managed-service-identity.md#authenticate-access-with-identity) by using the built-in Azure Functions action.
logic-apps Logic Apps Examples And Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-examples-and-scenarios.md
You can access, convert, and transform multiple content types by using the many
Azure Logic Apps integrates with many services, such as Azure Functions, Azure API Management, Azure App Service, and custom HTTP endpoints, for example, REST and SOAP.
-* [Call Azure Functions from Azure Logic Apps](../logic-apps/logic-apps-azure-functions.md)
+* [Call Azure Functions from Azure Logic Apps](call-azure-functions-from-workflows.md)
* [Tutorial: Create a streaming customer insights dashboard with Azure Logic Apps and Azure Functions](../logic-apps/logic-apps-scenario-social-serverless.md)
* [Tutorial: Create a function that integrates with Azure Logic Apps and Azure AI services to analyze X post sentiment](../azure-functions/functions-twitter-email.md)
* [Tutorial: Build an AI-powered social dashboard by using Power BI and Azure Logic Apps](/shows/)
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
When the [managed identity](/entra/identity/managed-identities-azure-resources/o
1. Before your logic app can use a managed identity, follow the steps in [Authenticate access to Azure resources by using managed identities in Azure Logic Apps](authenticate-with-managed-identity.md). These steps enable the managed identity on your logic app and set up that identity's access to the target Azure resource.
-1. Before an Azure function can use a managed identity, first [enable authentication for Azure functions](logic-apps-azure-functions.md#enable-authentication-functions).
+1. Before an Azure function can use a managed identity, first [enable authentication for Azure functions](call-azure-functions-from-workflows.md#enable-authentication-functions).
1. In the trigger or action that supports using a managed identity, provide this information:
If your organization doesn't permit connecting to specific resources by using th
* Standard logic app workflows can privately and securely communicate with an Azure virtual network through private endpoints that you set up for inbound traffic and virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
-* To run your own code or perform XML transformation, [create and call an Azure function](../logic-apps/logic-apps-azure-functions.md), rather than use the [inline code capability](../logic-apps/logic-apps-add-run-inline-code.md) or provide [assemblies to use as maps](../logic-apps/logic-apps-enterprise-integration-maps.md), respectively. Also, set up the hosting environment for your function app to comply with your isolation requirements.
+* To run your own code or perform XML transformation, [create and call an Azure function](call-azure-functions-from-workflows.md), rather than use the [inline code capability](../logic-apps/logic-apps-add-run-inline-code.md) or provide [assemblies to use as maps](../logic-apps/logic-apps-enterprise-integration-maps.md), respectively. Also, set up the hosting environment for your function app to comply with your isolation requirements.
For example, to meet Impact Level 5 requirements, create your function app with the [App Service plan](../azure-functions/dedicated-plan.md) using the [**Isolated** pricing tier](../app-service/overview-hosting-plans.md) along with an [App Service Environment (ASE)](../app-service/environment/intro.md) that also uses the **Isolated** pricing tier. In this environment, function apps run on dedicated Azure virtual machines and dedicated Azure virtual networks, which provide network isolation on top of compute isolation for your apps and maximum scale-out capabilities.
logic-apps Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/plan-manage-costs.md
To help you reduce costs on your logic apps and related resources, try these options:
* If possible, use [built-in triggers and actions](../connectors/built-in.md), which cost less to run per execution than [managed connector triggers and actions](../connectors/managed.md).
- For example, you might be able to reduce costs when accessing other resources by using the [HTTP action](../connectors/connectors-native-http.md) or by calling a function that you created by using the [Azure Functions service](../azure-functions/functions-overview.md) and using the [built-in Azure Functions action](../logic-apps/logic-apps-azure-functions.md). However, using Azure Functions also incurs costs, so make sure that you compare your options.
+ For example, you might be able to reduce costs when accessing other resources by using the [HTTP action](../connectors/connectors-native-http.md) or by calling a function that you created by using the [Azure Functions service](../azure-functions/functions-overview.md) and using the [built-in Azure Functions action](call-azure-functions-from-workflows.md). However, using Azure Functions also incurs costs, so make sure that you compare your options.
* [Specify precise trigger conditions](logic-apps-workflow-actions-triggers.md#trigger-conditions) for running a workflow.
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
For the **Standard** logic app workflow, these capabilities have changed, or the
* The following triggers and actions have either changed or are currently limited, unsupported, or unavailable:
- * The built-in action, [Azure Functions - Choose an Azure function](logic-apps-azure-functions.md) is now **Azure Functions Operations - Call an Azure function**. This action currently works only for functions that are created from the **HTTP Trigger** template.
+ * The built-in action, [Azure Functions - Choose an Azure function](call-azure-functions-from-workflows.md) is now **Azure Functions Operations - Call an Azure function**. This action currently works only for functions that are created from the **HTTP Trigger** template.
In the Azure portal, you can select an HTTP trigger function that you can access by creating a connection through the user experience. If you inspect the function action's JSON definition in code view or the **workflow.json** file using Visual Studio Code, the action refers to the function by using a `connectionName` reference. This version abstracts the function's information as a connection, which you can find in your logic app project's **connections.json** file, which is available after you create a connection in Visual Studio Code.
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
For example, you can calculate values by using math functions, such as the [add(
| Return a string in lowercase format. | toLower('<*text*>') <br><br>For example: toLower('Hello') | "hello" |
| Return a globally unique identifier (GUID). | guid() | "c2ecc88d-88c8-4096-912c-d6f2e2b138ce" |
To find functions [based on their general purpose](#ordered-by-purpose), review the following tables. Or, for detailed information about each function, see the [alphabetical list](#alphabetical-list).
Here are some other general ways that you can use functions in expressions:
| 1. Get the *parameterName*'s value by using the nested `parameters()` function. <br>2. Perform work with the result by passing that value to *functionName*. | "\@<*functionName*>(parameters('<*parameterName*>'))" |
| 1. Get the result from the nested inner function *functionName*. <br>2. Pass the result to the outer function *functionName2*. | "\@<*functionName2*>(<*functionName*>(<*item*>))" |
| 1. Get the result from *functionName*. <br>2. Given that the result is an object with property *propertyName*, get that property's value. | "\@<*functionName*>(<*item*>).<*propertyName*>" |
For example, the `concat()` function can take two or more string values as parameters. This function combines those strings into one string. You can either pass in string literals, for example, "Sophia" and "Owen" so that you get a combined string, "SophiaOwen":
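```
concat('Sophia', 'Owen')
```

And returns this result: `"SophiaOwen"`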
To work with strings, you can use these string functions and also some [collecti
| [toLower](../logic-apps/workflow-definition-language-functions-reference.md#toLower) | Return a string in lowercase format. |
| [toUpper](../logic-apps/workflow-definition-language-functions-reference.md#toUpper) | Return a string in uppercase format. |
| [trim](../logic-apps/workflow-definition-language-functions-reference.md#trim) | Remove leading and trailing whitespace from a string, and return the updated string. |
<a name="collection-functions"></a>
To work with collections, generally arrays, strings, and sometimes, dictionaries
| [sort](../logic-apps/workflow-definition-language-functions-reference.md#sort) | Sort items in a collection. |
| [take](../logic-apps/workflow-definition-language-functions-reference.md#take) | Return items from the front of a collection. |
| [union](../logic-apps/workflow-definition-language-functions-reference.md#union) | Return a collection that has *all* the items from the specified collections. |
<a name="comparison-functions"></a>
To work with conditions, compare values and expression results, or evaluate vari
| [lessOrEquals](../logic-apps/workflow-definition-language-functions-reference.md#lessOrEquals) | Check whether the first value is less than or equal to the second value. |
| [not](../logic-apps/workflow-definition-language-functions-reference.md#not) | Check whether an expression is false. |
| [or](../logic-apps/workflow-definition-language-functions-reference.md#or) | Check whether at least one expression is true. |
<a name="conversion-functions"></a>
To change a value's type or format, you can use these conversion functions. For
| [uriComponentToBinary](../logic-apps/workflow-definition-language-functions-reference.md#uriComponentToBinary) | Return the binary version for a URI-encoded string. |
| [uriComponentToString](../logic-apps/workflow-definition-language-functions-reference.md#uriComponentToString) | Return the string version for a URI-encoded string. |
| [xml](../logic-apps/workflow-definition-language-functions-reference.md#xml) | Return the XML version for a string. |
<a name="implicit-data-conversions"></a>
For the full reference about each function, see the
| [rand](../logic-apps/workflow-definition-language-functions-reference.md#rand) | Return a random integer from a specified range. |
| [range](../logic-apps/workflow-definition-language-functions-reference.md#range) | Return an integer array that starts from a specified integer. |
| [sub](../logic-apps/workflow-definition-language-functions-reference.md#sub) | Return the result from subtracting the second number from the first number. |
<a name="date-time-functions"></a>
For the full reference about each function, see the
| [subtractFromTime](../logic-apps/workflow-definition-language-functions-reference.md#subtractFromTime) | Subtract a number of time units from a timestamp. See also [getPastTime](../logic-apps/workflow-definition-language-functions-reference.md#getPastTime). |
| [ticks](../logic-apps/workflow-definition-language-functions-reference.md#ticks) | Return the `ticks` property value for a specified timestamp. |
| [utcNow](../logic-apps/workflow-definition-language-functions-reference.md#utcNow) | Return the current timestamp as a string. |
<a name="workflow-functions"></a>
For the full reference about each function, see the
| [triggerOutputs](../logic-apps/workflow-definition-language-functions-reference.md#triggerOutputs) | Return a trigger's output at runtime, or values from other JSON name-and-value pairs. See [trigger](../logic-apps/workflow-definition-language-functions-reference.md#trigger). |
| [variables](../logic-apps/workflow-definition-language-functions-reference.md#variables) | Return the value for a specified variable. |
| [workflow](../logic-apps/workflow-definition-language-functions-reference.md#workflow) | Return all the details about the workflow itself during run time. |
<a name="uri-parsing-functions"></a>
For the full reference about each function, see the
| [uriPort](../logic-apps/workflow-definition-language-functions-reference.md#uriPort) | Return the `port` value for a uniform resource identifier (URI). |
| [uriQuery](../logic-apps/workflow-definition-language-functions-reference.md#uriQuery) | Return the `query` value for a uniform resource identifier (URI). |
| [uriScheme](../logic-apps/workflow-definition-language-functions-reference.md#uriScheme) | Return the `scheme` value for a uniform resource identifier (URI). |
<a name="manipulation-functions"></a>
For the full reference about each function, see the
| [removeProperty](../logic-apps/workflow-definition-language-functions-reference.md#removeProperty) | Remove a property from a JSON object and return the updated object. |
| [setProperty](../logic-apps/workflow-definition-language-functions-reference.md#setProperty) | Set the value for a JSON object's property and return the updated object. |
| [xpath](../logic-apps/workflow-definition-language-functions-reference.md#xpath) | Check XML for nodes or values that match an XPath (XML Path Language) expression, and return the matching nodes or values. |
##
action().outputs.body.<property>
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*property*> | No | String | The name for the action object's property whose value you want: **name**, **startTime**, **endTime**, **inputs**, **outputs**, **status**, **code**, **trackingId**, and **clientTrackingId**. In the Azure portal, you can find these properties by reviewing a specific run history's details. For more information, see [REST API - Workflow Run Actions](/rest/api/logic/workflowrunactions/get). |

| Return value | Type | Description |
|---|---|---|
| <*action-output*> | String | The output from the current action or property |
<a name="actions"></a>
actions('<actionName>').outputs.body.<property>
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*actionName*> | Yes | String | The name for the action object whose output you want |
| <*property*> | No | String | The name for the action object's property whose value you want: **name**, **startTime**, **endTime**, **inputs**, **outputs**, **status**, **code**, **trackingId**, and **clientTrackingId**. In the Azure portal, you can find these properties by reviewing a specific run history's details. For more information, see [REST API - Workflow Run Actions](/rest/api/logic/workflowrunactions/get). |

| Return value | Type | Description |
|---|---|---|
| <*action-output*> | String | The output from the specified action or property |
*Example*
add(<summand_1>, <summand_2>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*summand_1*>, <*summand_2*> | Yes | Integer, Float, or mixed | The numbers to add |

| Return value | Type | Description |
|---|---|---|
| <*result-sum*> | Integer or Float | The result from adding the specified numbers |
*Example*
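For instance, this expression adds an integer and a float, so the result is a float:

```
add(1, 1.5)
```

And returns this result: `2.5`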
addDays('<timestamp>', <days>, '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*days*> | Yes | Integer | The positive or negative number of days to add |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*updated-timestamp*> | String | The timestamp plus the specified number of days |
*Example 1*
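As a minimal sketch, assuming this timestamp value, the following expression adds 10 days:

```
addDays('2018-03-15T13:00:00Z', 10)
```

And returns a result such as: `"2018-03-25T13:00:00.0000000Z"`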
addHours('<timestamp>', <hours>, '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*hours*> | Yes | Integer | The positive or negative number of hours to add |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*updated-timestamp*> | String | The timestamp plus the specified number of hours |
*Example 1*
addMinutes('<timestamp>', <minutes>, '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*minutes*> | Yes | Integer | The positive or negative number of minutes to add |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*updated-timestamp*> | String | The timestamp plus the specified number of minutes |
*Example 1*
addProperty(<object>, '<property>', <value>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*object*> | Yes | Object | The JSON object where you want to add a property |
| <*property*> | Yes | String | The name for the property to add |
| <*value*> | Yes | Any | The value for the property |

| Return value | Type | Description |
|---|---|---|
| <*updated-object*> | Object | The updated JSON object with the specified property |
To add a parent property to an existing property, use the `setProperty()` function, not the `addProperty()` function. Otherwise, the function returns only the child object as output.
setProperty(<object>, '<parent-property>', addProperty(<object>['<parent-property>'], '<child-property>', <value>))
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*parent-property*> | Yes | String | The name for parent property where you want to add the child property |
| <*child-property*> | Yes | String | The name for the child property to add |
| <*value*> | Yes | Any | The value to set for the specified property |

| Return value | Type | Description |
|---|---|---|
| <*updated-object*> | Object | The updated JSON object whose property you set |
*Example 1*
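As an illustrative sketch, assuming a hypothetical object parameter named `customer` that already has an `address` property, this expression adds a `country` child property to that address:

```
setProperty(parameters('customer'), 'address', addProperty(parameters('customer')['address'], 'country', 'USA'))
```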
addSeconds('<timestamp>', <seconds>, '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*seconds*> | Yes | Integer | The positive or negative number of seconds to add |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*updated-timestamp*> | String | The timestamp plus the specified number of seconds |
*Example 1*
addToTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*interval*> | Yes | Integer | The number of specified time units to add |
| <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*updated-timestamp*> | String | The timestamp plus the specified number of time units |
*Example 1*
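For instance, this sketch, with an assumed timestamp, adds one day:

```
addToTime('2018-01-01T00:00:00Z', 1, 'Day')
```

And returns a result such as: `"2018-01-02T00:00:00.0000000Z"`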
and(<expression1>, <expression2>, ...)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*expression1*>, <*expression2*>, ... | Yes | Boolean | The expressions to check |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when all expressions are true. Return false when at least one expression is false. |
*Example 1*
array('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The string for creating an array |

| Return value | Type | Description |
|---|---|---|
| [<*value*>] | Array | An array that contains the single specified input |
*Example*
base64('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The input string |

| Return value | Type | Description |
|---|---|---|
| <*base64-string*> | String | The base64-encoded version for the input string |
*Example*
base64ToBinary('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The base64-encoded string to convert |

| Return value | Type | Description |
|---|---|---|
| <*binary-for-base64-string*> | String | The binary version for the base64-encoded string |
*Example*
base64ToString('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The base64-encoded string to decode |

| Return value | Type | Description |
|---|---|---|
| <*decoded-base64-string*> | String | The string version for a base64-encoded string |
*Example*
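As a round-trip sketch with an assumed input string, these expressions encode and then decode a value:

```
base64('hello')
base64ToString('aGVsbG8=')
```

The first expression returns `"aGVsbG8="`, and the second returns the original string, `"hello"`.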
binary('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The string to convert |

| Return value | Type | Description |
|---|---|---|
| <*binary-for-input-value*> | String | The base64-encoded binary version for the specified string |
*Example*
body('<actionName>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*actionName*> | Yes | String | The name for the action's `body` output that you want |

| Return value | Type | Description |
|---|---|---|
| <*action-body-output*> | String | The `body` output from the specified action |
*Example*
bool(<value>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | Any | The value to convert to Boolean. |

If you're using `bool()` with an object, the value of the object must be a string or integer that can be converted to Boolean.

| Return value | Type | Description |
|---|---|---|
| `true` or `false` | Boolean | The Boolean version of the specified value. |
*Outputs*
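For instance, these expressions convert numeric values:

```
bool(1)
bool(0)
```

And return `true` and `false`, respectively.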
chunk([<collection>], '<length>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*collection*> | Yes | String or Array | The collection to split |
| <*length*> | Yes | Integer | The length of each chunk |

| Return value | Type | Description |
|---|---|---|
| <*collection*> | Array | An array of chunks with the specified length |
*Example 1*
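For instance, this sketch, with an assumed input string, splits the string into chunks of length 4:

```
chunk('abcdefgh', 4)
```

And returns an array such as: `["abcd", "efgh"]`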
coalesce(<object_1>, <object_2>, ...)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*object_1*>, <*object_2*>, ... | Yes | Any, can mix types | One or more items to check for null |

| Return value | Type | Description |
|---|---|---|
| <*first-non-null-item*> | Any | The first item or value that isn't null. If all parameters are null, this function returns null. |
*Example*
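For instance, this expression returns the first value that isn't null:

```
coalesce(null, 'hello', 'world')
```

And returns this result: `"hello"`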
concat('<text1>', '<text2>', ...)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*text1*>, <*text2*>, ... | Yes | String | At least two strings to combine |

| Return value | Type | Description |
|---|---|---|
| <*text1text2...*> | String | The string created from the combined input strings. <br><br>**Note**: The length of the result must not exceed 104,857,600 characters. |
> [!NOTE]
> Azure Logic Apps automatically or implicitly performs base64 encoding and decoding, so you don't have to manually perform these conversions.
Specifically, this function works on these collection types:
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*collection*> | Yes | String, Array, or Dictionary | The collection to check |
| <*value*> | Yes | String, Array, or Dictionary, respectively | The item to find |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the item is found. Return false when not found. |
*Example 1*
convertFromUtc('<timestamp>', '<destinationTimeZone>', '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, review [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones). |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*converted-timestamp*> | String | The timestamp converted to the target time zone without the timezone UTC offset. |
*Example 1*
convertTimeZone('<timestamp>', '<sourceTimeZone>', '<destinationTimeZone>', '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
| <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*converted-timestamp*> | String | The timestamp converted to the target time zone |
*Example 1*
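As a sketch with an assumed timestamp, this expression converts a UTC timestamp to Pacific Standard Time:

```
convertTimeZone('2018-01-01T08:00:00Z', 'UTC', 'Pacific Standard Time')
```

And returns a result such as: `"2018-01-01T00:00:00.0000000"`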
convertToUtc('<timestamp>', '<sourceTimeZone>', '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*converted-timestamp*> | String | The timestamp converted to UTC |
*Example 1*
createArray('<object1>', '<object2>', ...)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*object1*>, <*object2*>, ... | Yes | Any, but not mixed | At least two items to create the array |

| Return value | Type | Description |
|---|---|---|
| [<*object1*>, <*object2*>, ...] | Array | The array created from all the input items |
*Example*
dataUri('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The string to convert |

| Return value | Type | Description |
|---|---|---|
| <*data-uri*> | String | The data URI for the input string |
*Example*
dataUriToBinary('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The data URI to convert |

| Return value | Type | Description |
|---|---|---|
| <*binary-for-data-uri*> | String | The binary version for the data URI |
*Example*
dataUriToString('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The data URI to convert |

| Return value | Type | Description |
|---|---|---|
| <*string-for-data-uri*> | String | The string version for the data URI |
*Example*
dateDifference('<startDate>', '<endDate>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*startDate*> | Yes | String | A string that contains a timestamp |
| <*endDate*> | Yes | String | A string that contains a timestamp |

| Return value | Type | Description |
|---|---|---|
| <*timespan*> | String | The difference between the two timestamps, which is a timestamp in string format. If `startDate` is more recent than `endDate`, the result is a negative value. |
*Example*
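For instance, this sketch, with assumed timestamps one day apart, returns the difference as a timespan string:

```
dateDifference('2018-01-01T00:00:00Z', '2018-01-02T00:00:00Z')
```

And returns this result: `"1.00:00:00"`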
dayOfMonth('<timestamp>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |

| Return value | Type | Description |
|---|---|---|
| <*day-of-month*> | Integer | The day of the month from the specified timestamp |
*Example*
dayOfWeek('<timestamp>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |

| Return value | Type | Description |
|---|---|---|
| <*day-of-week*> | Integer | The day of the week from the specified timestamp where Sunday is 0, Monday is 1, and so on |
*Example*
dayOfYear('<timestamp>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |

| Return value | Type | Description |
|---|---|---|
| <*day-of-year*> | Integer | The day of the year from the specified timestamp |
*Example*
decimal('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The decimal number in a string |

| Return value | Type | Description |
|---|---|---|
| <*decimal*> | Decimal Number | The decimal number for the input string |
*Example 1*
decodeDataUri('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The data URI string to decode |

| Return value | Type | Description |
|---|---|---|
| <*binary-for-data-uri*> | String | The binary version for a data URI string |
*Example*
decodeUriComponent('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The string with the escape characters to decode |

| Return value | Type | Description |
|---|---|---|
| <*decoded-uri*> | String | The updated string with the decoded escape characters |
*Example*
div(<dividend>, <divisor>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*dividend*> | Yes | Integer or Float | The number to divide by the *divisor* |
| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*, but can't be zero |

| Return value | Type | Description |
|---|---|---|
-| <*quotient-result*> | Integer or Float | The result from dividing the first number by the second number. If either the dividend or divisor has Float type, the result has Float type. <br><br><br><br>**Note**: To convert the float result to an integer, try [creating and calling a function in Azure](../logic-apps/logic-apps-azure-functions.md) from your logic app. |
+| <*quotient-result*> | Integer or Float | The result from dividing the first number by the second number. If either the dividend or divisor has Float type, the result has Float type. <br><br><br><br>**Note**: To convert the float result to an integer, try [creating and calling a function in Azure](call-azure-functions-from-workflows.md) from your logic app. |
*Example 1*
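For instance, these sketches, with assumed operands, contrast integer and float division:

```
div(11, 5)
div(11.0, 5)
```

The first expression returns `2`, while the second returns `2.2` because one operand has Float type.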
encodeUriComponent('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The string to convert to URI-encoded format |

| Return value | Type | Description |
|---|---|---|
| <*encoded-uri*> | String | The URI-encoded string with escape characters |
*Example*
empty([<collection>])
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*collection*> | Yes | String, Array, or Object | The collection to check |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the collection is empty. Return false when not empty. |
*Example*
endsWith('<text>', '<searchText>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*text*> | Yes | String | The string to check |
| <*searchText*> | Yes | String | The ending substring to find |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the ending substring is found. Return false when not found. |
*Example 1*
equals('<object1>', '<object2>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*object1*>, <*object2*> | Yes | Various | The values, expressions, or objects to compare |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when both are equivalent. Return false when not equivalent. |
*Example*
first([<collection>])
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*collection*> | Yes | String or Array | The collection where to find the first item |

| Return value | Type | Description |
|---|---|---|
| <*first-collection-item*> | Any | The first item in the collection |
*Example*
float('<value>', '<locale>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The string that has a valid floating-point number to convert. The minimum and maximum values are the same as the limits for the float data type. |
| <*locale*> | No | String | The RFC 4646 locale code to use. <br><br>If not specified, default locale is used. <br><br>If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |

| Return value | Type | Description |
|---|---|---|
| <*float-value*> | Float | The floating-point number for the specified string. The minimum and maximum values are the same as the limits for the float data type. |
*Example 1*
formatDateTime('<timestamp>', '<format>'?, '<locale>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
| <*locale*> | No | String | The locale to use. If unspecified, the value is `en-us`. If *locale* isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*reformatted-timestamp*> | String | The updated timestamp in the specified format and locale, if specified. |
*Examples*
formDataMultiValues('<actionName>', '<key>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*actionName*> | Yes | String | The action whose output has the key value you want |
| <*key*> | Yes | String | The name for the key whose value you want |

| Return value | Type | Description |
|---|---|---|
| [<*array-with-key-values*>] | Array | An array with all the values that match the specified key |
*Example*
formDataValue('<actionName>', '<key>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*actionName*> | Yes | String | The action whose output has the key value you want |
| <*key*> | Yes | String | The name for the key whose value you want |

| Return value | Type | Description |
|---|---|---|
| <*key-value*> | String | The value in the specified key |
*Example*
formatNumber(<number>, <format>, <locale>?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*number*> | Yes | Integer or Double | The value that you want to format. |
| <*format*> | Yes | String | A composite format string that specifies the format that you want to use. For the supported numeric format strings, see [Standard numeric format strings](/dotnet/standard/base-types/standard-numeric-format-strings), which are supported by `number.ToString(<format>, <locale>)`. |
| <*locale*> | No | String | The locale to use as supported by `number.ToString(<format>, <locale>)`. If unspecified, the value is `en-us`. If *locale* isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*formatted-number*> | String | The specified number as a string in the format that you specified. You can cast this return value to an `int` or `float`. |
*Example 1*
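As a sketch with an assumed value, this expression applies the fixed-point format string with two decimal places:

```
formatNumber(1234.5678, 'F2')
```

And returns a result such as: `"1234.57"`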
getFutureTime(<interval>, <timeUnit>, <format>?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*interval*> | Yes | Integer | The number of time units to add |
| <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |

| Return value | Type | Description |
|---|---|---|
| <*updated-timestamp*> | String | The current timestamp plus the specified number of time units |
*Example 1*
getPastTime(<interval>, <timeUnit>, <format>?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*interval*> | Yes | Integer | The number of specified time units to subtract |
| <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |

| Return value | Type | Description |
|---|---|---|
| <*updated-timestamp*> | String | The current timestamp minus the specified number of time units |
*Example 1*
greater('<value>', '<compareTo>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | Integer, Float, or String | The first value to check whether greater than the second value |
| <*compareTo*> | Yes | Integer, Float, or String, respectively | The comparison value |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the first value is greater than the second value. Return false when the first value is equal to or less than the second value. |
*Example*
greaterOrEquals('<value>', '<compareTo>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | Integer, Float, or String | The first value to check whether greater than or equal to the second value |
| <*compareTo*> | Yes | Integer, Float, or String, respectively | The comparison value |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the first value is greater than or equal to the second value. Return false when the first value is less than the second value. |
*Example*
guid('<format>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*format*> | No | String | A single [format specifier](/dotnet/api/system.guid.tostring#system_guid_tostring_system_string_) for the returned GUID. By default, the format is "D", but you can use "N", "D", "B", "P", or "X". |

| Return value | Type | Description |
|---|---|---|
| <*GUID-value*> | String | A randomly generated GUID |
*Example*
if(<expression>, <valueIfTrue>, <valueIfFalse>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*expression*> | Yes | Boolean | The expression to check |
| <*valueIfTrue*> | Yes | Any | The value to return when the expression is true |
| <*valueIfFalse*> | Yes | Any | The value to return when the expression is false |

| Return value | Type | Description |
|---|---|---|
| <*specified-return-value*> | Any | The specified value that returns based on whether the expression is true or false |
*Example*
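For instance, this expression returns "yes" because the compared values are equal:

```
if(equals(1, 1), 'yes', 'no')
```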
indexOf('<text>', '<searchText>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*text*> | Yes | String | The string that has the substring to find |
| <*searchText*> | Yes | String | The substring to find |

| Return value | Type | Description |
|---|---|---|
| <*index-value*> | Integer | The starting position or index value for the specified substring. <br><br>If the string isn't found, return the number -1. |
*Example*
int('<value>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The string version for the integer to convert. The minimum and maximum values are the same as the limits for the integer data type. |

| Return value | Type | Description |
|---|---|---|
| <*integer-result*> | Integer | The integer version for the specified string. The minimum and maximum values are the same as the limits for the integer data type. |
*Example*
isFloat('<string>', '<locale>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String | The string to examine |
| <*locale*> | No | String | The RFC 4646 locale code to use |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the string is a floating-point number. Return false when it's not. |
isInt('<string>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*string*> | Yes | String | The string to examine |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the string is an integer. Return false when it's not. |
item()
| Return value | Type | Description |
|---|---|---|
| <*current-array-item*> | Any | The current item in the array for the action's current iteration |
*Example*
items('<loopName>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*loopName*> | Yes | String | The name for the for-each loop |

| Return value | Type | Description |
|---|---|---|
| <*item*> | Any | The item from the current cycle in the specified for-each loop |
*Example*
iterationIndexes('<loopName>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*loopName*> | Yes | String | The name for the Until loop |

| Return value | Type | Description |
|---|---|---|
| <*index*> | Integer | The index value for the current iteration inside the specified Until loop |
*Example*
json(xml('value'))
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | String or XML | The string or XML to convert |

| Return value | Type | Description |
|---|---|---|
| <*JSON-result*> | JSON native type, object, or array | The JSON native type value, object, or array of objects from the input string or XML. <br><br>- If you pass in XML that has a single child element in the root element, the function returns a single JSON object for that child element. <br><br>- If you pass in XML that has multiple child elements in the root element, the function returns an array that contains JSON objects for those child elements. <br><br>- If the string is null, the function returns an empty object. |
*Example 1*
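As a sketch with an assumed input string, this expression converts a JSON-formatted string into a native JSON object:

```
json('{"fullName": "Sophia Owen"}')
```

And returns an object such as: `{ "fullName": "Sophia Owen" }`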
intersection('<collection1>', '<collection2>', ...)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*collection1*>, <*collection2*>, ... | Yes | Array or Object, but not both | The collections from where you want *only* the common items |

| Return value | Type | Description |
|---|---|---|
| <*common-items*> | Array or Object, respectively | A collection that has only the common items across the specified collections |
*Example*
join([<collection>], '<delimiter>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*collection*> | Yes | Array | The array that has the items to join |
| <*delimiter*> | Yes | String | The separator that appears between each character in the resulting string |

| Return value | Type | Description |
|---|---|---|
| <*char1*><*delimiter*><*char2*><*delimiter*>... | String | The resulting string created from all the items in the specified array. <br><br>**Note**: The length of the result must not exceed 104,857,600 characters. |
*Example*
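For instance, this expression joins an array with a period delimiter:

```
join(createArray('a', 'b', 'c'), '.')
```

And returns this result: `"a.b.c"`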
last([<collection>])
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*collection*> | Yes | String or Array | The collection where to find the last item |

| Return value | Type | Description |
|---|---|---|
| <*last-collection-item*> | String or Array, respectively | The last item in the collection |
*Example*
lastIndexOf('<text>', '<searchText>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*text*> | Yes | String | The string that has the substring to find |
| <*searchText*> | Yes | String | The substring to find |

| Return value | Type | Description |
|---|---|---|
| <*ending-index-value*> | Integer | The starting position or index value for the last occurrence of the specified substring. |
If the string or substring value is empty, the following behavior occurs:
length([<collection>])
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*collection*> | Yes | String or Array | The collection with the items to count |

| Return value | Type | Description |
|---|---|---|
| <*length-or-count*> | Integer | The number of items in the collection |
*Example*
less('<value>', '<compareTo>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | Integer, Float, or String | The first value to check whether less than the second value |
| <*compareTo*> | Yes | Integer, Float, or String, respectively | The comparison item |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the first value is less than the second value. Return false when the first value is equal to or greater than the second value. |
*Example*
lessOrEquals('<value>', '<compareTo>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*value*> | Yes | Integer, Float, or String | The first value to check whether less than or equal to the second value |
| <*compareTo*> | Yes | Integer, Float, or String, respectively | The comparison item |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the first value is less than or equal to the second value. Return false when the first value is greater than the second value. |
*Example*
listCallbackUrl()
| Return value | Type | Description |
|---|---|---|
| <*callback-URL*> | String | The callback URL for a trigger or action |
*Example*
max([<number1>, <number2>, ...])
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*number1*>, <*number2*>, ... | Yes | Integer, Float, or both | The set of numbers from which you want the highest value |
| [<*number1*>, <*number2*>, ...] | Yes | Array - Integer, Float, or both | The array of numbers from which you want the highest value |

| Return value | Type | Description |
|---|---|---|
| <*max-value*> | Integer or Float | The highest value in the specified array or set of numbers |
*Example*
min([<number1>, <number2>, ...])
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*number1*>, <*number2*>, ... | Yes | Integer, Float, or both | The set of numbers from which you want the lowest value |
| [<*number1*>, <*number2*>, ...] | Yes | Array - Integer, Float, or both | The array of numbers from which you want the lowest value |

| Return value | Type | Description |
|---|---|---|
| <*min-value*> | Integer or Float | The lowest value in the specified set of numbers or specified array |
*Example*
mod(<dividend>, <divisor>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*dividend*> | Yes | Integer or Float | The number to divide by the *divisor* |
| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*, but can't be zero |

| Return value | Type | Description |
|---|---|---|
| <*modulo-result*> | Integer or Float | The remainder from dividing the first number by the second number |
*Example 1*
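For instance, this expression returns the remainder from dividing 3 by 2:

```
mod(3, 2)
```

And returns this result: `1`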
mul(<multiplicand1>, <multiplicand2>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*multiplicand1*> | Yes | Integer or Float | The number to multiply by *multiplicand2* |
| <*multiplicand2*> | Yes | Integer or Float | The number that multiplies *multiplicand1* |

| Return value | Type | Description |
|---|---|---|
| <*product-result*> | Integer or Float | The product from multiplying the first number by the second number |
*Example*
multipartBody('<actionName>', <index>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*actionName*> | Yes | String | The name for the action that has output with multiple parts |
| <*index*> | Yes | Integer | The index value for the part that you want |

| Return value | Type | Description |
|---|---|---|
| <*body*> | String | The body for the specified part |
## N
not(<expression>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*expression*> | Yes | Boolean | The expression to check |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when the expression is false. Return false when the expression is true. |
*Example 1*
nthIndexOf('<text>', '<searchText>', <occurrence>)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*text*> | Yes | String | The string that contains the substring to find |
| <*searchText*> | Yes | String | The substring to find |
| <*occurrence*> | Yes | Integer | A number that specifies the *n*th occurrence of the substring to find. If *occurrence* is negative, start searching from the end. |

| Return value | Type | Description |
|---|---|---|
| <*index-value*> | Integer | The starting position or index value for the *n*th occurrence of the specified substring. If the substring isn't found or fewer than *n* occurrences of the substring exist, return `-1`. |
*Examples*
or(<expression1>, <expression2>, ...)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*expression1*>, <*expression2*>, ... | Yes | Boolean | The expressions to check |

| Return value | Type | Description |
|---|---|---|
| true or false | Boolean | Return true when at least one expression is true. Return false when all expressions are false. |
*Example 1*
outputs('<actionName>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*actionName*> | Yes | String | The name for the action's output that you want |

| Return value | Type | Description |
|---|---|---|
| <*output*> | String | The output from the specified action |
*Example*
parameters('<parameterName>')
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*parameterName*> | Yes | String | The name for the parameter whose value you want |

| Return value | Type | Description |
|---|---|---|
| <*parameter-value*> | Any | The value for the specified parameter |
*Example*
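As a sketch, assuming a hypothetical workflow parameter named `fullName` with the value "Sophia Owen", this expression returns that value:

```
parameters('fullName')
```

And returns this result: `"Sophia Owen"`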
parseDateTime('<timestamp>', '<locale>'?, '<format>'?)
| Parameter | Required | Type | Description |
|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*locale*> | No | String | The locale to use. <br><br>If not specified, the default locale is `en-us`. <br><br>If *locale* isn't a valid value, an error is generated. |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. If the format isn't specified, attempt parsing with multiple formats that are compatible with the provided locale. If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
|---|---|---|
| <*parsed-timestamp*> | String | The parsed timestamp in ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK) format, which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
*Examples*
rand(<minValue>, <maxValue>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*minValue*> | Yes | Integer | The lowest integer in the range |
| <*maxValue*> | Yes | Integer | The integer that follows the highest integer in the range that the function can return |

| Return value | Type | Description |
| --- | --- | --- |
| <*random-result*> | Integer | The random integer returned from the specified range |
*Example*
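For instance, this expression returns a random integer from 1 through 4; the *maxValue* of 5 is excluded:

```
rand(1, 5)
```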
range(<startIndex>, <count>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*startIndex*> | Yes | Integer | An integer value that starts the array as the first item |
| <*count*> | Yes | Integer | The number of integers in the array. The `count` parameter value must be a positive integer that doesn't exceed 100,000. <br><br>**Note**: The sum of the `startIndex` and `count` values must not exceed 2,147,483,647. |

| Return value | Type | Description |
| --- | --- | --- |
| [<*range-result*>] | Array | The array with integers starting from the specified index |
*Example*
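For instance, this expression returns the array `[1, 2, 3, 4]`:

```
range(1, 4)
```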
removeProperty(<object>, '<property>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*object*> | Yes | Object | The JSON object from where you want to remove a property |
| <*property*> | Yes | String | The name for the property to remove |

| Return value | Type | Description |
| --- | --- | --- |
| <*updated-object*> | Object | The updated JSON object without the specified property |
To remove a child property from an existing property, use this syntax:
removeProperty(<object>['<parent-property>'], '<child-property>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*object*> | Yes | Object | The JSON object whose property you want to remove |
| <*parent-property*> | Yes | String | The name for the parent property with the child property that you want to remove |
| <*child-property*> | Yes | String | The name for the child property to remove |

| Return value | Type | Description |
| --- | --- | --- |
| <*updated-object*> | Object | The updated JSON object without the child property that you removed |
*Example 1*
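For instance, this expression returns the object `{"a": 1}` (illustrative values):

```
removeProperty(json('{"a": 1, "b": 2}'), 'b')
```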
replace('<text>', '<oldText>', '<newText>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*text*> | Yes | String | The string that has the substring to replace |
| <*oldText*> | Yes | String | The substring to replace |
| <*newText*> | Yes | String | The replacement string |

| Return value | Type | Description |
| --- | --- | --- |
| <*updated-text*> | String | The updated string after replacing the substring <br><br>If the substring isn't found, return the original string. |
*Example*
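For instance, this expression returns `"the new string"`:

```
replace('the old string', 'old', 'new')
```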
result('<scopedActionName>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*scopedActionName*> | Yes | String | The name of the scoped action where you want the inputs and outputs from the top-level actions inside that scope |

| Return value | Type | Description |
| --- | --- | --- |
| <*array-object*> | Array object | An array that contains arrays of inputs and outputs from each top-level action inside the specified scope |
*Example*
reverse([<collection>])
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*collection*> | Yes | Array | The collection to reverse |

| Return value | Type | Description |
| --- | --- | --- |
| [<*updated-collection*>] | Array | The reversed collection |
*Example*
setProperty(<object>, '<property>', <value>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*object*> | Yes | Object | The JSON object whose property you want to set |
| <*property*> | Yes | String | The name for the existing or new property to set |
| <*value*> | Yes | Any | The value to set for the specified property |
To set the child property in a child object, use a nested `setProperty()` call instead. Otherwise, the function returns only the child object as output.
setProperty(<object>, '<parent-property>', setProperty(<object>['<parent-property>'], '<child-property>', <value>))
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*parent-property*> | Yes | String | The name for parent property with the child property that you want to set |
| <*child-property*> | Yes | String | The name for the child property to set |
| <*value*> | Yes | Any | The value to set for the specified property |

| Return value | Type | Description |
| --- | --- | --- |
| <*updated-object*> | Object | The updated JSON object whose property you set |
*Example 1*
skip([<collection>], <count>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*collection*> | Yes | Array | The collection whose items you want to remove |
| <*count*> | Yes | Integer | A positive integer for the number of items to remove at the front |

| Return value | Type | Description |
| --- | --- | --- |
| [<*updated-collection*>] | Array | The updated collection after removing the specified items |
*Example*
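For instance, this expression removes one item from the front and returns `[1, 2, 3]`:

```
skip(createArray(0, 1, 2, 3), 1)
```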
slice('<text>', <startIndex>, <endIndex>?)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*text*> | Yes | String | The string that contains the substring to find |
| <*startIndex*> | Yes | Integer | The zero-based starting position or value for where to begin searching for the substring <br><br>- If *startIndex* is greater than the string length, return an empty string. <br><br>- If *startIndex* is negative, start searching at the index value that's the sum of the string length and *startIndex*. |
| <*endIndex*> | No | Integer | The zero-based ending position or value for where to end searching for the substring. The character located at the ending index value isn't included in the search. <br><br>- If *endIndex* isn't specified or greater than the string length, search up to the end of the string. <br><br>- If *endIndex* is negative, end searching at the index value that's the sum of the string length and *endIndex*. |

| Return value | Type | Description |
| --- | --- | --- |
| <*slice-result*> | String | A new string that contains the found substring |
*Examples*
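For instance, the first expression below returns `"World"`, and the second returns `"lo"` because the character at ending index 5 is excluded:

```
slice('Hello World', 6)
slice('Hello World', 3, 5)
```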
sort([<collection>], <sortBy>?)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*collection*> | Yes | Array | The collection with the items to sort |
| <*sortBy*> | No | String | The key to use for sorting the collection objects |

| Return value | Type | Description |
| --- | --- | --- |
| [<*updated-collection*>] | Array | The sorted collection |
*Example 1*
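For instance, this expression returns `[1, 2, 3]`:

```
sort(createArray(2, 1, 3))
```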
split('<text>', '<delimiter>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*text*> | Yes | String | The string to separate into substrings based on the specified delimiter in the original string |
| <*delimiter*> | Yes | String | The character in the original string to use as the delimiter |

| Return value | Type | Description |
| --- | --- | --- |
| [<*substring1*>,<*substring2*>,...] | Array | An array that contains substrings from the original string, separated by commas |
*Example 1*
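For instance, this expression returns the array `["a", "b", "c"]`:

```
split('a_b_c', '_')
```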
startOfDay('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
| --- | --- | --- |
| <*updated-timestamp*> | String | The specified timestamp but starting at the zero-hour mark for the day |
*Example*
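For instance, this expression returns `"2018-03-15T00:00:00.0000000Z"`:

```
startOfDay('2018-03-15T13:30:30Z')
```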
startOfHour('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
| --- | --- | --- |
| <*updated-timestamp*> | String | The specified timestamp but starting at the zero-minute mark for the hour |
*Example*
startOfMonth('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
| --- | --- | --- |
| <*updated-timestamp*> | String | The specified timestamp but starting on the first day of the month at the zero-hour mark |
*Example 1*
startsWith('<text>', '<searchText>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*text*> | Yes | String | The string to check |
| <*searchText*> | Yes | String | The starting string to find |

| Return value | Type | Description |
| --- | --- | --- |
| true or false | Boolean | Return true when the starting substring is found. Return false when not found. |
*Example 1*
string(<value>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*value*> | Yes | Any | The value to convert. If this value is null or evaluates to null, the value is converted to an empty string (`""`) value. <br><br>For example, if you assign a string variable to a non-existent property, which you can access with the `?` operator, the null value is converted to an empty string. However, comparing a null value isn't the same as comparing an empty string. |

| Return value | Type | Description |
| --- | --- | --- |
| <*string-value*> | String | The string version for the specified value. If the *value* parameter is null or evaluates to null, this value is returned as an empty string (`""`) value. |
*Example 1*
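For instance, this expression returns the string `"10"`:

```
string(10)
```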
sub(<minuend>, <subtrahend>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*minuend*> | Yes | Integer or Float | The number from which to subtract the *subtrahend* |
| <*subtrahend*> | Yes | Integer or Float | The number to subtract from the *minuend* |

| Return value | Type | Description |
| --- | --- | --- |
| <*result*> | Integer or Float | The result from subtracting the second number from the first number |
*Example*
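For instance, this expression returns `10`:

```
sub(10.3, .3)
```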
substring('<text>', <startIndex>, <length>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*text*> | Yes | String | The string whose characters you want |
| <*startIndex*> | Yes | Integer | A positive number equal to or greater than 0 that you want to use as the starting position or index value |
| <*length*> | No | Integer | A positive number of characters that you want in the substring |
> [!NOTE]
> Make sure that the sum from adding the *startIndex* and *length* parameter values is less than the length of the string that you provide for the *text* parameter.
| Return value | Type | Description |
| --- | --- | --- |
| <*substring-result*> | String | A substring with the specified number of characters, starting at the specified index position in the source string |
*Example*
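For instance, this expression starts at index 6, takes five characters, and returns `"world"`:

```
substring('hello world', 6, 5)
```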
subtractFromTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*timestamp*> | Yes | String | The string that contains the timestamp |
| <*interval*> | Yes | Integer | The number of specified time units to subtract |
| <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
| --- | --- | --- |
| <*updated-timestamp*> | String | The timestamp minus the specified number of time units |
*Example 1*
take([<collection>], <count>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*collection*> | Yes | String or Array | The collection whose items you want |
| <*count*> | Yes | Integer | A positive integer for the number of items that you want from the front |

| Return value | Type | Description |
| --- | --- | --- |
| <*subset*> or [<*subset*>] | String or Array, respectively | A string or array that has the specified number of items taken from the front of the original collection |
*Example*
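For instance, this expression returns `"abc"`:

```
take('abcde', 3)
```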
ticks('<timestamp>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*timestamp*> | Yes | String | The string for a timestamp |

| Return value | Type | Description |
| --- | --- | --- |
| <*ticks-number*> | Integer | The number of ticks, which are 100-nanosecond intervals, since January 1, 0001, 12:00:00 midnight up to the specified timestamp |
<a name="toLower"></a>
toLower('<text>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*text*> | Yes | String | The string to return in lowercase format |

| Return value | Type | Description |
| --- | --- | --- |
| <*lowercase-text*> | String | The original string in lowercase format |
*Example*
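For instance, this expression returns `"hello world"`:

```
toLower('Hello World')
```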
toUpper('<text>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*text*> | Yes | String | The string to return in uppercase format |

| Return value | Type | Description |
| --- | --- | --- |
| <*uppercase-text*> | String | The original string in uppercase format |
*Example*
trigger()
| Return value | Type | Description |
| --- | --- | --- |
| <*trigger-output*> | String | The output from a trigger at runtime |
<a name="triggerBody"></a>
triggerBody()
| Return value | Type | Description |
| --- | --- | --- |
| <*trigger-body-output*> | String | The `body` output from the trigger |
<a name="triggerFormDataMultiValues"></a>
triggerFormDataMultiValues('<key>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*key*> | Yes | String | The name for the key whose value you want |

| Return value | Type | Description |
| --- | --- | --- |
| [<*array-with-key-values*>] | Array | An array with all the values that match the specified key |
*Example*
triggerFormDataValue('<key>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*key*> | Yes | String | The name for the key whose value you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*key-value*> | String | The value in the specified key |
*Example*
triggerMultipartBody(<index>)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*index*> | Yes | Integer | The index value for the part that you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*body*> | String | The body for the specified part in a trigger's multipart output |
<a name="triggerOutputs"></a>
triggerOutputs()
| Return value | Type | Description |
| --- | --- | --- |
| <*trigger-output*> | String | The output from a trigger at runtime |
<a name="trim"></a>
trim('<text>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*text*> | Yes | String | The string that has the leading and trailing whitespace to remove |

| Return value | Type | Description |
| --- | --- | --- |
| <*updatedText*> | String | An updated version for the original string without leading or trailing whitespace |
*Example*
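For instance, this expression returns `"Hello World"`:

```
trim(' Hello World  ')
```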
union([<collection1>], [<collection2>], ...)
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*collection1*>, <*collection2*>, ... | Yes | Array or Object, but not both | The collections from where you want *all* the items |

| Return value | Type | Description |
| --- | --- | --- |
| <*updatedCollection*> | Array or Object, respectively | A collection with all the items from the specified collections - no duplicates |
*Example*
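For instance, this expression returns `[1, 2, 3, 10, 101]`:

```
union(createArray(1, 2, 3), createArray(1, 2, 10, 101))
```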
uriComponent('<value>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*value*> | Yes | String | The string to convert to URI-encoded format |

| Return value | Type | Description |
| --- | --- | --- |
| <*encoded-uri*> | String | The URI-encoded string with escape characters |
*Example*
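For instance, this expression returns `"https%3A%2F%2Fcontoso.com"`:

```
uriComponent('https://contoso.com')
```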
uriComponentToBinary('<value>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*value*> | Yes | String | The URI-encoded string to convert |

| Return value | Type | Description |
| --- | --- | --- |
| <*binary-for-encoded-uri*> | String | The binary version for the URI-encoded string. The binary content is base64-encoded and represented by `$content`. |
*Example*
uriComponentToString('<value>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*value*> | Yes | String | The URI-encoded string to decode |

| Return value | Type | Description |
| --- | --- | --- |
| <*decoded-uri*> | String | The decoded version for the URI-encoded string |
*Example*
uriHost('<uri>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*uri*> | Yes | String | The URI whose `host` value you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*host-value*> | String | The `host` value for the specified URI |
*Example*
uriPath('<uri>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*uri*> | Yes | String | The URI whose `path` value you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*path-value*> | String | The `path` value for the specified URI. If `path` doesn't have a value, return the "/" character. |
*Example*
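For instance, this expression returns `"/catalog/shownew.htm"` (illustrative URI):

```
uriPath('https://www.contoso.com/catalog/shownew.htm?date=today')
```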
uriPathAndQuery('<uri>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*uri*> | Yes | String | The URI whose `path` and `query` values you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*path-query-value*> | String | The `path` and `query` values for the specified URI. If `path` doesn't specify a value, return the "/" character. |
*Example*
uriPort('<uri>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*uri*> | Yes | String | The URI whose `port` value you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*port-value*> | Integer | The `port` value for the specified URI. If `port` doesn't specify a value, return the default port for the protocol. |
*Example*
uriQuery('<uri>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*uri*> | Yes | String | The URI whose `query` value you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*query-value*> | String | The `query` value for the specified URI |
*Example*
uriScheme('<uri>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*uri*> | Yes | String | The URI whose `scheme` value you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*scheme-value*> | String | The `scheme` value for the specified URI |
*Example*
Optionally, you can specify a different format with the <*format*> parameter.
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |

| Return value | Type | Description |
| --- | --- | --- |
| <*current-timestamp*> | String | The current date and time |
*Example 1*
variables('<variableName>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*variableName*> | Yes | String | The name for the variable whose value you want |

| Return value | Type | Description |
| --- | --- | --- |
| <*variable-value*> | Any | The value for the specified variable |
*Example*
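For instance, if a workflow variable named `numItemsVar` (a hypothetical variable name) currently holds the integer 20, this expression returns `20`:

```
variables('numItemsVar')
```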
workflow().<property>
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*property*> | No | String | The name for the workflow property whose value you want <br><br>By default, a workflow object has these properties: `name`, `type`, `id`, `location`, `run`, and `tags`. <br><br>- The `run` property value is a JSON object that includes these properties: `name`, `type`, and `id`. <br><br>- The `tags` property is a JSON object that includes [tags that are associated with your logic app in Azure Logic Apps or flow in Power Automate](../azure-resource-manager/management/tag-resources.md) and the values for those tags. For more information about tags in Azure resources, review [Tag resources, resource groups, and subscriptions for logical organization in Azure](../azure-resource-manager/management/tag-resources.md). <br><br>**Note**: By default, a logic app has no tags, but a Power Automate flow has the `flowDisplayName` and `environmentName` tags. |
*Example 1*
xml('<value>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*value*> | Yes | String | The string with the JSON object to convert <br><br>The JSON object must have only one root property, which can't be an array. <br>Use the backslash character (\\) as an escape character for the double quotation mark ("). |

| Return value | Type | Description |
| --- | --- | --- |
| <*xml-version*> | Object | The encoded XML for the specified string or JSON object |
*Example 1*
xpath('<xml>', '<xpath>')
| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| <*xml*> | Yes | Any | The XML string to search for nodes or values that match an XPath expression value |
| <*xpath*> | Yes | Any | The XPath expression used to find matching XML nodes or values |

| Return value | Type | Description |
| --- | --- | --- |
| <*xml-node*> | XML | An XML node when only a single node matches the specified XPath expression |
| <*value*> | Any | The value from an XML node when only a single value matches the specified XPath expression |
| [<*xml-node1*>, <*xml-node2*>, ...] -or- [<*value1*>, <*value2*>, ...] | Array | An array with XML nodes or values that match the specified XPath expression |
*Example 1*
managed-grafana How To Deterministic Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-deterministic-ip.md
This example demonstrates how to disable public access to Azure Data Explorer an
1. Open an Azure Data Explorer Cluster instance in the Azure portal, and under **Settings**, select **Networking**.
1. In the **Public Access** tab, select **Disabled** to disable public access to the data source.
-1. Under **Firewall**, check the box **Add your client IP address ('88.126.99.17')** and under **Address range**, enter the IP addresses found in your Azure Managed Grafana workspace.
-1. Select **Save** to finish adding the Azure Managed Grafana outbound IP addresses to the allowlist.
- :::image type="content" source="media/deterministic-ips/add-ip-data-source-firewall.png" alt-text="Screenshot of the Azure platform. Add Azure Managed Grafana outbound IPs to datasource firewall allowlist.":::
+ :::image type="content" source="media/deterministic-ips/add-ip-data-source-firewall.png" alt-text="Screenshot of the Azure platform. Add Disable public network access.":::
+
+1. Under **Firewall**, check the box **Add your client IP address** and under **Address range**, enter the IP addresses found in your Azure Managed Grafana workspace.
+1. Select **Save** to finish adding the Azure Managed Grafana outbound IP addresses to the allowlist.
You have limited access to your data source by disabling public access, activating a firewall and allowing access from Azure Managed Grafana IP addresses.
Check if the Azure Managed Grafana endpoint can still access your data source.
1. Go to **Configuration > Data Source > Azure Data Explorer Datasource > Settings** and at the bottom of the page, select **Save & test**:
 - If the message "Success" is displayed, Azure Managed Grafana can access your data source.
- - If the following error message is displayed, Azure Managed Grafana can't access the data source: `Post "https://<Azure-Data-Explorer-URI>/v1/rest/query": dial tcp 13.90.24.175:443: i/o timeout`. Make sure that you've entered the IP addresses correctly in the data source firewall allowlist.
+ - If the following error message is displayed, Azure Managed Grafana can't access the data source: `Post "https://<Azure-Data-Explorer-URI>/v1/rest/query": dial tcp ...: i/o timeout`. Make sure that you've entered the IP addresses correctly in the data source firewall allowlist.
### [Azure CLI](#tab/azure-cli)
managed-grafana Tutorial Mpe Oss Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/tutorial-mpe-oss-prometheus.md
Title: Connect to self-hosted Prometheus on an AKS cluster via managed private e
description: In this tutorial, learn how to connect to self-hosted Prometheus on an AKS Cluster using a managed private endpoint.
Last updated 02/21/2024
# Tutorial: connect to a self-hosted Prometheus service on an AKS cluster using a managed private endpoint
spec:
1. The private link service with name `promManagedPls` is created in the AKS managed resource group. This process takes a few minutes.
- :::image type="content" source="media/tutorial-managed-private-endpoint/private-link-service-prometheus.png" alt-text="Screenshot of the Azure platform: showing the created Private Link Service resource.":::
## Connect with a managed private endpoint

1. If you don't have an Azure Managed Grafana workspace yet, create one by following the [Azure Managed Grafana quickstart](./quickstart-managed-grafana-portal.md).
-1. Open your Azure Managed Grafana workspace and go to **Networking** > **Managed Private Endpoint** > **Create**.
+1. Open your Azure Managed Grafana workspace and go to **Networking** > **Managed Private Endpoint** > **Add**.
:::image type="content" source="media/tutorial-managed-private-endpoint/create-managed-private-endpoint.png" alt-text="Screenshot of the Azure platform showing the managed private endpoints page within an Azure Managed Grafana resource.":::
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
For Linux servers, you can create a user account in one of two ways:
> [!Note]
> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, it's recommended to use Option 1.
-### Option 2
-- If you can't provide user account with sudo access, then you can set 'isSudo' registry key to value '0' in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry on the appliance server and provide a non-root account with the required capabilities using the following commands:
+### Option 2: Discover using non-sudo user account
+- If you can't provide a user account with sudo access, you can set the 'isSudo' registry key to '0' under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry key on the appliance server.
+- Provide a non-sudo user account with the required capabilities.
+ - Sign in as root user. Create a non-sudo user account by running the `sudo useradd <account-name>` command. Set a password for the non-sudo user account using the `sudo passwd <account-name>` command.
+ - Add the non-sudo user account to the wheel group using this command: `sudo usermod -aG wheel <account-name>`. Users in this group have permissions to run the setcap commands as detailed below.
+ - Sign in to the non-sudo user account that was created and run the following commands:
**Command** | **Purpose**
--- | ---
- setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk <br></br> setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk _(if /usr/sbin/fdisk is not present)_ | To collect disk configuration data
- setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br> cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br> cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | To collect disk performance data
- setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number
- chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID
+ setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk <br></br> setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk _(if /usr/sbin/fdisk is not present)_ | To collect disk configuration data.
+ setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br> cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | To collect disk performance data.
+ setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number.
+ chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID.
+ sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat | To perform agentless dependency analysis on the server, set the required permissions on /bin/netstat and /bin/ls files.
-- To perform agentless dependency analysis on the server, ensure that you also set the required permissions on /bin/netstat and /bin/ls files by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+- Running all the above commands will prompt for a password. Enter the password of the non-sudo user account for each prompt.
+- Add the credentials of the non-sudo user account to the Azure Migrate appliance.
+- The non-sudo user account will execute the commands listed [here](discovered-metadata.md#linux-server-metadata) periodically.
### Create an account to access servers
Check that the zipped file is secure, before you deploy it.
### 3. Run the Azure Migrate installer script
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
2. Launch PowerShell on the above server with administrative (elevated) privilege.
Set up the appliance for the first time.
1. Open a browser on any server that can connect to the appliance, and open the URL of the appliance web app: **https://*appliance name or IP address*: 44368**. Alternately, you can open the app from the desktop by selecting the app shortcut.
-1. Accept the **license terms**, and read the third-party information.
+1. Accept the **license terms**, and read the third party information.
#### Set up prerequisites and register the appliance
In the configuration manager, select **Set up prerequisites**, and then complete
1. **Connectivity**: The appliance checks that the server has internet access. If the server uses a proxy:
 - Select **Setup proxy** to specify the proxy address (in the form `http://ProxyIPAddress` or `http://ProxyFQDN`, where *FQDN* refers to a *fully qualified domain name*) and listening port.
 - Enter credentials if the proxy needs authentication.
- - If you have added proxy details or disabled the proxy or authentication, select **Save** to trigger connectivity and check connectivity again.
+ - If you have added proxy details or disabled the proxy or authentication, select **Save** to trigger connectivity, and check connectivity again.
Only HTTP proxy is supported.
1. **Time sync**: Check that the time on the appliance is in sync with internet time for discovery to work properly.
Now, connect from the appliance to the physical servers to be discovered, and st
1. In **Step 1: Provide credentials for discovery of Windows and Linux physical or virtual servers**, select **Add credentials**.
1. For Windows server, select the source type as **Windows Server**, specify a friendly name for credentials, add the username and password. Select **Save**.
1. If you're using password-based authentication for Linux server, select the source type as **Linux Server (Password-based)**, specify a friendly name for credentials, add the username and password. Select **Save**.
-1. If you're using SSH key-based authentication for Linux server, you can select source type as **Linux Server (SSH key-based)**, specify a friendly name for credentials, add the username, browse and select the SSH private key file. Select **Save**.
+1. If you're using SSH key-based authentication for Linux server, you can select source type as **Linux Server (SSH key-based)**, specify a friendly name for credentials, add the username, browse, and select the SSH private key file. Select **Save**.
 - Azure Migrate supports the SSH private key generated by ssh-keygen command using RSA, DSA, ECDSA, and ed25519 algorithms.
 - Currently Azure Migrate doesn't support passphrase-based SSH key. Use an SSH key without a passphrase.
 - Currently Azure Migrate doesn't support SSH private key file generated by PuTTY.
- - The SSH key file supports CRLF to mark a line break in the text file that you upload. SSH keys created on Linux systems most commonly have LF as their newline character so you can convert them to CRLF by opening the file in vim, typing `:set textmode` and saving the file.
+ - The SSH key file supports CRLF to mark a line break in the text file that you upload. SSH keys created on Linux systems most commonly have LF as their newline character so you can convert them to CRLF by opening the file in vim, typing `:set textmode`, and saving the file.
 - If your Linux servers support the older version of RSA key, you can generate the key using the `$ ssh-keygen -m PEM -t rsa -b 4096` command.
 - Azure Migrate supports OpenSSH format of the SSH private key file as shown below:
Now, connect from the appliance to the physical servers to be discovered, and st
- If you choose **Add single item**, you can choose the OS type, specify friendly name for credentials, add server **IP address/FQDN** and select **Save**.
- - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and select **Save**.
- - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and select **Save**.
+ - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records, and select **Save**.
+ - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file, and select **Save**.
-1. Select Save. The appliance tries validating the connection to the servers added and shows the **Validation status** in the table against each server.
+1. Select **Save**. The appliance tries validating the connection to the servers added and shows the **Validation status** in the table against each server.
 - If validation fails for a server, review the error by selecting **Validation failed** in the Status column of the table. Fix the issue, and validate again.
 - To remove a server, select **Delete**.
1. You can **revalidate** the connectivity to servers anytime before starting the discovery.
oracle Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md
Oracle Database@Azure runs on infrastructure managed by Oracle's expert Cloud In
## Oracle Database@Azure interfaces
-You can provision Oracle Database@Azure using the Azure portal and Azure APIs, SDKs and Terraform. Management of Oracle database system infrastructure and VM cluster resources takes place in the Azure portal as well.
+You can provision Oracle Database@Azure using the Azure portal and Azure APIs, SDKs, and Terraform. Management of Oracle database system infrastructure and VM cluster resources takes place in the Azure portal as well.
For Oracle Container Databases (CDB) and Oracle Pluggable Databases (PDB), some management tasks are completed using the OCI console.
Database and application developers work in the Azure portal or use Azure tools
## Purchase Oracle Database@Azure
-To purchase Oracle Database@Azure, contact [Oracle's sales team](https://go.oracle.com/LP=138489) or your Oracle sales representative for a sale offer. Oracle Sales team creates an Azure Private Offer in the Azure Marketplace for your service. After an offer has been created for your organization, you can accept the offer and complete the purchase in the Azure portal's Marketplace service. For more information on Azure private offers, see [Overview of the commercial marketplace and enterprise procurement](/marketplace/procurement-overview).
+To purchase Oracle Database@Azure, contact [Oracle's sales team](https://go.oracle.com/LP=138489) or your Oracle sales representative for a sale offer. Oracle Sales team creates an Azure Private Offer in the Azure Marketplace for your service. After an offer is made for your organization, you can accept the offer and complete the purchase in the Azure portal's Marketplace service. For more information on Azure private offers, see [Overview of the commercial marketplace and enterprise procurement](/marketplace/procurement-overview).
Billing and payment for the service is done through Azure. Payment for Oracle Database@Azure counts toward your Microsoft Azure Consumption Commitment (MACC). Existing Oracle Database software customers can use the Bring Your Own License (BYOL) option or Unlimited License Agreements (ULAs). On your regular Microsoft Azure invoices, you can see charges for Oracle Database@Azure alongside charges for your other Azure Marketplace services.
-## Compliance
-
-Oracle Database@Azure is an Oracle Cloud database service that runs Oracle Database workloads in a customer's Azure environment. Oracle Database@Azure offers various Oracle Database Services through the customer's Microsoft Azure environment. This service allows customers to monitor database metrics, audit logs, events, logging data, and telemetry natively in Azure. It runs on infrastructure managed by Oracle's Cloud Infrastructure operations team who performs software patching, infrastructure updates, and other operations through a connection to Oracle Cloud.
-All infrastructure for Oracle Database@Azure is co-located in Azure's physical data centers and uses Azure Virtual Network for networking, managed within the Azure environment. Federated identity and access management for Oracle Database@Azure is provided by Microsoft Entra ID.
-
-For detailed information on the compliance certifications please visit [Microsoft Services Trust Portal](https://servicetrust.microsoft.com/) and [Oracle compliance website](https://docs.oracle.com/en-us/iaas/Content/multicloud/compliance.htm). If you have further questions about OracleDB@Azure compliance please reach out to your account team and/or get information through [Oracle and Microsoft support for Oracle Database@Azure](https://docs.oracle.com/en-us/iaas/Content/multicloud/oaahelp.htm).
-
-## Available regions
-
-Oracle Database@Azure is available in the following locations. Oracle Database@Azure infrastructure resources must be provisioned in the Azure regions listed.
-
-|Azure region|Oracle Exadata Database@Azure|Oracle Autonomous Database@Azure|
-|-|:-:|:--:|
-|East US |&check; | &check;|
-|Germany West Central | &check;|&check; |
-|France Central |&check; | &check;|
-|UK South |&check; |&check; |
-|Canada Central |&check; |&check; |
-|Australia East |&check; |&check; |
-
-## Azure Support scope and contact information
-
-See [Contact Microsoft Azure Support](https://support.microsoft.com/topic/contact-microsoft-azure-support-2315e669-8b1f-493b-5fb1-d88a8736ffe4) in the Azure documentation for information on Azure support. For SLA information about the service offering, please refer to the [Oracle PaaS and IaaS Public Cloud Services Pillar Document](https://www.oracle.com/contracts/docs/paas_iaas_pub_cld_srvs_pillar_4021422.pdf)
## Next steps
-
- [Onboard with Oracle Database@Azure](onboard-oracle-database.md)
- [Provision and manage Oracle Database@Azure](provision-oracle-database.md)
- [Oracle Database@Azure support information](oracle-database-support.md)
- [Network planning for Oracle Database@Azure](oracle-database-network-plan.md)
-- [Groups and roles for Oracle Database@Azure](oracle-database-groups-roles.md)
+- [Groups and roles for Oracle Database@Azure](oracle-database-groups-roles.md)
oracle Exadata Examples Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-examples-services.md
+
+ Title: Terraform/OpenTofu examples for Exadata Services
+description: Learn about Terraform/OpenTofu examples for Exadata services
+Last updated: 08/01/2024
+# Terraform/OpenTofu examples for Exadata services
+
+In this article, you learn how to use HashiCorp Terraform to provision and manage resources for Oracle Database@Azure, the same tooling that you can use to provision and manage infrastructure in Oracle Cloud Infrastructure (OCI).
+
+For more information on reference implementations for Terraform or OpenTofu modules, see the following links:
+* [QuickStart Oracle Database@Azure with Terraform or OpenTofu Modules](https://docs.oracle.com/en/learn/dbazure-terraform/index.html)
+* [OCI Landing Zones](https://github.com/oci-landing-zones/)
+* [Azure Verified Modules](https://aka.ms/avm)
+
+ >[!NOTE]
+ > This document describes examples of provisioning and management of Oracle Database@Azure resources through Terraform provider `AzAPI`. For detailed AzAPI provider resources and data sources documentation, see [https://registry.terraform.io/providers/Azure/azapi/latest/docs](https://registry.terraform.io/providers/Azure/azapi/latest/docs)
+
+The samples use example values for illustration purposes. You must replace them with your own settings.
+The samples use [AzAPI Dynamic Properties](https://techcommunity.microsoft.com/t5/azure-tools-blog/announcing-azapi-dynamic-properties/ba-p/4121855) instead of `jsonencode` for more native Terraform behavior.
+
+## Oracle Exadata services
+In this section, you will find examples of how to use the `AzAPI` provider to manage Oracle Exadata services in Azure.
+### Exadata Infrastructure
+In this section, you will find examples of how to use the `AzAPI` provider to manage Oracle Exadata infrastructure in Azure.
+#### Create an Oracle Exadata Infrastructure
+```
+resource "azapi_resource" "resource_group" {
+ type = "Microsoft.Resources/resourceGroups@2023-07-01"
+ name = "ExampleRG"
+ location = "eastus"
+}
+
+// OperationId: CloudExadataInfrastructures_CreateOrUpdate, CloudExadataInfrastructures_Get, CloudExadataInfrastructures_Delete
+// PUT /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudExadataInfrastructures/{cloudexadatainfrastructurename}
+resource "azapi_resource" "cloudExadataInfrastructure" {
+ type = "Oracle.Database/cloudExadataInfrastructures@2023-09-01"
+ parent_id = azapi_resource.resource_group.id
+ name = "ExampleName"
+ body = {
+ "location" : "eastus",
+ "zones" : [
+ "2"
+ ],
+ "tags" : {
+ "createdby" : "ExampleName"
+ },
+ "properties" : {
+ "computeCount" : 2,
+ "displayName" : "ExampleName",
+ "maintenanceWindow" : {
+ "leadTimeInWeeks" : 0,
+ "preference" : "NoPreference",
+ "patchingMode" : "Rolling"
+ },
+ "shape" : "Exadata.X9M",
+ "storageCount" : 3
+ }
+ }
+ schema_validation_enabled = false
+}
+```
+#### List Oracle Exadata Infrastructures by Subscription
+```
+data "azapi_resource" "subscription" {
+ type = "Microsoft.Resources/subscriptions@2020-06-01"
+ response_export_values = ["*"]
+}
+
+// OperationId: CloudExadataInfrastructures_ListBySubscription
+// GET /subscriptions/{subscriptionId}/providers/Oracle.Database/cloudExadataInfrastructures
+data "azapi_resource_list" "listCloudExadataInfrastructuresBySubscription" {
+ type = "Oracle.Database/cloudExadataInfrastructures@2023-09-01"
+ parent_id = data.azapi_resource.subscription.id
+}
+```
+
+#### List Oracle Exadata Infrastructures by Resource Group
+```
+data "azurerm_resource_group" "example" {
+ name = "existing"
+}
+
+// OperationId: CloudExadataInfrastructures_ListByResourceGroup
+// GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudExadataInfrastructures
+data "azapi_resource_list" "listCloudExadataInfrastructuresByResourceGroup" {
+ type = "Oracle.Database/cloudExadataInfrastructures@2023-09-01"
+  parent_id = data.azurerm_resource_group.example.id
+}
+```
+
+#### Patch an Oracle Exadata Infrastructure
+ >[!NOTE]
+ > Only Microsoft Azure tags on the resource can be updated through the AzAPI provider.
+
+```
+data "azapi_resource" "subscription" {
+ type = "Microsoft.Resources/subscriptions@2020-06-01"
+ response_export_values = ["*"]
+}
+
+// OperationId: CloudExadataInfrastructures_Update
+// PATCH /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudExadataInfrastructures/{cloudexadatainfrastructurename}
+resource "azapi_resource_action" "patch_cloudExadataInfrastructure" {
+ type = "Oracle.Database/cloudExadataInfrastructures@2023-09-01"
+ resource_id = azapi_resource.cloudExadataInfrastructure.id
+ action = ""
+ method = "PATCH"
+ body = {
+ "tags" : {
+ "updatedby" : "ExampleName"
+ }
+ }
+}
+```
+
+#### List database servers on an Oracle Exadata infrastructure
+```
+// OperationId: DbServers_Get
+// GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudExadataInfrastructures/{cloudexadatainfrastructurename}/dbServers/{dbserverocid}
+data "azapi_resource" "dbServer" {
+ type = "Oracle.Database/cloudExadataInfrastructures/dbServers@2023-09-01"
+ parent_id = azapi_resource.cloudExadataInfrastructure.id
+ name = var.resource_name
+}
+```
+
+## Exadata VM Cluster
+
+### Create an Oracle Exadata VM Cluster
+```
+resource "azapi_resource" "resource_group" {
+  type = "Microsoft.Resources/resourceGroups@2023-07-01"
+  name = "ExampleRG"
+  location = "eastus"
+}
+
+// OperationId: CloudExadataInfrastructures_CreateOrUpdate, CloudExadataInfrastructures_Get, CloudExadataInfrastructures_Delete
+// PUT /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudExadataInfrastructures/{cloudexadatainfrastructurename}
+resource "azapi_resource" "cloudExadataInfrastructure" {
+ type = "Oracle.Database/cloudExadataInfrastructures@2023-09-01"
+ parent_id = azapi_resource.resource_group.id
+ name = "ExampleName"
+ body = {
+ "location" : "eastus",
+ "zones" : [
+ "2"
+ ],
+ "tags" : {
+ "createdby" : "ExampleName"
+ },
+ "properties" : {
+ "computeCount" : 2,
+ "displayName" : "ExampleName",
+ "maintenanceWindow" : {
+ "leadTimeInWeeks" : 0,
+ "preference" : "NoPreference",
+ "patchingMode" : "Rolling"
+ },
+ "shape" : "Exadata.X9M",
+ "storageCount" : 3
+ }
+ }
+ schema_validation_enabled = false
+}
+
+//-VMCluster resources
+// OperationId: CloudVmClusters_CreateOrUpdate, CloudVmClusters_Get, CloudVmClusters_Delete
+// PUT GET DELETE /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudVmClusters/{cloudvmclustername}
+resource "azapi_resource" "cloudVmCluster" {
+ type = "Oracle.Database/cloudVmClusters@2023-09-01"
+  parent_id = azapi_resource.resource_group.id
+ name = local.exa_cluster_name
+ schema_validation_enabled = false
+ depends_on = [azapi_resource.cloudExadataInfrastructure]
+ body = {
+ "properties": {
+ "dataStorageSizeInTbs": 1000,
+ "dbNodeStorageSizeInGbs": 1000,
+ "memorySizeInGbs": 1000,
+ "timeZone": "UTC",
+ "hostname": "hostname1",
+ "domain": "domain1",
+ "cpuCoreCount": 2,
+ "ocpuCount": 3,
+ "clusterName": "cluster1",
+ "dataStoragePercentage": 100,
+ "isLocalBackupEnabled": false,
+ "cloudExadataInfrastructureId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg000/providers/Oracle.Database/cloudExadataInfrastructures/infra1",
+ "isSparseDiskgroupEnabled": false,
+ "sshPublicKeys": [
+ "ssh-key 1"
+ ],
+ "nsgCidrs": [
+ {
+ "source": "10.0.0.0/16",
+ "destinationPortRange": {
+ "min": 1520,
+ "max": 1522
+ }
+ },
+ {
+ "source": "10.10.0.0/24"
+ }
+ ],
+ "licenseModel": "LicenseIncluded",
+ "scanListenerPortTcp": 1050,
+ "scanListenerPortTcpSsl": 1025,
+ "vnetId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg000/providers/Microsoft.Network/virtualNetworks/vnet1",
+ "giVersion": "19.0.0.0",
+ "subnetId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg000/providers/Microsoft.Network/virtualNetworks/vnet1/subnets/subnet1",
+ "backupSubnetCidr": "172.17.5.0/24",
+ "dataCollectionOptions": {
+ "isDiagnosticsEventsEnabled": false,
+ "isHealthMonitoringEnabled": false,
+ "isIncidentLogsEnabled": false
+ },
+ "displayName": "cluster 1",
+ "dbServers": [
+ "ocid1..aaaa"
+ ]
+ },
+ "location": "eastus"
+ }
+ response_export_values = ["properties.ocid"]
+}
+```
+
+### List Oracle Exadata VM Clusters by Subscription
+```
+data "azapi_resource" "subscription" {
+ type = "Microsoft.Resources/subscriptions@2020-06-01"
+ response_export_values = ["*"]
+}
+
+// OperationId: CloudVmClusters_ListBySubscription
+// GET /subscriptions/{subscriptionId}/providers/Oracle.Database/cloudVmClusters
+data "azapi_resource_list" "listCloudVmClustersBySubscription" {
+ type = "Oracle.Database/cloudVmClusters@2023-09-01"
+ parent_id = data.azapi_resource.subscription.id
+}
+```
+
+### List Oracle Exadata VM Clusters by Resource Group
+```
+data "azurerm_resource_group" "example" {
+ name = "existing"
+}
+
+// OperationId: CloudVmClusters_ListByResourceGroup
+// GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudVmClusters
+data "azapi_resource_list" "listCloudVmClustersByResourceGroup" {
+  type = "Oracle.Database/cloudVmClusters@2023-09-01"
+  parent_id = data.azurerm_resource_group.example.id
+}
+```
+
+### List Database Nodes on an Oracle Exadata VM Cluster
+```
+// OperationId: DbNodes_Get
+// GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudVmClusters/{cloudvmclustername}/dbNodes/{dbnodeocid}
+data "azapi_resource" "dbNode" {
+ type = "Oracle.Database/cloudVmClusters/dbNodes@2023-09-01"
+  parent_id = azapi_resource.cloudVmCluster.id // VM cluster ID
+ name = var.resource_name
+}
+```
+
+### Add a Virtual Network Address to an Exadata VM Cluster
+```
+// OperationId: VirtualNetworkAddresses_CreateOrUpdate, VirtualNetworkAddresses_Get, VirtualNetworkAddresses_Delete
+// PUT GET DELETE /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudVmClusters/{cloudvmclustername}/virtualNetworkAddresses/{virtualnetworkaddressname}
+resource "azapi_resource" "virtualNetworkAddress" {
+ type = "Oracle.Database/cloudVmClusters/virtualNetworkAddresses@2023-09-01"
+ parent_id = azapi_resource.cloudVmCluster.id
+ name = var.resource_name
+ body = {
+ "properties": {
+ "ipAddress": "192.168.0.1",
+ "vmOcid": "ocid1..aaaa"
+ }
+ }
+ schema_validation_enabled = false
+}
+```
+
+### List Virtual Network Addresses on an Oracle Exadata VM Cluster
+```
+// OperationId: VirtualNetworkAddresses_ListByCloudVmCluster
+// GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudVmClusters/{cloudvmclustername}/virtualNetworkAddresses
+data "azapi_resource_list" "listVirtualNetworkAddressesByCloudVmCluster" {
+ type = "Oracle.Database/cloudVmClusters/virtualNetworkAddresses@2023-09-01"
+ parent_id = azapi_resource.cloudVmCluster.id
+}
+```
+
+## Exadata Database Shape
+In this section, you will find examples of how to use the `AzAPI` provider to manage Oracle Exadata Database shapes in Azure.
+### List an Oracle Exadata Database Shape
+```
+data "azapi_resource_id" "location" {
+ type = "Oracle.Database/locations@2023-12-12"
+ parent_id = data.azapi_resource.subscription.id
+ name = "eastus"
+}
+
+// OperationId: DbSystemShapes_Get
+// GET /subscriptions/{subscriptionId}/providers/Oracle.Database/locations/{location}/dbSystemShapes/{dbsystemshapename}
+data "azapi_resource" "dbSystemShape" {
+ type = "Oracle.Database/locations/dbSystemShapes@2023-09-01"
+ parent_id = data.azapi_resource_id.location.id
+ name = var.resource_name
+}
+```
+
+### List Oracle Exadata Database Shapes by Location
+``` // OperationId: DbSystemShapes_ListByLocation
+// GET /subscriptions/{subscriptionId}/providers/Oracle.Database/locations/{location}/dbSystemShapes
+data "azapi_resource_list" "listDbSystemShapesByLocation" {
+ type = "Oracle.Database/locations/dbSystemShapes@2023-09-01"
+ parent_id = data.azapi_resource_id.location.id
+}
+```
+
+## Combined Exadata Services
+In this section, you will find examples of how to use the `AzAPI` provider to manage Oracle Exadata services in Azure.
+
+### Create an Oracle Database Home on an Exadata VM Cluster on an Exadata Infrastructure with a Delegated Subnet in Microsoft Azure
+ >[!NOTE]
+ >The following script creates an Oracle Exadata Infrastructure and an Oracle Exadata VM Cluster using the `AzAPI` Terraform provider followed by creating an Exadata Database deployment using the [OCI Terraform provider](https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/database_db_home).
+
+```
+terraform {
+ required_providers {
+ azapi = {
+ source = "Azure/azapi"
+ }
+ oci = {
+ source = "oracle/oci"
+ }
+ }
+}
+
+provider "azapi" {
+ skip_provider_registration = false
+}
+
+provider "oci" {
+ user_ocid = <user_ocid>
+ fingerprint = <user_fingerprint>
+ tenancy_ocid = <oci_tenancy_ocid>
+ region = "us-ashburn-1"
+ private_key_path = <Path to API Key>
+}
+
+locals {
+ resource_group_name = "TestResourceGroup"
+ user = "Username"
+ location = "eastus"
+}
+
+resource "azapi_resource" "resource_group" {
+ type = "Microsoft.Resources/resourceGroups@2023-07-01"
+ name = local.resource_group_name
+ location = local.location
+}
+
+resource "azapi_resource" "virtual_network" {
+ type = "Microsoft.Network/virtualNetworks@2023-04-01"
+ name = "${local.resource_group_name}_vnet"
+ location = local.location
+ parent_id = azapi_resource.resource_group.id
+ body = {
+ properties = {
+ addressSpace = {
+ addressPrefixes = [
+ "10.0.0.0/16"
+ ]
+ }
+ subnets = [
+ {
+ name = "delegated"
+ properties = {
+ addressPrefix = "10.0.1.0/24"
+ delegations = [
+ {
+ name = "Oracle.Database.networkAttachments"
+ properties = {
+ serviceName = "Oracle.Database/networkAttachments"
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+}
+
+data "azapi_resource_list" "listVirtualNetwork" {
+ type = "Microsoft.Network/virtualNetworks/subnets@2023-09-01"
+ parent_id = azapi_resource.virtual_network.id
+ depends_on = [azapi_resource.virtual_network]
+ response_export_values = ["*"]
+}
+
+resource "tls_private_key" "generated_ssh_key" {
+ algorithm = "RSA"
+ rsa_bits = 4096
+}
+
+resource "azapi_resource" "ssh_public_key" {
+ type = "Microsoft.Compute/sshPublicKeys@2023-09-01"
+ name = "${local.resource_group_name}_key"
+ location = local.location
+ parent_id = azapi_resource.resource_group.id
+ body = {
+ properties = {
+ publicKey = "${tls_private_key.generated_ssh_key.public_key_openssh}"
+ }
+ }
+}
+
+// OperationId: CloudExadataInfrastructures_CreateOrUpdate, CloudExadataInfrastructures_Get, CloudExadataInfrastructures_Delete
+// PUT /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudExadataInfrastructures/{cloudexadatainfrastructurename}
+resource "azapi_resource" "cloudExadataInfrastructure" {
+ type = "Oracle.Database/cloudExadataInfrastructures@2023-09-01"
+ parent_id = azapi_resource.resource_group.id
+ name = "OFake_terraform_deploy_infra_${local.resource_group_name}"
+ timeouts {
+ create = "1h30m"
+ delete = "20m"
+ }
+ body = {
+ "location" : "${local.location}",
+ "zones" : [
+ "2"
+ ],
+ "tags" : {
+ "createdby" : "${local.user}"
+ },
+ "properties" : {
+ "computeCount" : 2,
+ "displayName" : "OFake_terraform_deploy_infra_${local.resource_group_name}",
+ "maintenanceWindow" : {
+ "leadTimeInWeeks" : 0,
+ "preference" : "NoPreference",
+ "patchingMode" : "Rolling"
+ },
+ "shape" : "Exadata.X9M",
+ "storageCount" : 3
+ }
+
+ }
+ schema_validation_enabled = false
+}
+
+// OperationId: DbServers_ListByCloudExadataInfrastructure
+// GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudExadataInfrastructures/{cloudexadatainfrastructurename}/dbServers
+data "azapi_resource_list" "listDbServersByCloudExadataInfrastructure" {
+ type = "Oracle.Database/cloudExadataInfrastructures/dbServers@2023-09-01"
+ parent_id = azapi_resource.cloudExadataInfrastructure.id
+ depends_on = [azapi_resource.cloudExadataInfrastructure]
+ response_export_values = ["*"]
+}
+
+// OperationId: CloudVmClusters_CreateOrUpdate, CloudVmClusters_Get, CloudVmClusters_Delete
+// PUT /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Oracle.Database/cloudVmClusters/{cloudvmclustername}
+resource "azapi_resource" "cloudVmCluster" {
+ type = "Oracle.Database/cloudVmClusters@2023-09-01"
+ parent_id = azapi_resource.resource_group.id
+ name = "OFake_terraform_deploy_cluster_${local.resource_group_name}"
+ schema_validation_enabled = false
+ depends_on = [azapi_resource.cloudExadataInfrastructure]
+ timeouts {
+ create = "1h30m"
+ delete = "20m"
+ }
+ body = {
+ "location" : "${local.location}",
+ "tags" : {
+ "createdby" : "${local.user}"
+ },
+ "properties" : {
+ "subnetId" : "${data.azapi_resource_list.listVirtualNetwork.output.value[0].id}"
+ "cloudExadataInfrastructureId" : "${azapi_resource.cloudExadataInfrastructure.id}"
+ "cpuCoreCount" : 4
+ "dataCollectionOptions" : {
+ "isDiagnosticsEventsEnabled" : true,
+ "isHealthMonitoringEnabled" : true,
+ "isIncidentLogsEnabled" : true
+ },
+ "dataStoragePercentage" : 80,
+ "dataStorageSizeInTbs" : 2,
+ "dbNodeStorageSizeInGbs" : 120,
+ "dbServers" : [
+ "${data.azapi_resource_list.listDbServersByCloudExadataInfrastructure.output.value[0].properties.ocid}",
+ "${data.azapi_resource_list.listDbServersByCloudExadataInfrastructure.output.value[1].properties.ocid}"
+ ]
+ "displayName" : "OFake_terraform_deploy_cluster_${local.resource_group_name}",
+ "giVersion" : "19.0.0.0",
+ "hostname" : "${local.user}",
+ "isLocalBackupEnabled" : false,
+ "isSparseDiskgroupEnabled" : false,
+ "licenseModel" : "LicenseIncluded",
+ "memorySizeInGbs" : 60,
+ "sshPublicKeys" : ["${tls_private_key.generated_ssh_key.public_key_openssh}"],
+ "timeZone" : "UTC",
+ "vnetId" : "${azapi_resource.virtual_network.id}",
+ "provisioningState" : "Succeeded"
+ }
+ }
+ response_export_values = ["properties.ocid"]
+}
+
+resource "oci_database_db_home" "exa_db_home" {
+ source = "VM_CLUSTER_NEW"
+ vm_cluster_id = azapi_resource.cloudVmCluster.output.properties.ocid
+ db_version = "19.20.0.0"
+ display_name = "TFDBHOME"
+
+ database {
+ db_name = "TFCDB"
+ pdb_name = "TFPDB"
+ admin_password = "TestPass#2024#"
+ db_workload = "OLTP"
+ }
+ depends_on = [azapi_resource.cloudVmCluster]
+}
+```
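+
+>[!NOTE]
+>A configuration like this is applied with the standard Terraform workflow (`terraform init`, `terraform plan`, `terraform apply`). The extended `timeouts` blocks in the example are deliberate: creating the Oracle Exadata Infrastructure and Oracle Exadata VM Cluster resources can each take well over an hour.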
oracle Exadata Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-manage-resources.md
+
+ Title: Manage Exadata resources
+description: Learn about how to manage Exadata resources.
+Last updated: 08/01/2024
+# Manage Exadata resources
+
+After provisioning an OracleDB@Azure resource, such as an Oracle Exadata Infrastructure or an Oracle Exadata VM Cluster, you can use the Microsoft Azure blade for a limited set of management functions, described in this article.
+
+## Prerequisites
+
+Before you can provision Exadata services, you must complete the following prerequisites:
+
+- An existing Azure subscription
+- An Azure VNet with a subnet delegated to the Oracle Database@Azure service (`Oracle.Database/networkAttachments`)
+- Permissions in Azure to create resources in the region, with the following conditions:
+ * No policies prohibiting the creation of resources without tags, because the OracleSubscription resource is created automatically without tags during onboarding.
+ * No policies enforcing naming conventions, because the OracleSubscription resource is created automatically with a default resource name.
+- Purchase OracleDB@Azure in the Azure portal.
+- Select your Oracle Cloud Infrastructure (OCI) account.
+For more detailed documentation, including optional steps, see [Onboarding with Oracle Database@Azure](https://docs.oracle.com/iaas/Content/database-at-azure/oaaonboard.htm).
+
+## Common Management Functions from the Microsoft Azure Blade
+
+The following management functions are available for all resources from the Microsoft Azure blade for that resource.
+
+### Access the resource blade
+1. From the Microsoft Azure portal, select the OracleDB@Azure application.
+1. From the left menu, select **Oracle Exadata Database@Azure**.
+1. If the blade lists and manages several resources, select the resource type at the top of the blade. For example, the **Oracle Exadata Database@Azure** blade accesses both Oracle Exadata Infrastructure and Oracle Exadata VM Cluster resources.
+
+### List status for all resources of the same type
+1. Follow the steps to **Access the resource blade**.
+1. Resources are shown in the list as **Succeeded**, **Failed**, or **Provisioning**.
+1. Access the specifics of that resource by selecting the link in the **Name** field in the table.
+
+### Provision a new resource
+
+1. Follow the steps to **Access the resource blade**.
+
+1. Select the **+ Create** icon at the top of the blade.
+1. Follow the provisioning flow for the resource.
+ * [Provision Exadata infrastructure](exadata-provision-infrastructure.md)
+ * [Provision an Exadata VM cluster](exadata-provision-vm-cluster.md)
+
+### Refresh the blade's info
+
+1. Follow the steps to **Access the resource blade**.
+1. Select the **Refresh** icon at the top of the blade.
+1. Wait for the blade to reload.
+
+### Remove a resource
+
+1. Follow the steps to **Access the resource blade**.
+1. You can remove one or more resources from the blade by selecting the checkbox on the left side of the table. After you select the resources to remove, select the **Delete** icon at the top of the blade.
+1. You can also remove a single resource by selecting the link to the resource from the **Name** field in the table. From the resource's detail page, select the **Delete** icon at the top of the blade.
+
+### Add, manage, or delete resource tags
+
+1. Follow the steps to **Access the resource blade**.
+1. Select the link to the resource from the **Name** field in the table.
+1. From the resource's overview page, select the **Edit** link on the **Tags** field.
+1. To create a new tag, enter values in the **Name** and **Value** fields.
+1. To edit an existing tag, change the value in the existing tag's **Value** field.
+1. To delete an existing tag, select the **Trashcan** icon on the right side of the tag.
+
+### Start, stop, or restart Oracle Exadata VM Cluster VMs
+1. Follow the steps to **Access the resource blade**.
+1. Select the Oracle Exadata VM Cluster blade.
+1. Select the link to the resource from the **Name** field in the table.
+1. From the resource's overview page, select the **Settings > Virtual machines** link on the left-side menu.
+1. To start a virtual machine (VM), select the **Start** icon. The **Start virtual machine** panel opens. Select the VM to start from the **Virtual machine** drop-down list. The drop-down list only shows VMs that are currently stopped. Select the **Submit** button to start that VM, or the **Cancel** button to cancel the operation.
+1. To stop a virtual machine (VM), select the **Stop** icon. The **Stop virtual machine** panel opens. Select the VM to stop from the **Virtual machine** drop-down list. The drop-down list only shows VMs that are currently running. NOTE: Stopping a node can disrupt ongoing back-end software operations and database availability. Select the **Submit** button to stop that VM, or the **Cancel** button to cancel the operation.
+1. To restart a virtual machine (VM), select the **Restart** icon. The **Restart virtual machine** panel opens. Select the VM to restart from the **Virtual machine** drop-down list. The drop-down list only shows VMs that are currently running.
+
+ >[!NOTE]
+ >Restarting shuts down the node and then starts it. For single-node systems, databases are offline while the reboot is in progress.
+
+1. Select the **Submit** button to restart that VM, or the **Cancel** button to cancel the operation.
+
+### Access the OCI console
+1. Follow the steps to **Access the resource blade**.
+1. Select the link to the resource from the **Name** field in the table.
+1. From the resource's detail page, select the **Go to OCI** link on the **OCI Database URL** field.
+1. Log in to OCI.
+1. Manage the resource from within the OCI console.
+
+### Perform a connectivity test
+
+1. Follow the steps to **Access the OCI console**.
+1. In the OCI console, navigate to the **Pluggable Database Details** page for the database you want to test.
+1. Select the **PDB connection** button.
+1. Select **Show** link to expand the details for the **Connection Strings**.
+1. Open Oracle SQL Developer. If you don't have SQL Developer installed, download [SQL Developer](https://www.oracle.com/database/sqldeveloper/technologies/download/) and install.
+1. Within SQL Developer, open a new connection with the following information.
+ 1. **Name** - Enter a name of your choice used to save your connection.
+ 1. **Username** - Enter **SYS**.
+ 1. **Password** - Enter the password used when creating the PDB.
+ 1. **Role** - Select **SYSDBA**.
+ 1. **Save Password** - Select the box if your security rules allow. If not, you will need to enter the PDB password every time you use this connection in SQL Developer.
+ 1. **Connection Type** - Select **Basic**.
+ 1. **Hostname** - Enter one of the host IPs from the **Connection Strings** above.
+ 1. **Port** - The default is 1521. You only need to change this if you have altered default port settings for the PDB.
+ 1. **Service Name** - Enter the **SERVICE_NAME** value from the host IP you previously selected. This is from the **Connection Strings** above.
+ 1. Select the **Test** button. The status at the bottom of the connections list should show as **Success**. If the connection isn't successful, one or more of the **Hostname**, **Port**, and **Service Name** fields is incorrect, or the PDB isn't currently running.
+ 1. Select the **Save** button.
+ 1. Select the **Connect** button.
+
+### Manage network security group (NSG) rules
+
+1. Follow the steps to access the Oracle Exadata VM Cluster resource blade.
+1. Select the link to the resource from the **Name** field in the table.
+1. From the resource's detail page, select the **Go to OCI** link on the **OCI network security group URL** field.
+1. Log in to OCI.
+1. Manage the NSG rules from within the OCI console.
+1. For additional information on NSG rules and considerations within OracleDB@Azure, see the **Automatic Network Ingress Configuration** section of [Troubleshooting and Known Issues for Exadata Services](exadata-troubleshoot-services.md).
+
+### Support for OracleDB@Azure
+
+1. Follow the steps to **Access the OCI console**.
+1. From the OCI console, there are two ways to access support resources.
+ 1. At the top of the page, select the Help (?) icon at the top-right of the menu bar.
+ 1. On the right side of the page, select the floating Support icon.
+
+ >[!NOTE]
+ >This icon can be moved by the user, and the precise horizontal location can vary from user to user.
+
+1. You have several support options from here, including documentation, requesting help via chat, visiting the Support Center, posting a question to a forum, submitting feedback, requesting a limit increase, and creating a support request.
+1. If you need to create a support request, select that option.
+1. The support request page auto-populates with information needed by Oracle Support Services, including the resource name, resource OCID, service group, service, and several other items that depend on the specific OracleDB@Azure resource.
+1. Select the support option from the following options:
+ 1. **Critical outage**: A critical production system is down, or a critical business function is unavailable or unstable. You or an alternate contact must be available to work this issue 24x7 if needed.
+ 1. **Significant impairment**: A critical system or business function is experiencing a severe loss of service. Operations can continue in a restricted manner. You or an alternate contact are available to work this issue during normal business hours.
+ 1. **Technical issue**: Functionality, errors, or a performance issue impacts some operations.
+ 1. **General guidance**: A product or service usage question, setup assistance, or documentation clarification is needed.
+1. Select the **Create Support Request** button.
+1. The support ticket is created. This ticket can be monitored within the OCI console or via [My Oracle Support (MOS)](https://support.oracle.com/).
oracle Exadata Manage Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-manage-services.md
+
+ Title: Exadata services
+description: Learn about how to manage Exadata services.
+Last updated: 08/01/2024
+# Exadata services
+
+Oracle Database@Azure (OracleDB@Azure) provides you with seamless integration of Oracle resources within your Microsoft Azure cloud environment.
+
+You access the OracleDB@Azure service through the Microsoft Azure portal. You create and manage Oracle Exadata Infrastructure and Oracle Exadata VM Cluster resources with direct access to the Oracle Cloud Infrastructure (OCI) portal for creation and management of Oracle Exadata Databases, including all Container Databases (CDBs) and Pluggable Databases (PDBs).
+
+There are IP address requirement differences between Oracle Database@Azure and Oracle Cloud Infrastructure (OCI). In the [Requirements for IP Address Space](https://docs.oracle.com/iaas/exadatacloud/exacs/ecs-network-setup.html#GUID-D5C577A1-BC11-470F-8A91-77609BBEF1EA) documentation, the following changes for Oracle Database@Azure must be considered.
+* Oracle Database@Azure only supports Exadata X9M. Other shapes are unsupported.
+* Oracle Database@Azure reserves 13 IP addresses for the client subnet versus 3 for OCI requirements.
+
+The following articles provide specifics of the creation and management tasks associated with each resource type.
+
+Articles:
+* [What's New in Exadata Services](exadata-whats-new-services.md)
+* [Provision Exadata Infrastructure](exadata-provision-infrastructure.md)
+* [Provision an Exadata VM Cluster](exadata-provision-vm-cluster.md)
+* [Manage Exadata Resources](exadata-manage-resources.md)
+* [Operate processes for Exadata resources](exadata-operations-processes-services.md)
+* [OCI Multicloud Landing Zone for Azure](exadata-multicloud-landing-zone-azure-services.md)
+* [Terraform/OpenTofu Examples for Exadata Services](exadata-examples-services.md)
+* [Troubleshooting and Known Issues for Exadata Services](exadata-troubleshoot-services.md)
+
+For more information on specific Oracle Exadata Infrastructure or Oracle Exadata VM Cluster articles beyond their implementation and use within OracleDB@Azure, see the following articles:
+
+* [Exadata Database Service on Dedicated Infrastructure](https://docs.oracle.com/en/engineered-systems/exadata-cloud-service/ecscm/index.html)
+* [Manage Databases on Exadata Cloud Infrastructure](https://docs.oracle.com/en/engineered-systems/exadata-cloud-service/ecscm/manage-databases.html#GUID-51424A67-C26A-48AD-8CBA-B015F88F841A)
+* [Oracle Exadata Database Service on Dedicated Infrastructure Overview](https://docs.oracle.com/en/engineered-systems/exadata-cloud-service/ecscm/exadata-cloud-infrastructure-overview.html)
oracle Exadata Multicloud Landing Zone Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-multicloud-landing-zone-azure-services.md
+
+ Title: Exadata - OCI multicloud landing zone for Azure
+description: Learn about Exadata - OCI multicloud landing zone for Azure.
+Last updated: 08/01/2024
+# Exadata - OCI Multicloud landing zone for Azure
+
+Oracle Cloud Infrastructure (OCI) partnered with Microsoft Azure to develop and distribute HashiCorp Terraform/OpenTofu modules that streamline the provisioning process.
+
+Both OCI Multicloud Landing Zone for Azure (OCI LZ) and Microsoft Verified Modules (MVM) use multiple templates to empower Oracle Database@Azure. These Terraform/OpenTofu modules use four Terraform providers: AzureRM, AzureAD, AzAPI, and OCI, covering IAM, networking, and database layer resources. Apply these reference implementations for a quick start deployment, or customize them for a more complex topology that fits your needs.
+
+The following diagram illustrates where Terraform or OpenTofu can be introduced to streamline the identity, access, networking, and provisioning processes within Oracle Database@Azure.
+
+## Prerequisites
+
+- Complete, at a minimum, steps 1 and 2 of [Onboarding with Oracle Database@Azure](onboard-oracle-database.md).
+- Have a Terraform/OpenTofu, OCI CLI, Azure CLI, and Python (3.4 or later) environment. For more information, see the [Oracle Multicloud Landing Zone for Azure README](https://github.com/oracle-quickstart/terraform-oci-multicloud-azure?tab=readme-ov-file#prerequisites).
+
+## Dependencies
+
+The [Oracle Multicloud Landing Zone for Azure](https://github.com/oracle-quickstart/terraform-oci-multicloud-azure) modules and templates use multiple Terraform providers.
+
+| Terraform/OpenTofu Providers | Terraform/OpenTofu Modules |
+| - | -- |
+| [AzAPI](/azure/developer/terraform/overview-azapi-provider) | [OCI Landing Zone modules](https://github.com/oci-landing-zones/) |
+| [AzureAD](https://registry.terraform.io/providers/hashicorp/azuread/latest/docs) | [Azure Verified Modules](https://aka.ms/avm) |
+| [AzureRM](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) | |
+| [OCI](https://registry.terraform.io/providers/oracle/oci/latest/docs) | |
+
+## Templates
+
+For module details, see [Oracle Multicloud Landing Zone for Azure](https://github.com/oracle-quickstart/terraform-oci-multicloud-azure).
+
+| Template | Use Case and Configurations | Terraform/OpenTofu Providers |
+| --- | --- | --- |
+| [az-oci-exa-pdb](https://github.com/oracle-quickstart/terraform-oci-multicloud-azure/tree/main/templates/az-oci-exa-pdb) | Quick start Exadata Database Service | [hashicorp/azurerm](https://registry.terraform.io/providers/hashicorp/azurerm) |
+| | 1. Configuring Azure virtual network with [delegated subnet limits](oracle-database-delegated-subnet-limits.md) | [azure/azapi](https://registry.terraform.io/providers/Azure/azapi) |
+| | 2. [Provision Exadata infrastructure](exadata-provision-infrastructure.md) | [hashicorp/oci](https://registry.terraform.io/providers/hashicorp/oci) |
+| | 3. [Provision an Exadata VM Cluster](exadata-provision-vm-cluster.md) | |
+| | 4. [Creating Database Home](https://docs.oracle.com/iaas/exadata/doc/ecc-creating-first-db-home-on-exacc.html) | |
+| | 5. [Creating Container Database (CDB)](https://docs.oracle.com/iaas/exadata/doc/ecc-create-first-db.html) | |
+| | 6. [Creating Pluggable Database (PDB)](https://docs.oracle.com/iaas/exadata/doc/ecc-create-first-db.html) | |
+| [az-oci-rbac-n-sso-fed](https://github.com/oracle-quickstart/terraform-oci-multicloud-azure/tree/main/templates/az-oci-rbac-n-sso-fed) | Set up both identity federation and RBAC roles/groups | All the following |
+| [az-oci-sso-federation](https://github.com/oracle-quickstart/terraform-oci-multicloud-azure/tree/main/templates/az-oci-sso-federation) | Set up [SSO Between OCI and Microsoft Entra ID](https://docs.oracle.com/iaas/Content/Identity/tutorials/azure_ad/sso_azure/azure_sso.htm) | [hashicorp/azuread](https://registry.terraform.io/providers/hashicorp/azuread/) |
+| | 1. Get service provider metadata from OCI IAM. | [hashicorp/azurerm](https://registry.terraform.io/providers/hashicorp/azurerm) |
+| | 2. Create a Microsoft Entra ID application. | [hashicorp/oci](https://registry.terraform.io/providers/hashicorp/oci) |
+| | 3. Set up SAML SSO for the Microsoft Entra ID application. | |
+| | 4. Set up attributes and claims in the Microsoft Entra ID application. | |
+| | 5. Assign a test user to the Microsoft Entra ID application. | |
+| | 6. Enable the Microsoft Entra ID application as the Identity Provider (IdP) for OCI IAM. | |
+| | 7. Set up [Identity Lifecycle Management Between OCI IAM and Microsoft Entra ID](https://docs.oracle.com/iaas/Content/Identity/tutorials/azure_ad/lifecycle_azure/azure_lifecycle.htm#azure-lifecycle). | |
+| [az-odb-rbac](https://github.com/oracle-quickstart/terraform-oci-multicloud-azure/tree/main/templates/az-odb-rbac) | Create [roles and groups in Azure](https://docs.oracle.com/iaas/Content/multicloud/oaagroupsroles.htm) for Exadata and Autonomous Database services. | [hashicorp/azuread](https://registry.terraform.io/providers/hashicorp/azuread/) |
+| | 1. Create Azure role definition for ADBS Administrator role.| [hashicorp/azurerm](https://registry.terraform.io/providers/hashicorp/azurerm) |
+| | 2. Create Azure group. | |
+| | 3. Create Azure role assignment. | |
+
+## More Terraform/OpenTofu resources
+
+* [QuickStart Oracle Database@Azure with Terraform or OpenTofu Modules](https://docs.oracle.com/en/learn/dbazure-terraform/index.html)
+* [Terraform: Set Up OCI Terraform](https://docs.oracle.com/iaas/developer-tutorials/tutorials/tf-provider/01-summary.htm)
+* [Import OCI Resources into a Terraform State File](https://docs.oracle.com/en/learn/terraform-statefile-oci-resources/index.html)
+* [Azure Verified Module for Virtual Network](https://github.com/Azure/terraform-azurerm-avm-res-network-virtualnetwork)
+* [Quickstart: Install and Configure Terraform For Azure](/azure/developer/terraform/quickstart-configure)
+* [Authenticate Terraform to Azure](/azure/developer/terraform/authenticate-to-azure)
oracle Exadata Operations Processes Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-operations-processes-services.md
+
+ Title: Operation processes for Exadata services
+description: Learn about operation processes for Exadata services.
+Last updated: 08/01/2024
+# Operations processes for Exadata services
+
+There are Oracle processes that are accessible from Microsoft Azure, but are set up and maintained from the Oracle Cloud Infrastructure (OCI) console.
+
+## Oracle Database Autonomous Recovery Service@Azure
+
+Oracle Database Autonomous Recovery Service@Azure (RCV) is the preferred backup solution for OracleDB@Azure resources. The key customer benefits are as follows:
+
+* Allows use of Microsoft Azure Consumption Commitment (MACC) to pay for your backup storage.
+* Allows choice of backup storage locations to meet corporate data residency and compliance requirements.
+* Provides zero data loss with real-time database protection, enabling recovery to within less than a second of an outage or ransomware attack.
+* Provides backup immutability using a policy-based backup retention lock preventing backup deletion or alteration by any user in the tenancy.
+* Improves data theft prevention with mandatory and automatic encryption for backup data throughout the entire lifecycle.
+* Provides higher operational efficiency by eliminating weekly full backups, which reduces CPU, memory, and I/O overhead when running backups and lowers overall cloud costs.
+* Shortens the backup window with an incremental forever paradigm that moves smaller amounts of backup data between the database and RCV.
+* Improves recoverability with automated zero-impact recovery validation for database backups.
+* Speeds recovery with optimized backups, eliminating the need to recover multiple incremental backups.
+* Centralizes database protection insights with a granular recovery health dashboard.
+
+## High-level Steps to Enable Autonomous Recovery Service@Azure
+
+1. Access the OCI console for the database you want to enable for Autonomous Recovery Service@Azure. For details, see **Access the OCI console** in [Manage Exadata resources](exadata-manage-resources.md).
+1. Configure or create an Autonomous Recovery Service@Azure protection policy with the **Store backups in the same cloud provider as the database** option set (see the sketch after these steps).
+1. Use the protection policy to configure automated backups.
+1. When the backup completes, subscription and backup location details appear in the database within OCI.
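+
+The following is a minimal, hypothetical sketch of the protection policy step using the OCI Terraform provider's `oci_recovery_protection_policy` resource. The variable name is an assumption, and the `must_enforce_cloud_locality` attribute (assumed here to correspond to the **Store backups in the same cloud provider as the database** setting) should be verified against your provider version:
+
+```terraform
+resource "oci_recovery_protection_policy" "rcv_at_azure" {
+  compartment_id                  = var.compartment_ocid # assumed variable
+  display_name                    = "rcv-at-azure-policy"
+  backup_retention_period_in_days = 14
+
+  # Assumed attribute: keep backups in the same cloud provider as the database.
+  must_enforce_cloud_locality = true
+}
+```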
+
+For more information about Autonomous Recovery Service@Azure, see the following documents:
+* [Multicloud Oracle Database Backup Support](https://docs.oracle.com/en/cloud/paas/recovery-service/dbrsu/azure-multicloud-recoveryservice.html)
+* [Backup Automation and Storage in Oracle Cloud](https://docs.oracle.com/en/cloud/paas/recovery-service/dbrsu/backup-automation.html)
+* [Enable Automatic Backups to Recovery Service](https://docs.oracle.com/en/cloud/paas/recovery-service/dbrsu/enable-automatic-backup.html#GUID-B8A2D342-3331-42C9-8FDD-D0DB0E25F4CE)
+* [About Configuring Protection Policies](https://docs.oracle.com/en/cloud/paas/recovery-service/dbrsu/overview-protection-policy.html#GUID-8C097EAF-E2B0-4231-8027-0067A2E81A00)
+* [Creating a Protection Policy](https://docs.oracle.com/en/cloud/paas/recovery-service/dbrsu/create-protection-policy.html#GUID-C73E254E-2019-4EDA-88E0-F0BA68082A65)
+* [Viewing Protection Policy Details](https://docs.oracle.com/en/cloud/paas/recovery-service/dbrsu/view-protection-policy.html#GUID-5101A7ED-8891-4A6B-B1C4-F13F55A68FF0)
oracle Exadata Provision Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-provision-infrastructure.md
+
+ Title: Provision Exadata infrastructure
+description: Learn about provisioning an Exadata infrastructure.
+Last updated: 08/01/2024
+# Provision Exadata infrastructure
+
+Provisioning Oracle Exadata Infrastructure is a time-consuming process. Provisioning an Oracle Exadata Infrastructure is a prerequisite for provisioning Oracle Exadata VM Clusters and any Oracle Exadata Databases.
+
+## Prerequisites
+
+Before you can provision Exadata services, you must complete the following prerequisites:
+
+- An existing Azure subscription
+- An Azure virtual network with a subnet delegated to the Oracle Database@Azure service (`Oracle.Database/networkAttachments`)
+- Permissions in Azure to create resources in the region, with the following conditions:
+ * No policies prohibiting the creation of resources without tags, because the OracleSubscription resource is created automatically without tags during onboarding.
+ * No policies enforcing naming conventions, because the OracleSubscription resource is created automatically with a default resource name.
+- Purchase OracleDB@Azure in the Azure portal.
+- Select your Oracle Cloud Infrastructure (OCI) account.
+For more detailed documentation, including optional steps, see [Onboarding with Oracle Database@Azure](https://docs.oracle.com/iaas/Content/database-at-azure/oaaonboard.htm).
+
+>[!NOTE]
+> Review the [Troubleshoot Exadata services](exadata-troubleshoot-services.md) article, specifically the IP Address Requirement Differences section, to ensure you have all the information needed for a successful provisioning flow.
+
+## Provision Oracle Exadata infrastructure and VM Cluster resources
+
+1. Provision your Oracle Exadata Infrastructure and Oracle Exadata VM Cluster resources from the OracleDB@Azure blade. By default, the Oracle Exadata Infrastructure tab is selected. To create an Oracle Exadata VM Cluster resource, select that tab first.
+1. Select the **+ Create** icon at the top of the blade to begin the provisioning flow.
+1. Check that you're in the **Create** Oracle Exadata Infrastructure flow. If not, exit the flow.
+1. From the **Basics** tab of the Create Oracle Exadata Infrastructure flow, enter the following information.
+ 1. Select the Microsoft Azure subscription to which the Oracle Exadata Infrastructure will be provisioned and billed.
+ 1. Select an existing **Resource group** or select the **Create new** link to create and use a new Resource group for this resource. A resource group is a collection of resources sharing the same lifecycle, permissions, and policies.
+ 1. Enter a unique **Name** for the Oracle Exadata Infrastructure on this subscription.
+ 1. Select the **Region** where this Oracle Exadata Infrastructure is provisioned.
+ >[!NOTE]
+ >The regions where the OracleDB@Azure service is available are limited.
+ 1. Select the **Availability zone** where this Oracle Exadata Infrastructure is provisioned.
+ > [!NOTE]
+ > The availability zones where the OracleDB@Azure service is available are limited.
+ 1. The **Oracle Cloud account name** field is display-only. If the name isn't showing correctly, your OracleDB@Azure account setup hasn't been successfully completed.
+ 1. Select **Next** to continue.
+1. From the **Configuration** tab of the Create Oracle Exadata Infrastructure flow, enter the following information.
+ 1. From the dropdown list, select the **Exadata infrastructure model** you want to use for this deployment.
+ > [!NOTE]
+ > Not all Oracle Exadata Infrastructure models are available. For more information, see [Oracle Exadata Infrastructure Models](https://docs.oracle.com/iaas/exadatacloud/exacs/ecs-ovr-x8m-scable-infra.html#GUID-15EB1E00-3898-4718-AD94-81BDE271C843).
+ 1. The **Database servers** selector can be used to select a range from 2 to 32.
+ 1. The **Storage servers** selector can be used to select a range from 3 to 64.
+ 1. The **OCPUs** and **Storage** fields are automatically updated based on the settings of the **Database servers** and **Storage servers** selectors.
+ 1. Select **Next** to continue.
+1. From the **Maintenance** tab of the Create Oracle Exadata Infrastructure flow, enter the following information.
+ 1. The **Maintenance method** can be set to either **Rolling** or **Nonrolling**, based on your patching preferences.
+ 1. By default, the **Maintenance schedule** is set to **No preference**.
+ 1. If you select **Specify a schedule** for the **Maintenance schedule**, additional options open for you to tailor a maintenance schedule that meets your requirements. Each of these selections requires at least one option in each field.
+ 1. You can then enter up to 10 **Names** and **Email addresses** that are used as contacts for the maintenance process.
+ 1. Select **Next** to continue.
+1. From the **Consent** tab of the Create Oracle Exadata Infrastructure flow, you must agree to the terms of service, the privacy policy, and the access permissions. Once accepted, select **Next** to continue.
+1. From the **Tags** tab of the Create Oracle Exadata Infrastructure flow, you define Microsoft Azure tags.
+ >[!NOTE]
+ > These tags aren't propagated to the Oracle Cloud Infrastructure (OCI) portal. Once you have created the tags, if any, for your environment, select **Next** to continue.
+
+1. From the **Review + create** tab of the Create Oracle Exadata Infrastructure flow, a short validation process is run to check the values that you entered from the previous steps. If the validation fails, you must correct any errors before you can start the provisioning process.
+1. Select the **Create** button to start the provisioning flow.
+1. Return to the Oracle Exadata Infrastructure blade to monitor and manage the state of your Oracle Exadata Infrastructure environments.
oracle Exadata Provision Vm Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-provision-vm-cluster.md
+
+ Title: Provision Exadata virtual machine clusters
+description: Learn about how to provision Exadata virtual machine clusters.
+Last updated: 08/01/2024
+# Provision Exadata virtual machine clusters
+
+Provisioning an Oracle Exadata VM Cluster requires the existence of an Oracle Exadata Infrastructure, and is a prerequisite for Oracle Exadata Databases that run on the cluster.
+
+## Prerequisites
+
+Before you can provision Exadata services, you must complete the following prerequisites:
+
+- An existing Azure subscription
+- An Azure virtual network with a subnet delegated to the Oracle Database@Azure service (`Oracle.Database/networkAttachments`)
+- Permissions in Azure to create resources in the region, with the following conditions:
+ * No policies prohibiting the creation of resources without tags, because the OracleSubscription resource is created automatically without tags during onboarding.
+ * No policies enforcing naming conventions, because the OracleSubscription resource is created automatically with a default resource name.
+- Purchase OracleDB@Azure in the Azure portal.
+- Select your Oracle Cloud Infrastructure (OCI) account.
+For more detailed documentation, including optional steps, see [Onboarding with Oracle Database@Azure](onboard-oracle-database.md).
+
+>[!NOTE]
+>Review the [Troubleshoot Exadata services](exadata-troubleshoot-services.md), specifically the IP Address Requirement Differences, to ensure you have all the information needed for a successful provisioning flow.
+
+1. You provision Oracle Exadata Infrastructure and Oracle Exadata VM Cluster resources from the OracleDB@Azure blade. By default, the Oracle Exadata Infrastructure tab is selected.
+To create an Oracle Exadata VM Cluster resource, select that tab first and follow these instructions.
+
+1. Select the **+ Create** icon at the top of the blade to begin the provisioning flow.
+1. Check that you're using the **Create** Oracle Exadata VM Cluster flow. If not, exit the flow.
+1. From the **Basics** tab of the Create Oracle Exadata VM Cluster flow, enter the following information.
+ > [!NOTE]
+ > Before you can provision an Oracle Exadata VM Cluster, you must have a provisioned Oracle Exadata Infrastructure which you'll assign for your Oracle Exadata VM Cluster.
+ 1. Select the Microsoft Azure subscription to which the Oracle Exadata VM Cluster will be provisioned.
+ 1. Select an existing **Resource group** or select the **Create new** link to create and use a new Resource group for this resource.
+ 1. Enter a unique **Name** for the Oracle Exadata VM Cluster on this subscription.
+ 1. Select the **Region** where this Oracle Exadata Infrastructure is provisioned. NOTE: The regions where the OracleDB@Azure service is available are limited, and you should assign the Oracle Exadata VM Cluster to the same region as the parent Oracle Exadata Infrastructure.
+ 1. The **Cluster name** should match the Name to avoid additional naming conflicts.
+ 1. Select the existing **Exadata infrastructure** that is the parent for your Oracle Exadata VM Cluster.
+ 1. The **License type** is either **License included** or **Bring your own license (BYOL)**. Your selection affects your billing.
+ 1. The default **Time zone** is UTC. There's also an option to **Select another time zone**.
+ 1. If you choose the **Select another time zone** option, two additional required fields open, **Region or country** and **Selected time zone**. Both of these fields are drop-down lists with selectable values. Once you select the **Region or country**, the **Selected time zone** is populated with the available values for that **Region or country**.
+ 1. The **Grid Infrastructure Version** is selectable based on your previous selections. The **Grid Infrastructure Version** limits the Oracle Database versions that the Oracle Exadata VM Cluster supports.
+ 1. If selected, the **Choose Exadata Image version** checkbox allows you to select whether or not to **Include Exadata Image minor versions** as selectable, and then to choose the specific **Exadata Image version** from the drop-down field based on whether or not you allowed **Include Exadata Image minor versions**.
+ 1. The **SSH public key source** can be set to **Generate new key pair**, **Use existing key stored in Azure**, or **Use existing public key**. If you select **Generate new key pair**, you must give your newly generated key a unique name. If you select **Use existing key stored in Azure**, you must select that key from a dropdown of defined keys for your subscription. If you select **Use existing public key**, you must provide an RSA public key in single-line format (starting with "ssh-rsa") or the multi-line PEM format. You can generate SSH keys using ssh-keygen on Linux and OS X, or PuTTYGen on Windows.
+ 1. Select **Next** to continue.
+1. From the **Configuration** tab of the Create Oracle Exadata VM Cluster flow, enter the following information.
+ 1. The **Change database servers** checkbox is optional. If selected, it allows you to select a single database server for VM cluster placement. If you don't select this checkbox, the minimum number of database servers is two. Maximum resources vary based on the allocation per VM cluster and the number of database servers. Select from the available configurations.
+ 1. If you select the **Change database servers** checkbox, a drop-down box for **Select database servers** appears. Use this drop-down control to select the specific database servers for your configuration.
+ 1. **Database servers** and **System Model** fields are read-only and based on the available resources.
+ 1. The **OCPU count per VM**, **Memory per VM**, and **Local storage per VM** are limited by the Oracle Exadata Infrastructure.
+ 1. **Total requested OCPU count**, **Total requested memory**, and **Total local storage** are computed based on the local values that you accept or select.
+ 1. **Usable Exadata Storage (TB)** is limited by the Oracle Exadata Infrastructure.
+ 1. **Use Exadata sparse snapshots**, **Use local backups**, and **Usable storage allocation** are options that can only be set at this time before the Oracle Exadata VM Cluster has been provisioned.
+ 1. Select **Next** to continue.
+1. From the **Networking** tab of the Create Oracle Exadata VM Cluster flow, enter the following information.
+ 1. The **Virtual network** is limited based on the **Subscription** and **Resource group** that you selected earlier in the provisioning flow.
+ 1. The **Client subnet** is selectable based on the selected **Virtual network**.
+ 1. To use a custom DNS domain, select the **Custom DNS** checkbox. If unchecked, the Oracle Exadata VM Cluster uses the default domain, oraclevcn.com.
+ 1. If checked, a list of existing DNS private views from OCI is presented. Select the view to use. To create a new private view and zones, see [Configure Private DNS](https://docs.oracle.com/iaas/exadatacloud/exacs/ecs-network-setup.html#ECSCM-GUID-69CF2720-31BE-455B-93E3-D2E39B2DA44B).
+ > [!NOTE]
+ > In order for the list of DNS private views to be populated correctly, the network link's compartment in OCI must match the Microsoft Azure subscription.
+ 1. Enter the **Host name prefix**. The prefix forms the first portion of the Oracle Exadata VM Cluster host name.
+ 1. The **Host domain name** and **Host and domain URL** for your Oracle Exadata VM Cluster are read-only and populated with derived naming.
+ 1. Within the **Network ingress rules** section, the **Add additional network ingress rules** checkbox allows you to define additional ingress CIDR rules. Additional network CIDR ranges (such as application or hub subnet ranges) can be added, during provisioning, to the network security group (NSG) ingress rules for the VM cluster. The selected virtual network's CIDR is added by default. The port can be a single port, a port range (for example, 80-8080), a comma-delimited list of ports (for example, 80,8080), or any combination of these. This only updates the OCI network security group ingress rules. Microsoft Azure network security rules must be updated in the specific virtual network in Microsoft Azure.
+ 1. Select **Next** to continue.
+1. The **Diagnostics Collection** tab of the Create Oracle Exadata VM Cluster flow allows you to specify the diagnostic events, health monitoring, and incident logs and tracing that Oracle can use to identify, track, and resolve issues. Select **Next** to continue.
+1. From the **Consent** tab of the Create Oracle Exadata VM Cluster flow, you must agree to the terms of service, the privacy policy, and the access permissions. Select **Next** to continue.
+1. From the **Tags** tab of the Create Oracle Exadata VM Cluster flow, you can define Microsoft Azure tags. NOTE: These tags aren't propagated to the Oracle Cloud Infrastructure (OCI) portal. Select **Next** to continue.
+1. From the **Review + create** tab of the Create Oracle Exadata VM Cluster flow, a short validation process is run to check the values that you entered from the previous steps. If the validation fails, you must correct any errors before you can start the provisioning process.
+1. Select the **Create** button to start the provisioning flow.
+1. Return to the Oracle Exadata VM Cluster blade to monitor and manage the state of your Oracle Exadata VM Cluster environments.
+
oracle Exadata Troubleshoot Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-troubleshoot-services.md
+
+ Title: Troubleshoot Exadata services
+description: Learn how to troubleshoot Exadata services.
+Last updated: 08/01/2024
+# Troubleshoot Exadata services
+
+Use the information in this article to resolve common errors and provisioning issues in your Oracle Database@Azure environments.
+
+This guide doesn't cover general issues related to Oracle Database@Azure configuration, settings, and account setup. For more information on those topics, see [Oracle Database@Azure Overview](https://docs.oracle.com/iaas/Content/multicloud/oaaoverview.htm).
+
+## Terminations and Microsoft Azure locks
+
+Oracle advises removing all Microsoft Azure locks on Oracle Database@Azure resources before terminating the resource. For example, if you created a Microsoft Azure private endpoint, you should remove that resource first. If you have a policy that prevents the deletion of locked resources, the Oracle Database@Azure workflow to delete the resource fails because Oracle Database@Azure can't delete the lock.
+
+## IP Address Requirement Differences
+
+There are IP address requirement differences between Oracle Database@Azure and Oracle Cloud Infrastructure (OCI). In the [Requirements for IP Address Space](https://docs.oracle.com/iaas/exadatacloud/exacs/ecs-network-setup.html#GUID-D5C577A1-BC11-470F-8A91-77609BBEF1EA) documentation, the following changes for Oracle Database@Azure must be considered.
+* Oracle Database@Azure only supports Exadata X9M. All other shapes are unsupported.
+* Oracle Database@Azure reserves 13 IP addresses for the client subnet versus 3 for OCI requirements.
+
+## Private DNS Zone Limitation
+
+When provisioning Exadata services, you can only select a private DNS zone with four labels or fewer. For example, a.b.c.d is allowed, while a.b.c.d.e isn't.
+
+## Automatic Network Ingress Configuration
+
+You can connect a Microsoft Azure VM to an Oracle Exadata VM Cluster if both are in the same virtual network (VNet). The functionality is automatic and requires no additional changes to network security group (NSG) rules. If you need to connect a Microsoft Azure VM from a different VNet than the one where the Oracle Exadata VM Cluster was created, an additional step is required to configure NSG traffic rules so the other VNet's traffic can flow properly. For example, if you have two VNets (A and B), with VNet A serving the Microsoft Azure VM and VNet B serving the Oracle Exadata VM Cluster, you need to add VNet A's CIDR address to the NSG ingress rules in OCI, as shown in the sketch after the following tables.
+
+| Direction | Source or Destination | Protocol | Details | Description |
+| --- | --- | --- | --- | --- |
+| Direction: Egress <br /> Stateless: No | Destination Type: CIDR <br /> Destination: 0.0.0.0/0 | All Protocols | Allow: All traffic for all ports | Default NSG egress rule |
+| Direction: Ingress <br /> Stateless: No | Source Type: CIDR <br /> Source: Microsoft Azure VNet CIDR | TCP | Source Port Range: All <br /> Destination Port Range: All <br /> Allow: TCP traffic for ports: All | Ingress all TCP from Microsoft Azure VNet. |
+| Direction: Ingress <br /> Stateless: No | Source Type: CIDR <br /> Source: Microsoft AzureVNet CIDR | ICMP | Type: All <br /> Code: All <br /> Allow: ICMP traffic for: All | Ingress all ICMP from Microsoft Azure VNet. |
+
+| Direction | Source or Destination | Protocol | Details | Description |
+| --- | --- | --- | --- | --- |
+| Direction: Egress <br /> Stateless: No | Destination Type: Service <br /> Destination: OCI IAD object storage | TCP | Source Port Range: All <br /> Destination Port Range: 443 <br /> Allow: TCP traffic for ports: 443 HTTPS | Allows access to object storage. |
+| Direction: Ingress <br /> Stateless: No | Source Type: CIDR <br /> Source: 0.0.0.0/0 | ICMP | Type: 3 <br /> Code: 4 <br /> Allow: ICMP traffic for: 3, 4 Destination Unreachable: Fragmentation Needed and Don't Fragment was Set | Allows Path MTU Discovery fragmentation messages. |
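+
+As a minimal sketch, the TCP ingress rule from the first table can be added with the OCI Terraform provider; the NSG OCID variable and VNet A's CIDR below are assumptions for illustration:
+
+```terraform
+// Allow all TCP traffic from VNet A's CIDR into the VM cluster's OCI NSG.
+resource "oci_core_network_security_group_security_rule" "ingress_tcp_from_vnet_a" {
+  network_security_group_id = var.cluster_nsg_ocid # assumed: the cluster NSG's OCID
+  direction                 = "INGRESS"
+  protocol                  = "6" # TCP
+  source_type               = "CIDR_BLOCK"
+  source                    = "10.1.0.0/16" # assumed: VNet A's CIDR
+}
+```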
oracle Exadata Whats New Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/exadata-whats-new-services.md
+
+ Title: What's new in Exadata services
+description: Learn about what's new in Exadata services.
+Last updated: 08/01/2024
+# What's new in Exadata services
+
+Oracle Database@Azure (OracleDB@Azure) provides you with seamless integration of Oracle resources within your Microsoft Azure cloud environment.
+
+## July 2024
+
+| Month/Year | Feature | Description |
+| --- | --- | --- |
+| July 2024 | Added a Quickstart Terraform Templates and Modules section. | This section grows as the templates and modules are revised and new content is added. |
+
+The above table only lists the changes within the Oracle Database@Azure product specific to Oracle Exadata Infrastructures or Oracle Exadata VM Clusters. For changes to the Oracle Exadata Infrastructure or the Oracle Exadata VM Cluster products, see [What's New in Oracle Exadata Database Service on Dedicated Infrastructure](https://docs.oracle.com/en/engineered-systems/exadata-cloud-service/ecscm/exa-whats-new.html).
+
+## Next steps
+* [Provision Exadata Infrastructure](exadata-provision-infrastructure.md)
+* [Provision an Exadata VM Cluster](exadata-provision-vm-cluster.md)
+* [Manage Exadata Resources](exadata-manage-resources.md)
+* [Operate processes for Exadata resources](exadata-operations-processes-services.md)
+* [OCI Multicloud Landing Zone for Azure](exadata-multicloud-landing-zone-azure-services.md)
+* [Terraform/OpenTofu Examples for Exadata Services](exadata-examples-services.md)
+* [Troubleshooting and Known Issues for Exadata Services](exadata-troubleshoot-services.md)
oracle Oracle Database Delegated Subnet Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/oracle-database-delegated-subnet-limits.md
+
+ Title: Delegated subnet limits
+description: Learn about Delegated subnet limits for Oracle Database@Azure.
+Last updated: 08/01/2024
+# Delegated subnet limits
+
+In this article, you learn about delegated subnet limits for Oracle Database@Azure.
+
+Oracle Database@Azure infrastructure resources are connected to your Azure virtual network using a virtual NIC from your [delegated subnets](/azure/virtual-network/subnet-delegation-overview) (delegated to `Oracle.Database/networkAttachments`). By default, the Oracle Database@Azure service can use up to five delegated subnets. If you need more delegated subnet capacity, you can request a service limit increase.
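+
+As a reference, a delegated subnet can be declared with the `AzAPI` provider as in the following minimal sketch; the virtual network reference, subnet name, and address prefix are placeholders that mirror the provisioning examples elsewhere in this documentation:
+
+```terraform
+// Minimal sketch: a subnet delegated to the Oracle Database@Azure service.
+resource "azapi_resource" "delegated_subnet" {
+  type      = "Microsoft.Network/virtualNetworks/subnets@2023-04-01"
+  name      = "delegated"
+  parent_id = azapi_resource.virtual_network.id # placeholder virtual network
+  body = {
+    properties = {
+      addressPrefix = "10.0.1.0/24"
+      delegations = [
+        {
+          name = "Oracle.Database.networkAttachments"
+          properties = {
+            serviceName = "Oracle.Database/networkAttachments"
+          }
+        }
+      ]
+    }
+  }
+}
+```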
+
+## Service limits in the OCI Console
+
+For information on viewing and increasing service limits in the OCI Console, see the following articles:
+- [To view the tenancy's limits and usage (by region)](https://docs.oracle.com/en-us/iaas/Content/General/Concepts/servicelimits.htm#To_view_your_tenancys_limits_and_usage_by_region)
+- [Requesting a service limit increase](https://docs.oracle.com/en-us/iaas/Content/General/Concepts/servicelimits.htm#Requesti)
+
+When submitting a service limit increase, note the following:
+
+- The service name is `Multicloud`.
+- The resource name is `Delegated Subnet Multicloud Links`.
+- The service limit name for Oracle Database@Azure delegated subnets is `azure-delegated-subnet-count`.
+- The limit is applied at the regional level.
+
+## Next steps
+
+See [Network planning for Oracle Database@Azure](oracle-database-network-plan.md) in the Azure documentation for information about network topologies and constraints for Oracle Database@Azure.
+
oracle Provision Oracle Exadata Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/provision-oracle-exadata-infrastructure.md
Title: Provision Exadata infrastructure for Oracle Database@Azure
-description: Provision Exadata infrastructure for Oracle Database@Azure
+ Title: Provision an Exadata infrastructure for Oracle Database@Azure
+description: Provision an Exadata infrastructure for Oracle Database@Azure
-# Provision Exadata infrastructure
+# Provision an Exadata infrastructure
Provisioning Oracle Exadata Infrastructure is a time-consuming process. Provisioning an Oracle Exadata Infrastructure is a prerequisite for provisioning Oracle Exadata VM Clusters and any Oracle Exadata Databases.
sentinel Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/restore.md
Title: Restore archived logs from search - Microsoft Sentinel
description: Learn how to restore archived logs from search job results.
Last updated: 09/25/2024
appliesto: Microsoft Sentinel in the Azure portal
Restore data from an archived log to use in high performing queries and analytics.
-Before you restore data in an archived log, see [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md) and [Restore in Azure Monitor](/azure/azure-monitor/logs/restore).
- [!INCLUDE [unified-soc-preview](includes/unified-soc-preview.md)]
+## Prerequisites
+
+Before you restore data in an archived log, see [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md) and [Restore in Azure Monitor](/azure/azure-monitor/logs/restore).
+ ## Restore archived log data
-To restore archived log data in Microsoft Sentinel, specify the table and time range for the data you want to restore. Within a few minutes, the log data is available within the Log Analytics workspace. Then you can use the data in high-performance queries that support full Kusto Query Language (KQL).
+To restore archived log data in Microsoft Sentinel, specify the table and time range for the data you want to restore. Within a few minutes, the log data is available within the Log Analytics workspace. Then you can use the data in high-performance queries that support full Kusto Query Language (KQL).
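+
+Restore can also be driven through the Log Analytics tables API, which creates a table whose name ends with `_RST` (see [Restore in Azure Monitor](/azure/azure-monitor/logs/restore)). The following is a hedged sketch using the `AzAPI` Terraform provider; the workspace reference, source table, and restore window are placeholders:
+
+```terraform
+// Sketch: restore a time range of CommonSecurityLog into CommonSecurityLog_RST.
+resource "azapi_resource" "restored_table" {
+  type      = "Microsoft.OperationalInsights/workspaces/tables@2022-10-01"
+  name      = "CommonSecurityLog_RST"
+  parent_id = var.log_analytics_workspace_id # assumed variable: workspace resource ID
+  body = {
+    properties = {
+      restoredLogs = {
+        sourceTable      = "CommonSecurityLog"
+        startRestoreTime = "2024-09-01T00:00:00Z"
+        endRestoreTime   = "2024-09-08T00:00:00Z"
+      }
+    }
+  }
+}
+```
+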
-You can restore archived data directly from the **Search** page or from a saved search.
+Restore archived data directly from the **Search** page or from a saved search.
-1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **General**, select **Search**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Search**.
-1. Restore log data in one of two ways:
- - At the top of **Search** page, select **Restore**.
- :::image type="content" source="media/restore/search-page-restore.png" alt-text="Screenshot of restore button at the top of the search page.":::
- - Select the **Saved Searches** tab and **Restore** on the appropriate search.
- :::image type="content" source="media/restore/search-results-restore.png" alt-text="Screenshot of the restore link on a saved search.":::
+1. In Microsoft Sentinel, select **Search**. In the [Azure portal](https://portal.azure.com), this page is listed under **General**. In the [Defender portal](https://security.microsoft.com/), this page is at the Microsoft Sentinel root level.
-1. Select the table you want to restore.
-1. Select the time range of the data that you want restore.
-1. Select **Restore**.
+1. Restore log data using one of the following methods:
- :::image type="content" source="media/restore/restoration-page.png" alt-text="Screenshot of the restoration page with table and time range selected.":::
+ - Select :::image type="icon" source="media/restore/restore-button.png" border="false"::: **Restore** at the top of the page. In the **Restoration** pane on the side, select the table and time range you want to restore, and then select **Restore** at the bottom of the pane.
+
+ - Select **Saved searches**, locate the search results you want to restore, and then select **Restore**. If you have multiple tables, select the one you want to restore and then select **Actions > Restore** in the side pane. For example:
+
+ :::image type="content" source="media/restore/restore-azure.png" alt-text="Screenshot of restoring a specific site search.":::
1. Wait for the log data to be restored. View the status of your restoration job by selecting the **Restoration** tab.
View the status and results of the log data restore by going to the **Restoration** tab.
1. In Microsoft Sentinel, select **Search** > **Restoration**.
- :::image type="content" source="media/restore/restoration-tab.png" alt-text="Screenshot of the restoration tab on the search page.":::
-
-1. When your restore job is complete, select the table name.
+1. When your restore job is complete and the status is updated, select the table name and review the results.
- :::image type="content" source="media/restore/data-available-select-table.png" alt-text="Screenshot that shows rows with completed restore jobs and a table selected.":::
+ In the [Azure portal](https://portal.azure.com), results are shown in the **Logs** query page. In the [Defender portal](https://security.microsoft.com/), results are shown in the **Advanced hunting** page.
-1. Review the results.
+ For example:
:::image type="content" source="media/restore/restored-data-logs-view.png" alt-text="Screenshot that shows the logs query pane with the restored table results.":::
- The Logs query pane shows the name of table containing the restored data. The **Time range** is set to a custom time range that uses the start and end times of the restored data.
+ The **Time range** is set to a custom time range that uses the start and end times of the restored data.
## Delete restored data tables
-To save costs, we recommend you delete the restored table when you no longer need it. When you delete a restored table, Azure doesn't delete the underlying source data.
+To save costs, we recommend you delete the restored table when you no longer need it. When you delete a restored table, the underlying source data isn't deleted.
+1. In Microsoft Sentinel, select **Search** > **Restoration** and identify the table you want to delete.
-1. In Microsoft Sentinel, select **Search** > **Restoration**.
-1. Identify the table you want to delete.
-1. Select **Delete** for that table row.
-
- :::image type="content" source="media/restore/delete-restored-table.png" alt-text="Screenshot of restoration tab that shows the delete button on each row.":::
+1. Select **Delete** for that table row to delete the restored table.
## Next steps
sentinel Summary Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/summary-rules.md
This section reviews common scenarios for creating summary rules in Microsoft Se
1. **Run a subsequent search or correlation with other data** to complete the attack story.
-### Detect potential SPN scanning in your network
-
-Detect potential Service Principal Name (SPN) scanning in your network traffic.
-
-**Scenario**: You're a SOC engineer who needs to create a highly accurate detection for any SPN scanning performed by a user account. The detection currently does the following:
-
-1. Looks for Security events with EventID 4769 (A Kerberos service ticket was requested).
-1. Creates a baseline with the number of unique tickets typically requested by a user account per day.
-1. Generates an alert when there's a major deviation from that baseline.
-
-**Challenge**: The current detection runs on 14 days of data, the maximum lookback in the Analytics table, and creates many false positives. While the detection includes thresholds that are designed to prevent false positives, alerts are still generated for legitimate requests whenever there are more requests than usual. This can happen with vulnerability scanners, administration systems, and misconfigured systems. Your team saw so many false positives that it had to turn off some of the analytics rules. To create a more accurate baseline, you need more than 14 days of baseline data.
-
-The current detection also runs a summary query on a separate logic app for each alert. This involves extra work for the setup and maintenance of those logic apps, and incurs extra costs.
-
-**Solution**: We recommend using summary rules to do the following:
-
-1. Generate a daily summary of the count of unique tickets per user. This summarizes the `SecurityEvent` table data for EventID 4769, with extra filtering for specific user accounts.
-
-1. In the summary rule, to generate potential SPN scanning alerts:
-
- Reference at least 30 days' worth of summary data to create a strong baseline.
- Apply `percentile()` in your query to calculate the deviation from the baseline.
-
- For example:
-
- ```kusto
- let starttime = 14d;
- let endtime = 1d;
- let timeframe = 1h;
- let threshold=10;
- let Kerbevent = SecurityEvent
- | where TimeGenerated between(ago(starttime) .. now())
- | where EventID == 4769
- | parse EventData with * 'TicketEncryptionType">' TicketEncryptionType "<" *
- | parse EventData with * 'ServiceName">' ServiceName "<" *
- | where ServiceName !contains "$" and ServiceName !contains "krbtgt"
- | parse EventData with * 'TargetUserName">' TargetUserName "<" *
- | where TargetUserName !contains "$@" and TargetUserName !contains ServiceName
- | parse EventData with * 'IpAddress">::ffff:' ClientIPAddress "<" *;
- let baseline = Kerbevent
- | where TimeGenerated >= ago(starttime) and TimeGenerated < ago(endtime)
- | make-series baselineDay=dcount(ServiceName) default=1 on TimeGenerated in range(ago(starttime), ago(endtime), 1d) by TargetUserName
- | mvexpand TimeGenerated, baselineDay
- | extend baselineDay = toint(baselineDay)
- | summarize p95CountDay = percentile(baselineDay, 95) by TargetUserName;
- let current = Kerbevent
- | where TimeGenerated between(ago(timeframe) .. now())
- | extend encryptionType = case(TicketEncryptionType in ("0x1","0x3"), "DES", TicketEncryptionType in ("0x11","0x12"), "AES", TicketEncryptionType in ("0x17","0x18"), "RC4", "Failure")
- | where encryptionType in ("AES","DES","RC4")
- | summarize currentCount = dcount(ServiceName), ticketsRequested=make_set(ServiceName), encryptionTypes=make_set(encryptionType), ClientIPAddress=any(ClientIPAddress), Computer=any(Computer) by TargetUserName;
- current
- | join kind=leftouter baseline on TargetUserName
- | where currentCount > p95CountDay*2 and currentCount > threshold
- | project-away TargetUserName1
- | extend context_message = strcat("Potential SPN scan performed by user ", TargetUserName, "\nUser generally requests ", p95CountDay, " unique service tickets in a day.", "\nUnique service tickets requested by user in the last hour: ", currentCount)
- ```
### Generate alerts on threat intelligence matches against network data

Generate alerts on threat intelligence matches against noisy, high-volume, and low-security-value network data.
service-bus-messaging Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/duplicate-detection.md
The `MessageId` can always be some GUID, but anchoring the identifier to the bus
>- When **partitioning** is **enabled**, `MessageId+PartitionKey` is used to determine uniqueness. When sessions are enabled, partition key and session ID must be the same.
>- When **partitioning** is **disabled** (default), only `MessageId` is used to determine uniqueness.
>- For information about `SessionId`, `PartitionKey`, and `MessageId`, see [Use of partition keys](service-bus-partitioning.md#use-of-partition-keys).
+>- When you use **partitioning** and send **batches** of messages, ensure that the messages don't contain any partition-identifying properties. Because deduplication relies on explicitly set message IDs to determine uniqueness, we don't recommend using deduplication and batching together with partitioning.
> [!NOTE]
> Scheduled messages are included in duplicate detection. Therefore, if you send a scheduled message and then send a duplicate non-scheduled message, the non-scheduled message gets dropped. Similarly, if you send a non-scheduled message and then a duplicate scheduled message, the scheduled message is dropped.
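
Duplicate detection only works when senders assign a deterministic `MessageId`. The following sketch uses the Azure Service Bus client library for Java (`azure-messaging-servicebus`) to show one way to do that; the connection string, queue name, and order identifier are placeholder values, and the queue is assumed to already have duplicate detection enabled:

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class DuplicateDetectionSend {
    public static void main(String[] args) {
        // Placeholders: supply your own namespace connection string and queue name.
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
            .connectionString("<namespace-connection-string>")
            .sender()
            .queueName("<queue-with-duplicate-detection>")
            .buildClient();

        ServiceBusMessage message = new ServiceBusMessage("order payload");
        // Anchor the MessageId to the business operation (a hypothetical order ID),
        // so that a retried send of the same operation carries the same ID.
        message.setMessageId("order-42");

        sender.sendMessage(message);
        // Sending again with the same MessageId inside the duplicate detection
        // window: the broker drops the second copy instead of enqueuing it.
        sender.sendMessage(message);

        sender.close();
    }
}
```

If partitioning is enabled, the broker also factors in the partition key, so keep `MessageId` (and, where applicable, `PartitionKey`) stable across retries.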
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
The table below shows which combinations of authentication methods and clients a
|--|-|--|-|-|
| .NET | Yes | Yes | Yes | Yes |
| Java | Yes | Yes | Yes | Yes |
-| Java - Spring Boot | No | No | Yes | No |
+| Java - Spring Boot | Yes | Yes | Yes | Yes |
| Node.js | Yes | Yes | Yes | Yes |
| Python | Yes | Yes | Yes | Yes |
| Go | Yes | Yes | Yes | Yes |
Reference the connection details and sample code in the following tables, accord
### System-assigned managed identity
-For default environment variables and sample code of other authentication type, please choose from beginning of the documentation.
+#### SpringBoot client
+
+Authenticating with a system-assigned managed identity is only available for Spring Cloud Azure version 4.0 or higher.
+
+| Default environment variable name | Description | Example value |
+||--||
+| spring.cloud.azure.storage.blob.credential.managed-identity-enabled | Whether to enable managed identity | `True` |
+| spring.cloud.azure.storage.blob.account-name | Name for the storage account | `storage-account-name` |
+| spring.cloud.azure.storage.blob.endpoint | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
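
With these properties in place, Spring Cloud Azure can auto-configure a `BlobServiceClient` bean. The following is a minimal sketch, assuming the `spring-cloud-azure-starter-storage-blob` dependency is on the classpath; the class name and the container listing are illustrative only:

```java
import com.azure.storage.blob.BlobServiceClient;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BlobConnectionCheck {
    // Spring Cloud Azure builds this client from the
    // spring.cloud.azure.storage.blob.* properties shown above.
    @Bean
    CommandLineRunner listContainers(BlobServiceClient blobServiceClient) {
        return args -> blobServiceClient.listBlobContainers()
            .forEach(container -> System.out.println(container.getName()));
    }
}
```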
+
+#### Other clients
| Default environment variable name | Description | Example value |
| - | - | - |
Refer to the steps and code below to connect to Azure Blob Storage using a syste
### User-assigned managed identity
-For default environment variables and sample code of other authentication type, please choose from beginning of the documentation.
+#### SpringBoot client
+
+Authenticating with a user-assigned managed identity is only available for Spring Cloud Azure version 4.0 or higher.
+
+| Default environment variable name | Description | Example value |
+||--||
+| spring.cloud.azure.storage.blob.credential.managed-identity-enabled | Whether to enable managed identity | `True` |
+| spring.cloud.azure.storage.blob.account-name | Name for the storage account | `storage-account-name` |
+| spring.cloud.azure.storage.blob.endpoint | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
+| spring.cloud.azure.storage.blob.credential.client-id | Client ID of the user-assigned managed identity | `00001111-aaaa-2222-bbbb-3333cccc4444` |
+
+#### Other clients
| Default environment variable name | Description | Example value |
| - | - | - |
Refer to the steps and code below to connect to Azure Blob Storage using a user-
> [!WARNING]
> Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows, such as managed identities, aren't viable.
-For default environment variables and sample code of other authentication type, please choose from beginning of the documentation.
-
-#### SpringBoot client type
+#### SpringBoot client
| Application properties | Description | Example value |
| - | - | - |
For default environment variables and sample code of other authentication type,
| spring.cloud.azure.storage.blob.account-key | Your Blob Storage account key for Spring Cloud Azure version 4.0 or above | `<account-key>` | | spring.cloud.azure.storage.blob.endpoint | Your Blob Storage endpoint for Spring Cloud Azure version 4.0 or above | `https://<storage-account-name>.blob.core.windows.net/` |
-#### Other client types
+#### Other clients
+
| Default environment variable name | Description | Example value |
||--||
| AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
Refer to the steps and code below to connect to Azure Blob Storage using a conne
### Service principal
-For default environment variables and sample code of other authentication type, please choose from beginning of the documentation.
+#### SpringBoot client
+
+Authenticating with a service principal is only available for Spring Cloud Azure version 4.0 or higher.
+
+| Default environment variable name | Description | Example value |
+||--||
+| spring.cloud.azure.storage.blob.account-name | Name for the storage account | `storage-account-name` |
+| spring.cloud.azure.storage.blob.endpoint | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
+| spring.cloud.azure.storage.blob.credential.client-id | Client ID of the service principal | `00001111-aaaa-2222-bbbb-3333cccc4444` |
+| spring.cloud.azure.storage.blob.credential.client-secret | Client secret to perform service principal authentication | `Aa1Bb~2Cc3.-Dd4Ee5Ff6Gg7Hh8Ii9_Jj0Kk1Ll2` |
+
+#### Other clients
| Default environment variable name | Description | Example value |
| - | - | - |
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
The table below shows which combinations of authentication methods and clients a
|--|-|--|-|-|
| .NET | Yes | Yes | Yes | Yes |
| Java | Yes | Yes | Yes | Yes |
-| Java - Spring Boot | No | No | Yes | No |
+| Java - Spring Boot | Yes | Yes | Yes | Yes |
| Node.js | Yes | Yes | Yes | Yes |
| Python | Yes | Yes | Yes | Yes |
Use the connection details below to connect compute services to Queue Storage. F
### System-assigned managed identity
+#### SpringBoot client
+
+Authenticating with a system-assigned managed identity is only available for Spring Cloud Azure version 4.0 or higher.
+
+| Default environment variable name | Description | Example value |
+||--||
+| spring.cloud.azure.storage.queue.credential.managed-identity-enabled | Whether to enable managed identity | `True` |
+| spring.cloud.azure.storage.queue.account-name | Name for the storage account | `storage-account-name` |
+| spring.cloud.azure.storage.queue.endpoint | Queue Storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
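
As with Blob Storage, Spring Cloud Azure can auto-configure a `QueueServiceClient` from these properties. A minimal sketch, assuming the `spring-cloud-azure-starter-storage-queue` dependency is on the classpath; the queue name and message text are placeholders:

```java
import com.azure.storage.queue.QueueServiceClient;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QueueConnectionCheck {
    // Spring Cloud Azure builds this client from the
    // spring.cloud.azure.storage.queue.* properties shown above.
    @Bean
    CommandLineRunner sendTestMessage(QueueServiceClient queueServiceClient) {
        return args -> queueServiceClient
            .getQueueClient("sample-queue")   // placeholder queue name
            .sendMessage("connection check"); // placeholder message body
    }
}
```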
+
+#### Other clients
+
| Default environment variable name | Description | Example value |
| -- | - | - |
| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
Refer to the steps and code below to connect to Azure Queue Storage using a syst
### User-assigned managed identity
+#### SpringBoot client
+
+Authenticating with a user-assigned managed identity is only available for Spring Cloud Azure version 4.0 or higher.
+
+| Default environment variable name | Description | Example value |
+||--||
+| spring.cloud.azure.storage.queue.credential.managed-identity-enabled | Whether to enable managed identity | `True` |
+| spring.cloud.azure.storage.queue.account-name | Name for the storage account | `storage-account-name` |
+| spring.cloud.azure.storage.queue.endpoint | Queue Storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
+| spring.cloud.azure.storage.queue.credential.client-id | Client ID of the user-assigned managed identity | `00001111-aaaa-2222-bbbb-3333cccc4444` |
+
+#### Other clients
+
| Default environment variable name | Description | Example value |
| -- | - | - |
| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
Refer to the steps and code below to connect to Azure Queue Storage using a user
> [!WARNING]
> Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows, such as managed identities, aren't viable.
-#### SpringBoot client type
+#### SpringBoot client
+
| Application properties | Description | Example value |
|-|-|--|
Refer to the steps and code below to connect to Azure Queue Storage using a user
| spring.cloud.azure.storage.queue.account-key | Queue storage account key for Spring Cloud Azure version above 4.0 | `<account-key>` | | spring.cloud.azure.storage.queue.endpoint | Queue storage endpoint for Spring Cloud Azure version above 4.0 | `https://<storage-account-name>.queue.core.windows.net/` |
-#### Other client types
+#### Other clients
| Default environment variable name | Description | Example value |
|-|-|-|
Refer to the steps and code below to connect to Azure Queue Storage using a conn
### Service principal
+#### SpringBoot client
+
+Authenticating with a service principal is only available for Spring Cloud Azure version 4.0 or higher.
+
+| Default environment variable name | Description | Example value |
+||--||
+| spring.cloud.azure.storage.queue.account-name | Name for the storage account | `storage-account-name` |
+| spring.cloud.azure.storage.queue.endpoint | Queue Storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
+| spring.cloud.azure.storage.queue.credential.client-id | Client ID of the service principal | `00001111-aaaa-2222-bbbb-3333cccc4444` |
+| spring.cloud.azure.storage.queue.credential.client-secret | Client secret to perform service principal authentication | `Aa1Bb~2Cc3.-Dd4Ee5Ff6Gg7Hh8Ii9_Jj0Kk1Ll2` |
+
+#### Other clients
+
| Default environment variable name | Description | Example value |
| -- | - | - |
| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Title: Configure networking for Azure Elastic SAN
description: Learn how to secure Azure Elastic SAN volumes through access configuration. - Previously updated : 05/29/2024+ Last updated : 09/25/2024
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
# Mount SMB Azure file shares on Linux clients
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- Azure file shares can be mounted in Linux distributions using the [SMB kernel client](https://wiki.samba.org/index.php/LinuxCIFS). The recommended way to mount an Azure file share on Linux is using SMB 3.1.1. By default, Azure Files requires encryption in transit, which is supported by SMB 3.0+. Azure Files also supports SMB 2.1, which doesn't support encryption in transit, but you can't mount Azure file shares with SMB 2.1 from another Azure region or on-premises for security reasons. Unless your application specifically requires SMB 2.1, use SMB 3.1.1. SMB 2.1 support was added to Linux kernel version 3.7, so if you're using a version of the Linux kernel after 3.7, it should support SMB 2.1.
update-manager Tutorial Webhooks Using Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/tutorial-webhooks-using-runbooks.md
Title: Create pre and post events using a webhook with Automation runbooks. description: In this tutorial, you learn how to create the pre and post events using webhook with Automation runbooks. Previously updated : 09/04/2024 Last updated : 09/24/2024
-#Customer intent: As an IT admin, I want create pre and post events using a webhook with Automation runbooks.
+#Customer intent: As an IT admin, I want to create pre and post events using a webhook with Automation runbooks.
# Tutorial: Create pre and post events using a webhook with Automation **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers.
-Pre and post events, also known as pre/post-scripts, allow you to execute user-defined actions before and after the schedule patch installation. One of the most common scenarios is to start and stop a VM. With pre-events, you can run a prepatching script to start the VM before initiating the schedule patching process. Once the schedule patching is complete, and the server is rebooted, a post-patching script can be executed to safely shut down the VM.
+Pre and post events, also known as pre/post-scripts, allow you to execute user-defined actions before and after the scheduled patch installation. One of the most common scenarios is to start and stop a Virtual Machine (VM). With pre-events, you can run a prepatching script to start the VM before initiating the schedule patching process. Once the schedule patching is complete and the server is rebooted, a post-patching script can be executed to safely shut down the VM.
This tutorial explains how to create pre and post events to start and stop a VM in a schedule patch workflow using a webhook.
In this tutorial, you learn how to:
```powershell
param
(
    [Parameter(Mandatory=$false)]
    [object] $WebhookData
)
$notificationPayload = ConvertFrom-Json -InputObject $WebhookData.RequestBody
$eventType = $notificationPayload[0].eventType
- if ($eventType -ne ΓÇ£Microsoft.Maintenance.PreMaintenanceEventΓÇ¥ -and $eventType ΓÇône ΓÇ£Microsoft.Maintenance.PostMaintenanceEventΓÇ¥ ) {
+    if ($eventType -ne "Microsoft.Maintenance.PreMaintenanceEvent" -and $eventType -ne "Microsoft.Maintenance.PostMaintenanceEvent") {
Write-Output "Webhook not triggered as part of pre or post patching for maintenance run" return }
In this tutorial, you learn how to:
```powershell
param
(
    [Parameter(Mandatory=$false)]
    [object] $WebhookData
)
In this tutorial, you learn how to:
} ```
- 1. To customize you can use either your existing scripts with the above modifications done or use the following scripts.
+ 1. To customize, you can either use your existing scripts with the above modifications or use the following scripts.
### Sample scripts
param
(
    [Parameter(Mandatory=$false)]
    [object] $WebhookData
)
Connect-AzAccount -Identity

# Install the Resource Graph module from PowerShell Gallery
$notificationPayload = ConvertFrom-Json -InputObject $WebhookData.RequestBody
$eventType = $notificationPayload[0].eventType
-if ($eventType -ne ΓÇ£Microsoft.Maintenance.PreMaintenanceEventΓÇ¥) {
- Write-Output "Webhook not triggered as part of pre-patching for
- maintenance run"
+if ($eventType -ne "Microsoft.Maintenance.PreMaintenanceEvent") {
+ Write-Output "Webhook not triggered as part of pre-patching for maintenance run"
return }
foreach($id in $jobsList)
```powershell
param
(
    [Parameter(Mandatory=$false)]
    [object] $WebhookData
)
Connect-AzAccount -Identity

# Install the Resource Graph module from PowerShell Gallery
$notificationPayload = ConvertFrom-Json -InputObject $WebhookData.RequestBody
$eventType = $notificationPayload[0].eventType
-if ($eventType -ne ΓÇ£Microsoft.Maintenance.PostMaintenanceEventΓÇ¥) {
- Write-Output "Webhook not triggered as part of post-patching for maintenance run"
+if ($eventType -ne "Microsoft.Maintenance.PostMaintenanceEvent") {
+ Write-Output "Webhook not triggered as part of post-patching for maintenance run"
return }
if ($resourceSubscriptionIds.Count -eq 0) {
Start-Sleep -Seconds 30
Write-Output "Querying ARG to get machine details [MaintenanceRunId=$maintenanceRunId][ResourceSubscriptionIdsCount=$($resourceSubscriptionIds.Count)]"
$argQuery = @"
maintenanceresources
| where type =~ 'microsoft.maintenance/applyupdates'
| where properties.correlationId =~ '$($maintenanceRunId)'
| where id has '/providers/microsoft.compute/virtualmachines/'
| project id, resourceId = tostring(properties.resourceId)
| order by id asc
"@
foreach($id in $jobsList)
#### [Cancel a schedule](#tab/script-cancel) ```powershell
+param
+(
+ [Parameter(Mandatory=$false)]
+ [object] $WebhookData
+)
+Connect-AzAccount -Identity
+
+# Install the Resource Graph module from PowerShell Gallery
+# Install-Module -Name Az.ResourceGraph
+$notificationPayload = ConvertFrom-Json -InputObject $WebhookData.RequestBody
+$maintenanceRunId = $notificationPayload[0].data.CorrelationId
+ Invoke-AzRestMethod `
--Path "<Correlation ID from EventGrid Payload>?api-version=2023-09-01-preview" `
+-Path "$maintenanceRunId`?api-version=2023-09-01-preview" `
-Payload '{ "properties": {
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
After you create the volume, configure the volume access parameters.
5. At this point, the new volume will start to deploy. Once deployment is complete, you can use the Azure NetApp Files share.
-6. To see the mount path, select **Go to resource** and look for it in the Overview tab.
-
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the Overview screen with a red arrow pointing at the mount path.](media/overview-mount-path.png)
+6. To see the mount path, select **Go to resource** and look for it in the Overview tab. The mount path is in the format `\\<share-name>\<folder-name>`.
## Configure FSLogix on session host virtual machines (VMs)
This section is based on [Create a profile container for a host pool using a fil
5. Go to the **Overview** tab and confirm that the FSLogix profile container is using space.
-6. Connect directly to any VM part of the host pool using Remote Desktop and open the **File Explorer.** Then navigate to your **Mount path**. Within this folder, there should be a profile VHD (or VHDX).
+6. Connect directly to any VM that's part of the host pool using Remote Desktop, and open **File Explorer**. Then navigate to your **Mount path**. Within this folder, there should be a `.VHD` or `.VHDX` file for the profile.
virtual-network How To Create Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-create-encryption.md
Title: Create a virtual network with encryption - Azure portal
-description: Learn how to create an encrypted virtual network using the Azure portal. A virtual network lets Azure resources communicate with each other and with the internet.
+description: Learn how to create an encrypted virtual network by using the Azure portal. A virtual network lets Azure resources communicate with each other and the internet.
-# Create a virtual network with encryption using the Azure portal
+# Create a virtual network with encryption by using the Azure portal
-Azure Virtual Network encryption is a feature of Azure Virtual Network. Virtual network encryption allows you to seamlessly encrypt and decrypt internal network traffic over the wire, with minimal effect to performance and scale. Azure Virtual Network encryption protects data traversing your virtual network virtual machine to virtual machine and virtual machine to on-premises.
+Azure Virtual Network encryption is a feature of Azure Virtual Network. With Virtual Network encryption, you can seamlessly encrypt and decrypt internal network traffic over the wire, with minimal effect to performance and scale. Virtual Network encryption protects data that traverses your virtual network from virtual machine to virtual machine and from virtual machine to on-premises.
## Prerequisites ### [Portal](#tab/portal) -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
### [PowerShell](#tab/powershell) -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- Azure PowerShell installed locally or Azure Cloud Shell.-
+- Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Install Azure PowerShell locally or use Azure Cloud Shell.
- Sign in to Azure PowerShell and select the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).--- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command Get-InstalledModule -Name `Az.Network`. If the module requires an update, use the command Update-Module -Name `Az.Network` if necessary.
+- Ensure that your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name Az.Network`. If the module requires an update, use the command `Update-Module -Name Az.Network`, if necessary.
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
If you choose to install and use PowerShell locally, this article requires the A
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -- The how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.31.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
+## Create a virtual network
+ ### [Portal](#tab/portal) [!INCLUDE [virtual-network-create.md](~/reusable-content/ce-skilling/azure/includes/virtual-network-create.md)] ### [PowerShell](#tab/powershell)
-Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) named **test-rg** in the **eastus2** location.
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) named `test-rg` in the `eastus2` location.
```azurepowershell-interactive $rg =@{
New-AzVirtualNetwork @net
### [CLI](#tab/cli)
-Create a resource group with [az group create](/cli/azure/group#az-group-create) named **test-rg** in the **eastus2** location.
+Create a resource group with [az group create](/cli/azure/group#az-group-create) named `test-rg` in the `eastus2` location.
```azurecli-interactive az group create \
Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to
> [!IMPORTANT]
-> Azure Virtual Network encryption requires supported virtual machine SKUs in the virtual network for traffic to be encrypted. The setting **dropUnencrypted** will drop traffic between unsupported virtual machine SKUs if they are deployed in the virtual network. For more information, see [Azure Virtual Network encryption requirements](virtual-network-encryption-overview.md#requirements).
+> To encrypt traffic, Virtual Network encryption requires supported virtual machine versions in the virtual network. The setting `dropUnencrypted` drops traffic between unsupported virtual machine versions if they're deployed in the virtual network. For more information, see [Azure Virtual Network encryption requirements](virtual-network-encryption-overview.md#requirements).
## Enable encryption on a virtual network
Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to
Use the following steps to enable encryption for a virtual network.
-1. In the search box at the top of the portal, begin typing **Virtual networks**. When **Virtual networks** appears in the search results, select it.
+1. In the search box at the top of the portal, begin to enter **Virtual networks**. When **Virtual networks** appears in the search results, select it.
-1. Select **vnet-1**.
+1. Select **vnet-1** to open the **vnet-1** pane.
-1. In the **Overview** of **vnet-1**, select the **Properties** tab.
+1. On the service menu, select **Overview**, and then select the **Properties** tab.
-1. Select **Disabled** next to **Encryption**:
+1. Under **Encryption**, select **Disabled**.
- :::image type="content" source="./media/how-to-create-encryption-portal/virtual-network-properties.png" alt-text="Screenshot of properties of the virtual network.":::
+ :::image type="content" source="./media/how-to-create-encryption-portal/virtual-network-properties.png" alt-text="Screenshot that shows properties of the virtual network.":::
1. Select the box next to **Virtual network encryption**.
Use the following steps to enable encryption for a virtual network.
### [PowerShell](#tab/powershell)
-You can also enable encryption on an existing virtual network using [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork). **This step isn't necessary if you created the virtual network with encryption enabled in the previous steps.**
+You can also enable encryption on an existing virtual network by using [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork). *This step isn't necessary if you created the virtual network with encryption enabled in the previous steps.*
```azurepowershell-interactive ## Place the virtual network configuration into a variable. ##
$vnet | Set-AzVirtualNetwork
### [CLI](#tab/cli)
-You can also enable encryption on an existing virtual network using [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update). **This step isn't necessary if you created the virtual network with encryption enabled in the previous steps.**
+You can also enable encryption on an existing virtual network by using [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update). *This step isn't necessary if you created the virtual network with encryption enabled in the previous steps.*
```azurecli-interactive az network vnet update \
You can also enable encryption on an existing virtual network using [az network
-## Verify encryption enabled
+## Verify that encryption is enabled
### [Portal](#tab/portal)
-1. In the search box at the top of the portal, begin typing **Virtual networks**. When **Virtual networks** appears in the search results, select it.
+1. In the search box at the top of the portal, begin to enter **Virtual networks**. When **Virtual networks** appears in the search results, select it.
-1. Select **vnet-1**.
+1. Select **vnet-1** to open the **vnet-1** pane.
-1. In the **Overview** of **vnet-1**, select the **Properties** tab.
+1. On the service menu, select **Overview**, and then select the **Properties** tab.
1. Verify that **Encryption** is set to **Enabled**.
- :::image type="content" source="./media/how-to-create-encryption-portal/virtual-network-properties-encryption-enabled.png" alt-text="Screenshot of properties of the virtual network with encryption enabled.":::
+ :::image type="content" source="./media/how-to-create-encryption-portal/virtual-network-properties-encryption-enabled.png" alt-text="Screenshot that shows properties of the virtual network with Encryption st as Enabled.":::
### [PowerShell](#tab/powershell)
$net = @{
$vnet = Get-AzVirtualNetwork @net ```
-To view the parameter for encryption, enter the following information.
+To view the parameter for encryption, enter the following information:
```azurepowershell-interactive $vnet.Encryption
True AllowUnencrypted
+## Clean up resources
+ ### [Portal](#tab/portal) [!INCLUDE [portal-clean-up.md](~/reusable-content/ce-skilling/azure/includes/portal-clean-up.md)] ### [PowerShell](#tab/powershell)
-When no longer needed, you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains:
+When you no longer need this resource group, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all the resources it contains.
```azurepowershell-interactive $cleanup = @{
Remove-AzResourceGroup @cleanup -Force
### [CLI](#tab/cli)
-When you're done with the virtual network, use [az group delete](/cli/azure/group#az-group-delete) to remove the resource group and all its resources.
+When you finish with the virtual network, use [az group delete](/cli/azure/group#az-group-delete) to remove the resource group and all its resources.
```azurecli-interactive az group delete \
az group delete \
-## Next steps
--- For more information about Azure Virtual Networks, see [What is Azure Virtual Network?](/azure/virtual-network/virtual-networks-overview)
+## Related content
-- For more information about Azure Virtual Network encryption, see [What is Azure Virtual Network encryption?](virtual-network-encryption-overview.md)
+- For more information about Virtual Network, see [What is Azure Virtual Network?](/azure/virtual-network/virtual-networks-overview).
+- For more information about Virtual Network encryption, see [What is Azure Virtual Network encryption?](virtual-network-encryption-overview.md).
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
Title: Create, change, or delete an Azure network security group
-description: Learn how to create, change, or delete a network security group (NSG).
+description: Learn how to create, change, or delete an Azure network security group (NSG).
# Create, change, or delete a network security group
-Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. To learn more about network security groups, see [Network security group overview](./network-security-groups-overview.md). Next, complete the [Filter network traffic](tutorial-filter-network-traffic.md) tutorial to gain some experience with network security groups.
+When you use security rules in network security groups (NSGs), you can filter the type of network traffic that flows in and out of virtual network subnets and network interfaces. To learn more about NSGs, see [Network security group overview](./network-security-groups-overview.md). Next, complete the [Filter network traffic](tutorial-filter-network-traffic.md) tutorial to gain some experience with NSGs.
## Prerequisites
-If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article:
+If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before you start the remainder of this article:
- **Portal users**: Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.--- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
+- **PowerShell users**: Either run the commands in [Azure Cloud Shell](https://shell.azure.com/powershell) or run PowerShell locally from your computer. Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools that are preinstalled and configured to use with your account. On the Cloud Shell browser tab, find the **Select environment** dropdown list. Then select **PowerShell** if it isn't already selected.
If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure. -- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
+- **Azure CLI users**: Either run the commands in [Cloud Shell](https://shell.azure.com/bash) or run the Azure CLI locally from your computer. Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools that are preinstalled and configured to use with your account. On the Cloud Shell browser tab, find the **Select environment** dropdown list. Then select **Bash** if it isn't already selected.
- If you're running Azure CLI locally, use Azure CLI version 2.0.28 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
+ If you're running the Azure CLI locally, use Azure CLI version 2.0.28 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
-Assign the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) or a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) with the appropriate [Permissions](#permissions).
+Assign the [Network Contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) or a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) with the appropriate [permissions](#permissions).
## Work with network security groups
-You can create, [view all](#view-all-network-security-groups), [view details of](#view-details-of-a-network-security-group), [change](#change-a-network-security-group), and [delete](#delete-a-network-security-group) a network security group. You can also associate or dissociate a network security group from [a network interface](#associate-or-dissociate-a-network-security-group-to-or-from-a-network-interface) or [subnet](#associate-or-dissociate-a-network-security-group-to-or-from-a-subnet).
+You can create, [view all](#view-all-network-security-groups), [view details of](#view-details-of-a-network-security-group), [change](#change-a-network-security-group), and [delete](#delete-a-network-security-group) an NSG. You can also associate or dissociate an NSG from a [network interface](#associate-or-dissociate-a-network-security-group-to-or-from-a-network-interface) or a [subnet](#associate-or-dissociate-a-network-security-group-to-or-from-a-subnet).
### Create a network security group
-There's a limit to how many network security groups you can create for each Azure region and subscription. To learn more, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).
+The number of NSGs that you can create for each Azure region and subscription is limited. To learn more, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group*. Select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network security group**. Select **Network security groups** in the search results.
-2. Select **+ Create**.
+1. Select **+ Create**.
-3. In the **Create network security group** page, under the **Basics** tab, enter or select the following values:
+1. On the **Create network security group** page, under the **Basics** tab, enter or select the following values:
| Setting | Action | | | | | **Project details** | | | Subscription | Select your Azure subscription. |
- | Resource group | Select an existing resource group, or create a new one by selecting **Create new**. This example uses **myResourceGroup** resource group. |
+ | Resource group | Select an existing resource group, or create a new one by selecting **Create new**. This example uses the `myResourceGroup` resource group. |
| **Instance details** | |
- | Network security group name | Enter a name for the network security group you're creating. |
- | Region | Select the region you want. |
+ | Network security group name | Enter a name for the NSG that you're creating. |
+ | Region | Select the region that you want. |
- :::image type="content" source="./media/manage-network-security-group/create-network-security-group.png" alt-text="Screenshot of create network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/create-network-security-group.png" alt-text="Screenshot that shows creating a NSG in the Azure portal.":::
-4. Select **Review + create**.
+1. Select **Review + create**.
-5. After you see the **Validation passed** message, select **Create**.
+1. After you see the **Validation passed** message, select **Create**.
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) to create a network security group named **myNSG** in **East US** region. **myNSG** is created in the existing **myResourceGroup** resource group.
+Use [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) to create an NSG named `myNSG` in the **East US** region. The NSG named `myNSG` is created in the existing `myResourceGroup` resource group.
```azurepowershell-interactive New-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup -Location eastus
New-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup -Loca
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to create a network security group named **myNSG** in the existing **myResourceGroup** resource group.
+Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to create an NSG named `myNSG` in the existing `myResourceGroup` resource group.
```azurecli-interactive az network nsg create --resource-group MyResourceGroup --name myNSG
az network nsg create --resource-group MyResourceGroup --name myNSG
# [**Portal**](#tab/network-security-group-portal)
-In the search box at the top of the portal, enter *Network security group*. Select **Network security groups** in the search results to see the list of network security groups in your subscription.
+In the search box at the top of the portal, enter **Network security group**. Select **Network security groups** in the search results to see the list of NSGs in your subscription.
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) to list all network security groups in your subscription.
+Use [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) to list all the NSGs in your subscription.
```azurepowershell-interactive Get-AzNetworkSecurityGroup | format-table Name, Location, ResourceGroupName, ProvisioningState, ResourceGuid
Get-AzNetworkSecurityGroup | format-table Name, Location, ResourceGroupName, Pro
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg list](/cli/azure/network/nsg#az-network-nsg-list) to list all network security groups in your subscription.
+Use [az network nsg list](/cli/azure/network/nsg#az-network-nsg-list) to list all the NSGs in your subscription.
```azurecli-interactive az network nsg list --out table
az network nsg list --out table
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
-
-2. Select the name of your network security group.
-
-Under **Settings**, you can view the **Inbound security rules**, **Outbound security rules**, **Network interfaces**, and **Subnets** that the network security group is associated to.
+1. In the search box at the top of the portal, enter **Network security group** and select **Network security groups** in the search results.
-Under **Monitoring**, you can enable or disable **Diagnostic settings**. For more information, see [Resource logging for a network security group](virtual-network-nsg-manage-log.md).
+1. Select the name of your NSG.
-Under **Help**, you can view **Effective security rules**. For more information, see [Diagnose a virtual machine network traffic filter problem](diagnose-network-traffic-filter-problem.md).
+ - Under **Settings**, view the **Inbound security rules**, **Outbound security rules**, **Network interfaces**, and **Subnets** to which the NSG is associated.
+ - Under **Monitoring**, enable or disable **Diagnostic settings**. For more information, see [Resource logging for a network security group](virtual-network-nsg-manage-log.md).
+ - Under **Help**, view **Effective security rules**. For more information, see [Diagnose a virtual machine (VM) network traffic filter problem](diagnose-network-traffic-filter-problem.md).
+ :::image type="content" source="./media/manage-network-security-group/network-security-group-details-inline.png" alt-text="Screenshot that shows the Network security group page in the Azure portal." lightbox="./media/manage-network-security-group/network-security-group-details-expanded.png":::
-To learn more about the common Azure settings listed, see the following articles:
+To learn more about the common Azure settings that are listed, see the following articles:
- [Activity log](/azure/azure-monitor/essentials/platform-logs-overview)--- [Access control (IAM)](../role-based-access-control/overview.md)-
+- [Access control identity and access management (IAM)](../role-based-access-control/overview.md)
- [Tags](../azure-resource-manager/management/tag-resources.md)
- [Locks](../azure-resource-manager/management/lock-resources.md)
- [Automation script](../azure-resource-manager/templates/export-template-portal.md)

# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) to view details of a network security group.
+Use [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) to view the details of an NSG.
```azurepowershell-interactive Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup ```
-To learn more about the common Azure settings listed, see the following articles:
+To learn more about the common Azure settings that are listed, see the following articles:
- [Activity log](/azure/azure-monitor/essentials/platform-logs-overview)
- [Access control (IAM)](../role-based-access-control/overview.md)
- [Tags](../azure-resource-manager/management/tag-resources.md)
- [Locks](../azure-resource-manager/management/lock-resources.md)

# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg show](/cli/azure/network/nsg#az-network-nsg-show) to view details of a network security group.
+Use [az network nsg show](/cli/azure/network/nsg#az-network-nsg-show) to view the details of an NSG.
```azurecli-interactive az network nsg show --resource-group myResourceGroup --name myNSG ```
-To learn more about the common Azure settings listed, see the following articles:
+To learn more about the common Azure settings that are listed, see the following articles:
- [Activity log](/azure/azure-monitor/essentials/platform-logs-overview)
- [Access control (IAM)](../role-based-access-control/overview.md)
- [Tags](../azure-resource-manager/management/tag-resources.md)
- [Locks](../azure-resource-manager/management/lock-resources.md)

### Change a network security group
-The most common changes to a network security group are:
-- [Associate or dissociate a network security group to or from a network interface](#associate-or-dissociate-a-network-security-group-to-or-from-a-network-interface)
+The most common changes to an NSG are:
+- [Associate or dissociate a network security group to or from a network interface](#associate-or-dissociate-a-network-security-group-to-or-from-a-network-interface)
- [Associate or dissociate a network security group to or from a subnet](#associate-or-dissociate-a-network-security-group-to-or-from-a-subnet)
- [Create a security rule](#create-a-security-rule)
- [Delete a security rule](#delete-a-security-rule)

### Associate or dissociate a network security group to or from a network interface
-For more information about the association and dissociation of a network security group, see [Associate or dissociate a network security group](virtual-network-network-interface.md#associate-or-dissociate-a-network-security-group).
+For more information about the association and dissociation of an NSG, see [Associate or dissociate a network security group](virtual-network-network-interface.md#associate-or-dissociate-a-network-security-group).
### Associate or dissociate a network security group to or from a subnet # [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network security group**. Then select **Network security groups** in the search results.
-2. Select the name of your network security group, then select **Subnets**.
+1. Select the name of your NSG, and then select **Subnets**.
-To associate a network security group to the subnet, select **+ Associate**, then select your virtual network and the subnet that you want to associate the network security group to. Select **OK**.
+ - To associate an NSG to the subnet, select **+ Associate**. Then select your virtual network and the subnet to which you want to associate the NSG. Select **OK**.
+ :::image type="content" source="./media/manage-network-security-group/associate-subnet-network-security-group.png" alt-text="Screenshot that shows associating a network security group to a subnet in the Azure portal.":::
-To dissociate a network security group from the subnet, select the three dots next to the subnet that you want to dissociate the network security group from, and then select **Dissociate**. Select **Yes**.
+ - To dissociate an NSG from the subnet, select the three dots next to the subnet from which you want to dissociate the NSG, and then select **Dissociate**. Select **Yes**.
+ :::image type="content" source="./media/manage-network-security-group/dissociate-subnet-network-security-group.png" alt-text="Screenshot that shows dissociating an NSG from a subnet in the Azure portal.":::
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to associate or dissociate a network security group to or from a subnet.
+Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to associate or dissociate an NSG to or from a subnet.
```azurepowershell-interactive ## Place the virtual network configuration into a variable. ##
Set-AzVirtualNetwork -VirtualNetwork $virtualNetwork
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to associate or dissociate a network security group to or from a subnet.
+Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to associate or dissociate an NSG to or from a subnet.
```azurecli-interactive az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet --name mySubnet --network-security-group myNSG
az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNe
### Delete a network security group
-If a network security group is associated to any subnets or network interfaces, it can't be deleted. Dissociate a network security group from all subnets and network interfaces before attempting to delete it.
+If an NSG is associated to any subnets or network interfaces, it can't be deleted. Dissociate an NSG from all subnets and network interfaces before you attempt to delete it.
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network security group**. Then select **Network security groups** in the search results.
-2. Select the network security group you want to delete.
+1. Select the NSG that you want to delete.
-3. Select **Delete**, then select **Yes** in the confirmation dialog box.
+1. Select **Delete**, and then select **Yes** in the confirmation dialog box.
- :::image type="content" source="./media/manage-network-security-group/delete-network-security-group.png" alt-text="Screenshot of delete a network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/delete-network-security-group.png" alt-text="Screenshot that shows deleting a network security group in the Azure portal.":::
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Remove-AzNetworkSecurityGroup](/powershell/module/az.network/remove-aznetworksecuritygroup) to delete a network security group.
+Use [Remove-AzNetworkSecurityGroup](/powershell/module/az.network/remove-aznetworksecuritygroup) to delete an NSG.
```azurepowershell-interactive Remove-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
Remove-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg delete](/cli/azure/network/nsg#az-network-nsg-delete) to delete a network security group.
+Use [az network nsg delete](/cli/azure/network/nsg#az-network-nsg-delete) to delete an NSG.
```azurecli-interactive az network nsg delete --resource-group myResourceGroup --name myNSG
az network nsg delete --resource-group myResourceGroup --name myNSG
## Work with security rules
-A network security group contains zero or more security rules. You can [create](#create-a-security-rule), [view all](#view-all-security-rules), [view details of](#view-details-of-a-security-rule), [change](#change-a-security-rule), and [delete](#delete-a-security-rule) a security rule.
+An NSG contains zero or more security rules. You can [create](#create-a-security-rule), [view all](#view-all-security-rules), [view details of](#view-the-details-of-a-security-rule), [change](#change-a-security-rule), and [delete](#delete-a-security-rule) a security rule.
### Create a security rule
-There's a limit to how many rules per network security group you can create for each Azure location and subscription. To learn more, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).
+The number of rules per NSG that you can create for each Azure location and subscription is limited. To learn more, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network security group**. Then select **Network security groups** in the search results.
-2. Select the name of the network security group you want to add a security rule to.
+1. Select the name of the NSG to which you want to add a security rule.
-3. Select **Inbound security rules** or **Outbound security rules**.
+1. Select **Inbound security rules** or **Outbound security rules**.
- Several existing rules are listed, including some you may not have added. When you create a network security group, several default security rules are created in it. To learn more, see [default security rules](./network-security-groups-overview.md#default-security-rules). You can't delete default security rules, but you can override them with rules that have a higher priority.
+ Several existing rules are listed, including some that you might not have added. When you create an NSG, several default security rules are created in it. To learn more, see [Default security rules](./network-security-groups-overview.md#default-security-rules). You can't delete default security rules, but you can override them with rules that have a higher priority.
-4. <a name="security-rule-settings"></a>Select **+ Add**. Select or add values for the following settings, and then select **Add**:
+1. <a name="security-rule-settings"></a>Select **+ Add**. Select or add values for the following settings, and then select **Add**.
| Setting | Value | Details |
| --- | --- | --- |
- | **Source** | One of:<ul><li>**Any**</li><li>**IP Addresses**</li><li>**My IP address**</li><li>**Service Tag**</li><li>**Application security group**</li></ul> | <p>If you choose **IP Addresses**, you must also specify **Source IP addresses/CIDR ranges**.</p><p>If you choose **Service Tag**, you must also pick a **Source service tag**.</p><p>If you choose **Application security group**, you must also pick an existing application security group. If you choose **Application security group** for both **Source** and **Destination**, the network interfaces within both application security groups must be in the same virtual network. Learn how to [create an application security group](#create-an-application-security-group).</p> |
- | **Source IP addresses/CIDR ranges** | A comma-delimited list of IP addresses and Classless Interdomain Routing (CIDR) ranges | <p>This setting appears if you set **Source** to **IP Addresses**. You must specify a single value or comma-separated list of multiple values. An example of multiple values is `10.0.0.0/16, 192.188.1.1`. There are limits to the number of values you can specify. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).</p><p>If the IP address you specify is assigned to an Azure VM, specify its private IP address, not its public IP address. Azure processes security rules after it translates the public IP address to a private IP address for inbound security rules, but before it translates a private IP address to a public IP address for outbound rules. To learn more about IP addresses in Azure, see [Public IP addresses](./ip-services/public-ip-addresses.md) and [Private IP addresses](./ip-services/private-ip-addresses.md).</p> |
+ | **Source** | One of:<ul><li>**Any**</li><li>**IP Addresses**</li><li>**My IP address**</li><li>**Service Tag**</li><li>**Application security group**</li></ul> | <p>If you select **IP Addresses**, you must also specify **Source IP addresses/CIDR ranges**.</p><p>If you select **Service Tag**, you must also select a **Source service tag**.</p><p>If you select **Application security group**, you must also select an existing application security group. If you select **Application security group** for both **Source** and **Destination**, the network interfaces within both application security groups must be in the same virtual network. Learn how to [create an application security group](#create-an-application-security-group).</p> |
+ | **Source IP addresses/CIDR ranges** | A comma-delimited list of IP addresses and Classless Interdomain Routing (CIDR) ranges | <p>This setting appears if you set **Source** to **IP Addresses**. You must specify a single value or comma-separated list of multiple values. An example of multiple values is `10.0.0.0/16, 192.188.1.1`. The number of values that you can specify is limited. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).</p><p>If the IP address that you specify is assigned to an Azure VM, specify its private IP address, not its public IP address. Azure processes security rules after it translates the public IP address to a private IP address for inbound security rules, but before it translates a private IP address to a public IP address for outbound rules. To learn more about IP addresses in Azure, see [Public IP addresses](./ip-services/public-ip-addresses.md) and [Private IP addresses](./ip-services/private-ip-addresses.md).</p> |
| **Source service tag** | A service tag from the dropdown list | This setting appears if you set **Source** to **Service Tag** for a security rule. A service tag is a predefined identifier for a category of IP addresses. To learn more about available service tags, and what each tag represents, see [Service tags](../virtual-network/service-tags-overview.md). |
| **Source application security group** | An existing application security group | This setting appears if you set **Source** to **Application security group**. Select an application security group that exists in the same region as the network interface. Learn how to [create an application security group](#create-an-application-security-group). |
- | **Source port ranges** | One of:<ul><li>A single port, such as `80`</li><li>A range of ports, such as `1024-65535`</li><li>A comma-separated list of single ports and/or port ranges, such as `80, 1024-65535`</li><li>An asterisk (`*`) to allow traffic on any port</li></ul> | This setting specifies the ports on which the rule allows or denies traffic. There are limits to the number of ports you can specify. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits). |
- | **Destination** | One of:<ul><li>**Any**</li><li>**IP Addresses**</li><li>**Service Tag**</li><li>**Application security group**</li></ul> | <p>If you choose **IP addresses**, you must also specify **Destination IP addresses/CIDR ranges**.</p><p>If you choose **Service Tag**, you must also pick a **Destination service tag**.</p><p>If you choose **Application security group**, you must also select an existing application security group. If you choose **Application security group** for both **Source** and **Destination**, the network interfaces within both application security groups must be in the same virtual network. Learn how to [create an application security group](#create-an-application-security-group).</p> |
- | **Destination IP addresses/CIDR ranges** | A comma-delimited list of IP addresses and CIDR ranges | <p>This setting appears if you change **Destination** to **IP Addresses**. Similar to **Source** and **Source IP addresses/CIDR ranges**, you can specify single or multiple addresses or ranges. There are limits to the number you can specify. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).</p><p>If the IP address you specify is assigned to an Azure VM, ensure that you specify its private IP, not its public IP address. Azure processes security rules after it translates the public IP address to a private IP address for inbound security rules, but before Azure translates a private IP address to a public IP address for outbound rules. To learn more about IP addresses in Azure, see [Public IP addresses](./ip-services/public-ip-addresses.md) and [Private IP addresses](./ip-services/private-ip-addresses.md).</p> |
+ | **Source port ranges** | One of:<ul><li>A single port, such as `80`</li><li>A range of ports, such as `1024-65535`</li><li>A comma-separated list of single ports and/or port ranges, such as `80, 1024-65535`</li><li>An asterisk (`*`) to allow traffic on any port</li></ul> | This setting specifies the ports on which the rule allows or denies traffic. The number of ports that you can specify is limited. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits). |
+ | **Destination** | One of:<ul><li>**Any**</li><li>**IP Addresses**</li><li>**Service Tag**</li><li>**Application security group**</li></ul> | <p>If you select **IP Addresses**, you must also specify **Destination IP addresses/CIDR ranges**.</p><p>If you select **Service Tag**, you must also select a **Destination service tag**.</p><p>If you select **Application security group**, you must also select an existing application security group. If you select **Application security group** for both **Source** and **Destination**, the network interfaces within both application security groups must be in the same virtual network. Learn how to [create an application security group](#create-an-application-security-group).</p> |
+ | **Destination IP addresses/CIDR ranges** | A comma-delimited list of IP addresses and CIDR ranges | <p>This setting appears if you change **Destination** to **IP Addresses**. You can specify single or multiple addresses or ranges like you can do with **Source** and **Source IP addresses/CIDR ranges**. The number that you can specify is limited. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).</p><p>If the IP address that you specify is assigned to an Azure VM, ensure that you specify its private IP, not its public IP address. Azure processes security rules after it translates the public IP address to a private IP address for inbound security rules, but before Azure translates a private IP address to a public IP address for outbound rules. To learn more about IP addresses in Azure, see [Public IP addresses](./ip-services/public-ip-addresses.md) and [Private IP addresses](./ip-services/private-ip-addresses.md).</p> |
| **Destination service tag** | A service tag from the dropdown list | This setting appears if you set **Destination** to **Service Tag** for a security rule. A service tag is a predefined identifier for a category of IP addresses. To learn more about available service tags, and what each tag represents, see [Service tags](../virtual-network/service-tags-overview.md). |
| **Destination application security group** | An existing application security group | This setting appears if you set **Destination** to **Application security group**. Select an application security group that exists in the same region as the network interface. Learn how to [create an application security group](#create-an-application-security-group). |
- | **Service** | A destination protocol from the dropdown list | This setting specifies the destination protocol and port range for the security rule. You can choose a predefined service, like **RDP**, or choose **Custom** and provide the port range in **Destination port ranges**. |
- | **Destination port ranges** | One of:<ul><li>A single port, such as `80`</li><li>A range of ports, such as `1024-65535`</li><li>A comma-separated list of single ports and/or port ranges, such as `80, 1024-65535`</li><li>An asterisk (`*`) to allow traffic on any port</li></ul> | As with **Source port ranges**, you can specify single or multiple ports and ranges. There are limits to the number you can specify. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits). |
- | **Protocol** | **Any**, **TCP**, **UDP**, or **ICMP** | You may restrict the rule to the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or Internet Control Message Protocol (ICMP). The default is for the rule to apply to all protocols (Any). |
+ | **Service** | A destination protocol from the dropdown list | This setting specifies the destination protocol and port range for the security rule. You can select a predefined service, like **RDP**, or select **Custom** and provide the port range in **Destination port ranges**. |
+ | **Destination port ranges** | One of:<ul><li>A single port, such as `80`</li><li>A range of ports, such as `1024-65535`</li><li>A comma-separated list of single ports and/or port ranges, such as `80, 1024-65535`</li><li>An asterisk (`*`) to allow traffic on any port</li></ul> | As with **Source port ranges**, you can specify single or multiple ports and ranges. The number that you can specify is limited. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits). |
+ | **Protocol** | **Any**, **TCP**, **UDP**, or **ICMP** | You can restrict the rule to the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or Internet Control Message Protocol (ICMP). The default is for the rule to apply to all protocols (**Any**). |
| **Action** | **Allow** or **Deny** | This setting specifies whether this rule allows or denies access for the supplied source and destination configuration. |
- | **Priority** | A value between 100 and 4096 that's unique for all security rules within the network security group | Azure processes security rules in priority order. The lower the number, the higher the priority. We recommend that you leave a gap between priority numbers when you create rules, such as 100, 200, and 300. Leaving gaps makes it easier to add rules in the future, so that you can give them higher or lower priority than existing rules. |
- | **Name** | A unique name for the rule within the network security group | The name can be up to 80 characters. It must begin with a letter or number, and it must end with a letter, number, or underscore. The name may contain only letters, numbers, underscores, periods, or hyphens. |
- | **Description** | A text description | You may optionally specify a text description for the security rule. The description can't be longer than 140 characters. |
+ | **Priority** | A value between 100 and 4,096 that's unique for all security rules within the NSG | Azure processes security rules in priority order. The lower the number, the higher the priority. We recommend that you leave a gap between priority numbers when you create rules, such as 100, 200, and 300. Leaving gaps makes it easier to add rules in the future so that you can give them higher or lower priority than existing rules. |
+ | **Name** | A unique name for the rule within the NSG | The name can be up to 80 characters. It must begin with a letter or number, and it must end with a letter, number, or underscore. The name can contain only letters, numbers, underscores, periods, or hyphens. |
+ | **Description** | A text description | You can optionally specify a text description for the security rule. The description can't be longer than 140 characters. |
- :::image type="content" source="./media/manage-network-security-group/add-security-rule.png" alt-text="Screenshot of add a security rule to a network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/add-security-rule.png" alt-text="Screenshot that shows adding a security rule to a network security group in the Azure portal.":::
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Add-AzNetworkSecurityRuleConfig](/powershell/module/az.network/add-aznetworksecurityruleconfig) to create a network security group rule.
+Use [Add-AzNetworkSecurityRuleConfig](/powershell/module/az.network/add-aznetworksecurityruleconfig) to create an NSG rule.
```azurepowershell-interactive
## Place the network security group configuration into a variable. ##
$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
## Add the rule to the configuration. The rule parameters shown are a representative inbound RDP allow rule. ##
Add-AzNetworkSecurityRuleConfig -Name RDP-rule -NetworkSecurityGroup $networkSecurityGroup -Access Allow -Protocol Tcp -Direction Inbound -Priority 300 -SourceAddressPrefix Internet -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389
## Save the new rule to the network security group. ##
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $networkSecurityGroup
```
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create a network security group rule.
+Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create an NSG rule.
```azurecli-interactive
# The parameters on the continuation line are a representative completion (inbound RDP allow rule).
az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name RDP-rule --priority 300 \
    --access Allow --direction Inbound --protocol Tcp --destination-port-ranges 3389
```
### View all security rules
-A network security group contains zero or more rules. To learn more about the information listed when viewing rules, see [Security rules](./network-security-groups-overview.md#security-rules).
+An NSG contains zero or more rules. To learn more about the information listed when you view rules, see [Security rules](./network-security-groups-overview.md#security-rules).
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network security group**. Then select **Network security groups** in the search results.
-2. Select the name of the network security group that you want to view the rules for.
+1. Select the name of the NSG for which you want to view the rules.
-3. Select **Inbound security rules** or **Outbound security rules**.
+1. Select **Inbound security rules** or **Outbound security rules**.
- The list contains any rules you've created and the [default security rules](./network-security-groups-overview.md#default-security-rules) of your network security group.
+ The list contains any rules that you created and the [default security rules](./network-security-groups-overview.md#default-security-rules) of your NSG.
- :::image type="content" source="./media/manage-network-security-group/view-security-rules.png" alt-text="Screenshot of inbound security rules of a network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/view-security-rules.png" alt-text="Screenshot that shows inbound security rules of a network security group in the Azure portal.":::
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Get-AzNetworkSecurityRuleConfig](/powershell/module/az.network/get-aznetworksecurityruleconfig) to view security rules of a network security group.
+Use [Get-AzNetworkSecurityRuleConfig](/powershell/module/az.network/get-aznetworksecurityruleconfig) to view the security rules of an NSG.
```azurepowershell-interactive
## Place the network security group configuration into a variable. ##
$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
## List the rules. The selected columns are illustrative. ##
Get-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $networkSecurityGroup | format-table Name, Priority, Direction, Access
```
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg rule list](/cli/azure/network/nsg/rule#az-network-nsg-rule-list) to view security rules of a network security group.
+Use [az network nsg rule list](/cli/azure/network/nsg/rule#az-network-nsg-rule-list) to view the security rules of an NSG.
```azurecli-interactive
az network nsg rule list --resource-group myResourceGroup --nsg-name myNSG
```
-### View details of a security rule
+### View the details of a security rule
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network security group**. Then select **Network security groups** in the search results.
-2. Select the name of the network security group that you want to view the rules for.
+1. Select the name of the NSG for which you want to view the rules.
-3. Select **Inbound security rules** or **Outbound security rules**.
+1. Select **Inbound security rules** or **Outbound security rules**.
-4. Select the rule you want to view details for. For an explanation of all settings, see [Security rule settings](#security-rule-settings).
+1. Select the rule for which you want to view details. For an explanation of all settings, see [Security rule settings](#security-rule-settings).
- > [!NOTE]
- > This procedure only applies to a custom security rule. It doesn't work if you choose a default security rule.
+ > [!NOTE]
+ > This procedure applies only to a custom security rule. It doesn't work if you choose a default security rule.
- :::image type="content" source="./media/manage-network-security-group/view-security-rule-details.png" alt-text="Screenshot of details of an inbound security rule of a network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/view-security-rule-details.png" alt-text="Screenshot that shows the details of an inbound security rule of a network security group in the Azure portal.":::
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Get-AzNetworkSecurityRuleConfig](/powershell/module/az.network/get-aznetworksecurityruleconfig) to view details of a security rule.
+Use [Get-AzNetworkSecurityRuleConfig](/powershell/module/az.network/get-aznetworksecurityruleconfig) to view the details of a security rule.
```azurepowershell-interactive
## Place the network security group configuration into a variable. ##
$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
## View the details of the RDP-rule security rule. ##
Get-AzNetworkSecurityRuleConfig -Name RDP-rule -NetworkSecurityGroup $networkSecurityGroup
```
> [!NOTE]
-> This procedure only applies to a custom security rule. It doesn't work if you choose a default security rule.
+> This procedure applies only to a custom security rule. It doesn't work if you choose a default security rule.
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg rule show](/cli/azure/network/nsg/rule#az-network-nsg-rule-show) to view details of a security rule.
+Use [az network nsg rule show](/cli/azure/network/nsg/rule#az-network-nsg-rule-show) to view the details of a security rule.
```azurecli-interactive
az network nsg rule show --resource-group myResourceGroup --nsg-name myNSG --name RDP-rule
```
> [!NOTE]
-> This procedure only applies to a custom security rule. It doesn't work if you choose a default security rule.
+> This procedure applies only to a custom security rule. It doesn't work if you choose a default security rule.
### Change a security rule
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network security group**. Then select **Network security groups** in the search results.
-2. Select the name of the network security group that you want to view the rules for.
+1. Select the name of the NSG for which you want to view the rules.
-3. Select **Inbound security rules** or **Outbound security rules**.
+1. Select **Inbound security rules** or **Outbound security rules**.
-4. Select the rule you want to change.
+1. Select the rule that you want to change.
-5. Change the settings as needed, and then select **Save**. For an explanation of all settings, see [Security rule settings](#security-rule-settings).
+1. Change the settings as needed, and then select **Save**. For an explanation of all settings, see [Security rule settings](#security-rule-settings).
- :::image type="content" source="./media/manage-network-security-group/change-security-rule.png" alt-text="Screenshot of change of an inbound security rule details of a network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/change-security-rule.png" alt-text="Screenshot that shows changing the inbound security rule details of a network security group in the Azure portal.":::
> [!NOTE]
- > This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
+ > This procedure applies only to a custom security rule. You aren't allowed to change a default security rule.
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Set-AzNetworkSecurityRuleConfig](/powershell/module/az.network/set-aznetworksecurityruleconfig) to update a network security group rule.
+Use [Set-AzNetworkSecurityRuleConfig](/powershell/module/az.network/set-aznetworksecurityruleconfig) to update an NSG rule.
```azurepowershell-interactive
## Place the network security group configuration into a variable. ##
$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
## Update the rule. The rule parameters shown are a representative completion with the new priority. ##
Set-AzNetworkSecurityRuleConfig -Name RDP-rule -NetworkSecurityGroup $networkSecurityGroup -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 -SourceAddressPrefix Internet -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389
## Save the updated rule to the network security group. ##
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $networkSecurityGroup
```
> [!NOTE]
-> This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
+> This procedure applies only to a custom security rule. You aren't allowed to change a default security rule.
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg rule update](/cli/azure/network/nsg/rule#az-network-nsg-rule-update) to update a network security group rule.
+Use [az network nsg rule update](/cli/azure/network/nsg/rule#az-network-nsg-rule-update) to update an NSG rule.
```azurecli-interactive
az network nsg rule update --resource-group myResourceGroup --nsg-name myNSG --name RDP-rule --priority 200
```
> [!NOTE]
-> This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
+> This procedure applies only to a custom security rule. You aren't allowed to change a default security rule.
### Delete a security rule
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network security group**. Then select **Network security groups** in the search results.
-2. Select the name of the network security group that you want to view the rules for.
+1. Select the name of the NSG for which you want to view the rules.
-3. Select **Inbound security rules** or **Outbound security rules**.
+1. Select **Inbound security rules** or **Outbound security rules**.
-4. Select the rules you want to delete.
+1. Select the rules that you want to delete.
-5. Select **Delete**, and then select **Yes**.
+1. Select **Delete**, and then select **Yes**.
- :::image type="content" source="./media/manage-network-security-group/delete-security-rule.png" alt-text="Screenshot of delete of an inbound security rule of a network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/delete-security-rule.png" alt-text="Screenshot that shows deleting an inbound security rule of a network security group in the Azure portal.":::
> [!NOTE]
- > This procedure only applies to a custom security rule. You aren't allowed to delete a default security rule.
+ > This procedure applies only to a custom security rule. You aren't allowed to delete a default security rule.
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Remove-AzNetworkSecurityRuleConfig](/powershell/module/az.network/remove-aznetworksecurityruleconfig) to delete a security rule from a network security group.
+Use [Remove-AzNetworkSecurityRuleConfig](/powershell/module/az.network/remove-aznetworksecurityruleconfig) to delete a security rule from an NSG.
```azurepowershell-interactive
## Place the network security group configuration into a variable. ##
$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
## Remove the rule from the configuration, then save the change. ##
Remove-AzNetworkSecurityRuleConfig -Name RDP-rule -NetworkSecurityGroup $networkSecurityGroup
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $networkSecurityGroup
```
> [!NOTE]
-> This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
+> This procedure applies only to a custom security rule. You aren't allowed to change a default security rule.
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg rule delete](/cli/azure/network/nsg/rule#az-network-nsg-rule-delete) to delete a security rule from a network security group.
+Use [az network nsg rule delete](/cli/azure/network/nsg/rule#az-network-nsg-rule-delete) to delete a security rule from an NSG.
```azurecli-interactive
az network nsg rule delete --resource-group myResourceGroup --nsg-name myNSG --name RDP-rule
```
> [!NOTE]
-> This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
+> This procedure applies only to a custom security rule. You aren't allowed to change a default security rule.
## Work with application security groups
-An application security group contains zero or more network interfaces. To learn more, see [application security groups](./network-security-groups-overview.md#application-security-groups). All network interfaces in an application security group must exist in the same virtual network. To learn how to add a network interface to an application security group, see [Add a network interface to an application security group](virtual-network-network-interface.md#add-or-remove-from-application-security-groups).
+An application security group contains zero or more network interfaces. To learn more, see [Application security groups](./network-security-groups-overview.md#application-security-groups). All network interfaces in an application security group must exist in the same virtual network. To learn how to add a network interface to an application security group, see [Add a network interface to an application security group](virtual-network-network-interface.md#add-or-remove-from-application-security-groups).
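As a minimal sketch of that step, assuming an existing network interface named `myNic` with an IP configuration named `ipconfig1` (both names are illustrative):
```azurecli-interactive
# Attach the NIC's IP configuration to the myASG application security group.
# myNic and ipconfig1 are assumed to exist; adjust the names for your environment.
az network nic ip-config update --resource-group myResourceGroup --nic-name myNic --name ipconfig1 --application-security-groups myASG
```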
### Create an application security group
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results.
+1. In the search box at the top of the portal, enter **Application security group**. Then select **Application security groups** in the search results.
-2. Select **+ Create**.
+1. Select **+ Create**.
-3. In the **Create an application security group** page, under the **Basics** tab, enter or select the following values:
+1. On the **Create an application security group** page, under the **Basics** tab, enter or select the following values:
| Setting | Action |
| --- | --- |
| **Project details** | |
| Subscription | Select your Azure subscription. |
- | Resource group | Select an existing resource group, or create a new one by selecting **Create new**. This example uses **myResourceGroup** resource group. |
+ | Resource group | Select an existing resource group, or create a new one by selecting **Create new**. This example uses the `myResourceGroup` resource group. |
| **Instance details** | |
- | Name | Enter a name for the application security group you're creating. |
- | Region | Select the region you want to create the application security group in. |
+ | Name | Enter a name for the application security group that you're creating. |
+ | Region | Select the region in which you want to create the application security group. |
- :::image type="content" source="./media/manage-network-security-group/create-application-security-group.png" alt-text="Screenshot of create an application security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/create-application-security-group.png" alt-text="Screenshot that shows creating an application security group in the Azure portal.":::
-5. Select **Review + create**.
+1. Select **Review + create**.
-6. After you see the **Validation passed** message, select **Create**.
+1. After you see the **Validation passed** message, select **Create**.
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [New-AzApplicationSecurityGroup](/powershell/module/az.network/new-azapplicationsecuritygroup)
+Use [New-AzApplicationSecurityGroup](/powershell/module/az.network/new-azapplicationsecuritygroup) to create an application security group.
```azurepowershell-interactive
New-AzApplicationSecurityGroup -ResourceGroupName myResourceGroup -Name myASG -Location eastus
```
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network asg create](/cli/azure/network/asg#az-network-asg-create)
+Use [az network asg create](/cli/azure/network/asg#az-network-asg-create) to create an application security group.
```azurecli-interactive
az network asg create --resource-group myResourceGroup --name myASG --location eastus
```
### View all application security groups
# [**Portal**](#tab/network-security-group-portal)
-In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results. The Azure portal displays a list of your application security groups.
+In the search box at the top of the portal, enter **Application security group**. Then select **Application security groups** in the search results. A list of your application security groups appears in the Azure portal.
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Get-AzApplicationSecurityGroup](/powershell/module/az.network/get-azapplicationsecuritygroup) to list all application security groups in your Azure subscription.
+Use [Get-AzApplicationSecurityGroup](/powershell/module/az.network/get-azapplicationsecuritygroup) to list all the application security groups in your Azure subscription.
```azurepowershell-interactive
Get-AzApplicationSecurityGroup | format-table Name, ResourceGroupName, Location
```
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network asg list](/cli/azure/network/asg#az-network-asg-list) to list all application security groups in a resource group.
+Use [az network asg list](/cli/azure/network/asg#az-network-asg-list) to list all the application security groups in a resource group.
```azurecli-interactive
az network asg list --resource-group myResourceGroup --out table
```
-### View details of a specific application security group
+### View the details of a specific application security group
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results.
+1. In the search box at the top of the portal, enter **Application security group**. Then select **Application security groups** in the search results.
-2. Select the application security group that you want to view the details of.
+1. Select the application security group for which you want to view the details.
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [Get-AzApplicationSecurityGroup](/powershell/module/az.network/get-azapplicationsecuritygroup)
+Use [Get-AzApplicationSecurityGroup](/powershell/module/az.network/get-azapplicationsecuritygroup) to view the details of an application security group.
```azurepowershell-interactive
Get-AzApplicationSecurityGroup -Name myASG
```
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network asg show](/cli/azure/network/asg#az-network-asg-show) to view details of an application security group.
+Use [az network asg show](/cli/azure/network/asg#az-network-asg-show) to view the details of an application security group.
```azurecli-interactive
az network asg show --resource-group myResourceGroup --name myASG
```
### Change an application security group
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results.
+1. In the search box at the top of the portal, enter **Application security group**. Then select **Application security groups** in the search results.
-2. Select the application security group that you want to change.
+1. Select the application security group that you want to change:
-Select **move** next to **Resource group** or **Subscription** to change the resource group or subscription respectively.
+ - Select **move** next to **Resource group** or **Subscription** to change the resource group or subscription, respectively.
-Select **Edit** next to **Tags** to add or remove tags. to learn more, see [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md)
+ - Select **edit** next to **Tags** to add or remove tags. To learn more, see [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md).
+ :::image type="content" source="./media/manage-network-security-group/change-application-security-group.png" alt-text="Screenshot that shows changing an application security group in the Azure portal.":::
-> [!NOTE]
-> You can't change the location of an application security group.
+ > [!NOTE]
+ > You can't change the location of an application security group.
-Select **Access control (IAM)** to assign or remove permissions to the application security group.
+ - Select **Access control (IAM)** to assign or remove permissions to the application security group.
# [**PowerShell**](#tab/network-security-group-powershell)
-> [!NOTE]
-> You can't change an application security group using PowerShell.
+You can't change an application security group by using PowerShell.
# [**Azure CLI**](#tab/network-security-group-cli)
```azurecli-interactive
# Update the tags on the application security group. The Dept tag value is illustrative.
az network asg update --resource-group myResourceGroup --name myASG --tags Dept=IT
```
> [!NOTE]
-> You can't change the resource group, subscription or location of an application security group using the Azure CLI.
+> You can't change the resource group, subscription, or location of an application security group by using the Azure CLI.
### Delete an application security group
You can't delete an application security group if it contains any network interfaces. Remove all network interfaces from the application security group before you attempt to delete it.
# [**Portal**](#tab/network-security-group-portal)
-1. In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results.
+1. In the search box at the top of the portal, enter **Application security group**. Then select **Application security groups** in the search results.
-2. Select the application security group you want to delete.
+1. Select the application security group that you want to delete.
-3. Select **Delete**, and then select **Yes** to delete the application security group.
+1. Select **Delete**, and then select **Yes** to delete the application security group.
- :::image type="content" source="./media/manage-network-security-group/delete-application-security-group.png" alt-text="Screenshot of delete application security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/delete-application-security-group.png" alt-text="Screenshot that shows deleting an application security group in the Azure portal.":::
# [**PowerShell**](#tab/network-security-group-powershell)
Use [Remove-AzApplicationSecurityGroup](/powershell/module/az.network/remove-azapplicationsecuritygroup) to delete an application security group.
```azurepowershell-interactive
Remove-AzApplicationSecurityGroup -Name myASG -ResourceGroupName myResourceGroup
```
# [**Azure CLI**](#tab/network-security-group-cli)
Use [az network asg delete](/cli/azure/network/asg#az-network-asg-delete) to delete an application security group.
```azurecli-interactive
az network asg delete --resource-group myResourceGroup --name myASG
```
## Permissions
-To manage network security groups, security rules, and application security groups, your account must be assigned to the [Network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role. A [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) can also be used that's assigned the appropriate permissions as listed in the following tables:
+To manage NSGs, security rules, and application security groups, your account must be assigned to the [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role. You can also use a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) with the appropriate permissions assigned, as listed in the following tables.
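For example, here's a minimal sketch of assigning the built-in role at resource group scope. The assignee and subscription values are placeholders:
```azurecli-interactive
# Assign the Network Contributor role for the myResourceGroup resource group.
# <user-or-group-object-id> and <subscription-id> are placeholders.
az role assignment create --assignee <user-or-group-object-id> --role "Network Contributor" --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup
```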
> [!NOTE]
-> You might NOT see the full list of service tags if the Network Contributor role has been assigned at a Resource Group level. To view the full list, you can assign this role at a Subscription scope instead. If you can only allow Network Contributor for the Resource Group, you can then also create a custom role for the permissions "Microsoft.Network/locations/serviceTags/read" and "Microsoft.Network/locations/serviceTagDetails/read" and assign them at a Subscription scope along with the Network Contributor at Resource Group scope.
+> You might *not* see the full list of service tags if the Network Contributor role was assigned at a resource group level. To view the full list, you can assign this role at a subscription scope instead. If you can only allow the Network Contributor role for the resource group, you can then also create a custom role for the permissions `Microsoft.Network/locations/serviceTags/read` and `Microsoft.Network/locations/serviceTagDetails/read`. Assign them at a subscription scope along with the Network Contributor role at the resource group scope.
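A sketch of such a custom role definition follows. The role name is illustrative, and `<subscription-id>` is a placeholder:
```azurecli-interactive
# Create a custom role that can read service tags at subscription scope.
az role definition create --role-definition '{
  "Name": "Service Tag Reader",
  "Description": "Read service tags and service tag details.",
  "Actions": [
    "Microsoft.Network/locations/serviceTags/read",
    "Microsoft.Network/locations/serviceTagDetails/read"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```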
### Network security group
| Action | Name |
| --- | --- |
-| Microsoft.Network/networkSecurityGroups/read | Get network security group |
-| Microsoft.Network/networkSecurityGroups/write | Create or update network security group |
-| Microsoft.Network/networkSecurityGroups/delete | Delete network security group |
-| Microsoft.Network/networkSecurityGroups/join/action | Associate a network security group to a subnet or network interface
+| `Microsoft.Network/networkSecurityGroups/read` | Get an NSG. |
+| `Microsoft.Network/networkSecurityGroups/write` | Create or update an NSG. |
+| `Microsoft.Network/networkSecurityGroups/delete` | Delete an NSG. |
+| `Microsoft.Network/networkSecurityGroups/join/action` | Associate an NSG with a subnet or network interface. |
->[!NOTE]
-> To perform `write` operations on a network security group, the subscription account must have at least `read` permissions for resource group along with `Microsoft.Network/networkSecurityGroups/write` permission.
+> [!NOTE]
+> To perform `write` operations on an NSG, the subscription account must have at least `read` permissions for the resource group, along with the `Microsoft.Network/networkSecurityGroups/write` permission.
### Network security group rule
| Action | Name |
| --- | --- |
-| Microsoft.Network/networkSecurityGroups/securityRules/read | Get rule |
-| Microsoft.Network/networkSecurityGroups/securityRules/write | Create or update rule |
-| Microsoft.Network/networkSecurityGroups/securityRules/delete | Delete rule |
+| `Microsoft.Network/networkSecurityGroups/securityRules/read` | Get a rule. |
+| `Microsoft.Network/networkSecurityGroups/securityRules/write` | Create or update a rule. |
+| `Microsoft.Network/networkSecurityGroups/securityRules/delete` | Delete a rule. |
### Application security group
| Action | Name |
| --- | --- |
-| Microsoft.Network/applicationSecurityGroups/joinIpConfiguration/action | Join an IP configuration to an application security group|
-| Microsoft.Network/applicationSecurityGroups/joinNetworkSecurityRule/action | Join a security rule to an application security group |
-| Microsoft.Network/applicationSecurityGroups/read | Get an application security group |
-| Microsoft.Network/applicationSecurityGroups/write | Create or update an application security group |
-| Microsoft.Network/applicationSecurityGroups/delete | Delete an application security group |
+| `Microsoft.Network/applicationSecurityGroups/joinIpConfiguration/action` | Join an IP configuration to an application security group.|
+| `Microsoft.Network/applicationSecurityGroups/joinNetworkSecurityRule/action` | Join a security rule to an application security group. |
+| `Microsoft.Network/applicationSecurityGroups/read` | Get an application security group. |
+| `Microsoft.Network/applicationSecurityGroups/write` | Create or update an application security group. |
+| `Microsoft.Network/applicationSecurityGroups/delete` | Delete an application security group. |
-## Next steps
+## Related content
- Add or remove [a network interface to or from an application security group](./virtual-network-network-interface.md?tabs=network-interface-portal#add-or-remove-from-application-security-groups).
-- Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks
+- Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks.
virtual-network Virtual Network Scenario Udr Gw Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-scenario-udr-gw-nva.md
Title: Hybrid connection with two-tier application
-description: Learn how to deploy virtual appliances and route tables to create a multi-tier application environment in Azure
+description: Learn how to deploy virtual appliances and route tables to create a multitier application environment in Azure.
# Virtual appliance scenario
-A common scenario among larger Azure customers is the need to provide a two-tiered application exposed to the Internet, while allowing access to the back tier from an on-premises datacenter. This document walks you through a scenario using route tables, a VPN Gateway, and network virtual appliances to deploy a two-tier environment that meets the following requirements:
+A common scenario among larger Azure customers is the need to provide a two-tiered application that's exposed to the internet while it also allows access to the back tier from an on-premises datacenter. This article walks you through a scenario that uses route tables, a VPN gateway, and network virtual appliances to deploy a two-tiered environment that meets the following requirements:
-* Web application must be accessible from the public Internet only.
+* A web application must be accessible from the public internet only.
+* A web server that hosts the application must be able to access a back-end application server.
+* All traffic from the internet to the web application must go through a firewall virtual appliance. This virtual appliance is used for internet traffic only.
+* All traffic that goes to the application server must go through a firewall virtual appliance. This virtual appliance is used for access to the back-end server, and for access coming in from the on-premises network via a VPN gateway.
+* Administrators must be able to manage the firewall virtual appliances from their on-premises computers by using a third firewall virtual appliance that's used exclusively for management purposes.
-* Web server hosting the application must be able to access a backend application server.
+This example is a standard perimeter network (also known as DMZ) scenario with a DMZ and a protected network. You can construct this scenario in Azure by using network security groups (NSGs), firewall virtual appliances, or a combination of both.
-* All traffic from the Internet to the web application must go through a firewall virtual appliance. This virtual appliance is used for Internet traffic only.
-
-* All traffic going to the application server must go through a firewall virtual appliance. This virtual appliance is used for access to the backend end server, and access coming in from the on-premises network via a VPN Gateway.
-
-* Administrators must be able to manage the firewall virtual appliances from their on-premises computers, by using a third firewall virtual appliance used exclusively for management purposes.
-
-This example is a standard perimeter network (also known as DMZ) scenario with a DMZ and a protected network. Such scenario can be constructed in Azure by using NSGs, firewall virtual appliances, or a combination of both.
-
-The following table shows some of the pros and cons between NSGs and firewall virtual appliances.
+The following table shows some of the pros and cons for NSGs and firewall virtual appliances.
| Item | Pros | Cons |
| --- | --- | --- |
-| **NSG** | No cost. <br/>Integrated into Azure role based access. <br/>Rules can be created in Azure Resource Manager templates. | Complexity could vary in larger environments. |
-| **Firewall** | Full control over data plane. <br/> Central management through firewall console. |Cost of firewall appliance. <br/> Not integrated with Azure role based access. |
+| NSG | No cost. <br/>Integrated into Azure role-based access. <br/>Ability to create rules in Azure Resource Manager templates. | Complexity could vary in larger environments. |
+| Firewall | Full control over data plane. <br/> Central management through firewall console. |Cost of firewall appliance. <br/> Not integrated with Azure role-based access. |
The following solution uses firewall virtual appliances to implement a perimeter network (DMZ)/protected network scenario.
## Considerations
-You can deploy the environment explained previously in Azure using different features available today, as follows.
-
-* **Virtual network**. An Azure virtual network acts in similar fashion to an on-premises network, and can be segmented into one or more subnets to provide traffic isolation, and separation of concerns.
-
-* **Virtual appliance**. Several partners provide virtual appliances in the Azure Marketplace that can be used for the three firewalls described previously.
-
-* **Route tables**. Route tables are used by Azure networking to control the flow of packets within a virtual network. These route tables can be applied to subnets. You can apply a route table to the GatewaySubnet, which forwards all traffic entering into the Azure virtual network from a hybrid connection to a virtual appliance.
-
-* **IP Forwarding**. By default, the Azure networking engine forwards packets to virtual network interface cards (NICs) only if the packet destination IP address matches the NIC IP address. Therefore, if a route table defines that a packet must be sent to a given virtual appliance, the Azure networking engine would drop that packet. To ensure the packet is delivered to a VM (in this case a virtual appliance) that isn't the actual destination for the packet, enable IP Forwarding for the virtual appliance.
-
-* **Network Security Groups (NSGs)**. The following example doesn't make use of NSGs, but you could use NSGs applied to the subnets and/or NICs in this solution. The NSGs would further filter the traffic in and out of those subnets and NICs.
--
-In this example, there's a subscription that contains the following items:
-
-* Two resource groups, not shown in the diagram.
+You can deploy the preceding environment in Azure by using features that are available today:
- * **ONPREMRG**. Contains all resources necessary to simulate an on-premises network.
+* **Virtual network**: An Azure virtual network acts in a similar fashion to an on-premises network. You can segment it into one or more subnets to provide traffic isolation and separation of concerns.
+* **Virtual appliance**: Several partners provide virtual appliances in Azure Marketplace to use for the three firewalls described previously.
+* **Route tables**: Route tables are used by Azure networking to control the flow of packets within a virtual network. You can apply these route tables to subnets. You can apply a route table to `GatewaySubnet`, which forwards all traffic that enters into the Azure virtual network from a hybrid connection to a virtual appliance.
+* **IP forwarding**: By default, the Azure networking engine forwards packets to virtual network interface cards (NICs) only if the packet destination IP address matches the NIC IP address. If a route table defines that a packet must be sent to a specific virtual appliance, the Azure networking engine drops that packet. To ensure that the packet is delivered to a VM (in this case a virtual appliance) that isn't the actual destination for the packet, enable IP forwarding for the virtual appliance, as in the sketch after this list.
+* **Network security groups**: The following example doesn't make use of NSGs, but you can use NSGs applied to the subnets or NICs in this solution. The NSGs further filter the traffic in and out of those subnets and NICs.
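Here's a minimal sketch of the IP forwarding step called out in the preceding list. The NIC name and resource group are illustrative:
```azurecli-interactive
# Enable IP forwarding on the appliance's NIC so that Azure delivers packets
# whose destination IP address doesn't match the NIC's own address.
az network nic update --resource-group AZURERG --name AZF2-nic --ip-forwarding true
```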
- * **AZURERG**. Contains all resources necessary for the Azure virtual network environment.
-* A virtual network named **onpremvnet** segmented as follows used to mimic an on-premises datacenter.
+In this example, a subscription contains the following items:
- * **onpremsn1**. Subnet containing a virtual machine (VM) running Linux distribution to mimic an on-premises server.
+* Two resource groups (not shown in the diagram):
- * **onpremsn2**. Subnet containing a VM running Linux distribution to mimic an on-premises computer used by an administrator.
+ * `ONPREMRG`: Contains all resources necessary to simulate an on-premises network.
+ * `AZURERG`: Contains all resources necessary for the Azure virtual network environment.
-* There's one firewall virtual appliance named **OPFW** on **onpremvnet** used to maintain a tunnel to **azurevnet**.
+* A virtual network named `onpremvnet` is segmented and used to mimic an on-premises datacenter:
-* A virtual network named **azurevnet** segmented as follows.
+ * `onpremsn1`: A subnet that contains a virtual machine (VM) running a Linux distribution to mimic an on-premises server.
+ * `onpremsn2`: A subnet that contains a VM running a Linux distribution to mimic an on-premises computer used by an administrator.
- * **azsn1**. External firewall subnet used exclusively for the external firewall. All Internet traffic comes in through this subnet. This subnet only contains a NIC linked to the external firewall.
+* One firewall virtual appliance is named `OPFW` on `onpremvnet`. It's used to maintain a tunnel to `azurevnet`.
+* A virtual network named `azurevnet` is segmented as follows:
- * **azsn2**. Front end subnet hosting a VM running as a web server that is accessed from the Internet.
+ * `azsn1`: An external firewall subnet used exclusively for the external firewall. All internet traffic comes in through this subnet. This subnet contains only a NIC linked to the external firewall.
+ * `azsn2`: A front-end subnet that hosts a VM running as a web server that's accessed from the internet.
+ * `azsn3`: A back-end subnet that hosts a VM running a back-end application server accessed by the front-end web server.
+ * `azsn4`: A management subnet used exclusively to provide management access to all firewall virtual appliances. This subnet contains only a NIC for each firewall virtual appliance used in the solution.
+ * `GatewaySubnet`: An Azure hybrid connection subnet that's required for Azure ExpressRoute and Azure VPN Gateway to provide connectivity between Azure virtual networks and other networks.
- * **azsn3**. Backend subnet hosting a VM running a backend application server accessed by the front end web server.
+* Three firewall virtual appliances are in the `azurevnet` network:
- * **azsn4**. Management subnet used exclusively to provide management access to all firewall virtual appliances. This subnet only contains a NIC for each firewall virtual appliance used in the solution.
-
- * **GatewaySubnet**. Azure hybrid connection subnet required for ExpressRoute and VPN Gateway to provide connectivity between Azure VNets and other networks.
-
-* There are 3 firewall virtual appliances in the **azurevnet** network.
-
- * **AZF1**. External firewall exposed to the public Internet by using a public IP address resource in Azure. You need to ensure you have a template from the Marketplace or directly from your appliance vendor that deploys a 3-NIC virtual appliance.
-
- * **AZF2**. Internal firewall used to control traffic between **azsn2** and **azsn3**. This firewall is also a 3-NIC virtual appliance.
-
- * **AZF3**. Management firewall accessible to administrators from the on-premises datacenter, and connected to a management subnet used to manage all firewall appliances. You can find 2-NIC virtual appliance templates in the Marketplace, or request one directly from your appliance vendor.
+ * `AZF1`: An external firewall exposed to the public internet by using a public IP address resource in Azure. You need to ensure that you have a template from Azure Marketplace or directly from your appliance vendor that deploys a three-NIC virtual appliance.
+ * `AZF2`: An internal firewall used to control traffic between `azsn2` and `azsn3`. This firewall is also a three-NIC virtual appliance.
+ * `AZF3`: A management firewall accessible to administrators from the on-premises datacenter and connected to a management subnet that's used to manage all firewall appliances. You can find two-NIC virtual appliance templates in Azure Marketplace. You can also request one directly from your appliance vendor.
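To make the topology concrete, here's a minimal Azure PowerShell sketch of how `azurevnet` and its subnets might be created. The prefixes for `azsn2` through `azsn4` are inferred from the route tables later in this article; the prefixes for `azsn1` and `GatewaySubnet`, the region, and the resource group are assumptions for illustration only.

```azurepowershell-interactive
# Sketch only: azsn1 and GatewaySubnet prefixes are assumed; azsn2-azsn4
# prefixes are inferred from the route tables that follow.
$azsn1 = New-AzVirtualNetworkSubnetConfig -Name "azsn1" -AddressPrefix "10.0.1.0/24"
$azsn2 = New-AzVirtualNetworkSubnetConfig -Name "azsn2" -AddressPrefix "10.0.2.0/24"
$azsn3 = New-AzVirtualNetworkSubnetConfig -Name "azsn3" -AddressPrefix "10.0.3.0/24"
$azsn4 = New-AzVirtualNetworkSubnetConfig -Name "azsn4" -AddressPrefix "10.0.4.0/24"
$gwsn  = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.0.0/24"

New-AzVirtualNetwork -Name "azurevnet" -ResourceGroupName "AZURERG" -Location "eastus" `
    -AddressPrefix "10.0.0.0/16" -Subnet $azsn1, $azsn2, $azsn3, $azsn4, $gwsn
```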
## Route tables
-Each subnet in Azure can be linked to a route table used to define how traffic initiated in that subnet is routed. If no UDRs are defined, Azure uses default routes to allow traffic to flow from one subnet to another. To better understand route tables and traffic routing, see [Azure virtual network traffic routing](virtual-networks-udr-overview.md).
+Link each subnet in Azure to a route table to define how traffic initiated in that subnet is routed. If no user-defined routes (UDRs) are defined, Azure uses default routes to allow traffic to flow from one subnet to another. To better understand route tables and traffic routing, see [Azure virtual network traffic routing](virtual-networks-udr-overview.md).
-To ensure communication is done through the right firewall appliance, based on the last requirement listed previously, you must create the following route table in **azurevnet**.
+To ensure that communication is done through the proper firewall appliance, based on the last requirement listed previously, you must create the following route table in `azurevnet`.
### azgwudr
-In this scenario, the only traffic flowing from on-premises to Azure is used to manage the firewalls by connecting to **AZF3**, and that traffic must go through the internal firewall, **AZF2**. Therefore, only one route is necessary in the **GatewaySubnet** as shown as follows.
+In this scenario, the only traffic that flows from on-premises to Azure is used to manage the firewalls by connecting to `AZF3`, and that traffic must go through the internal firewall, `AZF2`. Only one route is necessary in `GatewaySubnet`, as shown here:
| Destination | Next hop | Explanation |
| --- | --- | --- |
-| 10.0.4.0/24 | 10.0.3.11 | Allows on-premises traffic to reach management firewall **AZF3** |
+| 10.0.4.0/24 | 10.0.3.11 | Allows on-premises traffic to reach management firewall `AZF3`. |
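As a hedged example, the `azgwudr` table could be built and linked to `GatewaySubnet` as follows. The route values come from the table above; the region and the `GatewaySubnet` address prefix are assumptions.

```azurepowershell-interactive
# Sketch: create the azgwudr route and table, then link it to GatewaySubnet.
$route = New-AzRouteConfig -Name "to-azf3" -AddressPrefix "10.0.4.0/24" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.3.11"

$rt = New-AzRouteTable -Name "azgwudr" -ResourceGroupName "AZURERG" `
    -Location "eastus" -Route $route

$vnet = Get-AzVirtualNetwork -Name "azurevnet" -ResourceGroupName "AZURERG"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "GatewaySubnet" `
    -AddressPrefix "10.0.0.0/24" -RouteTable $rt | Set-AzVirtualNetwork
```

The remaining route tables in this article follow the same pattern with their own destinations and next hops.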
### azsn2udr

| Destination | Next hop | Explanation |
| --- | --- | --- |
-| 10.0.3.0/24 | 10.0.2.11 |Allows traffic to the backend subnet hosting the application server through **AZF2** |
-| 0.0.0.0/0 | 10.0.2.10 |Allows all other traffic to be routed through **AZF1** |
+| 10.0.3.0/24 | 10.0.2.11 |Allows traffic to the back-end subnet that hosts the application server through `AZF2`. |
+| 0.0.0.0/0 | 10.0.2.10 |Allows all other traffic to be routed through `AZF1`. |
### azsn3udr

| Destination | Next hop | Explanation |
| --- | --- | --- |
-| 10.0.2.0/24 |10.0.3.10 |Allows traffic to **azsn2** to flow from app server to the webserver through **AZF2** |
+| 10.0.2.0/24 |10.0.3.10 |Allows traffic to `azsn2` to flow from an app server to the web server through `AZF2`. |
-You also need to create route tables for the subnets in **onpremvnet** to mimic the on-premises datacenter.
+You also need to create route tables for the subnets in `onpremvnet` to mimic the on-premises datacenter.
### onpremsn1udr

| Destination | Next hop | Explanation |
| --- | --- | --- |
-| 192.168.2.0/24 | 192.168.1.4 |Allows traffic to **onpremsn2** through **OPFW** |
+| 192.168.2.0/24 | 192.168.1.4 |Allows traffic to `onpremsn2` through `OPFW`. |
### onpremsn2udr

| Destination | Next hop | Explanation |
| --- | --- | --- |
-| 10.0.3.0/24 |192.168.2.4 |Allows traffic to the backed subnet in Azure through **OPFW** |
-| 192.168.1.0/24 | 192.168.2.4 |Allows traffic to **onpremsn1** through **OPFW** |
+| 10.0.3.0/24 |192.168.2.4 |Allows traffic to the back-end subnet in Azure through `OPFW`. |
+| 192.168.1.0/24 | 192.168.2.4 |Allows traffic to `onpremsn1` through `OPFW`. |
-## IP Forwarding
+## IP forwarding
-Route tables and IP Forwarding are features that you can use in combination to allow virtual appliances to be used to control traffic flow in an Azure Virtual Network. A virtual appliance is nothing more than a VM that runs an application used to handle network traffic in some way, such as a firewall or a NAT device.
+Route tables and IP forwarding are features that you can use in combination to allow virtual appliances to control traffic flow in an Azure virtual network. A virtual appliance is nothing more than a VM that runs an application used to handle network traffic in some way, such as a firewall or a network address translation device.
-This virtual appliance VM must be able to receive incoming traffic that isn't addressed to itself. To allow a VM to receive traffic addressed to other destinations, you must enable IP Forwarding for the VM. This setting is an Azure setting, not a setting in the guest operating system. Your virtual appliance still needs to run some type of application to handle the incoming traffic, and route it appropriately.
+This virtual appliance VM must be able to receive incoming traffic that isn't addressed to itself. To allow a VM to receive traffic addressed to other destinations, you must enable IP forwarding for the VM. This setting is an Azure setting, not a setting in the guest operating system. Your virtual appliance still needs to run some type of application to handle the incoming traffic and route it appropriately.
-To learn more about IP Forwarding, see [Azure virtual network traffic routing](virtual-networks-udr-overview.md).
+To learn more about IP forwarding, see [Azure virtual network traffic routing](virtual-networks-udr-overview.md).
-As an example, imagine you have the following setup in an Azure vnet:
+As an example, imagine that you have the following setup in an Azure virtual network:
-* Subnet **onpremsn1** contains a VM named **onpremvm1**.
+* Subnet `onpremsn1` contains a VM named `onpremvm1`.
+* Subnet `onpremsn2` contains a VM named `onpremvm2`.
+* A virtual appliance named `OPFW` is connected to `onpremsn1` and `onpremsn2`.
+* A UDR linked to `onpremsn1` specifies that all traffic to `onpremsn2` must be sent to `OPFW`.
-* Subnet **onpremsn2** contains a VM named **onpremvm2**.
+At this point, if `onpremvm1` tries to establish a connection with `onpremvm2`, the UDR is used, and traffic is sent to `OPFW` as the next hop. The actual packet destination isn't being changed. It still says that `onpremvm2` is the destination.
-* A virtual appliance named **OPFW** is connected to **onpremsn1** and **onpremsn2**.
+Without IP forwarding enabled for `OPFW`, the Azure virtual networking logic drops the packets, because it delivers packets to a VM only if the VM's IP address is the packet's destination.
-* A user defined route linked to **onpremsn1** specifies that all traffic to **onpremsn2** must be sent to **OPFW**.
+With IP forwarding, the Azure virtual network logic forwards the packets to `OPFW` without changing their original destination address. `OPFW` must handle the packets and determine what to do with them.
-At this point, if **onpremvm1** tries to establish a connection with **onpremvm2**, the UDR will be used and traffic will be sent to **OPFW** as the next hop. Keep in mind that the actual packet destination isn't being changed, it still says **onpremvm2** is the destination.
+For the previous scenario to work, you must enable IP forwarding on the NICs for `OPFW`, `AZF1`, `AZF2`, and `AZF3` that are used for routing (all NICs except the ones linked to the management subnet).
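A minimal sketch of enabling this Azure-side setting on one routing NIC, assuming a NIC named `OPFW-NIC2` in `ONPREMRG` (both names are hypothetical):

```azurepowershell-interactive
# Sketch: flip the Azure IP forwarding flag on a routing NIC.
$nic = Get-AzNetworkInterface -Name "OPFW-NIC2" -ResourceGroupName "ONPREMRG"
$nic.EnableIPForwarding = $true
$nic | Set-AzNetworkInterface
```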
-Without IP Forwarding enabled for **OPFW**, the Azure virtual networking logic drops the packets, since it only allows packets to be sent to a VM if the VM's IP address is the destination for the packet.
+## Firewall rules
-With IP Forwarding, the Azure virtual network logic forwards the packets to OPFW, without changing its original destination address. **OPFW** must handle the packets and determine what to do with them.
-
-For the scenario previously to work, you must enable IP Forwarding on the NICs for **OPFW**, **AZF1**, **AZF2**, and **AZF3** that are used for routing (all NICs except the ones linked to the management subnet).
-
-## Firewall Rules
-
-As described previously, IP Forwarding only ensures packets are sent to the virtual appliances. Your appliance still needs to decide what to do with those packets. In the previous scenario, you need to create the following rules in your appliances:
+As described previously, IP forwarding only ensures that packets are sent to the virtual appliances. Your appliance still needs to decide what to do with those packets. In the previous scenario, you need to create the following rules in your appliances.
### OPFW
-OPFW represents an on-premises device containing the following rules:
+OPFW represents an on-premises device that contains the following rules:
-* **Route**: All traffic to 10.0.0.0/16 (**azurevnet**) must be sent through tunnel **ONPREMAZURE**.
-
-* **Policy**: Allow all bidirectional traffic between **port2** and **ONPREMAZURE**.
+* **Route**: All traffic to 10.0.0.0/16 (`azurevnet`) must be sent through the tunnel `ONPREMAZURE`.
+* **Policy**: Allow all bidirectional traffic between `port2` and `ONPREMAZURE`.
### AZF1
-AZF1 represents an Azure virtual appliance containing the following rules:
+`AZF1` represents an Azure virtual appliance that contains the following rule:
-* **Policy**: Allow all bidirectional traffic between **port1** and **port2**.
+**Policy**: Allow all bidirectional traffic between `port1` and `port2`.
### AZF2
-AZF2 represents an Azure virtual appliance containing the following rules:
+`AZF2` represents an Azure virtual appliance that contains the following rule:
-* **Policy**: Allow all bidirectional traffic between **port1** and **port2**.
+**Policy**: Allow all bidirectional traffic between `port1` and `port2`.
### AZF3
-AZF3 represents an Azure virtual appliance containing the following rules:
+`AZF3` represents an Azure virtual appliance that contains the following rule:
-* **Route**: All traffic to 192.168.0.0/16 (**onpremvnet**) must be sent to the Azure gateway IP address (that is, 10.0.0.1) through **port1**.
+**Route**: All traffic to 192.168.0.0/16 (`onpremvnet`) must be sent to the Azure gateway IP address (that is, 10.0.0.1) through `port1`.
-## Network Security Groups (NSGs)
+## Network security groups
-In this scenario, NSGs aren't being used. However, you could apply NSGs to each subnet to restrict incoming and outgoing traffic. For instance, you could apply the following NSG rules to the external FW subnet.
+In this scenario, NSGs aren't being used. However, you can apply NSGs to each subnet to restrict incoming and outgoing traffic. For instance, you can apply the following NSG rules to the external firewall subnet.
**Incoming**
-* Allow all TCP traffic from the Internet to port 80 on any VM in the subnet.
-
-* Deny all other traffic from the Internet.
+* Allow all TCP traffic from the internet to port 80 on any VM in the subnet.
+* Deny all other traffic from the internet.
**Outgoing**
-* Deny all traffic to the Internet.
+Deny all traffic to the internet.
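For illustration, the two incoming rules and the outgoing rule above might look like this in Azure PowerShell; the NSG name, rule names, priorities, and region are assumptions:

```azurepowershell-interactive
# Sketch: NSG rules matching the lists above (names and priorities are assumed).
$allowWeb = New-AzNetworkSecurityRuleConfig -Name "allow-http-in" -Direction Inbound `
    -Priority 100 -Access Allow -Protocol Tcp -SourceAddressPrefix Internet `
    -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 80

$denyIn = New-AzNetworkSecurityRuleConfig -Name "deny-internet-in" -Direction Inbound `
    -Priority 200 -Access Deny -Protocol * -SourceAddressPrefix Internet `
    -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange *

$denyOut = New-AzNetworkSecurityRuleConfig -Name "deny-internet-out" -Direction Outbound `
    -Priority 100 -Access Deny -Protocol * -SourceAddressPrefix * `
    -SourcePortRange * -DestinationAddressPrefix Internet -DestinationPortRange *

New-AzNetworkSecurityGroup -Name "azsn1-nsg" -ResourceGroupName "AZURERG" `
    -Location "eastus" -SecurityRules $allowWeb, $denyIn, $denyOut
```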
-## High level steps
+## High-level steps
-To deploy this scenario, use the following high level steps.
+To deploy this scenario, follow these steps:
-1. Sign in to your Azure Subscription.
+1. Sign in to your Azure subscription.
-2. If you want to deploy a virtual network to mimic the on-premises network, deploy the resources that are part of **ONPREMRG**.
+1. If you want to deploy a virtual network to mimic the on-premises network, deploy the resources that are part of `ONPREMRG`.
-3. Deploy the resources that are part of **AZURERG**.
+1. Deploy the resources that are part of `AZURERG`.
-4. Deploy the tunnel from **onpremvnet** to **azurevnet**.
+1. Deploy the tunnel from `onpremvnet` to `azurevnet`.
-5. Once all resources are provisioned, sign in to **onpremvm2** and ping 10.0.3.101 to test connectivity between **onpremsn2** and **azsn3**.
+1. After all resources are provisioned, sign in to `onpremvm2` and ping 10.0.3.101 to test connectivity between `onpremsn2` and `azsn3`.
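For example, the test in the last step could be run from a PowerShell session on `onpremvm2` (PowerShell also runs on Linux); this is a sketch of the equivalent of the ping test:

```powershell
# Sketch: verify that traffic from onpremsn2 reaches the VM in azsn3.
Test-Connection 10.0.3.101 -Count 4
```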
virtual-wan Openvpn Azure Ad Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-mfa.md
Title: 'Enable MFA for VPN users by using Microsoft Entra authentication'
+ Title: 'Enable MFA for VPN users: Microsoft Entra ID authentication'
description: Learn how to enable Microsoft Entra multifactor authentication (MFA) for VPN users by using Microsoft Entra authentication. - Previously updated : 08/23/2023- Last updated : 09/24/2024+
-# Enable Microsoft Entra multifactor authentication (MFA) for VPN users by using Microsoft Entra authentication
+# Enable multifactor authentication (MFA) for P2S VPN - Microsoft Entra ID authentication
[!INCLUDE [overview](../../includes/vpn-gateway-vwan-openvpn-enable-mfa-overview.md)]
vpn-gateway Gateway Sku Consolidation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-consolidation.md
+
+ Title: Gateway SKU mappings
+
+description: Learn about the changes for virtual network gateway SKUs for VPN Gateway.
++++ Last updated : 09/18/2024++++
+# VPN Gateway SKU consolidation and migration
+
+We're simplifying our VPN Gateway SKU portfolio. Because of the lack of redundancy, lower availability, and potentially higher costs associated with additional failover solutions, we're transitioning all SKUs that don't support availability zones (non-AZ SKUs) to AZ-supported SKUs. This article helps you understand the upcoming changes for VPN Gateway virtual network gateway SKUs, and expands on the official announcement.
+
+* **Effective January 1, 2025**: Creation of new VPN gateways using VpnGw1-5 SKUs (non-AZ) will no longer be possible.
+* **Migration period**: From April 2025 to October 2026, all existing VPN gateways using VpnGw1-5 SKUs (non-AZ SKUs) will be seamlessly migrated to VpnGw1-5 SKUs (AZ).
+
+To support this migration, we're reducing the prices on AZ SKUs. Refer to the [pricing page](https://azure.microsoft.com/pricing/details/vpn-gateway/) to select the appropriate SKU for your organization.
+
+> [!NOTE]
> This article doesn't apply to the following legacy gateway SKUs: Standard or High Performance. For information about legacy SKUs, including legacy SKU migration, see [Working with VPN Gateway legacy SKUs](vpn-gateway-about-skus-legacy.md).
+
+## Mapping old SKUs to new SKUs
+
+The following diagram shows current SKUs and the new SKUs they'll automatically be migrated to.
+*(Diagram: VpnGw1 through VpnGw5 non-AZ SKUs map to the corresponding VpnGw1AZ through VpnGw5AZ SKUs.)*
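To check which SKU a gateway uses today, a quick hedged check (the gateway and resource group names are hypothetical):

```azurepowershell-interactive
# Sketch: inspect the current gateway SKU to see whether it's a non-AZ SKU.
$gw = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
$gw.Sku.Name   # for example, VpnGw2 (non-AZ) or VpnGw2AZ
```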
+## FAQ
+
+### What actions do I need to take?
+
+There are no actions that you need to take. If your gateway currently uses one of the SKUs listed in the previous section, we'll migrate the gateway for you. The migration is seamless, and no downtime is expected. You'll be notified in advance about the migration of your gateway. We recommend that you don't change your SKU manually in anticipation of the SKU migration unless you want to upgrade to a higher SKU.
+
+### What is the timeline?
+
+Migration will begin after March 2025. You'll be notified when your gateway will be migrated.
+
+### Can I create new gateways using the older SKUs?
+
+No. Starting January 1, 2025, you can't create a new gateway using VpnGw1-5 SKUs (non-AZ SKUs).
+
+### How long will my existing gateway SKUs be supported?
+
+The existing gateway SKUs are supported until they're migrated to AZ SKUs. The targeted deprecation for non-AZ SKUs is September 16, 2026. There's no impact on existing AZ SKUs.
+
+### Will there be any pricing differences for my gateways after migration?
+
+Yes. The new [pricing](https://azure.microsoft.com/pricing/details/vpn-gateway) takes effect on January 1, 2025. Until that date, the pricing changes won't appear on the pricing page.
+
+### When does new AZ pricing take effect?
+
+The new pricing timeline is:
+
+* If your existing gateway uses a VpnGw1-5 SKU, new pricing starts after your gateway is migrated.
+* If your existing gateway uses a VpnGw1AZ-5AZ SKU, new pricing starts January 1, 2025.
+
+### Can I deploy VpnGw 1-5 AZ SKUs in all regions?
+
+Yes, you can deploy AZ SKUs in all regions. If a region doesn't currently support availability zones, you can still create VPN Gateway AZ SKUs, but the deployment will remain regional.
+
+### Will there be downtime while my non-AZ gateways are migrated?
+
+No. This migration is seamless and there's no expected downtime during migration.
+
+### Will there be any performance impact on my gateways with this migration?
+
+Yes. AZ SKUs get the benefits of zone redundancy for VPN gateways in [zone-redundant regions](https://learn.microsoft.com/azure/reliability/availability-zones-service-support). If the region doesn't support zone redundancy, the gateway remains regional until the region it's deployed in supports zone redundancy.
+
+### Is VPN Gateway Basic SKU retiring?
+
+No, the VPN Gateway Basic SKU isn't retiring. You can create a VPN gateway using the Basic gateway SKU via [PowerShell](create-gateway-basic-sku-powershell.md) or CLI. Currently, the VPN Gateway Basic SKU supports only the Basic SKU public IP address resource (which is on a path to retirement). We're working on adding support for the Standard SKU public IP address resource to the VPN Gateway Basic SKU.
+
+### When will my Standard and HighPerformance gateway be migrated?
+
+Standard and HighPerformance gateways will be migrated to AZ gateways in CY26. For more information, see this [announcement](https://azure.microsoft.com/updates/standard-and-highperformance-vpn-gateway-skus-will-be-retired-on-30-september-2025/) and [Working with VPN Gateway legacy SKUs](vpn-gateway-about-skus-legacy.md).
+
+## Next steps
+
+For more information about SKUs, see [About gateway SKUs](about-gateway-skus.md).
vpn-gateway Openvpn Azure Ad Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-mfa.md
Title: 'Enable MFA for VPN users: Microsoft Entra ID authentication'
+ Title: 'Enable MFA for VPN users - Microsoft Entra ID authentication'
description: Learn how to enable multifactor authentication (MFA) for VPN users. Previously updated : 05/15/2024 Last updated : 09/24/2024
-# Enable Microsoft Entra ID multifactor authentication (MFA) for VPN users
+# Enable Microsoft Entra ID multifactor authentication (MFA) for P2S VPN users
[!INCLUDE [overview](../../includes/vpn-gateway-vwan-openvpn-enable-mfa-overview.md)]
vpn-gateway Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/packet-capture.md
The following examples of JSON and a JSON schema provide explanations of each pr
"Filters": [ { "SourceSubnets": [
- "20.1.1.0/24"
+ "10.1.0.0/24"
], "DestinationSubnets": [ "10.1.1.0/24"
The following examples of JSON and a JSON schema provide explanations of each pr
], "TcpFlags": 16, "SourceSubnets": [
- "20.1.1.0/24"
+ "10.1.0.0/24"
], "DestinationSubnets": [ "10.1.1.0/24"
The following examples of JSON and a JSON schema provide explanations of each pr
], "TcpFlags": 16, "SourceSubnets": [
- "20.1.1.0/24"
+ "10.1.0.0/24"
], "DestinationSubnets": [ "10.1.1.0/24"
The following examples of JSON and a JSON schema provide explanations of each pr
"default": [], "examples": [ [
- "20.1.1.0/24"
+ "10.1.0.0/24"
] ], "additionalItems": true,
The following examples of JSON and a JSON schema provide explanations of each pr
"description": "An explanation about the purpose of this instance.", "default": "", "examples": [
- "20.1.1.0/24"
+ "10.1.0.0/24"
] } },
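As a hedged illustration of how such a filter is used, the JSON can be passed as a string when starting a packet capture on a gateway. The gateway and resource group names here are hypothetical, and the `TracingFlags`, `MaxPacketBufferSize`, and `MaxFileSize` values are assumptions not shown in the fragment above:

```azurepowershell-interactive
# Sketch: start a gateway packet capture with a source/destination subnet filter.
$filter = '{"TracingFlags":11,"MaxPacketBufferSize":120,"MaxFileSize":200,' +
    '"Filters":[{"SourceSubnets":["10.1.0.0/24"],"DestinationSubnets":["10.1.1.0/24"],"TcpFlags":16}]}'
Start-AzVirtualNetworkGatewayPacketCapture -ResourceGroupName "TestRG1" `
    -Name "VNet1GW" -FilterData $filter
```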
vpn-gateway Vpn Gateway Vnet Vnet Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md
Previously updated : 07/11/2024 Last updated : 09/24/2024 # Configure a VNet-to-VNet VPN gateway connection using PowerShell
In this example, because the gateways are in the different subscriptions, we've
PS D:\> $vnet1gw.Name VNet1GW PS D:\> $vnet1gw.Id
- /subscriptions/b636ca99-6f88-4df4-a7c3-2f8dc4545509/resourceGroupsTestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW
+ /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroupsTestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW
``` 1. **[Subscription 5]** Get the virtual network gateway for Subscription 5. Sign in and connect to Subscription 5 before running the following example:
In this example, because the gateways are in the different subscriptions, we've
PS C:\> $vnet5gw.Name VNet5GW PS C:\> $vnet5gw.Id
- /subscriptions/66c8e4f1-ecd6-47ed-9de7-7e530de23994/resourceGroups/TestRG5/providers/Microsoft.Network/virtualNetworkGateways/VNet5GW
+ /subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/TestRG5/providers/Microsoft.Network/virtualNetworkGateways/VNet5GW
``` 1. **[Subscription 1]** Create the TestVNet1 to TestVNet5 connection. In this step, you create the connection from TestVNet1 to TestVNet5. The difference here is that $vnet5gw can't be obtained directly because it is in a different subscription. You'll need to create a new PowerShell object with the values communicated from Subscription 5 in the previous steps. Use the following example. Replace the Name, ID, and shared key with your own values. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
In this example, because the gateways are in the different subscriptions, we've
```azurepowershell-interactive $vnet5gw = New-Object -TypeName Microsoft.Azure.Commands.Network.Models.PSVirtualNetworkGateway $vnet5gw.Name = "VNet5GW"
- $vnet5gw.Id = "/subscriptions/66c8e4f1-ecd6-47ed-9de7-7e530de23994/resourceGroups/TestRG5/providers/Microsoft.Network/virtualNetworkGateways/VNet5GW"
+ $vnet5gw.Id = "/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/TestRG5/providers/Microsoft.Network/virtualNetworkGateways/VNet5GW"
$Connection15 = "VNet1toVNet5" New-AzVirtualNetworkGatewayConnection -Name $Connection15 -ResourceGroupName $RG1 -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet5gw -Location $Location1 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' ```
In this example, because the gateways are in the different subscriptions, we've
```azurepowershell-interactive $vnet1gw = New-Object -TypeName Microsoft.Azure.Commands.Network.Models.PSVirtualNetworkGateway $vnet1gw.Name = "VNet1GW"
- $vnet1gw.Id = "/subscriptions/b636ca99-6f88-4df4-a7c3-2f8dc4545509/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW "
+ $vnet1gw.Id = "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW "
$Connection51 = "VNet5toVNet1" New-AzVirtualNetworkGatewayConnection -Name $Connection51 -ResourceGroupName $RG5 -VirtualNetworkGateway1 $vnet5gw -VirtualNetworkGateway2 $vnet1gw -Location $Location5 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' ```
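After both connections exist, a quick status check from each subscription can confirm the tunnel. A sketch, using the variable names from the steps above:

```azurepowershell-interactive
# Sketch: check the connection status; it should eventually report "Connected".
Get-AzVirtualNetworkGatewayConnection -Name $Connection15 -ResourceGroupName $RG1 |
    Select-Object Name, ConnectionStatus
```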