Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
api-center | Register Apis Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis-github-actions.md | In the following steps, create a Microsoft Entra ID service principal, which will > [!NOTE] > Configuring a service principal is shown for demonstration purposes. The recommended way to authenticate with Azure for GitHub Actions is with OpenID Connect, an authentication method that uses short-lived tokens. Setting up OpenID Connect with GitHub Actions is more complex but offers hardened security. [Learn more](../app-service/deploy-github-actions.md?tabs=openid%2Caspnetcore#1-generate-deployment-credentials) -Create a service principal using the [az ad sp create-for-rbac](/cli/azure/ad#az-ad-sp-create-for-rbac) command. The following example first uses the [az apic show](/cli/azure/apic#az-apic-show) command to retrieve the resource ID of the API center. The service principal is then created with the Contributor role for the API center. +Create a service principal using the [az ad sp create-for-rbac](/cli/azure/ad#az-ad-sp-create-for-rbac) command. The following example first uses the [az apic show](/cli/azure/apic#az-apic-show) command to retrieve the resource ID of the API center. The service principal is then created with the Azure API Center Service Contributor role for the API center. #### [Bash](#tab/bash) spName=<service-principal-name> apicResourceId=$(az apic show --name $apiCenter --resource-group $resourceGroup --query "id" --output tsv) -az ad sp create-for-rbac --name $spName --role Contributor --scopes $apicResourceId --json-auth +az ad sp create-for-rbac --name $spName --role "Azure API Center Service Contributor" --scopes $apicResourceId --json-auth ``` #### [PowerShell](#tab/powershell) $spName = "<service-principal-name>" $apicResourceId = $(az apic show --name $apiCenter --resource-group $resourceGroup --query "id" --output tsv) -az ad sp create-for-rbac --name $spName --role Contributor --scopes $apicResourceId --json-auth +az ad sp create-for-rbac --name $spName --role "Azure API Center Service Contributor" --scopes $apicResourceId --json-auth ``` |
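The JSON that `--json-auth` emits is the credential object your GitHub Actions workflow consumes. As a minimal sketch of wiring the two together (assuming the GitHub CLI is installed and that your workflow's `azure/login` step reads a secret named `AZURE_CREDENTIALS`, a hypothetical name):

```bash
# Create the service principal and capture the JSON credential output.
az ad sp create-for-rbac --name $spName --role "Azure API Center Service Contributor" \
    --scopes $apicResourceId --json-auth > creds.json

# Store the credentials as a repository secret; AZURE_CREDENTIALS is a hypothetical
# secret name and must match whatever your workflow references.
gh secret set AZURE_CREDENTIALS --repo <owner>/<repo> < creds.json

# Remove the local copy so the credentials don't linger on disk.
rm creds.json
```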
api-management | Migrate Stv1 To Stv2 Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2-vnet.md | For a VNet-injected instance, you have the following migration options: * [**Option 2: Change to a new subnet**](#option-2-migrate-and-change-to-new-subnet) - Migrate your instance by specifying a different subnet in the same or a different VNet. After migration, optionally migrate back to the instance's original subnet. The migration process changes the VIP address(es) of the instance. After migration, you need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address(es). +Under certain, less frequent conditions, migration in the same subnet may not be possible or may behave differently. For more information, see [Special conditions and scenarios](#special-conditions-and-scenarios). + If you need to migrate a *non-VNet-injected* API Management instance hosted on the `stv1` platform, see [Migrate a non-VNet-injected API Management instance to the stv2 platform](migrate-stv1-to-stv2-no-vnet.md). [!INCLUDE [api-management-migration-alert](../../includes/api-management-migration-alert.md)] After you update the VNet configuration, the status of your API Management insta [!INCLUDE [api-management-migration-rollback](../../includes/api-management-migration-rollback.md)] +## Special conditions and scenarios ++Under certain conditions, [Option 1: Migrate and keep same subnet](#option-1-migrate-and-keep-same-subnet) may not be available or may behave differently. The portal detects these conditions and recommends the migration option(s). If you aren't able to use Option 1, or multiple conditions are present, use [Option 2: Change to a new subnet](#option-2-migrate-and-change-to-new-subnet). ++* **VNet with special internal conditions** - If your API Management instance is currently deployed in a VNet with special internal conditions (unrelated to customer configuration), you are notified in the portal that Option 1 (same-subnet migration) includes additional downtime (approximately 1 hour). Using the portal for migration is recommended. You can also use the following modified Azure CLI script for same-subnet migration with approximately 1 hour of downtime: ++ ```azurecli + APIM_NAME={name of your API Management instance} + # In PowerShell, use the following syntax: $APIM_NAME={name of your API Management instance} + RG_NAME={name of your resource group} + # Get resource ID of API Management instance + APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv) + # Call REST API to migrate to stv2 and preserve VIP address for special condition + az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2024-06-01-preview&migrateWithDowntime=true" --body '{"mode": "PreserveIP"}' + ``` ++* **Multiple stv1 instances in subnet** - Sufficient free IP addresses may not be available for a same-subnet migration if you attempt to migrate the instances simultaneously. You may be able to migrate instances sequentially using Option 1. ++* **Subnet delegation** - If the subnet where API Management is deployed is currently delegated to other Azure services, you must migrate using Option 2. ++* **Azure Key Vault blocked** - If access to Azure Key Vault is currently blocked, you must migrate using Option 2, including setting up NSG rules in the new subnet for access to Azure Key Vault. 
+ [!INCLUDE [api-management-migration-support](../../includes/api-management-migration-support.md)] ## Frequently asked questions |
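For the **Azure Key Vault blocked** condition above, a minimal sketch of the required outbound NSG rule for the new subnet might look like the following; the NSG name and priority are placeholders, and the `AzureKeyVault` service tag covers the Key Vault endpoints:

```azurecli
# Allow outbound HTTPS from the new API Management subnet to Azure Key Vault.
az network nsg rule create --resource-group $RG_NAME --nsg-name <new-subnet-nsg> \
    --name AllowKeyVaultOutbound --priority 200 --direction Outbound --access Allow \
    --protocol Tcp --destination-address-prefixes AzureKeyVault --destination-port-ranges 443
```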
app-service | Configure Language Java Apm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-apm.md | To enable via the Azure CLI, you need to create an Application Insights resource # [Linux](#tab/linux) ++> [!NOTE] +> The latest [New Relic documentation](https://docs.newrelic.com/install/java/?deployment=appServer&framework=jboss) lists JBoss EAP support up to 7.x. JBoss EAP 8.x is not yet supported. ++ 1. Create a NewRelic account at [NewRelic.com](https://newrelic.com/signup)-2. Download the Java agent from NewRelic. It has a file name similar to *newrelic-java-x.x.x.zip*. +2. [Download the Java agent from NewRelic](https://download.newrelic.com/newrelic/java-agent/newrelic-agent/current/newrelic-java.zip). 3. Copy your license key, you need it to configure the agent later. 4. [SSH into your App Service instance](configure-linux-open-ssh-session.md) and create a new directory */home/site/wwwroot/apm*. 5. Upload the unpacked NewRelic Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/newrelic*. 6. Modify the YAML file at */home/site/wwwroot/apm/newrelic/newrelic.yml* and replace the placeholder license value with your own license key. 7. In the Azure portal, browse to your application in App Service and create a new Application Setting. - ::: zone pivot="java-javase" + ::: zone pivot="java-javase,java-jboss" Create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`. To enable via the Azure CLI, you need to create an Application Insights resource ::: zone-end - ::: zone pivot="java-jboss" -- For **JBoss EAP**, `[TODO]`. -- ::: zone-end - # [Windows](#tab/windows) 1. Create a NewRelic account at [NewRelic.com](https://newrelic.com/signup) To enable via the Azure CLI, you need to create an Application Insights resource ::: zone-end - ::: zone pivot="java-jboss" + - For **JBoss EAP**, `[TODO]`. - ::: zone-end +> [!NOTE] +> If you already have an environment variable for `JAVA_OPTS`, append the `-javaagent:/...` option to the end of the current value. -+ > [!NOTE]-> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value. +> If you already have an environment variable for `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value. + ## Configure AppDynamics To enable via the Azure CLI, you need to create an Application Insights resource ::: zone pivot="java-jboss" - For **JBoss EAP**, `[TODO]`. + <!-- For **JBoss EAP**, `[TODO]`. --> ::: zone-end To enable via the Azure CLI, you need to create an Application Insights resource ::: zone-end - ::: zone pivot="java-jboss" -- For **JBoss EAP**, `[TODO]`. -- ::: zone-end - ## Configure Datadog # [Linux](#tab/linux)-* The configuration options are different depending on which Datadog site your organization is using. See the official [Datadog Integration for Azure Documentation](https://docs.datadoghq.com/integrations/azure/) +The configuration options are different depending on which Datadog site your organization is using. See the official [Datadog Integration for Azure Documentation](https://docs.datadoghq.com/integrations/azure/) # [Windows](#tab/windows)-* The configuration options are different depending on which Datadog site your organization is using. 
See the official [Datadog Integration for Azure Documentation](https://docs.datadoghq.com/integrations/azure/) +The configuration options are different depending on which Datadog site your organization is using. See the official [Datadog Integration for Azure Documentation](https://docs.datadoghq.com/integrations/azure/) ## Configure Dynatrace # [Linux](#tab/linux)-* Dynatrace provides an [Azure Native Dynatrace Service](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/). To monitor Azure App Services using Dynatrace, see the official [Dynatrace for Azure documentation](https://docs.datadoghq.com/integrations/azure/) +Dynatrace provides an [Azure Native Dynatrace Service](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/). To monitor Azure App Services using Dynatrace, see the official [Dynatrace for Azure documentation](https://docs.datadoghq.com/integrations/azure/) # [Windows](#tab/windows)-* Dynatrace provides an [Azure Native Dynatrace Service](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/). To monitor Azure App Services using Dynatrace, see the official [Dynatrace for Azure documentation](https://docs.datadoghq.com/integrations/azure/) +Dynatrace provides an [Azure Native Dynatrace Service](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/). To monitor Azure App Services using Dynatrace, see the official [Dynatrace for Azure documentation](https://docs.datadoghq.com/integrations/azure/) |
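If you'd rather script step 7 of the New Relic setup than use the portal, a minimal sketch with the Azure CLI (assuming the agent was uploaded to the path shown in the earlier steps):

```azurecli
# Set JAVA_OPTS so the JVM loads the New Relic agent at startup.
# If JAVA_OPTS already exists, append the -javaagent option to its current value instead.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
    --settings JAVA_OPTS="-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar"
```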
app-service | Configure Language Java Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-data-sources.md | For more information, see the [Spring Boot documentation on data access](https:/ ::: zone pivot="java-tomcat" > [!TIP]-> By default, the Linux Tomcat containers can automatically configure shared data sources for you in the Tomcat server. The only thing for you to do is add an app setting that contains a valid JDBC connection string to an Oracle, SQL Server, PostgreSQL, or MySQL database (including the connection credentials), and App Service automatically adds the cooresponding shared database to */usr/local/tomcat/conf/context.xml* for you, using an appropriate driver available in the container. For an end-to-end scenario using this approach, see [Tutorial: Build a Tomcat web app with Azure App Service on Linux and MySQL](tutorial-java-tomcat-mysql-app.md). +> By default, the Linux Tomcat containers can automatically configure shared data sources for you in the Tomcat server. The only thing for you to do is add an app setting that contains a valid JDBC connection string to an Oracle, SQL Server, PostgreSQL, or MySQL database (including the connection credentials), and App Service automatically adds the corresponding shared database to */usr/local/tomcat/conf/context.xml*, using an appropriate driver available in the container. For an end-to-end scenario using this approach, see [Tutorial: Build a Tomcat web app with Azure App Service on Linux and MySQL](tutorial-java-tomcat-mysql-app.md). These instructions apply to all database connections. You need to fill placeholders with your chosen database's driver class name and JAR file. Provided is a table with class names and driver downloads for common databases. az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar ::: zone pivot="java-jboss" -There are three core steps when [registering a data source with JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/datasource_management): uploading the JDBC driver, adding the JDBC driver as a module, and registering the module. App Service is a stateless hosting service, so the configuration commands for adding and registering the data source module must be scripted and applied as the container starts. +> [!TIP] +> By default, the Linux JBoss containers can automatically configure shared data sources for you in the JBoss server. The only thing for you to do is add an app setting that contains a valid JDBC connection string to an Oracle, SQL Server, PostgreSQL, or MySQL database (including the connection credentials), and App Service automatically adds the corresponding shared data source, using an appropriate driver available in the container. For an end-to-end scenario using this approach, see [Tutorial: Build a JBoss web app with Azure App Service on Linux and MySQL](tutorial-java-jboss-mysql-app.md). -1. Obtain your database's JDBC driver. -2. Create an XML module definition file for the JDBC driver. The following example shows a module definition for PostgreSQL. 
+There are three core steps when [registering a data source with JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/datasource_management): - ```xml - <?xml version="1.0" ?> - <module xmlns="urn:jboss:module:1.1" name="org.postgres"> - <resources> - <!-- ***** IMPORTANT : REPLACE THIS PLACEHOLDER *******--> - <resource-root path="/home/site/deployments/tools/postgresql-42.2.12.jar" /> - </resources> - <dependencies> - <module name="javax.api"/> - <module name="javax.transaction.api"/> - </dependencies> - </module> - ``` +1. Upload the JDBC driver. +1. Add the JDBC driver as a module. +1. Add a data source with the module. -1. Put your JBoss CLI commands into a file named `jboss-cli-commands.cli`. The JBoss commands must add the module and register it as a data source. The following example shows the JBoss CLI commands for PostgreSQL. +App Service is a stateless hosting service, so you must put these steps into a startup script and run it each time the JBoss container starts. Using PostgreSQL, MySQL, and SQL Database as examples: - ```bash - #!/usr/bin/env bash - module add --name=org.postgres --resources=/home/site/deployments/tools/postgresql-42.2.12.jar --module-xml=/home/site/deployments/tools/postgres-module.xml +# [PostgreSQL](#tab/postgresql) - /subsystem=datasources/jdbc-driver=postgres:add(driver-name="postgres",driver-module-name="org.postgres",driver-class-name=org.postgresql.Driver,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource) - data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=${POSTGRES_CONNECTION_URL,env.POSTGRES_CONNECTION_URL:jdbc:postgresql://db:5432/postgres} --user-name=${POSTGRES_SERVER_ADMIN_FULL_NAME,env.POSTGRES_SERVER_ADMIN_FULL_NAME:postgres} --password=${POSTGRES_SERVER_ADMIN_PASSWORD,env.POSTGRES_SERVER_ADMIN_PASSWORD:example} --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker - ``` +# [MySQL](#tab/mysql) -1. Create a startup script, `startup_script.sh` that calls the JBoss CLI commands. The following example shows how to call your `jboss-cli-commands.cli`. Later, you'll configure App Service to run this script when the container starts. - ```bash - $JBOSS_HOME/bin/jboss-cli.sh --connect --file=/home/site/deployments/tools/jboss-cli-commands.cli - ``` +# [SQL Database](#tab/sqldatabase) -1. Using an FTP client of your choice, upload your JDBC driver, `jboss-cli-commands.cli`, `startup_script.sh`, and the module definition to `/site/deployments/tools/`. -2. Configure your site to run `startup_script.sh` when the container starts. In the Azure portal, navigate to **Configuration** > **General Settings** > **Startup Command**. Set the startup command field to `/home/site/deployments/tools/startup_script.sh`. **Save** your changes. -To confirm that the datasource was added to the JBoss server, SSH into your webapp and run `$JBOSS_HOME/bin/jboss-cli.sh --connect`. Once you're connected to JBoss, run the `/subsystem=datasources:read-resource` to print a list of the data sources. + ::: zone-end |
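To trigger the automatic data source configuration described in the tip above, the only change needed is an app setting that holds a valid JDBC connection string. A sketch for PostgreSQL (the setting name `POSTGRES_URL` and all connection values are placeholders):

```azurecli
# App Service detects the JDBC URL and creates a shared data source whose JNDI name
# follows the java:jboss/env/jdbc/<app-setting-name>_DS pattern, here POSTGRES_URL_DS.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
    --settings POSTGRES_URL="jdbc:postgresql://<server>.postgres.database.azure.com:5432/<database>?user=<user>&password=<password>"
```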
app-service | Configure Language Java Deploy Run | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-deploy-run.md | All Java runtimes on App Service come with the Java Flight Recorder. You can use # [Linux](#tab/linux) -SSH into your App Service and run the `jcmd` command to see a list of all the Java processes running. In addition to jcmd itself, you should see your Java application running with a process ID number (pid). +SSH into your App Service and run the `jcmd` command to see a list of all the Java processes running. In addition to `jcmd` itself, you should see your Java application running with a process ID number (pid). ```shell 078990bbcd11:/home# jcmd To improve performance of Tomcat applications, you can compile your JSP files be ::: zone-end -> [!NOTE] -> - [!INCLUDE [robots933456](../../includes/app-service-web-configure-robots933456.md)] ## Choosing a Java runtime version App Service allows users to choose the major version of the JVM, such as Java 8 or Java 11, and the patch version, such as 1.8.0_232 or 11.0.5. You can also choose to have the patch version automatically updated as new minor versions become available. In most cases, production apps should use pinned patch JVM versions. This prevents unanticipated outages during a patch version autoupdate. All Java web apps use 64-bit JVMs, and it's not configurable. If you're using Tomcat, you can choose to pin the patch version of Tomcat. On Windows, you can pin the patch versions of the JVM and Tomcat independently. On Linux, you can pin the patch version of Tomcat; the patch version of the JVM is also pinned but isn't separately configurable. If you choose to pin the minor version, you need to periodically update the JVM ::: zone pivot="java-jboss" +## Run JBoss CLI ++In your JBoss app's SSH session, you can run the JBoss CLI with the following command: ++``` +$JBOSS_HOME/bin/jboss-cli.sh --connect +``` ++Depending on where JBoss is in the server lifecycle, you might not be able to connect. Wait a few minutes and try again. This approach is useful for quick checks of your current server state (for example, to see if a data source is properly configured). ++Also, changes you make to the server with JBoss CLI in the SSH session don't persist after the app restarts. Each time the app starts, the JBoss EAP server begins with a clean installation. During the [startup lifecycle](#jboss-server-lifecycle), App Service makes the necessary server configurations and deploys the app. To make any persistent changes in the JBoss server, use a [custom startup script or a startup command](#3-server-configuration-phase). For an end-to-end example, see [Configure data sources for a Tomcat, JBoss, or Java SE app in Azure App Service](configure-language-java-data-sources.md?pivots=java-jboss). ++Alternatively, you can manually configure App Service to run any file on startup. 
For example: ++```azurecli-interactive +az webapp config set --resource-group <group-name> --name <app-name> --startup-file /home/site/scripts/foo.sh +``` ++For more information about the CLI commands you can run, see: ++- [Red Hat JBoss EAP documentation](https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html-single/getting_started_with_red_hat_jboss_enterprise_application_platform/index#management-cli-overview_assembly-jboss-eap-management) +- [WildFly CLI Recipes](https://docs.jboss.org/author/display/WFLY/CLI%20Recipes.html) + ## Clustering -App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To enable clustering, your web app must be [integrated with a virtual network](overview-vnet-integration.md). When the web app is integrated with a virtual network, it restarts, and the JBoss EAP installation automatically starts up with a clustered configuration. The JBoss EAP instances communicate over the subnet specified in the virtual network integration, using the ports shown in the `WEBSITES_PRIVATE_PORTS` environment variable at runtime. You can disable clustering by creating an app setting named `WEBSITE_DISABLE_CLUSTERING` with any value. +App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To enable clustering, your web app must be [integrated with a virtual network](overview-vnet-integration.md). When the web app is integrated with a virtual network, it restarts, and the JBoss EAP installation automatically starts up with a clustered configuration. When you [run multiple instances with autoscaling](/azure/azure-monitor/autoscale/autoscale-get-started), the JBoss EAP instances communicate with each other over the subnet specified in the virtual network integration. You can disable clustering by creating an app setting named `WEBSITE_DISABLE_CLUSTERING` with any value. + > [!NOTE] > If you're enabling your virtual network integration with an ARM template, you need to manually set the property `vnetPrivatePorts` to a value of `2`. If you enable virtual network integration from the CLI or Portal, this property is set for you automatically. App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To ena When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups discovery protocol to discover new instances and persist the cluster information like the cluster members, their identifiers, and their IP addresses. On App Service, these files are under `/home/clusterinfo/`. The first EAP instance to start obtains read/write permissions on the cluster membership file. Other instances read the file, find the primary node, and coordinate with that node to be included in the cluster and added to the file. > [!Note]-> You can avoid JBOSS clustering timeouts by [cleaning up obsolete discovery files during your app startup](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/JBOSS/avoid_timeouts_obsolete_nodes.md) +> You can avoid JBoss clustering timeouts by [cleaning up obsolete discovery files during your app startup](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/JBOSS/avoid_timeouts_obsolete_nodes.md). The Premium V3 and Isolated V2 App Service Plan types can optionally be distributed across Availability Zones to improve resiliency and reliability for your business-critical workloads. This architecture is also known as [zone redundancy](../availability-zones/migrate-app-service.md). 
The JBoss EAP clustering feature is compatible with the zone redundancy feature. You don't need to incrementally add instances (scaling out); you can add multiple JBoss EAP is available in the following pricing tiers: **F1**, **P0v3**, **P1mv3**, **P2mv3**, **P3mv3**, **P4mv3**, and **P5mv3**. +## JBoss server lifecycle ++A JBoss EAP app in App Service goes through five distinct phases before actually launching the server. ++- [1. Environment setup phase](#1-environment-setup-phase) +- [2. Server launch phase](#2-server-launch-phase) +- [3. Server configuration phase](#3-server-configuration-phase) +- [4. App deployment phase](#4-app-deployment-phase) +- [5. Server reload phase](#5-server-reload-phase) ++See the respective sections below for details, as well as opportunities to customize them (such as through [app settings](configure-common.md)). ++### 1. Environment setup phase ++- The SSH service is started to enable [secure SSH sessions](configure-linux-open-ssh-session.md) with the container. +- The Keystore of the Java runtime is updated with any public and private certificates defined in Azure portal. + - Public certificates are provided by the platform in the */var/ssl/certs* directory, and they're loaded to *$JRE_HOME/lib/security/cacerts*. + - Private certificates are provided by the platform in the */var/ssl/private* directory, and they're loaded to *$JRE_HOME/lib/security/client.jks*. +- If any certificates are loaded in the Java keystore in this step, the properties `javax.net.ssl.keyStore`, `javax.net.ssl.keyStorePassword`, and `javax.net.ssl.keyStoreType` are added to the `JAVA_TOOL_OPTIONS` environment variable. +- Some initial JVM configuration is determined, such as logging directories and Java memory heap parameters: + - If you provide the `-Xms` or `-Xmx` flags for memory in the app setting `JAVA_OPTS`, these values override the ones provided by the platform. + - If you configure the app setting `WEBSITES_CONTAINER_STOP_TIME_LIMIT`, the value is passed to the runtime property `org.wildfly.sigterm.suspend.timeout`, which controls the maximum shutdown wait time (in seconds) when JBoss is being stopped. +- If the app is integrated with a virtual network, the App Service runtime passes a list of ports to be used for inter-server communication in the environment variable `WEBSITE_PRIVATE_PORTS` and launches JBoss using the `clustering` configuration. Otherwise, the `standalone` configuration is used. + - For the `clustering` configuration, the server configuration file *standalone-azure-full-ha.xml* is used. + - For the `standalone` configuration, the server configuration file *standalone-full.xml* is used. ++### 2. Server launch phase ++- If JBoss is launched in the `clustering` configuration: + - Each JBoss instance receives an internal identifier between 0 and the number of instances that the app is scaled out to. + - If some files are found in the transaction store path for this server instance (by using its internal identifier), it means this server instance is taking the place of an identical service instance that crashed previously and left uncommitted transactions behind. The server is configured to resume the work on these transactions. +- Regardless of whether JBoss starts in the `clustering` or `standalone` configuration, if the server version is 7.4 or above and the runtime uses Java 17, the configuration is updated to enable the Elytron subsystem for security. +- If you configure the app setting `WEBSITE_JBOSS_OPTS`, the value is passed to the JBoss launcher script. 
This setting can be used to provide paths to property files and other flags that influence the startup of JBoss. ++### 3. Server configuration phase ++- At the start of this phase, App Service first waits for both the JBoss server and the admin interface to be ready to receive requests before continuing. This can take a few more seconds if Application Insights is enabled. +- When both the JBoss server and the admin interface are ready, App Service does the following: + - Adds the JBoss module `azure.appservice`, which provides utility classes for logging and integration with App Service. + - Updates the console logger to use a colorless mode so that log files aren't full of color escape sequences. + - Sets up the integration with Azure Monitor logs. + - Updates the binding IP addresses of the WSDL and management interfaces. + - Adds the JBoss module `azure.appservice.easyauth` for integration with [App Service authentication](overview-authentication-authorization.md) and Microsoft Entra ID. + - Updates the logging configuration of access logs and the name and rotation of the main server log file. +- Unless the app setting `WEBSITE_SKIP_AUTOCONFIGURE_DATABASE` is defined, App Service autodetects JDBC URLs in the App Service app settings. If valid JDBC URLs exist for PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, or Azure SQL Database, it adds the corresponding driver(s) to the server, adds a data source for each JDBC URL, and sets the JNDI name for each data source to `java:jboss/env/jdbc/<app-setting-name>_DS`, where `<app-setting-name>` is the name of the app setting. +- If the `clustering` configuration is enabled, the console logger configuration is checked. +- If there are JAR files deployed to the */home/site/libs* directory, a new global module is created with all of these JAR files. +- At the end of the phase, App Service runs the custom startup script, if one exists. The search logic for the custom startup script is as follows: + - If you configured a startup command (in the Azure portal, with Azure CLI, etc.), run it; otherwise, + - If the path */home/site/scripts/startup.sh* exists, use it; otherwise, + - If the path */home/startup.sh* exists, use it. ++The custom startup command or script runs as the root user (no need for `sudo`), so it can install Linux packages or launch the JBoss CLI to perform more JBoss install/customization commands (creating datasources, installing resource adapters), etc. For information on Ubuntu package management commands, see the [Ubuntu Server documentation](https://documentation.ubuntu.com/server/how-to/software/package-management/). For JBoss CLI commands, see the [JBoss Management CLI Guide](https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html-single/management_cli_guide/index#how_to_cli). ++### 4. App deployment phase ++The startup script deploys apps to JBoss by looking in the following locations, in order of precedence: ++- If you configured the app setting `WEBSITE_JAVA_WAR_FILE_NAME`, deploy the file designated by it. +- If */home/site/wwwroot/app.war* exists, deploy it. +- If any other EAR and WAR files exist in */home/site/wwwroot*, deploy them. +- If */home/site/wwwroot/webapps* exists, deploy the files and directories in it. WAR files are deployed as applications themselves, and directories are deployed as "exploded" (uncompressed) web apps. +- If any standalone JSP pages exist in */home/site/wwwroot*, copy them to the web server root and deploy them as one web app. 
+- If no deployable files are found yet, deploy the default welcome page (parking page) in the root context. ++### 5. Server reload phase ++- Once the deployment steps are complete, the JBoss server is reloaded to apply any changes that require a server reload. +- After the server reloads, the application(s) deployed to the JBoss EAP server should be ready to respond to requests. +- The server runs until the App Service app is stopped or restarted. You can manually stop or restart the App Service app, or trigger a restart by deploying files or making configuration changes to the App Service app. +- If the JBoss server exits abnormally in the `clustering` configuration, a final function called `emit_alert_tx_store_not_empty` is executed. The function checks if the JBoss process left a nonempty transaction store file on disk; if so, an error is logged in the console: `Error: finishing server with non-empty store for node XXXX`. When a new server instance is started, it looks for these nonempty transaction store files to resume the work (see [2. Server launch phase](#2-server-launch-phase)). + ::: zone-end ::: zone pivot="java-tomcat" |
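Several of the lifecycle hooks described above are driven by app settings. As a sketch (the values shown are hypothetical), the shutdown wait time and extra launcher flags can be set together:

```azurecli
# WEBSITES_CONTAINER_STOP_TIME_LIMIT feeds org.wildfly.sigterm.suspend.timeout (seconds);
# WEBSITE_JBOSS_OPTS is passed to the JBoss launcher script. The properties file path is hypothetical.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
    --settings WEBSITES_CONTAINER_STOP_TIME_LIMIT=120 \
    WEBSITE_JBOSS_OPTS="--properties=/home/site/config/jboss.properties"
```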
azure-app-configuration | Howto Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md | This feature isn't yet supported in the Azure App Configuration Java Spring Prov ### [Kubernetes](#tab/kubernetes) -This feature isn't yet supported in the Azure App Configuration Kubernetes Provider. +Update the `AzureAppConfigurationProvider` resource of your Azure App Configuration Kubernetes Provider. Add a `loadBalancingEnabled` property and set it to `true`. ++``` yaml +apiVersion: azconfig.io/v1 +kind: AzureAppConfigurationProvider +metadata: + name: appconfigurationprovider-sample +spec: + endpoint: <your-app-configuration-store-endpoint> + loadBalancingEnabled: true + target: + configMapName: configmap-created-by-appconfig-provider +``` ++> [!NOTE] +> Load balancing support is available if you use version **2.1.0** or later of [Azure App Configuration Kubernetes Provider](./quickstart-azure-kubernetes-service.md). ### [Python](#tab/python) |
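To roll out the change, apply the updated manifest to your cluster, assuming the resource above is saved as *appconfigurationprovider.yaml*:

```bash
# Reapply the provider resource so the Kubernetes Provider picks up loadBalancingEnabled.
kubectl apply -f appconfigurationprovider.yaml
```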
azure-boost | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-boost/overview.md | Azure Boost contains several features that can improve the performance and secur - **Networking:** Azure Boost includes a suite of software and hardware networking systems that provide a significant boost to both network performance (Up to 200-Gbps network bandwidth) and network security. Azure Boost compatible virtual machine hosts contain the new [Microsoft Azure Network Adapter (MANA)](../../articles/virtual-network/accelerated-networking-mana-overview.md). Learn more about [Azure Boost networking](../../articles/azure-boost/overview.md#networking). -- **Storage:** Storage operations are offloaded to the Azure Boost FPGA. This offload provides leading efficiency and performance while improving security, reducing jitter, and improving latency for workloads. Local storage now runs at up to 17.3-GBps and 3.8 million IOPS with remote storage up to 12.5-GBps throughput and 650 K IOPS. Learn more about [Azure Boost Storage](../../articles/azure-boost/overview.md#storage).+- **Storage:** Storage operations are offloaded to the Azure Boost FPGA. This offload provides leading efficiency and performance while improving security, reducing jitter, and improving latency for workloads. Local storage now runs at up to 26-GBps and 6.6 million IOPS with remote storage up to 14-GBps throughput and 750K IOPS. Learn more about [Azure Boost Storage](../../articles/azure-boost/overview.md#storage). - **Security:** Azure Boost uses [Cerberus](../security/fundamentals/project-cerberus.md) as an independent HW Root of Trust to achieve NIST 800-193 certification. Customer workloads can't run on Azure Boost powered architecture unless the firmware and software running on the system is trusted. Learn more about [Azure Boost Security](../../articles/azure-boost/overview.md#security). Consistent updates and performance enhancements ensure you're always a step ahead. ## Storage Azure Boost architecture offloads storage covering local, remote and cached disks that provide leading efficiency and performance while improving security, reducing jitter & improving latency for workloads. Azure Boost already provides acceleration for workloads in the fleet using remote storage including specialized workloads such as the Ebsv5 VM types. Also, these improvements provide potential cost savings for customers by consolidating existing workloads into fewer or smaller sized VMs. -Azure Boost delivers industry leading throughput performance at up to 12.5-GBps throughput and 650K IOPS. This performance is enabled by accelerated storage processing and exposing NVMe disk interfaces to VMs. Storage tasks are offloaded from the host processor to dedicated programmable Azure Boost hardware in our dynamically programmable FPGA. This architecture allows us to update the FPGA hardware in the fleet enabling continuous delivery for our customers. +Azure Boost delivers industry leading throughput performance at up to 14-GBps throughput and 750K IOPS. This performance is enabled by accelerated storage processing and exposing NVMe disk interfaces to VMs. Storage tasks are offloaded from the host processor to dedicated programmable Azure Boost hardware in our dynamically programmable FPGA. This architecture allows us to update the FPGA hardware in the fleet enabling continuous delivery for our customers. 
:::image type="content" source="./media/boost-storage-nvme-vs-scsi.png" alt-text="Diagram showing the difference between managed SCSI storage and Azure Boost's managed NVMe storage."::: -By fully applying Azure Boost architecture, we deliver remote, local, and cached disk performance improvements at up to 17-GBps throughput and 3.8M IOPS. Azure Boost SSDs are designed to provide high performance optimized encryption at rest, and minimal jitter to NVMe local disks for Azure VMs with local disks. +By fully applying Azure Boost architecture, we deliver remote, local, and cached disk performance improvements at up to 26-GBps throughput and 6.6M IOPS. Azure Boost SSDs are designed to provide high performance optimized encryption at rest, and minimal jitter to NVMe local disks for Azure VMs with local disks. :::image type="content" source="./media/boost-storage-ssd-comparison.png" alt-text="Diagram showing the difference between local SCSI SSDs and Azure Boost's local NVMe SSDs."::: |
azure-cache-for-redis | Cache How To Active Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md | description: Learn how to replicate your Azure Cache for Redis Enterprise instan Previously updated : 03/23/2023 Last updated : 11/11/2024 There are a few restrictions when using active geo replication: - You can't add an existing (that is, running) cache to a geo-replication group. You can only add a cache to a geo-replication group when you create the cache. - All caches within a geo-replication group must have the same configuration. For example, all caches must have the same SKU, capacity, eviction policy, clustering policy, modules, and TLS setting. - You can't use the `FLUSHALL` and `FLUSHDB` Redis commands when using active geo-replication. Prohibiting the commands prevents unintended deletion of data. Use the [flush operation](#flush-operation) from the portal instead.-- The E1 SKU does not support active geo-replication.+- The E1 SKU doesn't support active geo-replication. ## Create or join an active geo-replication group There are a few restrictions when using active geo replication: To remove a cache instance from an active geo-replication group, you just delete the instance. The remaining instances then reconfigure themselves automatically. -## Force-unlink if there's a region outage +## Force unlink if there's a region outage -In case one of the caches in your replication group is unavailable due to region outage, you can forcefully remove the unavailable cache from the replication group. +In case one of the caches in your replication group is unavailable due to region outage, you can forcefully remove the unavailable cache from the replication group. After you apply **Force-unlink** to a cache, you can't sync any data that is written to that cache back to the replication group after force-unlinking. You should remove the unavailable cache because the remaining caches in the replication group start storing the metadata that hasn't been shared to the unavailable cache. When this happens, the available caches in your replication group might run out of memory. Use the Azure CLI to create a new cache and geo-replication group, or to add a n This example creates a new Azure Cache for Redis Enterprise E10 cache instance called _Cache1_ in the East US region. Then, the cache is added to a new active geo-replication group called _replicationGroup_: ```azurecli-interactive-az redisenterprise create --location "East US" --cluster-name "Cache1" --sku "Enterprise_E10" --resource-group "myResourceGroup" --group-nickname "replicationGroup" --linked-databases id="/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default" ``` To configure active geo-replication properly, the ID of the cache instance being created must be added with the `--linked-databases` parameter. The ID is in the format: To configure active geo-replication properly, the ID of the cache instance being This example creates a new Enterprise E10 cache instance called _Cache2_ in the West US region. 
Then, the script adds the cache to the `replicationGroup` active geo-replication group created in a previous procedure. This way, it's linked in an active-active configuration with _Cache1_. ```azurecli-interactive-az redisenterprise create --location "West US" --cluster-name "Cache2" --sku "Enterprise_E10" --resource-group "myResourceGroup" --group-nickname "replicationGroup" --linked-databases id="/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default" --linked-databases id="/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache2/databases/default" ``` As before, you need to list both _Cache1_ and _Cache2_ using the `--linked-databases` parameter. Use Azure PowerShell to create a new cache and geo-replication group, or to add a n This example creates a new Azure Cache for Redis Enterprise E10 cache instance called _Cache1_ in the East US region. Then, the cache is added to a new active geo-replication group called _replicationGroup_: ```powershell-interactive-New-AzRedisEnterpriseCache -Name "Cache1" -ResourceGroupName "myResourceGroup" -Location "East US" -Sku "Enterprise_E10" -GroupNickname "replicationGroup" -LinkedDatabase '{id:"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default"}' ``` To configure active geo-replication properly, the ID of the cache instance being created must be added with the `-LinkedDatabase` parameter. The ID is in the format: To configure active geo-replication properly, the ID of the cache instance being #### Create new Enterprise instance in an existing geo-replication group using PowerShell -This example creates a new Enterprise E10 cache instance called _Cache2_ in the West US region. Then, the script adds the cache to the "replicationGroup" active geo-replication group created in the previous procedure. the links the two caches, _Cache1_ and _Cache2_, in an active-active configuration. +This example creates a new Enterprise E10 cache instance called _Cache2_ in the West US region. Then, the script adds the cache to the _replicationGroup_ active geo-replication group created in the previous procedure. After running the command, the two caches, _Cache1_ and _Cache2_, are linked in an active-active configuration. 
```powershell-interactive-New-AzRedisEnterpriseCache -Name "Cache2" -ResourceGroupName "myResourceGroup" -Location "West US" -Sku "Enterprise_E10" -GroupNickname "replicationGroup" -LinkedDatabase '{id:"/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default"}', '{id:"/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache2/databases/default"}' +New-AzRedisEnterpriseCache -Name "Cache2" -ResourceGroupName "myResourceGroup" -Location "West US" -Sku "Enterprise_E10" -GroupNickname "replicationGroup" -LinkedDatabase '{id:"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default"}', '{id:"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache2/databases/default"}' ``` As before, you need to list both _Cache1_ and _Cache2_ using the `-LinkedDatabase` parameter. ## Scaling instances in a geo-replication group-It is possible to scale instances that are configured to use active geo-replication. However, a geo-replication group with a mix of different cache sizes can introduce problems. To prevent these issues from occurring, all caches in a geo replication group need to be the same size and capacity. -Since it is difficult to simultaneously scale all instances in the geo-replication group, Azure Cache for Redis has a locking mechanism. If you scale one instance in a geo-replication group, the underlying VM will be scaled, but the memory available will be capped at the original size until the other instances are scaled up as well. And any other scaling operations for the remaining instances are locked until they match the same configuration as the first cache to be scaled. +It's possible to scale instances that are configured to use active geo-replication. However, a geo-replication group with a mix of different cache sizes can introduce problems. To prevent these issues from occurring, all caches in a geo replication group need to be the same size and capacity. ++Because it's difficult to simultaneously scale all instances in the geo-replication group, Azure Cache for Redis has a locking mechanism. If you scale one instance in a geo-replication group, the underlying VM is scaled, but the memory available is capped at the original size until the other instances are scaled up as well. Any other scaling operations for the remaining instances are locked until they match the same configuration as the first cache to be scaled. ### Scaling example-For example, you may have three instances in your geo-replication group, all Enterprise E10 instances: ++For example, you might have three instances in your geo-replication group, all Enterprise E10 instances: | Instance Name | Redis00 | Redis01 | Redis02 | |--|:--:|:--:|:--:| Let's say you want to scale up each instance in this geo-replication group to an |--|:--:|:--:|:--:| | Type | Enterprise E20 | Enterprise E10 | Enterprise E10 | -At this point, the `Redis01` and `Redis02` instances can only scale up to an Enterprise E20 instance. All other scaling operations are blocked. +At this point, the `Redis01` and `Redis02` instances can only scale up to an Enterprise E20 instance. All other scaling operations are blocked. >[!NOTE] > The `Redis00` instance is not blocked from scaling further at this point. 
But it will be blocked once either `Redis01` or `Redis02` is scaled to be an Enterprise E20. > -Once each instance has been scaled to the same tier and size, all scaling locks are removed: +Once each instance is scaled to the same tier and size, all scaling locks are removed: | Instance Name | Redis00 | Redis01 | Redis02 | |--|:--:|:--:|:--:| |
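When you create or scale members of a geo-replication group, it can help to confirm group membership before continuing. A quick check with the Azure CLI, assuming the `redisenterprise` extension is installed (the `geoReplication` output is expected to list the group nickname and linked databases):

```azurecli
# Show the geo-replication configuration of the default database on Cache1.
az redisenterprise database show --cluster-name "Cache1" --resource-group "myResourceGroup" \
    --query "geoReplication"
```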
azure-cache-for-redis | Cache Monitor Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-monitor-diagnostic-settings.md | PUT https://management.azure.com/{resourceUri}/providers/Microsoft.Insights/diag ```json { "properties": {- "storageAccountId": "/subscriptions/df602c9c-7aa0-407d-a6fb-eb20c8bd1192/resourceGroups/apptest/providers/Microsoft.Storage/storageAccounts/appteststorage1", - "eventHubAuthorizationRuleId": "/subscriptions/1a66ce04-b633-4a0b-b2bc-a912ec8986a6/resourceGroups/montest/providers/microsoft.eventhub/namespaces/mynamespace/eventhubs/myeventhub/authorizationrules/myrule", + "storageAccountId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/apptest/providers/Microsoft.Storage/storageAccounts/appteststorage1", + "eventHubAuthorizationRuleId": "/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/montest/providers/microsoft.eventhub/namespaces/mynamespace/eventhubs/myeventhub/authorizationrules/myrule", "eventHubName": "myeventhub",- "workspaceId": "/subscriptions/4b9e8510-67ab-4e9a-95a9-e2f1e570ea9c/resourceGroups/insights-integration/providers/Microsoft.OperationalInsights/workspaces/myworkspace", + "workspaceId": "/subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/insights-integration/providers/Microsoft.OperationalInsights/workspaces/myworkspace", "logs": [ { "category": "ConnectedClientList", PUT https://management.azure.com/{resourceUri}/providers/Microsoft.Insights/diag ```json { "properties": {- "storageAccountId": "/subscriptions/df602c9c-7aa0-407d-a6fb-eb20c8bd1192/resourceGroups/apptest/providers/Microsoft.Storage/storageAccounts/myteststorage", - "eventHubAuthorizationRuleID": "/subscriptions/1a66ce04-b633-4a0b-b2bc-a912ec8986a6/resourceGroups/montest/providers/microsoft.eventhub/namespaces/mynamespace/authorizationrules/myrule", + "storageAccountId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/apptest/providers/Microsoft.Storage/storageAccounts/myteststorage", + "eventHubAuthorizationRuleID": "/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/montest/providers/microsoft.eventhub/namespaces/mynamespace/authorizationrules/myrule", "eventHubName": "myeventhub",- "marketplacePartnerId": "/subscriptions/abcdeabc-1234-1234-ab12-123a1234567a/resourceGroups/test-rg/providers/Microsoft.Datadog/monitors/mydatadog", - "workspaceId": "/subscriptions/4b9e8510-67ab-4e9a-95a9-e2f1e570ea9c/resourceGroups/insights integration/providers/Microsoft.OperationalInsights/workspaces/myworkspace", + "marketplacePartnerId": "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/test-rg/providers/Microsoft.Datadog/monitors/mydatadog", + "workspaceId": "/subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/insights integration/providers/Microsoft.OperationalInsights/workspaces/myworkspace", "logs": [ { "category": "ConnectionEvents", If you send your logs to a storage account, the contents of the logs look like t ], "roleInstance": "1" },- "resourceId": "/SUBSCRIPTIONS/E6761CE7-A7BC-442E-BBAE-950A121933B5/RESOURCEGROUPS/AZURE-CACHE/PROVIDERS/MICROSOFT.CACHE/REDIS/MYCACHE", + "resourceId": "/SUBSCRIPTIONS/eeee4efe-ff5f-aa6a-bb7b-cccccc8c8c8c/RESOURCEGROUPS/AZURE-CACHE/PROVIDERS/MICROSOFT.CACHE/REDIS/MYCACHE", "Level": 4, "operationName": "Microsoft.Cache/ClientList" } If you send your logs to a storage account, a log for a connection event looks l ```json { "time": "2023-01-24T10:00:02.3680050Z",- 
"resourceId": "/SUBSCRIPTIONS/4A1C78C6-5CB1-422C-A34E-0DF7FCB9BD0B/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", + "resourceId": "/SUBSCRIPTIONS/ffff5f5f-aa6a-bb7b-cc8c-dddddd9d9d9d/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", "category": "ConnectionEvents", "location": "westus", "operationName": "Microsoft.Cache/redisEnterprise/databases/ConnectionEvents/Read", And the log for an auth event looks like this: ```json { "time": "2023-01-24T10:00:02.3680050Z",- "resourceId": "/SUBSCRIPTIONS/4A1C78C6-5CB1-422C-A34E-0DF7FCB9BD0B/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", + "resourceId": "/SUBSCRIPTIONS/ffff5f5f-aa6a-bb7b-cc8c-dddddd9d9d9d/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", "category": "ConnectionEvents", "location": "westus", "operationName": "Microsoft.Cache/redisEnterprise/databases/ConnectionEvents/Read", And the log for a disconnection event looks like this: ```json { "time": "2023-01-24T10:00:03.3680050Z",- "resourceId": "/SUBSCRIPTIONS/4A1C78C6-5CB1-422C-A34E-0DF7FCB9BD0B/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", + "resourceId": "/SUBSCRIPTIONS/ffff5f5f-aa6a-bb7b-cc8c-dddddd9d9d9d/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", "category": "ConnectionEvents", "location": "westus", "operationName": "Microsoft.Cache/redisEnterprise/databases/ConnectionEvents/Read", |
azure-cache-for-redis | Cache Tutorial Vector Similarity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-vector-similarity.md | Next, you'll read the csv file into a pandas DataFrame. :::image type="content" source="media/cache-tutorial-vector-similarity/code-cell-3.png" alt-text="Screenshot of results from executing code cell 3, displaying eight columns and a sampling of 10 rows of data." lightbox="media/cache-tutorial-vector-similarity/code-cell-3.png"::: -1. Next, process the data by adding an `id` index, removing spaces from the column titles, and filters the movies to take only movies made after 1970 and from English speaking countries. This filtering step reduces the number of movies in the dataset, which lowers the cost and time required to generate embeddings. You're free to change or remove the filter parameters based on your preferences. +1. Next, process the data by adding an `id` index, removing spaces from the column titles, and filtering the movies to include only movies made after 1970 and from English-speaking countries or regions. This filtering step reduces the number of movies in the dataset, which lowers the cost and time required to generate embeddings. You're free to change or remove the filter parameters based on your preferences. To filter the data, add the following code to a new code cell: |
azure-functions | Dedicated Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dedicated-plan.md | description: Learn about the benefits of running Azure Functions on a dedicated - build-2024 Previously updated : 01/26/2023 Last updated : 10/16/2024 # Dedicated hosting plans for Azure Functions You pay for function apps in an App Service Plan as you would for other App Serv ## <a name="always-on"></a> Always On -If you run on an App Service plan, you should enable the **Always on** setting so that your function app runs correctly. On an App Service plan, the functions runtime goes idle after a few minutes of inactivity, so only HTTP triggers will "wake up" your functions. The **Always on** setting is available only on an App Service plan. On a Consumption plan, the platform activates function apps automatically. +When you run your app on an App Service plan, you should enable the **Always on** setting so that your function app runs correctly. On an App Service plan, the Functions runtime goes idle after a few minutes of inactivity. The **Always on** setting is available only on an App Service plan. In other plans, the platform activates function apps automatically. If you choose not to enable **Always on**, you can reactivate an idled app in these ways: -Even with Always On enabled, the execution timeout for individual functions is controlled by the `functionTimeout` setting in the [host.json](functions-host-json.md#functiontimeout) project file. ++ Send a request to an HTTP trigger endpoint or any other endpoint on the app. Even a failed request should wake up your app. ++ Access your app in the [Azure portal](https://portal.azure.com). ++Even with **Always on** enabled, the execution timeout for individual functions is controlled by the `functionTimeout` setting in the [host.json](functions-host-json.md#functiontimeout) project file. ## Scaling |
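If you script your deployments, enabling the setting is a one-liner with the Azure CLI:

```azurecli
# Enable Always on so the Functions runtime doesn't idle out on an App Service plan.
az functionapp config set --resource-group <group-name> --name <app-name> --always-on true
```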
azure-functions | Deployment Zip Push | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/deployment-zip-push.md | Last updated 08/12/2018 This article describes how to deploy your function app project files to Azure from a .zip (compressed) file. You learn how to do a push deployment, both by using Azure CLI and by using the REST APIs. [Azure Functions Core Tools](functions-run-local.md) also uses these deployment APIs when publishing a local project to Azure. -Zip deployment is also an easy way to run your functions from the deployment package. To learn more, see [Run your functions from a package file in Azure](run-functions-from-deployment-package.md). +Zip deployment is also an easy way to [run your functions from a package file in Azure](run-functions-from-deployment-package.md). It is the default deployment technology in the [Consumption](./consumption-plan.md), [Elastic Premium](./functions-premium-plan.md), and [Dedicated (App Service)](./dedicated-plan.md) hosting plans. The [Flex Consumption](./flex-consumption-plan.md) plan does not support zip deployment. Azure Functions has the full range of continuous deployment and integration options that are provided by Azure App Service. For more information, see [Continuous deployment for Azure Functions](functions-continuous-deployment.md). |
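For reference, a typical zip push deployment from the Azure CLI looks like the following sketch; substitute your own resource group, app name, and package path:

```azurecli
# Push a .zip package to the function app; by default the app then runs from this package.
az functionapp deployment source config-zip --resource-group <group-name> --name <app-name> \
    --src <zip-file-path>
```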
azure-functions | Dotnet Isolated In Process Differences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md | Use the following table to compare feature and functional differences between th | Cold start times<sup>2</sup> | [Configurable optimizations](./dotnet-isolated-process-guide.md#performance-optimizations) | Optimized | | ReadyToRun | [Supported](dotnet-isolated-process-guide.md#readytorun) | [Supported](functions-dotnet-class-library.md#readytorun) | | [Flex Consumption] | [Supported](./flex-consumption-plan.md#supported-language-stack-versions) | Not supported |+| .NET Aspire | [Preview](dotnet-isolated-process-guide.md#net-aspire-preview) | Not supported | <sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models. |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | var builder = FunctionsApplication.CreateBuilder(args); builder.Services .AddApplicationInsightsTelemetryWorkerService() .ConfigureFunctionsApplicationInsights()- .AddSingleton<IHttpResponderService, DefaultHttpResponderService>() - .Configure<LoggerFilterOptions>(options => + .AddSingleton<IHttpResponderService, DefaultHttpResponderService>(); ++builder.Logging.Services.Configure<LoggerFilterOptions>(options => + { + // The Application Insights SDK adds a default logging filter that instructs ILogger to capture only Warning and more severe logs. Application Insights requires an explicit override. + // Log levels can also be configured using appsettings.json. For more information, see https://learn.microsoft.com/azure/azure-monitor/app/worker-service#ilogger-logs + LoggerFilterRule defaultRule = options.Rules.FirstOrDefault(rule => rule.ProviderName + == "Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider"); + if (defaultRule is not null) {- // The Application Insights SDK adds a default logging filter that instructs ILogger to capture only Warning and more severe logs. Application Insights requires an explicit override. - // Log levels can also be configured using appsettings.json. For more information, see https://learn.microsoft.com/azure/azure-monitor/app/worker-service#ilogger-logs - LoggerFilterRule toRemove = options.Rules.FirstOrDefault(rule => rule.ProviderName - == "Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider"); - - if (toRemove is not null) - { - options.Rules.Remove(toRemove); - } - }); + options.Rules.Remove(defaultRule); + } + }); var host = builder.Build(); ``` When you use an `IHostApplicationBuilder`, by default, exceptions thrown by your ### Application Insights -You can configure your isolated process application to emit logs directly to [Application Insights](/azure/azure-monitor/app/app-insights-overview?tabs=net). This behavior replaces the default behavior of [relaying logs through the host](./configure-monitoring.md#custom-application-logs), and is recommended because it gives you control over how those logs are emitted. +You can configure your isolated process application to emit logs directly to [Application Insights](/azure/azure-monitor/app/app-insights-overview?tabs=net). This behavior replaces the default behavior of [relaying logs through the host](./configure-monitoring.md#custom-application-logs). Unless you are using .NET Aspire, configuring direct Application Insights integration is recommended because it gives you control over how those logs are emitted. ++Application Insights integration is not enabled by default in all setup experiences. Some templates will create Functions projects with the necessary packages and startup code commented out. If you want to use Application Insights integration, you can uncomment these lines in `Program.cs` and the project's `.csproj` file. The instructions in the rest of this section also describe how to enable the integration. ++If your project is part of a [.NET Aspire orchestration](#net-aspire-preview), it uses OpenTelemetry for monitoring instead. You should not enable direct Application Insights integration within .NET Aspire projects. 
Instead, configure the Azure Monitor OpenTelemetry exporter as part of the [service defaults project](/dotnet/aspire/fundamentals/service-defaults#opentelemetry-configuration). If your Functions project uses Application Insights integration in a .NET Aspire context, the application will error on startup. #### Install packages There are a few requirements for running .NET functions in the isolated worker m When you create your function app in Azure using the methods in the previous section, these required settings are added for you. When you create these resources [by using ARM templates or Bicep files for automation](functions-infrastructure-as-code.md), you must make sure to set them in the template. +## .NET Aspire (Preview) ++[.NET Aspire](/dotnet/aspire/get-started/aspire-overview) is an opinionated stack that simplifies development of distributed applications in the cloud. You can enlist .NET 8 and .NET 9 isolated worker model projects in Aspire 9.0 orchestrations using preview support. This section outlines the core requirements for enlistment. ++This integration requires specific setup: ++- Use [Aspire 9.0 or later](/dotnet/aspire/fundamentals/setup-tooling) and the [.NET 9 SDK](https://dotnet.microsoft.com/download/dotnet/9.0). Aspire 9.0 supports the .NET 8 and .NET 9 frameworks. +- If you use Visual Studio, update to version 17.12 or later. You must also have the latest version of the Functions tools for Visual Studio. To check for updates, navigate to **Tools** > **Options** and choose **Azure Functions** under **Projects and Solutions**. Select **Check for updates** and install updates as prompted. +- In the [Aspire app host project](/dotnet/aspire/fundamentals/app-host-overview): + - You must reference [Aspire.Hosting.Azure.Functions] (a command-line sketch follows this list). + - You must have a project reference to your Functions project. + - In the app host's `Program.cs`, you must also include the project by calling `AddAzureFunctionsProject<TProject>()` on your `IDistributedApplicationBuilder`. This method is used instead of the `AddProject<TProject>()` that you use for other project types. If you just use `AddProject<TProject>()`, the Functions project will not start properly. +- In the Functions project: + - You must reference the [2.x versions](#version-2x-preview) of [Microsoft.Azure.Functions.Worker] and [Microsoft.Azure.Functions.Worker.Sdk]. You must also update any references you have to `Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore` to the 2.x version. + - Your `Program.cs` should use the `IHostApplicationBuilder` version of [host instance startup](#start-up-and-configuration). + - If you want to use your Aspire service defaults, you should include a project reference to the service defaults project. Before building your `IHostApplicationBuilder` in `Program.cs`, you should also include a call to `builder.AddServiceDefaults()`. + - You shouldn't keep configuration in `local.settings.json`, aside from the `FUNCTIONS_WORKER_RUNTIME` setting, which should remain "dotnet-isolated". Other configuration should be set through the app host project. + - You should remove any direct Application Insights integrations. Monitoring in Aspire is instead handled through its OpenTelemetry support.
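As a sketch, the package references called out in the preceding list can be added from the command line; the `--prerelease` flag is an assumption here, reflecting that these packages are in preview:

```dotnetcli
# In the Aspire app host project: reference the Functions hosting integration.
dotnet add package Aspire.Hosting.Azure.Functions --prerelease

# In the Functions project: move to the 2.x (preview) worker and SDK packages.
dotnet add package Microsoft.Azure.Functions.Worker --prerelease
dotnet add package Microsoft.Azure.Functions.Worker.Sdk --prerelease
```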
++The following example shows a minimal `Program.cs` for an App Host project: ++```csharp +var builder = DistributedApplication.CreateBuilder(args); ++builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject"); ++builder.Build().Run(); +``` ++The following example shows a minimal `Program.cs` for a Functions project used in Aspire: ++```csharp +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Builder; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Hosting; ++var builder = FunctionsApplication.CreateBuilder(args); ++builder.AddServiceDefaults(); ++builder.ConfigureFunctionsWebApplication(); ++builder.Build().Run(); +``` ++This does not include the default Application Insights configuration that you see in many of the other `Program.cs` examples in this article. Instead, Aspire's OpenTelemetry integration is configured through the `builder.AddServiceDefaults()` call. ++### Considerations and best practices for .NET Aspire integration ++Consider the following points when evaluating .NET Aspire with Azure Functions: ++- Support for Azure Functions with .NET Aspire is currently in preview. During the preview period, when you publish the Aspire solution to Azure, Functions projects are deployed as Azure Container Apps resources without event-driven scaling. Azure Functions support is not available for apps deployed in this mode. +- Trigger and binding configuration through Aspire is currently limited to specific integrations. See [Connection configuration with Aspire](#connection-configuration-with-aspire) for details. +- Your `Program.cs` should use the `IHostApplicationBuilder` version of [host instance startup](#start-up-and-configuration). This allows you to call `builder.AddServiceDefaults()` to add [.NET Aspire service defaults](/dotnet/aspire/fundamentals/service-defaults) to your Functions project. +- Aspire uses OpenTelemetry for monitoring. You can configure Aspire to export telemetry to Azure Monitor through the service defaults project. In many other Azure Functions contexts, you might include direct integration with Application Insights by registering the telemetry worker service. This is not recommended in Aspire and can lead to runtime errors with version 2.22.0 of `Microsoft.ApplicationInsights.WorkerService`. You should remove any direct Application Insights integrations from your Functions project when using Aspire. +- For Functions projects enlisted into an Aspire orchestration, most of the application configuration should come from the Aspire app host project. You should typically avoid setting things in `local.settings.json`, other than the `FUNCTIONS_WORKER_RUNTIME` setting. If the same environment variable is set by `local.settings.json` and Aspire, the system uses the Aspire version. +- Do not configure the Storage emulator for any connections in `local.settings.json`. Many Functions starter templates include the emulator as a default for `AzureWebJobsStorage`. However, emulator configuration can prompt some IDEs to start a version of the emulator that can conflict with the version that Aspire uses. ++### Connection configuration with Aspire ++Azure Functions requires a [host storage connection (`AzureWebJobsStorage`)](./functions-reference.md#connecting-to-host-storage-with-an-identity) for several of its core behaviors. 
When you call `AddAzureFunctionsProject<TProject>()` in your app host project, a default `AzureWebJobsStorage` connection is created and provided to the Functions project. This default connection uses the Storage emulator for local development runs and automatically provisions a storage account when deployed. For additional control, you can replace this connection by calling `.WithHostStorage()` on the Functions project resource. ++The following example shows a minimal `Program.cs` for an app host project that replaces the host storage: ++```csharp +var builder = DistributedApplication.CreateBuilder(args); ++var myHostStorage = builder.AddAzureStorage("myHostStorage"); ++builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject") + .WithHostStorage(myHostStorage); ++builder.Build().Run(); +``` ++> [!NOTE] +> When Aspire provisions the host storage in publish mode, it defaults to creating role assignments for the [Storage Account Contributor], [Storage Blob Data Contributor], [Storage Queue Data Contributor], and [Storage Table Data Contributor] roles. ++Your triggers and bindings reference connections by name. Some Aspire integrations are enabled to provide these through a call to `WithReference()` on the project resource: ++| Aspire integration | Notes | +|--|| +| [Azure Blobs](/dotnet/aspire/storage/azure-storage-blobs-integration) | When Aspire provisions the resource, it defaults to creating role assignments for the [Storage Blob Data Contributor], [Storage Queue Data Contributor], and [Storage Table Data Contributor] roles. | +| [Azure Queues](/dotnet/aspire/storage/azure-storage-queues-integration) | When Aspire provisions the resource, it defaults to creating role assignments for the [Storage Blob Data Contributor], [Storage Queue Data Contributor], and [Storage Table Data Contributor] roles. | +| [Azure Event Hubs](/dotnet/aspire/messaging/azure-event-hubs-integration) | When Aspire provisions the resource, it defaults to creating a role assignment using the [Azure Event Hubs Data Owner] role. | +| [Azure Service Bus](/dotnet/aspire/messaging/azure-service-bus-integration) | When Aspire provisions the resource, it defaults to creating a role assignment using the [Azure Service Bus Data Owner] role. | ++The following example shows a minimal `Program.cs` for an app host project that configures a queue trigger. In this example, the corresponding queue trigger has its `Connection` property set to "MyQueueTriggerConnection". ++```csharp +var builder = DistributedApplication.CreateBuilder(args); ++var myAppStorage = builder.AddAzureStorage("myAppStorage").RunAsEmulator(); +var queues = myAppStorage.AddQueues("queues"); ++builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject") + .WithReference(queues, "MyQueueTriggerConnection"); ++builder.Build().Run(); +``` ++For other integrations, calls to `WithReference` set the configuration in a different way, making it available to [Aspire client integrations](/dotnet/aspire/fundamentals/integrations-overview#client-integrations), but not to triggers and bindings. For these integrations, you should call `WithEnvironment()` to pass the connection information for the trigger or binding to resolve. 
The following example shows how to set the environment variable "MyBindingConnection" for a resource that exposes a connection string expression: ++```csharp +builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject") + .WithEnvironment("MyBindingConnection", otherIntegration.Resource.ConnectionStringExpression); +``` ++You can configure both `WithReference()` and `WithEnvironment()` if you want a connection to be used both by Aspire client integrations and the triggers and bindings system. ++For some resources, the structure of a connection might be different between when you run it locally and when you publish it to Azure. In the previous example, `otherIntegration` could be a resource that runs as an emulator, so `ConnectionStringExpression` would return an emulator connection string. However, when the resource is published, Aspire might set up an identity-based connection, and `ConnectionStringExpression` would return the service's URI. In this case, to set up [identity-based connections for Azure Functions](./functions-reference.md#configure-an-identity-based-connection), you might need to provide a different environment variable name. The following example uses `builder.ExecutionContext.IsPublishMode` to conditionally add the necessary suffix: ++```csharp +builder.AddAzureFunctionsProject<Projects.MyFunctionsProject>("MyFunctionsProject") + .WithEnvironment("MyBindingConnection" + (builder.ExecutionContext.IsPublishMode ? "__serviceUri" : ""), otherIntegration.Resource.ConnectionStringExpression); +``` ++Depending on your scenario, you may also need to adjust the permissions that will be assigned for an identity-based connection. You can use the [`ConfigureConstruct<T>()` method](/dotnet/api/aspire.hosting.azureconstructresourceextensions.configureconstruct) to customize how Aspire configures infrastructure when it publishes your project. ++Consult each binding's [reference pages](./functions-triggers-bindings.md#supported-bindings) for details on the connection formats it supports and the permissions those formats require. + ## Debugging When running locally using Visual Studio or Visual Studio Code, you're able to debug your .NET isolated worker project as normal. However, there are two debugging scenarios that don't work as expected.
Keep these considerations in mind when using Functions with preview versions of [Microsoft.Azure.Functions.Worker]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/ [Microsoft.Azure.Functions.Worker.Sdk]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/ +[Aspire.Hosting.Azure.Functions]: https://www.nuget.org/packages/Aspire.Hosting.Azure.Functions ++[Storage Account Contributor]: ../role-based-access-control/built-in-roles.md#storage-account-contributor +[Storage Blob Data Contributor]: ../role-based-access-control/built-in-roles.md#storage-blob-data-contributor +[Storage Queue Data Contributor]: ../role-based-access-control/built-in-roles.md#storage-queue-data-contributor +[Storage Table Data Contributor]: ../role-based-access-control/built-in-roles.md#storage-table-data-contributor +[Azure Event Hubs Data Owner]: ../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner +[Azure Service Bus Data Owner]: ../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner + [HostBuilder]: /dotnet/api/microsoft.extensions.hosting.hostbuilder [IHostApplicationBuilder]: /dotnet/api/microsoft.extensions.hosting.ihostapplicationbuilder [IHost]: /dotnet/api/microsoft.extensions.hosting.ihost |
azure-functions | Functions Bindings Azure Mysql Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-mysql-input.md | const { app, input } = require('@azure/functions'); const mysqlInput = input.generic({ type: 'mysql', commandText: 'select * from Products where Cost = @Cost',+ parameters: '@Cost={Cost}', commandType: 'Text', connectionStringSetting: 'MySqlConnectionString' }) |
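As an illustrative sketch (not from the article), if this input binding is paired with an HTTP trigger, the `{Cost}` binding expression can resolve from a query string parameter; the app and function names below are hypothetical:

```bash
# Invoke the HTTP-triggered function; the Cost query parameter feeds the @Cost binding parameter.
curl "https://<FUNCTION_APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?Cost=100"
```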
azure-functions | Functions Bindings Azure Mysql Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-mysql-output.md | The [configuration](#configuration) section explains these properties. The following example is sample JavaScript code: ```javascript-module.exports = async function (context, req, products) { +module.exports = async function (context, req, product) { context.log('JavaScript HTTP trigger and MySQL output binding function processed a request.'); context.res = { // status: 200, /* Defaults to 200 */ mimetype: "application/json",- body: products + body: product }; } ``` |
azure-functions | Functions Bindings Service Bus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md | Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs]( This version allows you to bind to types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus). +This version supports configuration of triggers and bindings through [.NET Aspire integration](./dotnet-isolated-process-guide.md#connection-configuration-with-aspire). + Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.ServiceBus), version 5.x. # [Functions 2.x+](#tab/functionsv2/isolated-process) |
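For reference, a minimal install command for the extension referenced above, mirroring the sibling Storage articles (the unpinned form pulls the latest stable 5.x version):

```dotnetcli
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.ServiceBus
```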
azure-functions | Functions Bindings Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md | Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs]( This version allows you to bind to types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn more about how these new types are different from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md). +This version supports configuration of triggers and bindings through [.NET Aspire integration](./dotnet-isolated-process-guide.md#connection-configuration-with-aspire). + Add the extension to your project by installing the [Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs NuGet package], version 5.x or later. Using the .NET CLI: |
azure-functions | Functions Bindings Storage Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md | Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs]( This version allows you to bind to types from [Azure.Storage.Queues](/dotnet/api/azure.storage.queues). +This version supports configuration of triggers and bindings through [.NET Aspire integration](./dotnet-isolated-process-guide.md#connection-configuration-with-aspire). + Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues), version 5.x. Using the .NET CLI: ```dotnetcli-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues --version 5.0.0 +dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues ``` [!INCLUDE [functions-bindings-storage-extension-v5-isolated-worker-tables-note](../../includes/functions-bindings-storage-extension-v5-isolated-worker-tables-note.md)] |
azure-functions | Functions Consumption Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-consumption-costs.md | -This article shows you how to estimate plan costs for the Consumption and Flex Consumption hosting plans. +This article shows you how to estimate plan costs for the Flex Consumption and Consumption hosting plans. -Azure Functions currently offers four different hosting plans for your function apps, with each plan having its own pricing model: +Azure Functions currently offers these different hosting options for your function apps, with each option having its own hosting plan pricing model: | Plan | Description | | - | -- |-| [**Consumption**](consumption-plan.md) | You're only charged for the time that your function app runs. This plan includes a [free grant][pricing page] on a per subscription basis.| -| [**Flex Consumption plan**](flex-consumption-plan.md)| You pay for execution time on the instances on which your functions are running, plus any _always ready_ instances. Instances are dynamically added and removed based on the number of incoming events. Also supports virtual network integration. | +| [**Flex Consumption plan**](flex-consumption-plan.md)| You pay for execution time on the instances on which your functions are running, plus any _always ready_ instances. Instances are dynamically added and removed based on the number of incoming events. This is the recommended dynamic scale plan, which also supports virtual network integration. | | [**Premium**](functions-premium-plan.md) | Provides you with the same features and scaling mechanism as the Consumption plan, but with enhanced performance and virtual network integration. Cost is based on your chosen pricing tier. To learn more, see [Azure Functions Premium plan](functions-premium-plan.md). | | [**Dedicated (App Service)**](dedicated-plan.md) <br/>(basic tier or higher) | When you need to run in dedicated VMs or in isolation, use custom images, or want to use your excess App Service plan capacity. Uses [regular App Service plan billing](https://azure.microsoft.com/pricing/details/app-service/). Cost is based on your chosen pricing tier.|+| [**Container Apps**](functions-container-apps-hosting.md) | Create and deploy containerized function apps in a fully managed environment hosted by Azure Container Apps, which lets you run your functions alongside other microservices, APIs, websites, and workflows as container-hosted programs. | +| [**Consumption**](consumption-plan.md) | You're only charged for the time that your function app runs. This plan includes a [free grant][pricing page] on a per subscription basis.| [!INCLUDE [functions-flex-preview-note](../../includes/functions-flex-preview-note.md)] -You should always choose the plan that best supports the feature, performance, and cost requirements for your function executions. To learn more, see [Azure Functions scale and hosting](functions-scale.md). +You should always choose the option that best supports the feature, performance, and cost requirements for your function executions. To learn more, see [Azure Functions scale and hosting](functions-scale.md). -This article focuses on Consumption and Flex Consumption plans because in these plans billing depends on active periods of executions inside each instance. +This article focuses on Flex Consumption and Consumption plans because in these plans billing depends on active periods of executions inside each instance. 
Durable Functions can also run in both of these plans. To learn more about the cost considerations when using Durable Functions, see [Durable Functions billing](./durable/durable-functions-billing.md). Durable Functions can also run in both of these plans. To learn more about the c The way that consumption-based costs are calculated, including free grants, depends on the specific plan. For the most current cost and grant information, see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). -### [Consumption plan](#tab/consumption-plan) --The execution *cost* of a single function execution is measured in *GB-seconds*. Execution cost is calculated by combining its memory usage with its execution time. A function that runs for longer costs more, as does a function that consumes more memory. --Consider a case where the amount of memory used by the function stays constant. In this case, calculating the cost is simple multiplication. For example, say that your function consumed 0.5 GB for 3 seconds. Then the execution cost is `0.5GB * 3s = 1.5 GB-seconds`. --Since memory usage changes over time, the calculation is essentially the integral of memory usage over time. The system does this calculation by sampling the memory usage of the process (along with child processes) at regular intervals. As mentioned on the [pricing page], memory usage is rounded up to the nearest 128-MB bucket. When your process is using 160 MB, you're charged for 256 MB. The calculation takes into account concurrency, which is multiple concurrent function executions in the same process. --> [!NOTE] -> While CPU usage isn't directly considered in execution cost, it can have an impact on the cost when it affects the execution time of the function. --For an HTTP-triggered function, when an error occurs before your function code begins to execute you aren't charged for an execution. This means that 401 responses from the platform due to API key validation or the App Service Authentication / Authorization feature don't count against your execution cost. Similarly, 5xx status code responses aren't counted when they occur in the platform before your function processes the request. A 5xx response generated by the platform after your function code has started to execute is still counted as an execution, even when the error isn't raised from your function code. - ### [Flex Consumption plan](#tab/flex-consumtion-plan) [!INCLUDE [functions-flex-consumption-billing-table](../../includes/functions-flex-consumption-billing-table.md)] In a situation like this, the pricing depends more on the kind of work being don In this scenario, the total hourly cost of running on-demand on a single instance is `$0.1152 + $0.0288 = $0.144 USD`. +### [Consumption plan](#tab/consumption-plan) ++The execution _cost_ of a single function execution is measured in _GB-seconds_. Execution cost is calculated by combining its memory usage with its execution time. A function that runs for longer costs more, as does a function that consumes more memory. ++Consider a case where the amount of memory used by the function stays constant. In this case, calculating the cost is simple multiplication. For example, say that your function consumed 0.5 GB for 3 seconds. Then the execution cost is `0.5GB * 3s = 1.5 GB-seconds`. ++Since memory usage changes over time, the calculation is essentially the integral of memory usage over time. 
The system does this calculation by sampling the memory usage of the process (along with child processes) at regular intervals. As mentioned on the [pricing page], memory usage is rounded up to the nearest 128-MB bucket. When your process is using 160 MB, you're charged for 256 MB. The calculation takes into account concurrency, which is multiple concurrent function executions in the same process. ++> [!NOTE] +> While CPU usage isn't directly considered in execution cost, it can have an impact on the cost when it affects the execution time of the function. ++For an HTTP-triggered function, when an error occurs before your function code begins to execute you aren't charged for an execution. This means that 401 responses from the platform due to API key validation or the App Service Authentication / Authorization feature don't count against your execution cost. Similarly, 5xx status code responses aren't counted when they occur in the platform before your function processes the request. A 5xx response generated by the platform after your function code has started to execute is still counted as an execution, even when the error isn't raised from your function code. + ## Other related costs |
azure-functions | Functions Deployment Technologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md | Title: Deployment technologies in Azure Functions description: Learn the different ways you can deploy code to Azure Functions. Previously updated : 09/27/2024 Last updated : 11/07/2024 # Deployment technologies in Azure Functions The following table describes the available deployment methods for your code pro | Deployment type | Methods | Best for... | | | | |-| Tools-based | • [Visual Studio Code publish](functions-develop-vs-code.md#publish-to-azure)<br/>• [Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>• [Core Tools publish](functions-run-local.md#publish) | Deployments during development and other improvised deployments. Deploying your code on-demand using [local development tools](functions-develop-local.md#local-development-environments). | +| Tools-based | • [Azure CLI](/cli/azure/functionapp/deployment/source#az-functionapp-deployment-source-config-zip)<br/>• [Visual Studio Code publish](functions-develop-vs-code.md#publish-to-azure)<br/>• [Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>• [Core Tools publish](functions-run-local.md#publish) | Deployments during development and other improvised deployments. Deploying your code on-demand using [local development tools](functions-develop-local.md#local-development-environments). | | App Service-managed| • [Deployment Center (CI/CD)](functions-continuous-deployment.md)<br/>• [Container deployments](./functions-how-to-custom-container.md#enable-continuous-deployment-to-azure) | Continuous deployment (CI/CD) from source control or from a container registry. Deployments are managed by the App Service platform (Kudu).| | External pipelines|• [Azure Pipelines](functions-how-to-azure-devops.md)<br/>• [GitHub Actions](functions-how-to-github-actions.md) | Production pipelines that include validation, testing, and other actions that must be run as part of an automated deployment. Deployments are managed by the pipeline. | Each plan has different behaviors. Not all deployment technologies are available | Deployment technology | Flex Consumption| Consumption | Elastic Premium | Dedicated | Container Apps | |--|:-:|:-:|:-:|:-:|:-:| -| [OneDeploy](#one-deploy) |✔| | | | | +| [One deploy](#one-deploy) |✔| | | | | | [Zip deploy](#zip-deploy) | |✔|✔|✔| | | [External package URL](#external-package-url)<sup>1</sup> | |✔|✔|✔| | | [Docker container](#docker-container) | | Linux-only | Linux-only | Linux-only |✔| Package-based deployment methods store the package in the storage account associ ## Deployment technology details -The following deployment methods are available in Azure Functions. +The following deployment methods are available in Azure Functions. Refer to the [deployment technology availability](#deployment-technology-availability) table to determine which technologies each hosting plan supports. ### One deploy One deploy is the only deployment technology supported for apps on the Flex Consumption plan. The end result is a ready-to-run .zip package that your function app runs on. One deploy is the only deployment technology supported for apps on the Flex Cons Zip deploy is the default and recommended deployment technology for function apps on the Consumption, Elastic Premium, and App Service (Dedicated) plans. The end result is a ready-to-run .zip package that your function app runs on. 
It differs from [external package URL](#external-package-url) in that our platform is responsible for remote building and storing your app content. >__How to use it:__ Deploy by using your favorite client tool: [Visual Studio Code](functions-develop-vs-code.md#publish-to-azure), [Visual Studio](functions-develop-vs.md#publish-to-azure), or from the command line using [Azure Functions Core Tools](functions-run-local.md#project-file-deployment) or the [Azure CLI](/cli/azure/functionapp/deployment/source#az-functionapp-deployment-source-config-zip). Our [Azure DevOps task](functions-how-to-azure-devops.md#deploy-your-app-1) and [GitHub Action](functions-how-to-github-actions.md) similarly leverage zip deploy. -+> >When you deploy by using zip deploy, you can set your app to [run from package](run-functions-from-deployment-package.md). To run from package, set the [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) application setting value to `1`. We recommend zip deployment. It yields faster loading times for your applications, and it's the default for VS Code, Visual Studio, and the Azure CLI. >__When to use it:__ Zip deploy is the default and recommended deployment technology for function apps on the Windows Consumption, Windows and Linux Elastic Premium, and Windows and Linux App Service (Dedicated) plans.
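For example, a minimal Azure CLI sketch that enables run from package on an existing app (placeholder names):

```azurecli
# Set WEBSITE_RUN_FROM_PACKAGE=1 so the app runs directly from the deployed .zip package.
az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP> --settings WEBSITE_RUN_FROM_PACKAGE=1
```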
For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md). -<sup>2</sup> In-portal editing is only supported for the [v1 Python programming model](functions-reference-python.md?pivots=python-mode-configuration). - ## Deployment behaviors When you deploy updates to your function app code, currently executing functions are terminated. After deployment completes, the new code is loaded to begin processing requests. Review [Improve the performance and reliability of Azure Functions](performance-reliability.md#write-functions-to-be-stateless) to learn how to write stateless and defensive functions. |
azure-functions | Functions Event Grid Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md | Title: 'Tutorial: Trigger Azure Functions on blob containers using an event subs description: This tutorial shows how to create a low-latency, event-driven trigger on an Azure Blob Storage container using an Event Grid event subscription. Previously updated : 05/20/2024 Last updated : 11/09/2024 zone_pivot_groups: programming-languages-set-functions #Customer intent: As an Azure Functions developer, I want learn how to create an event-based trigger on a Blob Storage container so that I can get a more rapid response to changes in the container. This article creates a C# app that runs in isolated worker mode, which supports > [!IMPORTANT] > This tutorial has you use the [Flex Consumption plan](flex-consumption-plan.md), which is currently in preview. The Flex Consumption plan only supports the event-based version of the Blob Storage trigger.-> You can complete this tutorial using any other [hosting plan](functions-scale.md) for your function app. ## Prerequisites This article creates a C# app that runs in isolated worker mode, which supports When you create a Blob Storage trigger function using Visual Studio Code, you also create a new project. You need to edit the function to consume an event subscription as the source, rather than use the regular polled container. -1. In Visual Studio Code, open your function app. +1. In Visual Studio Code, press F1 to open the command palette, enter `Azure Functions: Create Function...`, and select **Create new project**. -1. Press F1 to open the command palette, enter `Azure Functions: Create Function...`, and select **Create new project**. --1. For your project workspace, select the directory location. Make sure that you either create a new folder or choose an empty folder for the project workspace. +1. For your project workspace, select a directory location. Make sure that you either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that's already part of a workspace. -1. At the prompts, provide the following information: + 1. At the prompts, provide the following information: ::: zone pivot="programming-language-csharp" |Prompt|Action| When you create a Blob Storage trigger function using Visual Studio Code, you al |**Select a language**| Select `C#`. | |**Select a .NET runtime**| Select `.NET 8.0 Isolated LTS`. | |**Select a template for your project's first function**| Select `Azure Blob Storage trigger (using Event Grid)`. |- |**Provide a function name**| Enter `BlobTriggerEventGrid`. | + |**Provide a function name**| Enter `EventGridBlobTrigger`. | |**Provide a namespace** | Enter `My.Functions`. | |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |- |**Select subscription**| Select your subscription.| + |**Select subscription**| Select your subscription, if needed.| |**Select a storage account**| Use Azurite emulator for local storage. |- |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | + |**The path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | |**Select how you would like to open your project**| Select `Open in current window`. 
| ::: zone-end ::: zone pivot="programming-language-python" |Prompt|Action| |--|--| |**Select a language**| Select `Python`. |+ |**Select a Python programming model** | Select `Model V2` | |**Select a Python interpreter to create a virtual environment**| Select your preferred Python interpreter. If an option isn't shown, enter the full path to your Python binary. |- |**Select a template for your project's first function**| Select `Azure Blob Storage trigger (using Event Grid)`. | - |**Provide a function name**| Enter `BlobTriggerEventGrid`. | + |**Select a template for your project's first function**| Select `Blob trigger`. (The event-based template isn't yet available.)| + |**Provide a function name**| Enter `EventGridBlobTrigger`. | + |**The path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |- |**Select subscription**| Select your subscription.| + |**Select subscription**| Select your subscription, if needed.| |**Select a storage account**| Use Azurite emulator for local storage. |- |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | |**Select how you would like to open your project**| Select `Open in current window`. |- ::: zone-end + ::: zone-end ::: zone pivot="programming-language-java" |Prompt|Action| |--|--| |**Select a language**| Select `Java`. | |**Select a version of Java**| Select `Java 11` or `Java 8`, the Java version on which your functions run in Azure and that you've locally verified. | | **Provide a group ID** | Select `com.function`. |- | **Provide an artifact ID** | Select `BlobTriggerEventGrid`. | + | **Provide an artifact ID** | Select `EventGridBlobTrigger` (or the default). | | **Provide a version** | Select `1.0-SNAPSHOT`. | | **Provide a package name** | Select `com.function`. |- | **Provide an app name** | Accept the generated name starting with `BlobTriggerEventGrid`. | + | **Provide an app name** | Accept the generated name starting with `EventGridBlobTrigger`. | | **Select the build tool for Java project** | Select `Maven`. | |**Select how you would like to open your project**| Select `Open in current window`. |- ::: zone-end - ::: zone pivot="programming-language-typescript" - |Prompt|Action| ++ An HTTP triggered function (`HttpExample`) is created for you. You won't use this function and must instead create a new function. + ::: zone-end + ::: zone pivot="programming-language-typescript" + |Prompt|Action| |--|--| |**Select a language for your function project**| Select `TypeScript`. | |**Select a TypeScript programming model**| Select `Model V4`. | |**Select a template for your project's first function**| Select `Azure Blob Storage trigger (using Event Grid)`. |- |**Provide a function name**| Enter `BlobTriggerEventGrid`. | + |**Provide a function name**| Enter `EventGridBlobTrigger`. | |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |- |**Select subscription**| Select your subscription.| + |**Select subscription**| Select your subscription, if needed.| |**Select a storage account**| Use Azurite emulator for local storage. |- |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | + |**The path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. 
| |**Select how you would like to open your project**| Select `Open in current window`. | ::: zone-end ::: zone pivot="programming-language-javascript" When you create a Blob Storage trigger function using Visual Studio Code, you al |**Select a language for your function project**| Select `JavaScript`. | |**Select a JavaScript programming model**| Select `Model V4`. | |**Select a template for your project's first function**| Select `Azure Blob Storage trigger (using Event Grid)`. |- |**Provide a function name**| Enter `BlobTriggerEventGrid`. | + |**Provide a function name**| Enter `eventGridBlobTrigger`. | |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |- |**Select subscription**| Select your subscription.| + |**Select subscription**| Select your subscription, if needed.| |**Select a storage account**| Use Azurite emulator for local storage. |- |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | + |**The path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | |**Select how you would like to open your project**| Select `Open in current window`. | ::: zone-end ::: zone pivot="programming-language-powershell" When you create a Blob Storage trigger function using Visual Studio Code, you al |--|--| |**Select a language for your function project**| Select `PowerShell`. | |**Select a template for your project's first function**| Select `Azure Blob Storage trigger (using Event Grid)`. |- |**Provide a function name**| Enter `BlobTriggerEventGrid`. | + |**Provide a function name**| Enter `EventGridBlobTrigger`. | |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |- |**Select subscription**| Select your subscription.| + |**Select subscription**| Select your subscription, if needed.| |**Select a storage account**| Use Azurite emulator for local storage. |- |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | + |**The path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | |**Select how you would like to open your project**| Select `Open in current window`. | ::: zone-end +4. In the command palette, enter `Azure Functions: Create Function...` and select `EventGridBlobTrigger`. If you don't see this template, first select **Change template filter** > **All**. ++5. At the prompts, provide the following information: ++ |Prompt|Action| + |--|--| + | **Provide a package name** | Select `com.function`. | + | **Provide a function name** | Enter `EventGridBlobTrigger`. | + |**Select setting from "local.settings.json"**| Select `Create new local app setting`. | + |**Select subscription**| Select your subscription.| + |**Select a storage account**| Use Azurite emulator for local storage. | + |**The path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | ++You now have a function that can be triggered by events in a Blob Storage container. ++## Update the trigger source ++You first need to switch the trigger source from the default Blob trigger source (container polling) to an event subscription source. ++1. Open the function_app.py project file and you see a definition for the `EventGridBlobTrigger` function with the `blob_trigger` decorator applied. ++1. Update the decorator by adding `source = "EventGrid"`. 
Your function should now look something like this: ++ ```python + @app.blob_trigger(arg_name="myblob", source="EventGrid", path="samples-workitems", + connection="<STORAGE_ACCOUNT>") + def EventGridBlobTrigger(myblob: func.InputStream): + logging.info(f"Python blob trigger function processed blob" + f"Name: {myblob.name}" + f"Blob Size: {myblob.length} bytes") + ``` + + In this definition `source = "EventGrid"` indicates that an event subscription to the `samples-workitems` blob container is used as the source of the event that starts the trigger. +## (Optional) Review the code +Open the generated `EventGridBlobTrigger.cs` file and you see a definition for an `EventGridBlobTrigger` function that looks something like this: +++In this definition `Source = BlobTriggerSource.EventGrid` indicates that an event subscription to the blob container (in the example `PathValue`) is used as the source of the event that starts the trigger. +Open the generated `EventGridBlobTrigger.java` file and you see a definition for an `EventGridBlobTrigger` function that looks something like this: ++```java + @FunctionName("EventGridBlobTrigger") + @StorageAccount("<STORAGE_ACCOUNT>") + public void run( + @BlobTrigger(name = "content", source = "EventGrid", path = "samples-workitems/{name}", dataType = "binary") byte[] content, + @BindingName("name") String name, + final ExecutionContext context + ) { + context.getLogger().info("Java Blob trigger function processed a blob. Name: " + name + "\n Size: " + content.length + " Bytes"); + } +``` ++In this definition `source = EventGrid` indicates that an event subscription to the `samples-workitems` blob container is used as the source of the event that starts the trigger. +In the `EventGridBlobTrigger` folder, open the `function.json` file and find a binding definition like this with a `type` of `blobTrigger` and a `source` of `EventGrid`: +++The `path` indicates that the `samples-workitems` blob container is used as the source of the event that starts the trigger. +Open the generated `EventGridBlobTrigger.js` file and you see a definition for a function that looks something like this: +++In this definition, a `source` of `EventGrid` indicates that an event subscription to the `samples-workitems` blob container is used as the source of the event that starts the trigger. +Open the generated `EventGridBlobTrigger.ts` file and you see a definition for a function that looks something like this: +++In this definition, a `source` of `EventGrid` indicates that an event subscription to the `samples-workitems` blob container is used as the source of the event that starts the trigger. ## Upgrade the Storage extension dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs Visual Studio Code uses Azurite to emulate Azure Storage services when running locally. You use Azurite to emulate the Azure Blob Storage service during local development and testing. -1. If haven't already done so, install the [Azurite v3 extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=Azurite.azurite). +1. If you haven't already done so, install the [Azurite v3 extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=Azurite.azurite). 1. Verify that the *local.settings.json* file has `"UseDevelopmentStorage=true"` set for `AzureWebJobsStorage`, which tells Core Tools to use Azurite instead of a real storage account connection when running locally. 
Now both the Functions host and the trigger are sharing the same storage account To create an event subscription, you need to provide Event Grid with the URL of the specific endpoint to report Blob Storage events. This _blob extension_ URL is composed of these parts: -| Part | Example | -| | | +| Part | Example | +| |-| | Base function app URL | `https://<FUNCTION_APP_NAME>.azurewebsites.net` | -| Blob-specific path | `/runtime/webhooks/blobs` | -| Function query string | `?functionName=Host.Functions.BlobTriggerEventGrid` | -| Blob extension access key | `&code=<BLOB_EXTENSION_KEY>` | +| Blob-specific path | `/runtime/webhooks/blobs` | +| Function query string | `?functionName=Host.Functions.<FUNCTION_NAME>` | +| Blob extension access key | `&code=<BLOB_EXTENSION_KEY>` | The blob extension access key is designed to make it more difficult for others to access your blob extension endpoint. To determine your blob extension access key: The blob extension access key is designed to make it more difficult for others t 1. Create a new endpoint URL for the Blob Storage trigger based on the following example: ```http- https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY> + https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.EventGridBlobTrigger&code=<BLOB_EXTENSION_KEY> ``` - In this example, replace `<FUNCTION_APP_NAME>` with the name of your function app and replace `<BLOB_EXTENSION_KEY>` with the value you got from the portal. If you used a different name for your function, you'll also need to change the `functionName` query string value to your function name. + In this example, replace `<FUNCTION_APP_NAME>` with the name of your function app, and `<BLOB_EXTENSION_KEY>` with the value you got from the portal. If you used a different name for your function, replace `EventGridBlobTrigger` with that function name. You can now use this endpoint URL to create an event subscription. An event subscription, powered by Azure Event Grid, raises events based on chang 1. Sign in to the [Azure portal](https://portal.azure.com) and make a note of the **Resource group** for your storage account. You create your other resources in the same group to make it easier to clean up resources when you're done. -1. select the **Events** option from the left menu. +1. Select the **Events** option from the left menu. ![Add storage account event](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-add-event.png) An event subscription, powered by Azure Event Grid, raises events based on chang | **Endpoint** | Your Azure-based URL endpoint | Use the URL endpoint that you built, which includes the key value. | 1. Select **Confirm selection** to validate the endpoint URL. +2. Select the **Filters** tab and provide the following information to the prompts: ++ | Setting | Suggested value | Description | + ||--|--| + | **Enable subject filtering** | *Enabled* | Enables filtering on which blobs can trigger the function. | + | **Subject Begins With** | **`/blobServices/default/containers/<CONTAINER_NAME>/blobs/<BLOB_PREFIX>`** | Replace `<CONTAINER_NAME` and `<BLOB_PREFIX>` with values you choose. This sets the subscription to trigger only for blobs that start with `BLOB_PREFIX` and are in the `CONTAINER_NAME` container. | + | **Subject Ends With** | *.txt* | Ensures that the function will only be triggered by blobs ending with `.txt`. 
| ++For more information on filtering to specific blobs, see [Event Filtering for Azure Event Hubs](../event-grid/event-filtering.md). -1. Select **Create** to create the event subscription. +7. Select **Create** to create the event subscription. ## Upload a file to the container Now that you uploaded a file to the **samples-workitems** container, the functio ## Next steps -+ [Working with blobs](storage-considerations.md#working-with-blobs) +- [Working with blobs](storage-considerations.md#working-with-blobs) - [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) - [Event Grid trigger for Azure Functions](./functions-bindings-event-grid.md) |
azure-functions | Functions How To Azure Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-azure-devops.md | Title: Continuously update function app code using Azure Pipelines description: Learn how to use Azure Pipelines to set up a pipeline that builds and deploys apps to Azure Functions. Previously updated : 09/27/2024 Last updated : 11/07/2024 ms.devlang: azurecli You'll use the `AzureFunctionApp` task to deploy to Azure Functions. There are n Choose your task version at the top of the article. YAML pipelines aren't available for Azure DevOps 2019 and earlier. +> [!NOTE] +> The [AzureFunctionApp@2](/azure/devops/pipelines/tasks/reference/azure-function-app-v2) task is highly recommended. Deploying to an app on the [Flex Consumption](./flex-consumption-plan.md) plan is only supported in version 2. + ## Prerequisites * An Azure DevOps organization. If you don't have one, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up). If your team already has one, then make sure you're an administrator of the Azure DevOps project that you want to use. |
azure-functions | Functions How To Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md | Optional parameters for all function app plans: Keep the following considerations in mind when using the Azure Functions action: -+ When using GitHub Actions, the code is deployed to your function app using [Zip deployment for Azure Functions](deployment-zip-push.md). ++ When using GitHub Actions, the code is deployed using [one deploy](./functions-deployment-technologies.md#one-deploy) to apps on the [Flex Consumption](./flex-consumption-plan.md) plan and [zip deploy](deployment-zip-push.md) to apps on the [Consumption](./consumption-plan.md), [Elastic Premium](./functions-premium-plan.md), and [Dedicated (App Service)](./dedicated-plan.md) plans. The exception is Linux Consumption, where [external package URL](./functions-deployment-technologies.md#external-package-url) is used. + The credentials required by GitHub to connect to Azure for deployment are stored as Secrets in your GitHub repository and accessed in the deployment as `secrets.<SECRET_NAME>`. |
azure-functions | Functions How To Use Azure Function App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md | Title: Configure function app settings in Azure Functions description: Learn how to configure function app settings in Azure Functions. Previously updated : 07/02/2024 Last updated : 11/11/2024 ms.assetid: 81eb04f8-9a27-45bb-bf24-9ab6c30d205c Use the following procedure to migrate from a Premium plan to a Consumption plan ## Development limitations in the Azure portal +The following table shows the operating systems and languages that support in-portal editing: ++| Language | Windows Consumption | Windows Premium | Windows Dedicated | Linux Consumption | Linux Premium | Linux Dedicated | +|-|:--:|:-:|:--:|:--:|:-:|:-:| +| C# | | | | | | | +| Java | | | | | | | +| JavaScript (Node.js) |✔|✔|✔| |✔|✔| +| Python | | | |✔ |✔ |✔ | +| PowerShell |✔|✔|✔| | | | +| TypeScript (Node.js) | | | | | | | + Consider these limitations when you develop your functions in the [Azure portal](https://portal.azure.com): + In-portal editing is supported only for functions that were created or last modified in the Azure portal. |
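As a rough sketch of the plan migration mentioned above, moving an app between plans can also be scripted. The app and plan names here are placeholders, and this assumes a Consumption plan already exists in the same region and resource group:

```azurecli
# Move the function app from its current Premium plan to an existing Consumption plan.
az functionapp update --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP> --plan <CONSUMPTION_PLAN_NAME>

# Once no apps remain on it, the now-empty Premium plan can be deleted to stop billing.
az functionapp plan delete --name <PREMIUM_PLAN_NAME> --resource-group <RESOURCE_GROUP>
```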
azure-functions | Functions Networking Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md | -This article describes the networking features available across the hosting options for Azure Functions. All the following networking options give you some ability to access resources without using internet-routable addresses or to restrict internet access to a function app. +This article describes the networking features available across the hosting options for Azure Functions. The following networking options can be categorized as inbound and outbound networking features. Inbound features allow you to restrict access to your app, whereas outbound features allow you to connect your app to resources secured by a virtual network and control how outbound traffic is routed. The [hosting models](functions-scale.md) have different levels of network isolation available. Choosing the correct one helps you meet your network isolation requirements. To call other services that have a private endpoint connection, such as storage ### Service endpoints -Using service endpoints, you can restrict many Azure services to selected virtual network subnets to provide a higher level of security. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following: +Using service endpoints, you can restrict many Azure services to selected virtual network subnets to provide a higher level of security. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. Follow these steps to access a secured service endpoint: 1. Configure regional virtual network integration with your function app to connect to a specific subnet. 1. Go to the destination service and configure service endpoints against the integration subnet. To restrict access to a specific subnet, create a restriction rule with a **Virt If service endpoints aren't already enabled with `Microsoft.Web` for the subnet that you selected, they're automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet. -If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app is configured for service endpoints in anticipation of having them enabled later on the subnet. +If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app is configured for service endpoints, which you enable later on the subnet. 
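The two steps above can also be done from the CLI. A minimal sketch with placeholder network, app, and storage names; `Microsoft.Storage` is used here only as an example destination service:

```azurecli
# Step 1: connect the function app to the integration subnet.
az functionapp vnet-integration add \
    --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --vnet <VNET_NAME> \
    --subnet <INTEGRATION_SUBNET_NAME>

# Step 2: enable the destination service's endpoint on the integration subnet.
az network vnet subnet update \
    --name <INTEGRATION_SUBNET_NAME> \
    --vnet-name <VNET_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --service-endpoints Microsoft.Storage

# Then allow the integration subnet on the destination service, for example:
az storage account network-rule add \
    --account-name <STORAGE_ACCOUNT_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --vnet-name <VNET_NAME> \
    --subnet <INTEGRATION_SUBNET_NAME>
```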
![Screenshot of the "Add IP Restriction" pane with the Virtual Network type selected.](../app-service/media/app-service-ip-restrictions/access-restrictions-vnet-add.png) You can't use service endpoints to restrict access to apps that run in an App Se To learn how to set up service endpoints, see [Establish Azure Functions private site access](functions-create-private-site-access.md). -## Virtual network integration +## Outbound networking features -Virtual network integration allows your function app to access resources inside a virtual network. -Azure Functions supports two kinds of virtual network integration: ---Virtual network integration in Azure Functions uses shared infrastructure with App Service web apps. To learn more about the two types of virtual network integration, see: --* [Regional virtual network integration](../app-service/overview-vnet-integration.md#regional-virtual-network-integration) -* [Gateway-required virtual network integration](../app-service/configure-gateway-required-vnet-integration.md) --To learn how to set up virtual network integration, see [Enable virtual network integration](#enable-virtual-network-integration). --### Enable virtual network integration --1. In your function app in the [Azure portal](https://portal.azure.com), select **Networking**, then under **VNet Integration** select **Click here to configure**. --1. Select **Add VNet**. +You can use the features in this section toto manage outbound connections made by your app. - :::image type="content" source="./media/functions-networking-options/vnet-int-function-app.png" alt-text="Select VNet Integration"::: +### Virtual network integration -1. The drop-down list contains all of the Azure Resource Manager virtual networks in your subscription in the same region. Select the virtual network you want to integrate with. +This section details the features that Functions supports to control data outbound from your app. - :::image type="content" source="./media/functions-networking-options/vnet-int-add-vnet-function-app.png" alt-text="Select the VNet"::: +Virtual network integration gives your function app access to resources in your virtual network. Once integrated, your app routes outbound traffic through the virtual network. This allows your app to access private endpoints or resources with rules allowing traffic from only select subnets. When the destination is an IP address outside of the virtual network, the source IP will still be sent from the one of the addresses listed in your app's properties, unless you've configured a NAT Gateway. - * The Functions Flex Consumption and Elastic Premium plans only supports regional virtual network integration. If the virtual network is in the same region, either create a new subnet or select an empty, pre-existing subnet. +Azure Functions supports two kinds of virtual network integration: - * To select a virtual network in another region, you must have a virtual network gateway provisioned with point to site enabled. Virtual network integration across regions is only supported for Dedicated plans, but global peerings work with regional virtual network integration. 
+* [Regional virtual network integration](#regional-virtual-network-integration) for apps running on the [Flex Consumption](./flex-consumption-plan.md), [Elastic Premium](./functions-premium-plan.md), [Dedicated (App Service)](./dedicated-plan.md), and [Container Apps](./functions-container-apps-hosting.md) hosting plans (recommended) +* [Gateway-required virtual network integration](../app-service/configure-gateway-required-vnet-integration.md) for apps running on the [Dedicated (App Service)](./dedicated-plan.md) hosting plan -During the integration, your app is restarted. When integration is finished, you see details on the virtual network you're integrated with. By default, Route All is enabled, and all traffic is routed into your virtual network. --If you wish for only your private traffic ([RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) to be routed, please follow the steps in the [app service documentation](../app-service/overview-vnet-integration.md#application-routing). +To learn how to set up virtual network integration, see [Enable virtual network integration](#enable-virtual-network-integration). ### Regional virtual network integration Using regional virtual network integration enables your app to access: When you use regional virtual network integration, you can use the following Azure networking features: -* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that's placed on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app. -* **Route tables (UDRs)**: You can place a route table on the integration subnet to send outbound traffic where you want. +* **[Network security groups (NSGs)](#network-security-groups)**: You can block outbound traffic with an NSG that's placed on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app. +* **[Route tables (UDRs)](#routes)**: You can place a route table on the integration subnet to send outbound traffic where you want. > [!NOTE] > When you route all of your outbound traffic into your virtual network, it's subject to the NSGs and UDRs that are applied to your integration subnet. When your function app is virtual network integrated, its outbound traffic to public IP addresses is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere. > > Regional virtual network integration isn't able to use port 25. -For the Flex Consumption plan: -1. Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments`. -1. The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments`. This is a change from Elastic Premium and App Service which have a different delegation requirement. -1. You can plan for 40 IP addresses to be used at the most for one function app, even if the app scales beyond 40. For example, if you have fifteen Flex Consumption function apps that will be VNet integrated into the same subnet, you can plan for 15x40 = 600 IP addresses used at the most. This limit is subject to change, and is not enforced. -1. 
The subnet can't already be in use for other purposes (like private or service endpoints, or [delegated](../virtual-network/subnet-delegation-overview.md) to any other hosting plan or service). While you can share the same subnet with multiple Flex Consumption apps, the networking resources will be shared across these function apps and this can lead to one function app impacting the performance of others on the same subnet. +Considerations for the [Flex Consumption](./flex-consumption-plan.md) plan: +* Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). This is needed for subnet delegation. +* The subnet delegation required when running in a Flex Consumption plan is `Microsoft.App/environments`. This differs from the Elastic Premium and Dedicated (App Service) plans, which have a different delegation requirement. +* You can plan for 40 IP addresses to be used at the most for one function app, even if the app scales beyond 40. For example, if you have 15 Flex Consumption function apps that are integrated in the same subnet, you must plan for 15x40 = 600 IP addresses used at the most. This limit is subject to change, and is not enforced. +* The subnet can't already be in use for other purposes (like private or service endpoints, or [delegated](../virtual-network/subnet-delegation-overview.md) to any other hosting plan or service). While you can share the same subnet with multiple Flex Consumption apps, the networking resources are shared across these function apps, which can lead to one app impacting the performance of others on the same subnet. -There are some limitations with using virtual network: +Considerations for the [Elastic Premium](./functions-premium-plan.md), [Dedicated (App Service)](./dedicated-plan.md), and [Container Apps](./functions-container-apps-hosting.md) plans: -* The feature is available from Flex Consumption, Elastic Premium, and App Service Premium V2 and Premium V3. It's also available in Standard but only from newer App Service deployments. If you are on an older deployment, you can only use the feature from a Premium V2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium V3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you desire after that. -* The integration subnet can be used by only one App Service plan. +* The feature is available for Elastic Premium and App Service Premium V2 and Premium V3. It's also available in Standard but only from newer App Service deployments. If you are on an older deployment, you can only use the feature from a Premium V2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium V3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you desire after that. * The feature can't be used by Isolated plan apps that are in an App Service Environment.-* The feature requires an unused subnet that's a /28 or larger in an Azure Resource Manager virtual network. * The app and the virtual network must be in the same region.-* You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network. 
+* The feature requires an unused subnet that's a /28 or larger in an Azure Resource Manager virtual network. +* The integration subnet can be used by only one App Service plan. * You can have up to two regional virtual network integrations per App Service plan. Multiple apps in the same App Service plan can use the same integration subnet.+* You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network. * You can't change the subscription of an app or a plan while there's an app that's using regional virtual network integration. +### Enable virtual network integration ++1. In your function app in the [Azure portal](https://portal.azure.com), select **Networking**, then under **VNet Integration** select **Click here to configure**. ++1. Select **Add VNet**. ++ :::image type="content" source="./media/functions-networking-options/vnet-int-function-app.png" alt-text="Screenshot of the VNet Integration page where you can enable virtual network integration in your app." ::: ++1. The drop-down list contains all of the Azure Resource Manager virtual networks in your subscription in the same region. Select the virtual network you want to integrate with. ++ :::image type="content" source="./media/functions-networking-options/vnet-int-add-vnet-function-app.png" alt-text="Select the VNet"::: ++ * The Flex Consumption and Elastic Premium hosting plans only support regional virtual network integration. If the virtual network is in the same region, either create a new subnet or select an empty, preexisting subnet. ++ * To select a virtual network in another region, you must have a virtual network gateway provisioned with point to site enabled. Virtual network integration across regions is only supported for Dedicated plans, but global peerings work with regional virtual network integration. ++During the integration, your app is restarted. When integration is finished, you see details on the virtual network you're integrated with. By default, Route All is enabled, and all traffic is routed into your virtual network. ++If you prefer to only have your private traffic ([RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) routed, follow the steps in this [App Service article](../app-service/overview-vnet-integration.md#application-routing). + ### Subnets -Virtual network integration depends on a dedicated subnet. When you provision a subnet, the Azure subnet loses five IPs from the start. For the Elastic Premium and App Service plans, one address is used from the integration subnet for each plan instance. When you scale your app to four instances, then four addresses are used. For Flex Consumption this does not apply and instances share IP addresses. +Virtual network integration depends on a dedicated subnet. When you provision a subnet, the Azure subnet loses five IPs from the start. For the Elastic Premium and App Service plans, one address is used from the integration subnet for each plan instance. When you scale your app to four instances, then four addresses are used. For Flex Consumption this doesn't apply and instances share IP addresses. -When you scale up or down in size, the required address space is doubled for a short period of time. This affects the real, available supported instances for a given subnet size. 
The following table shows both the maximum available addresses per CIDR block and the effect this has on horizontal scale: +In the Elastic Premium and Dedicated (App Service) plans, the required address space is doubled for a short period of time when you scale up or down in instance size. This affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the effect this has on horizontal scale: | CIDR block size | Max available addresses | Max horizontal scale (instances)<sup>*</sup> | |--|-|-| When you scale up or down in size, the required address space is doubled for a s Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Elastic Premium plans, you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux. When creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /24 and /26 is required for Windows and Linux respectively. -When you want your apps in another plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration. +The Flex Consumption plan allows multiple apps to integrate with the same subnet. This isn't the case for the Elastic Premium and Dedicated (App Service) hosting plans. These plans only allow two virtual networks to be connected with each App Service plan. Multiple apps from a single App Service plan can join the same subnet, but apps from a different plan can't use that same subnet. The feature is fully supported for both Windows and Linux apps, including [custom containers](../app-service/configure-custom-container.md). All of the behaviors act the same between Windows apps and Linux apps. ### Network security groups -You can use network security groups to block inbound and outbound traffic to resources in a virtual network. An app that uses regional virtual network integration can use a [network security group][VNETnsg] to block outbound traffic to resources in your virtual network or the internet. To block traffic to public addresses, you must have virtual network integration with Route All enabled. The inbound rules in an NSG don't apply to your app because virtual network integration affects only outbound traffic from your app. +You can use [network security groups][VNETnsg] to control traffic between resources in your virtual network. For example, you can create a security rule that blocks your app's outbound traffic from reaching a resource in your virtual network or from leaving the network. These security rules apply to apps that have configured virtual network integration. To block traffic to public addresses, you must have virtual network integration and Route All enabled. The inbound rules in an NSG don't apply to your app because virtual network integration affects only outbound traffic from your app. -To control inbound traffic to your app, use the Access Restrictions feature. An NSG that's applied to your integration subnet is in effect regardless of any routes applied to your integration subnet. 
If your function app is virtual network integrated with Route All enabled, and you don't have any routes that affect public address traffic on your integration subnet, all of your outbound traffic is still subject to NSGs assigned to your integration subnet. When Route All isn't enabled, NSGs are only applied to RFC1918 traffic. +To control inbound traffic to your app, use the Access Restrictions feature. An NSG that's applied to your integration subnet is in effect regardless of any routes applied to your integration subnet. If your function app is virtual network integrated with [Route All](../app-service/configure-vnet-integration-routing.md#configure-application-routing) enabled, and you don't have any routes that affect public address traffic on your integration subnet, all of your outbound traffic is still subject to NSGs assigned to your integration subnet. When Route All isn't enabled, NSGs are only applied to RFC1918 traffic. ### Routes -You can use route tables to route outbound traffic from your app to wherever you want. By default, route tables only affect your RFC1918 destination traffic. When Route All is enabled, all of your outbound calls are affected. When [Route All](../app-service/overview-vnet-integration.md#application-routing) is disabled, only private traffic (RFC1918) is affected by your route tables. Routes that are set on your integration subnet won't affect replies to inbound app requests. Common destinations can include firewall devices or gateways. +You can use route tables to route outbound traffic from your app to wherever you want. By default, route tables only affect your RFC1918 destination traffic. When [Route All](../app-service/overview-vnet-integration.md#application-routing) is enabled, all of your outbound calls are affected. When Route All is disabled, only private traffic (RFC1918) is affected by your route tables. Routes that are set on your integration subnet won't affect replies to inbound app requests. Common destinations can include firewall devices or gateways. If you want to route all outbound traffic on-premises, you can use a route table to send all outbound traffic to your ExpressRoute gateway. If you do route traffic to a gateway, be sure to set routes in the external network to send any replies back. Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. By default, BGP routes affect only your RFC1918 destination traffic. When your function app is virtual network integrated with Route All enabled, all outbound traffic can be affected by your BGP routes. +### Outbound IP restrictions ++Outbound IP restrictions are available in a Flex Consumption plan, Elastic Premium plan, App Service plan, or App Service Environment. You can configure outbound restrictions for the virtual network where your App Service Environment is deployed. ++When you integrate a function app in an Elastic Premium plan or an App Service plan with a virtual network, the app can still make outbound calls to the internet by default. By integrating your function app with a virtual network with Route All enabled, you force all outbound traffic to be sent into your virtual network, where network security group rules can be used to restrict traffic. For Flex Consumption all traffic is already routed through the virtual network and Route All isn't needed. 
++To learn how to control the outbound IP using a virtual network, see [Tutorial: Control Azure Functions outbound IP with an Azure virtual network NAT gateway](functions-how-to-use-nat-gateway.md). + ### Azure DNS private zones After your app integrates with your virtual network, it uses the same DNS server that your virtual network is configured with and will work with the Azure DNS private zones linked to the virtual network. -## Restrict your storage account to a virtual network +### Automation +The following APIs let you programmatically manage regional virtual network integrations: +++ **Azure CLI**: Use the [`az functionapp vnet-integration`](/cli/azure/functionapp/vnet-integration) commands to add, list, or remove a regional virtual network integration. ++ **ARM templates**: Regional virtual network integration can be enabled by using an Azure Resource Manager template. For a full example, see [this Functions quickstart template](https://azure.microsoft.com/resources/templates/function-premium-vnet-integration/).++## Hybrid Connections ++[Hybrid Connections](../azure-relay/relay-hybrid-connections-protocol.md) is a feature of Azure Relay that you can use to access application resources in other networks. It provides access from your app to an application endpoint. You can't use it to access your application. Hybrid Connections is available to functions that run on Windows in all but the Consumption plan. ++As used in Azure Functions, each hybrid connection correlates to a single TCP host and port combination. This means that the hybrid connection's endpoint can be on any operating system and any application as long as you're accessing a TCP listening port. The Hybrid Connections feature doesn't know or care what the application protocol is or what you're accessing. It just provides network access. ++To learn more, see the [App Service documentation for Hybrid Connections](../app-service/app-service-hybrid-connections.md). These same configuration steps support Azure Functions. ++>[!IMPORTANT] +> Hybrid Connections is only supported when your function app runs on Windows. Linux apps aren't supported. ++## Connecting to Azure services through a virtual network ++Virtual network integration enables your function app to access resources in a virtual network. This section provides an overview of what to consider when connecting your app to certain services. ++### Restrict your storage account to a virtual network > [!NOTE] > To quickly deploy a function app with private endpoints enabled on the storage account, please refer to the following template: [Function app with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints). When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints. -This feature is supported for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for the Elastic Premium plans, as well as the Flex Consumption plan. The Consumption plan isn't supported. To learn how to set up a function with a storage account restricted to a private network, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). 
+You can use a network-restricted storage account with function apps on the Flex Consumption, Elastic Premium, and Dedicated (App Service) plans; the Consumption plan isn't supported. For Elastic Premium and Dedicated plans, you have to ensure that private [content share routing](../app-service/configure-vnet-integration-routing.md#content-share) is configured. To learn how to configure your function app with a storage account secured with a virtual network, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). -## Use Key Vault references +### Use Key Vault references You can use Azure Key Vault references to use secrets from Azure Key Vault in your Azure Functions application without requiring any code changes. Azure Key Vault is a service that provides centralized secrets management, with full control over access policies and audit history. If virtual network integration is configured for the app, [Key Vault references](../app-service/app-service-key-vault-references.md) may be used to retrieve secrets from a network-restricted vault. -## Virtual network triggers (non-HTTP) +### Virtual network triggers (non-HTTP) -Currently, you can use non-HTTP trigger functions from within a virtual network in one of two ways: +Your workload may require your app to be triggered from an event source protected by a virtual network. There are two options if you want your app to dynamically scale based on the number of events received from non-HTTP trigger sources: ++ Run your function app in a [Flex Consumption](./flex-consumption-plan.md) plan. + Run your function app in an [Elastic Premium plan](./functions-premium-plan.md) and enable virtual network trigger support.-+ Run your function app in a Flex Consumption, App Service plan or App Service Environment. -### Elastic Premium plan with virtual network triggers +Function apps running on the [Dedicated (App Service)](./dedicated-plan.md) plan don't dynamically scale based on events. Rather, scale out is dictated by [autoscale](./dedicated-plan.md#scaling) rules you define. ++#### Elastic Premium plan with virtual network triggers -The [Elastic Premium plan](functions-premium-plan.md) lets you create functions that are triggered by services inside a virtual network. These non-HTTP triggers are known as _virtual network triggers_. +The [Elastic Premium plan](functions-premium-plan.md) lets you create functions that are triggered by services secured by a virtual network. These non-HTTP triggers are known as _virtual network triggers_. -By default, virtual network triggers don't cause your function app to scale beyond their pre-warmed instance count. However, certain extensions support virtual network triggers that cause your function app to scale dynamically. You can enable this _dynamic scale monitoring_ in your function app for supported extensions in one of these ways: +By default, virtual network triggers don't cause your function app to scale beyond their prewarmed instance count. However, certain extensions support virtual network triggers that cause your function app to scale dynamically. You can enable this _dynamic scale monitoring_ in your function app for supported extensions in one of these ways: #### [Azure portal](#tab/azure-portal) The extensions in this table support dynamic scale monitoring of virtual network > [!IMPORTANT] > When you enable virtual network trigger monitoring, only triggers for these extensions can cause your app to scale dynamically. 
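For the Azure CLI route among the tabbed options above, a sketch of the site property involved, with placeholder resource names; `functionsRuntimeScaleMonitoringEnabled` is the property behind the portal's **Runtime Scale Monitoring** toggle:

```azurecli
# Enable dynamic scale monitoring for virtual network triggers
# on an Elastic Premium function app.
az resource update \
    --resource-group <RESOURCE_GROUP> \
    --name <FUNCTION_APP_NAME>/config/web \
    --resource-type Microsoft.Web/sites \
    --set properties.functionsRuntimeScaleMonitoringEnabled=1
```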
You can still use triggers from extensions that aren't in this table, but they won't cause scaling beyond their pre-warmed instance count. For a complete list of all trigger and binding extensions, see [Triggers and bindings](./functions-triggers-bindings.md#supported-bindings). -### App Service plan and App Service Environment with virtual network triggers +#### App Service plan and App Service Environment with virtual network triggers -When your function app runs in either an App Service plan or an App Service Environment, you can use non-HTTP trigger functions. For your functions to get triggered correctly, you must be connected to a virtual network with access to the resource defined in the trigger connection. +When your function app runs in either an App Service plan or an App Service Environment, you can write functions that are triggered by resources secured by a virtual network. For your functions to get triggered correctly, your app must be connected to a virtual network with access to the resource defined in the trigger connection. For example, assume you want to configure Azure Cosmos DB to accept traffic only from a virtual network. In this case, you must deploy your function app in an App Service plan that provides virtual network integration with that virtual network. Integration enables a function to be triggered by that Azure Cosmos DB resource. -## Hybrid Connections --[Hybrid Connections](../azure-relay/relay-hybrid-connections-protocol.md) is a feature of Azure Relay that you can use to access application resources in other networks. It provides access from your app to an application endpoint. You can't use it to access your application. Hybrid Connections is available to functions that run on Windows in all but the Consumption plan. --As used in Azure Functions, each hybrid connection correlates to a single TCP host and port combination. This means that the hybrid connection's endpoint can be on any operating system and any application as long as you're accessing a TCP listening port. The Hybrid Connections feature doesn't know or care what the application protocol is or what you're accessing. It just provides network access. --To learn more, see the [App Service documentation for Hybrid Connections](../app-service/app-service-hybrid-connections.md). These same configuration steps support Azure Functions. -->[!IMPORTANT] -> Hybrid Connections is only supported on Windows plans. Linux isn't supported. --## Outbound IP restrictions --Outbound IP restrictions are available in a Flex Consumption plan, Elastic Premium plan, App Service plan, or App Service Environment. You can configure outbound restrictions for the virtual network where your App Service Environment is deployed. --When you integrate a function app in an Elastic Premium plan or an App Service plan with a virtual network, the app can still make outbound calls to the internet by default. By integrating your function app with a virtual network with Route All enabled, you force all outbound traffic to be sent into your virtual network, where network security group rules can be used to restrict traffic. For Flex Consumption all traffic is already routed through the virtual network and Route All is not needed. --To learn how to control the outbound IP using a virtual network, see [Tutorial: Control Azure Functions outbound IP with an Azure virtual network NAT gateway](functions-how-to-use-nat-gateway.md). 
--## Automation -The following APIs let you programmatically manage regional virtual network integrations: --+ **Azure CLI**: Use the [`az functionapp vnet-integration`](/cli/azure/functionapp/vnet-integration) commands to add, list, or remove a regional virtual network integration. -+ **ARM templates**: Regional virtual network integration can be enabled by using an Azure Resource Manager template. For a full example, see [this Functions quickstart template](https://azure.microsoft.com/resources/templates/function-premium-vnet-integration/). - ## Testing considerations When testing functions in a function app with private endpoints, you must do your testing from within the same virtual network, such as on a virtual machine (VM) in that network. To use the **Code + Test** option in the portal from that VM, you need to add following [CORS origins](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#cors) to your function app: |
azure-functions | Functions Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md | Title: Azure Functions scale and hosting description: Compare the various options you need to consider when choosing a hosting plan in which to run your function app in Azure Functions. ms.assetid: 5b63649c-ec7f-4564-b168-e0a74cb7e0f3 Previously updated : 07/16/2024 Last updated : 11/04/2024 # Azure Functions hosting options When you create a function app in Azure, you must choose a hosting option for yo | Hosting option | Service | Availability | Container support | | | | | | -| **[Consumption plan]** | Azure Functions | Generally available (GA) | None | -| **[Flex Consumption plan]** | Azure Functions | Preview | None | +| **[Flex Consumption plan]** | Azure Functions | GA | None | | **[Premium plan]** | Azure Functions | GA | Linux | | **[Dedicated plan]** | Azure Functions | GA | Linux | | **[Container Apps]** | Azure Container Apps | GA | Linux |+| **[Consumption plan]** | Azure Functions | Generally available (GA) | None | Azure Functions hosting options are facilitated by Azure App Service infrastructure on both Linux and Windows virtual machines. The hosting option you choose dictates the following behaviors: The following is a summary of the benefits of the various options for Azure Func | Option | Benefits | | | | -|**[Consumption plan]**| Pay for compute resources only when your functions are running (pay-as-you-go) with automatic scale.<br/><br/>On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/><br/> ✔ Default hosting plan that provides true _serverless_ hosting.<br/>✔ Pay only when your functions are running.<br/>✔ Scales automatically, even during periods of high load.| -|**[Flex Consumption plan]**| Get high scalability with compute choices, virtual networking, and pay-as-you-go billing.<br/><br/>On the Flex Consumption plan, instances of the Functions host are dynamically added and removed based on the configured per instance concurrency and the number of incoming events. <br/><br/> ✔ Reduce cold starts by specifying a number of pre-provisioned (always ready) instances.<br/> ✔ Supports virtual networking for added security.<br/>✔ Pay when your functions are running.<br/>✔ Scales automatically, even during periods of high load.| +|**[Flex Consumption plan]**| Get rapid horizontal scaling with compute choices, virtual networking, and pay-as-you-go billing.<br/><br/>On the Flex Consumption plan, instances of the Functions host are dynamically added and removed based on the configured per instance concurrency and the number of incoming events. <br/><br/> ✔ Reduce cold starts by specifying a number of pre-provisioned (always ready) instances.<br/> ✔ Supports virtual networking for added security.<br/>✔ Pay when your functions are running.<br/>✔ Scales automatically, even during periods of high load.| |**[Premium plan]**|Automatically scales based on demand using prewarmed workers, which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. 
<br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You want more control of your instances and want to deploy multiple function apps on the same plan with event-driven scaling.<br/>✔ You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>✔ You need more CPU or memory options than are provided by consumption plans.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require virtual network connectivity.<br/>✔ You want to provide a custom Linux image in which to run your functions. | |**[Dedicated plan]** |Run your functions within an App Service plan at regular [App Service plan rates](https://azure.microsoft.com/pricing/details/app-service/windows/).<br/><br/>Best for long-running scenarios where [Durable Functions](durable/durable-functions-overview.md) can't be used. Consider an App Service plan in the following situations:<br/><br/>✔ You have existing and underutilized virtual machines that are already running other App Service instances.<br/>✔ You must have fully predictable billing, or you need to manually scale instances.<br/>✔ You want to run multiple web apps and function apps on the same plan.<br/>✔ You need access to larger compute size choices.<br/>✔ Full compute isolation and secure network access provided by an App Service Environment (ASE).<br/>✔ Very high memory usage and high scale (ASE).| | **[Container Apps]** | Create and deploy containerized function apps in a fully managed environment hosted by Azure Container Apps.<br/><br/>Use the Azure Functions programming model to build event-driven, serverless, cloud native function apps. Run your functions alongside other microservices, APIs, websites, and workflows as container-hosted programs. Consider hosting your functions on Container Apps in the following situations:<br/><br/>✔ You want to package custom libraries with your function code to support line-of-business apps.<br/>✔ You need to migrate code execution from on-premises or legacy apps to cloud native microservices running in containers.<br/>✔ When you want to avoid the overhead and complexity of managing Kubernetes clusters and dedicated compute.<br/>✔ Your functions need high-end processing power provided by dedicated GPU compute resources. | +|**[Consumption plan]**| Pay for compute resources only when your functions are running (pay-as-you-go) with automatic scale.<br/><br/>On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/><br/> ✔ Default hosting plan that provides true _serverless_ hosting.<br/>✔ Pay only when your functions are running.<br/>✔ Scales automatically, even during periods of high load.| The remaining tables in this article compare hosting options based on various features and behaviors. This table shows operating system support for the hosting options. 
| Hosting | Linux<sup>1</sup> deployment| Windows<sup>2</sup> deployment | | | | | -| **[Consumption plan]** | ✅ Code-only<br/>❌ Container (not supported) | ✅ Code-only | | **[Flex Consumption plan]** | ✅ Code-only<br/>❌ Container (not supported) | ❌ Not supported | | **[Premium plan]** | ✅ Code-only<br/>✅ Container | ✅ Code-only | | **[Dedicated plan]** | ✅ Code-only<br/>✅ Container | ✅ Code-only | | **[Container Apps]** | ✅ Container-only | ❌ Not supported |+| **[Consumption plan]** | ✅ Code-only<br/>❌ Container (not supported) | ✅ Code-only | 1. Linux is the only supported operating system for the [Python runtime stack](./functions-reference-python.md). 2. Windows deployments are code-only. Functions doesn't currently support Windows containers. Maximum instances are given on a per-function app (Consumption) or per-plan (Pre | Plan | Scale out | Max # instances | | | | |-| **[Consumption plan]** | [Event driven](event-driven-scaling.md). Scales out automatically, even during periods of high load. Functions infrastructure scales CPU and memory resources by adding more instances of the Functions host, based on the number of incoming trigger events. | **Windows:** 200<br/>**Linux:** 100<sup>1</sup> | | **[Flex Consumption plan]** | [Per-function scaling](./flex-consumption-plan.md#per-function-scaling). Event-driven scaling decisions are calculated on a per-function basis, which provides a more deterministic way of scaling the functions in your app. With the exception of HTTP, Blob storage (Event Grid), and Durable Functions, all other function trigger types in your app scale on independent instances. All HTTP triggers in your app scale together as a group on the same instances, as do all Blob storage (Event Grid) triggers. All Durable Functions triggers also share instances and scale together. | Limited only by total memory usage of all instances across a given region. For more information, see [Instance memory](flex-consumption-plan.md#instance-memory). | | **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding more instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-100<sup>2</sup>| | **[Dedicated plan]**<sup>3</sup> | Manual/autoscale |10-30<br/>100 (ASE)| | **[Container Apps]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding more instances of the Functions host, based on the number of events that its functions are triggered on. | 300-1000<sup>4</sup> |+| **[Consumption plan]** | [Event driven](event-driven-scaling.md). Scales out automatically, even during periods of high load. Functions infrastructure scales CPU and memory resources by adding more instances of the Functions host, based on the number of incoming trigger events. | **Windows:** 200<br/>**Linux:** 100<sup>1</sup> | 1. During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/> 2. In some regions, Linux apps on a Premium plan can scale to 100 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). 
<br/> Maximum instances are given on a per-function app (Consumption) or per-plan (Pre | Plan | Details | | -- | -- |-| **[Consumption plan]** | Apps can scale to zero when idle, meaning some requests might have more latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from prewarmed placeholder functions that already have the host and language processes running. | | **[Flex Consumption plan]** | Supports [always ready instances](./flex-consumption-plan.md#always-ready-instances) to reduce the delay when provisioning new instances. | | **[Premium plan]** | Supports [always ready instances](./functions-premium-plan.md#always-ready-instances) to avoid cold starts by letting you maintain one or more _perpetually warm_ instances. | | **[Dedicated plan]** | When running in a Dedicated plan, the Functions host can run continuously on a prescribed number of instances, which means that cold start isn't really an issue. | | **[Container Apps]** | Depends on the [minimum number of replicas](../container-apps/scale-app.md#scale-definition):<br/> • When set to zero: apps can scale to zero when idle and some requests might have more latency at startup.<br/>• When set to one or more: the host process runs continuously, which means that cold start isn't an issue. | +| **[Consumption plan]** | Apps can scale to zero when idle, meaning some requests might have more latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from prewarmed placeholder functions that already have the host and language processes running. | ## Service limits Maximum instances are given on a per-function app (Consumption) or per-plan (Pre | Plan | Details | | | |-| **[Consumption plan]** | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. | -| **[Flex Consumption plan]** | Billing is based on number of executions, the memory of instances when they're actively executing functions, plus the cost of any [always ready instances](./flex-consumption-plan.md#always-ready-instances). For more information, see [Flex Consumption plan billing](flex-consumption-plan.md#billing). +| **[Flex Consumption plan]** | Billing is based on number of executions, the memory of instances when they're actively executing functions, plus the cost of any [always ready instances](./flex-consumption-plan.md#always-ready-instances). For more information, see [Flex Consumption plan billing](flex-consumption-plan.md#billing). | | **[Premium plan]** | Premium plan is based on the number of core seconds and memory used across needed and prewarmed instances. At least one instance per plan must always be kept warm. This plan provides the most predictable pricing. | | **[Dedicated plan]** | You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps.<br/><br/>For an ASE, there's a flat monthly rate that pays for the infrastructure and doesn't change with the size of the environment. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. For more information, see the [ASE overview article](../app-service/environment/overview.md#pricing). | | **[Container Apps]** | Billing in Azure Container Apps is based on your plan type. 
For more information, see [Billing in Azure Container Apps](../container-apps/billing.md).| +| **[Consumption plan]** | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. | For a direct cost comparison between dynamic hosting plans (Consumption, Flex Consumption, and Premium), see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). For pricing of the various Dedicated plan options, see the [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service). For pricing Container Apps hosting, see [Azure Container Apps pricing](https://azure.microsoft.com/pricing/details/container-apps/). |
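With Flex Consumption now GA, a minimal creation sketch follows. All names and the region are placeholders, and the `--flexconsumption-location` parameter is assumed to be available in your (recent) Azure CLI version:

```azurecli
# Create a function app on the Flex Consumption plan.
az functionapp create \
    --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --storage-account <STORAGE_ACCOUNT_NAME> \
    --flexconsumption-location <REGION> \
    --runtime dotnet-isolated \
    --runtime-version 8.0
```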
azure-functions | Language Support Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md | Title: Azure Functions language runtime support policy -description: Learn about Azure Functions language runtime support policy + Title: Azure Functions language stack support policy +description: Learn about the support policy for the various language stacks that Azure Functions supports. Last updated 08/05/2024 -# Language runtime support policy +# Azure Functions language stack support policy -This article explains Azure functions language runtime support policy. +This article explains the support policy for the language stacks supported by Azure Functions. ## Retirement process -Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full-support coverages for function apps, Functions support aligns with end-of-life support for a given language. To achieve this goal, Functions implements a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date. +The Azure Functions runtime includes the Azure Functions host and programming language-specific workers. To maintain full-support coverage when running your functions in Azure, Functions support aligns with end-of-life support for a given language. To help you keep your apps up-to-date and supported, Functions implements a phased reduction in support as language stack versions reach their end-of-life dates. Generally, the retirement date coincides with the community end-of-life date of the given language. ++ **Notification phase**: -The Functions team sends notification emails to function app users about upcoming language version retirements. When you receive the notification, you should prepare to upgrade functions apps to use to a supported version. + The Functions team sends you notification emails about upcoming language version retirements that affect your function apps. When you receive this notification, you should prepare to upgrade these apps to a supported version. -### Retirement phase ++ **Retirement phase**: -After the language end-of-life date, function apps that use retired language versions can still be created and deployed, and they continue to run on the platform. However your apps aren't eligible for new features, security patches, and performance optimizations until you upgrade them to a supported language version. --> [!IMPORTANT] ->You're highly encouraged to upgrade the language version of your affected function apps to a supported version. ->If you're running functions apps using an unsupported runtime or language version, you may encounter issues and performance implications and will be required to upgrade before receiving support for your function app. + After the language end-of-life date, function apps that use retired language versions can still be created and deployed, and they continue to run on the platform. However, these apps aren't eligible for new features, security patches, and performance optimizations until after you upgrade them to a supported language version. + > [!IMPORTANT] + >If you're running function apps using an unsupported runtime or language version, you may encounter issues and performance implications and are required to upgrade before receiving support for your function app. 
Because of this, you're highly encouraged to upgrade the language version of such an app to a supported version. To learn how, see [Update language stack versions in Azure Functions](./update-language-versions.md). ## Retirement policy exceptions -Any Azure Functions supported exceptions to language-specific retirement policies are documented here. +Any Functions-supported exceptions to language-specific retirement policies are documented here: > There are currently no exceptions to the general retirement policy. To learn more about specific language version support policy timeline, visit the ## Configuring language versions -|Language | Configuration guides | +|Language stack | Configuration guides | |--|--| |C# (isolated worker model) |[link](./dotnet-isolated-process-guide.md#supported-versions)| |C# (in-process model) |[link](./functions-dotnet-class-library.md#supported-versions)| To learn more about specific language version support policy timeline, visit the ## Retired runtime versions -This historical table shows the highest language level for specific Azure Functions runtime versions that are no longer supported: +This historical table shows the highest language stack level for no-longer-supported versions of the Functions runtime: -|Language |2.x | 3.x | +|Language stack |2.x | 3.x | |--|--|--| |[C#](functions-dotnet-class-library.md)|GA (.NET Core 2.1)| GA (.NET Core 3.1 & .NET 5<sup>*</sup>) | |[JavaScript/TypeScript](functions-reference-node.md?tabs=javascript)|GA (Node.js 10 & 8)| GA (Node.js 14, 12, & 10) | For the language levels currently supported by Azure Functions, see [Languages b To learn more about how to upgrade the language versions of your function apps, see the following resources: ++ [Update language stack versions](./update-language-versions.md) + [Currently supported language versions](./supported-languages.md#languages-by-runtime-version) |
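A hedged sketch of checking where an app stands before an upgrade, using placeholder names; the exact properties to inspect vary by operating system and language stack:

```azurecli
# Inspect the language stack configuration of a function app.
az functionapp config show \
    --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --query "{linuxFxVersion: linuxFxVersion, netFrameworkVersion: netFrameworkVersion}"

# Example: move a Linux app to a newer language version.
az functionapp config set \
    --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --linux-fx-version "PYTHON|3.11"
```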
azure-functions | Run Functions From Deployment Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md | For more information, see [this announcement](https://github.com/Azure/app-servi ## Enable functions to run from a package -To enable your function app to run from a package, add a `WEBSITE_RUN_FROM_PACKAGE` app setting to your function app. The `WEBSITE_RUN_FROM_PACKAGE` app setting can have one of the following values: +Function apps on the [Flex Consumption](./flex-consumption-plan.md) hosting plan run from a package by default. No special configuration needs to be done. ++To enable your function app to run from a package on the [Consumption](./consumption-plan.md), [Elastic Premium](./functions-premium-plan.md), and [Dedicated (App Service)](./dedicated-plan.md) hosting plans, add a `WEBSITE_RUN_FROM_PACKAGE` app setting to your function app. The `WEBSITE_RUN_FROM_PACKAGE` app setting can have one of the following values: | Value | Description | ||| The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` values ## General considerations ++ Do not add the `WEBSITE_RUN_FROM_PACKAGE` app setting to apps on the [Flex Consumption](./flex-consumption-plan.md) plan. + The package file must be .zip formatted. Tar and gzip formats aren't supported. + [Zip deployment](#integration-with-zip-deployment) is recommended. + When deploying your function app to Windows, you should set `WEBSITE_RUN_FROM_PACKAGE` to `1` and publish with zip deployment. When you set the `WEBSITE_RUN_FROM_PACKAGE` app setting value to `1`, the zip de ## Use WEBSITE_RUN_FROM_PACKAGE = URL -This section provides information about how to run your function app from a package deployed to a URL endpoint. This option is the only one supported for running from a Linux-hosted package with a Consumption plan. +This section provides information about how to run your function app from a package deployed to a URL endpoint. This option is the only one supported for running from a Linux-hosted package with a Consumption plan. This option is not supported in the [Flex Consumption](./flex-consumption-plan.md) plan. ### Considerations for deploying from a URL ++ Do not set `WEBSITE_RUN_FROM_PACKAGE = <URL>` in apps on the [Flex Consumption](./flex-consumption-plan.md) plan. This option is not supported. + Function apps running on Windows experience a slight increase in [cold-start time](event-driven-scaling.md#cold-start) when the application package is deployed to a URL endpoint via `WEBSITE_RUN_FROM_PACKAGE = <URL>`. + When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. + The Functions runtime must have permissions to access the package URL. |
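For the non-Flex plans above, a common pattern is to set the flag and then push a package with zip deploy. A minimal sketch with placeholder names and a local `functionapp.zip`:

```azurecli
# Run from a local package: set the app setting, then zip deploy.
az functionapp config appsettings set \
    --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --settings WEBSITE_RUN_FROM_PACKAGE=1

az functionapp deployment source config-zip \
    --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --src ./functionapp.zip
```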
azure-resource-manager | Migrate Blueprint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate-blueprint.md | + + Title: Migrate blueprints to deployment stacks +description: Learn how to migrate blueprints to deployment stacks. ++ Last updated : 11/11/2024+++# Migrate blueprints to deployment stacks ++This article explains how to convert your Blueprint definitions and assignments into deployment stacks. Deployment stacks are newer tools within the `Microsoft.Resources` namespace that bring Azure Blueprints capabilities directly into Azure Resource Manager. ++## Migration steps ++1. Export the blueprint definitions into blueprint definition JSON files, which include the artifacts of Azure policies, Azure role assignments, and templates. For more information, see [Export your blueprint definition](../../governance/blueprints/how-to/import-export-ps.md#export-your-blueprint-definition). +2. Convert the blueprint definition JSON files into a single ARM template or Bicep file to be deployed via deployment stacks, with the following considerations: ++ - **Role assignments**: Convert any [role assignments](/azure/templates/microsoft.authorization/roleassignments) into Bicep (or ARM JSON template) syntax, and then add them to your main template. + - **Policies**: Convert any [policy assignments](/azure/templates/microsoft.authorization/policyassignments) into the Bicep (or ARM JSON template) syntax, and then add them to your main template. You can also embed the [`policyDefinitions`](/azure/templates/microsoft.authorization/policydefinitions) into the JSON template. + - **Templates**: Convert any templates into a main template for submission to a deployment stack. You can use [modules](./modules.md) in Bicep, embed templates as nested templates or template links, and optionally use [template specs](./template-specs.md) to store your templates in Azure. Template Specs aren't required to use deployment stacks. + - **Locks**: The deployment stack [deny settings](./deployment-stacks.md#protect-managed-resources) capability gives you the ability to block unwanted changes via `DenySettingsMode` (similar to [Blueprint locks](../../governance/blueprints/concepts/resource-locking.md)). You can configure these settings via Azure CLI or Azure PowerShell. To set deny settings, you need the corresponding roles. For more information, see [Deployment stacks](./deployment-stacks.md). ++3. You can optionally create template specs for the converted ARM templates or Bicep files. Template specs allow you to store templates and their versions in your Azure environment, simplifying the sharing of the templates across your organization. Deployment stacks enable you to deploy template spec definitions, or ARM templates/Bicep files, to a specified target scope. ++## Sample ++The following Bicep file is a sample migration file. 
+
+```bicep
+targetScope = 'subscription'
+
+param roleAssignmentName string = 'myTestRoleAssignment'
+// Must be a full role definition resource ID; the default uses the built-in Reader role.
+param roleDefinitionId string = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
+// Replace with the object ID of a real user, group, or service principal.
+param principalId string = guid('myTestId')
+
+param policyAssignmentName string = 'myTestPolicyAssignment'
+param policyDefinitionID string = '/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d'
+
+param rgName string = 'myTestRg'
+param rgLocation string = deployment().location
+param templateSpecName string = 'myNetworkingTs'
+
+// Step 1 - create role assignments
+resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
+ name: guid(roleAssignmentName)
+ properties: {
+ principalId: principalId
+ roleDefinitionId: roleDefinitionId
+ }
+}
+
+// Step 2 - create policy assignments
+resource policyAssignment 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
+ name: policyAssignmentName
+ // Scope the assignment to the resource group created in step 3;
+ // resourceGroup() isn't available in a subscription-scope file.
+ scope: rg1
+ properties: {
+ policyDefinitionId: policyDefinitionID
+ }
+}
+
+// Step 3 - create template artifacts via modules (or template specs)
+resource rg1 'Microsoft.Resources/resourceGroups@2021-01-01' = {
+ name: rgName
+ location: rgLocation
+}
+
+module vnet 'templates/bicep/vnet.bicep' = if (rgName == 'myTestRg') {
+ name: uniqueString(rgName)
+ scope: rg1
+ params: { location: rgLocation }
+}
+```
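To carry the sample through, here is one way to deploy the converted file via a deployment stack, as step 2's lock discussion suggests. A sketch, assuming the sample is saved as `main.bicep`; the stack name and deny settings are placeholders to adapt to your former Blueprint locks:

```bash
# Sketch only; choose --deny-settings-mode to mirror your old Blueprint locks.
az stack sub create \
  --name blueprint-migration-stack \
  --location eastus \
  --template-file main.bicep \
  --action-on-unmanage detachAll \
  --deny-settings-mode denyDelete
```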
azure-resource-manager | Error Job Size Exceeded | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-job-size-exceeded.md | Title: Job size exceeded error description: Describes how to troubleshoot errors for job size exceeded or if the template is too large for deployments using a Bicep file or Azure Resource Manager template (ARM template). Previously updated : 06/20/2024 Last updated : 11/11/2024 # Resolve errors for job size exceeded When deploying a template, you receive an error stating the deployment has excee ## Cause -You get this error when the deployment exceeds an allowed limit. Typically, you see this error when either your template or the job that runs the deployment is too large. +This error occurs when the deployment exceeds the allowed size limits. It usually appears when the template or the deployment job is too large. Note that templates are compressed before their sizes are verified for deployment, so the limits apply to the compressed size rather than the template's raw size. -The deployment job can't exceed 1 MB and that includes metadata about the request. For large templates, the metadata combined with the template might exceed a job's allowed size. +The deployment job size limit is 1 MB after compression, including metadata about the request. For large templates, the combined size of metadata and the template may surpass this limit. -The template can't exceed 4 MB, and each resource definition can't exceed 1 MB. The limits apply to the final state of the template after it has been expanded for resource definitions that use loops to create many instances. The final state also includes the resolved values for variables and parameters. +The compressed template size itself can't exceed 4 MB, and each individual resource definition can't exceed 1 MB after compression. These limits apply to the template's final state after expansion for any resource definitions that use loops to create multiple instances, which includes resolved values for all variables and parameters. Other template limits are: Other template limits are: - 64 output values - 24,576 characters in a template expression -## Solution 1: Use dependencies carefully +## Solution 1: Reduce name size # [Bicep](#tab/bicep) -Use an [implicit dependency](../bicep/resource-dependencies.md#implicit-dependency) that's created when a resource references another resource by its symbolic name. For most deployments, it's not necessary to use `dependsOn` and create an [explicit dependency](../bicep/resource-dependencies.md#explicit-dependency). +Try to shorten the length of the names you use for [parameters](../bicep/parameters.md), [variables](../bicep/variables.md), and [outputs](../bicep/outputs.md). When these values are repeated in loops, a long name gets multiplied many times. # [JSON](#tab/json) -When using [copy](../templates/copy-resources.md) loops to deploy resources, don't use the loop name as a dependency: --```json -dependsOn: [ "nicLoop" ] -``` --Instead, use the instance of the resource from the loop that you need to depend on. For example: --```json -dependsOn: [ - "[resourceId('Microsoft.Network/networkInterfaces', concat('nic-', copyIndex()))]" -] -``` +Try to shorten the length of the names you use for [parameters](../templates/parameters.md), [variables](../templates/variables.md), and [outputs](../templates/outputs.md). When these values are repeated through copy loops, a long name gets multiplied many times. 
When your file deploys lots of different resource types, consider dividing it in You can set other resources as implicit dependencies, and [get values from the output of modules](../bicep/outputs.md#outputs-from-modules). +Use [template specs](../bicep/template-specs.md) rather than [Bicep modules](../bicep/modules.md). Bicep modules are converted into a single ARM template with nested templates. # [JSON](#tab/json) When your template deploys lots of different resource types, consider dividing i You can set other resources as dependent on the linked template, and [get values from the output of the linked template](../templates/linked-templates.md#get-values-from-linked-template). +Use [template specs](../templates/linked-templates.md#template-specs) rather than [nested templates](../templates/linked-templates.md#nested-template). + -## Solution 3: Reduce name size +## Solution 3: Use dependencies carefully # [Bicep](#tab/bicep) -Try to shorten the length of the names you use for [parameters](../bicep/parameters.md), [variables](../bicep/variables.md), and [outputs](../bicep/outputs.md). When these values are repeated in loops, a long name gets multiplied many times. +Use an [implicit dependency](../bicep/resource-dependencies.md#implicit-dependency) that's created when a resource references another resource by its symbolic name. For most deployments, it's not necessary to use `dependsOn` and create an [explicit dependency](../bicep/resource-dependencies.md#explicit-dependency). # [JSON](#tab/json) -Try to shorten the length of the names you use for [parameters](../templates/parameters.md), [variables](../templates/variables.md), and [outputs](../templates/outputs.md). When these values are repeated through copy loops, a long name gets multiplied many times. +When using [copy](../templates/copy-resources.md) loops to deploy resources, don't use the loop name as a dependency: ++```json +dependsOn: [ "nicLoop" ] +``` ++Instead, use the instance of the resource from the loop that you need to depend on. For example: ++```json +dependsOn: [ + "[resourceId('Microsoft.Network/networkInterfaces', concat('nic-', copyIndex()))]" +] +``` ++Complex dependencies can quickly consume the data limits. For example, if a loop of *n* resources depends on another loop of *n* resources, it results in storing *O(n²)* data. By contrast, if each resource in one loop only depends on its counterpart in the other loop, it results in *O(n)* data. This difference may seem subtle, but the storage impact grows very quickly. ++## Solution 4: Reduce incompressible data ++Including large amounts of incompressible data in a template or its parameters, such as certificates, binaries, or other data with a low compression ratio, quickly consumes the size limit. |
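Because the limits described above apply after compression, a rough local check can save a failed deployment. A hedged sketch (gzip only approximates the service-side compression, `az deployment group validate` won't necessarily surface every limit error, and the names are placeholders):

```bash
# Raw and approximate compressed template sizes, in bytes.
wc -c azuredeploy.json
gzip -c azuredeploy.json | wc -c   # compare against the 4 MB compressed limit

# Optionally ask the service to validate before a full deployment.
az deployment group validate \
  --resource-group my-resource-group \
  --template-file azuredeploy.json
```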
communication-services | Media Quality Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-quality-sdk.md | zone_pivot_groups: acs-plat-web-ios-android-windows # Media quality statistics -To help you understand media quality in VoIP and video calls that use Azure Communication Services, there's a feature called *media quality statistics*. Use it to examine the low-level audio, video, and screen-sharing quality metrics for incoming and outgoing call metrics. +To help you better understand media quality in VoIP and video calls that use Azure Communication Services, there's a feature called *media quality statistics*. Use it to examine the low-level audio, video, and screen-sharing quality metrics for incoming and outgoing calls. ::: zone pivot="platform-web" [!INCLUDE [Media Stats for Web](./includes/media-stats/media-stats-web.md)] |
communication-services | Pre Call Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md | Title: Azure Communication Services Pre-Call diagnostics + Title: Azure Communication Services pre-call diagnostics -description: Overview of Pre-Call Diagnostic APIs +description: Overview of the pre-call diagnostic API feature. -# Pre-Call diagnostic +# Pre-call diagnostic [!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include.md)] -The Pre-Call API enables developers to programmatically validate a client's readiness to join an Azure Communication Services Call. The Pre-Call APIs can be accessed through the Calling SDK. They provide multiple diagnostics including device, connection, and call quality. Pre-Call APIs are available only for Web (JavaScript). We'll be enabling these capabilities across platforms in the future, provide us with feedback on what platforms you would like to see Pre-Call APIs on. +The pre-call API feature enables developers to programmatically validate a client's readiness to join an Azure Communication Services call. You can only access pre-call features using the Calling SDK. The pre-call diagnostic feature provides multiple diagnostics including device, connection, and call quality. The pre-call diagnostic feature is available only for Web (JavaScript). We plan to enable these capabilities across platforms in the future. Provide us with [feedback](../../support.md) about which platforms you want to see pre-call diagnostics enabled on. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Node.js](https://nodejs.org/) active Long Term Support (LTS) versions are recommended. - An active Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](../../quickstarts/identity/access-tokens.md). You can also use the Azure CLI and run the next command with your connection string to create a user and an access token. (Need to grab connection string from the resource through Azure portal.)+- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](../../quickstarts/identity/access-tokens.md). You can also use the Azure CLI and run the next command with your connection string to create a user and an access token. Remember to copy the connection string from the resource through the Azure portal. ```azurecli-interactive az communication identity token issue --scope voip --connection-string "yourConnectionString" ``` - For details, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/identity/access-tokens.md?pivots=platform-azcli). + For more information, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/identity/access-tokens.md?pivots=platform-azcli). -## Accessing Pre-Call APIs +## Accessing pre-call diagnostics >[!IMPORTANT] >-Pre-Call diagnostics are available starting on the version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the Calling SDK. Make sure to use that version when trying the next instructions. 
+>Pre-call diagnostics are available starting with version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the Calling SDK. Make sure to use that version or higher when following these instructions. -To Access the Pre-Call API, you need to initialize a `callClient`, and provision an Azure Communication Services access token. There you can access the `PreCallDiagnostics` feature and the `startTest` method. +To access pre-call diagnostics, you need to initialize a `callClient`, and provision an Azure Communication Services access token. There you can access the `PreCallDiagnostics` feature and the `startTest` method. ```javascript import { CallClient, Features} from "@azure/communication-calling"; Once it finishes running, developers can access the result object. ## Diagnostic results -The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a `PreCallDiagnosticsResult` object. +Pre-call diagnostics returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality statistics, and in-call diagnostics. The results are returned as a `PreCallDiagnosticsResult` object. ```javascript export declare type PreCallDiagnosticsResult = { ``` -Individual result objects can be accessed as such using the `preCallDiagnosticsResult` constant. Results for individual tests be returned as they're completed with many of the test results being available immediately. If you use the `inCallDiagnostics` test, the results might take up to 1 minute as the test validates quality of the video and audio. +You can access individual result objects using the `preCallDiagnosticsResult` type. Results for individual tests are returned as they're completed, with many of the test results available immediately. If you use the `inCallDiagnostics` test, the results might take up to 1 minute as the test validates the quality of the video and audio. ### Browser support-Browser compatibility check. Checks for `Browser` and `OS` compatibility and provides a `Supported` or `NotSupported` value back. ++Browser compatibility check. Checks for `Browser` and `OS` compatibility and returns a `Supported` or `NotSupported` value. ```javascript const browserSupport = await preCallDiagnosticsResult.browserSupport; ``` -In the case that the test fails and the browser being used by the user is `NotSupported`, the easiest way to fix that is by asking the user to switch to a supported browser. Refer to the supported browsers in our [documentation](./calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser). +If the test fails and the browser being used by the user is `NotSupported`, the easiest way to fix it is to ask the user to switch to a supported browser. Refer to the supported browsers in [Calling SDK overview > JavaScript Calling SDK support by OS and browser](./calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser). >[!NOTE] >Known issue: `browser support` test returning `Unknown` in cases where it should be returning a correct value. ### Device access-Permission check. Checks whether video and audio devices are available from a permissions perspective. Provides `boolean` value for `audio` and `video` devices. ++The permission check determines whether video and audio devices are available from a permissions perspective. 
Provides a `boolean` value for `audio` and `video` devices. ```javascript Permission check. Checks whether video and audio devices are available from a pe ``` -In the case that the test fails and the permissions are false for audio and video, the user shouldn't continue into joining a call. Rather you need to prompt the user to enable the permissions. To do it, the best way is provided the specific instruction on how to access permission access based on the OS, version and browser they are on. For more information on permissions, check out our [recommendations](https://techcommunity.microsoft.com/t5/azure-communication-services/checklist-for-advanced-calling-experiences-in-mobile-web/ba-p/3266312). +If the test fails and the permissions are false for audio and video, the user shouldn't continue to join a call. Rather, prompt the user to enable the permissions. The best way to do this is by providing specific instructions on how to grant permissions based on the OS, version, and browser they're using. For more information about permissions, see the [Checklist for advanced calling experiences in web browsers](https://techcommunity.microsoft.com/t5/azure-communication-services/checklist-for-advanced-calling-experiences-in-mobile-web/ba-p/3266312). ### Device enumeration-Device availability. Checks whether microphone, camera and speaker devices are detected in the system and ready to use. Provides an `Available` or `NotAvailable` value back. ++Device availability. Checks whether microphone, camera, and speaker devices are detected in the system and ready to use. Returns an `Available` or `NotAvailable` value. ```javascript Device availability. Checks whether microphone, camera and speaker devices are d ``` -In the case that devices aren't available, the user shouldn't continue into joining a call. Rather the user should be prompted to check device connections to ensure any headsets, cameras or speakers are properly connected. For more information on device management, check out our [documentation](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#device-management) +If devices aren't available, the user shouldn't continue to join a call. Rather, prompt the user to check device connections to ensure any headsets, cameras, or speakers are properly connected. For more information about device management, see [Manage video during calls](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#device-management). ### InCall diagnostics-Performs a quick call to check in-call metrics for audio and video and provides results back. Includes connectivity (`connected`, boolean), bandwidth quality (`bandWidth`, `'Bad' | 'Average' | 'Good'`) and call diagnostics for audio and video (`diagnostics`). Diagnostic are provided `jitter`, `packetLoss` and `rtt` and results are generated using a simple quality grade (`'Bad' | 'Average' | 'Good'`). -InCall diagnostics uses [media quality stats](./media-quality-sdk.md) to calculate quality scores and diagnose issues. During the pre-call diagnostic, the full set of media quality stats are available for consumption. These stats include raw values across video and audio metrics that can be used programatically. The InCall diagnostic provides a convenience layer on top of media quality stats to consume the results without the need to process all the raw data. See section on media stats for instructions to access. +Performs a quick call to check in-call metrics for audio and video and provides results back. 
Includes connectivity (`connected`, boolean), bandwidth quality (`bandWidth`, `'Bad' | 'Average' | 'Good'`) and call diagnostics for audio and video (`diagnostics`). Provided diagnostic categories include `jitter`, `packetLoss`, and `rtt`, and results are generated using a simple quality grade (`'Bad' | 'Average' | 'Good'`). ++InCall diagnostics uses [Media quality statistics](./media-quality-sdk.md) to calculate quality scores and diagnose issues. During the pre-call diagnostic, the full set of media quality statistics is available for consumption. These statistics include raw values across video and audio metrics that you can use programmatically. ++The InCall diagnostic provides a convenience layer on top of media quality statistics to consume the results without the need to process all the raw data. For more information including instructions to access, see [Media quality statistics for an ongoing call](./media-quality-sdk.md#media-quality-statistics-for-an-ongoing-call). ```javascript InCall diagnostics uses [media quality stats](./media-quality-sdk.md) to calcula ``` -At this step, there are multiple failure points to watch out for. The values provided by the API are based on the threshold values required by the service. Those raw thresholds can be found in our [media quality stats documentation](./media-quality-sdk.md#best-practices). --- If connection fails, the user should be prompted to recheck their network connectivity. Connection failures can also be attributed to network conditions like DNS, Proxies or Firewalls. For more information on recommended network setting, check out our [documentation](network-requirements.md).-- If bandwidth is `Bad`, the user should be prompted to try out a different network or verify the bandwidth availability on their current one. Ensure no other high bandwidth activities might be taking place.+At this step, there are multiple possible failure points. The values provided by the API are based on the threshold values required by the service. The raw thresholds can be found in [Media quality statistics](./media-quality-sdk.md#best-practices). -### Media stats -For granular stats on quality metrics like jitter, packet loss, rtt, etc. `callMediaStatistics` are provided as part of the `preCallDiagnosticsResult` feature. See the [full list and description of the available metrics](./media-quality-sdk.md) in the linked article. You can subscribe to the call media stats to get full collection of them. This stat is the raw metrics that are used to calculate InCall diagnostic results and which can be consumed granularly for further analysis. --```javascript --const mediaStatsCollector = callMediaStatistics.startCollector(); --mediaStatsCollector.on('mediaStatsEmitted', (mediaStats: SDK.MediaStats) => { - // process the stats for the call. - console.log(mediaStats); -}); --``` +- If a connection fails, prompt users to recheck their network connectivity. Connection failures can also be attributed to network conditions like DNS, proxies, or firewalls. For more information on recommended network settings, see [Network recommendations](network-requirements.md). +- If bandwidth is `Bad`, prompt users to try a different network or verify the bandwidth availability on their current network. Ensure no other high bandwidth activities are taking place. ## Pricing -When the Pre-Call diagnostic test runs, behind the scenes it uses calling minutes to run the diagnostic. 
The test lasts for roughly 30 seconds, using up 30 seconds of calling which is charged at the standard rate of $0.004 per participant per minute. For the case of Pre-Call diagnostic, the charge will be for 1 participant x 30 seconds = $0.002. +When the pre-call diagnostic test runs, it uses calling minutes behind the scenes to run the diagnostic. The test lasts for roughly 30 seconds, using up 30 seconds of calling time, which is charged at the standard rate of $0.004 per participant per minute. For the case of pre-call diagnostics, the charge is for 1 participant x 30 seconds = $0.002. ## Next steps |
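The pricing arithmetic above is easy to sanity-check locally. A throwaway sketch (rate as quoted in the article; adjust if the published rate changes):

```bash
# $0.004 per participant per minute, 1 participant, 30 seconds:
echo "scale=4; 0.004 * (30 / 60) * 1" | bc
# prints .0020, matching the $0.002 figure above
```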
container-apps | Java Application Performance Management Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-application-performance-management-config.md | + + Title: "Tutorial: Configure Application Performance Management (APM) Java agent with init-container in Azure Container Apps" +description: Learn how to configure the Application Performance Management (APM) Java agent with an init container in Azure Container Apps +++++ Last updated : 11/4/2024++++# Tutorial: Configure Application Performance Management (APM) Java agent with init-container in Azure Container Apps ++Application Performance Management (APM) helps power observability for your container apps. You can package the APM plugin in the same image or Dockerfile as your app, but doing so couples their management efforts, such as release and Common Vulnerabilities and Exposures (CVE) mitigation. Rather than binding these concerns together, you can use a Java agent and init containers in Azure Container Apps to inject APM solutions without modifying your app image. ++In this tutorial, you learn how to: ++> [!div class="checklist"]
+> * Prepare an image that sets up the Java agent, and push it to Azure Container Registry
+> * Create a Container Apps environment and a container app as the target Java app
+> * Configure init containers and volume mounts to set up Application Insights integration
++## Prerequisites ++- Have an instance of [Application Insights](/azure/azure-monitor/app/app-insights-overview)
+- Have an instance of Azure Container Registry or another container image registry
+- Install [Docker](https://www.docker.com/) to build images
+- Install the latest version of the [Azure CLI](/cli/azure/install-azure-cli)
++## Set up the environment ++The following commands help you define variables and ensure your Container Apps extension is up to date. ++1. Set up environment variables used in the following commands. ++ # [Bash](#tab/bash) ++ ```bash
+ SUBSCRIPTION_ID="<SUBSCRIPTION_ID>" # Replace with your own Azure subscription ID
+ APP_INSIGHTS_RESOURCE_ID="/subscriptions/$SUBSCRIPTION_ID/resourceGroups/my-resource-group/providers/microsoft.insights/components/my-app-insights"
+ CONTAINER_REGISTRY_NAME="myacr"
+ RESOURCE_GROUP="my-resource-group"
+ ENVIRONMENT_NAME="my-environment"
+ CONTAINER_APP_NAME="my-container-app"
+ LOCATION="eastus"
+ ```
++ # [PowerShell](#tab/powershell) ++ ```powershell
+ $SUBSCRIPTION_ID="<SUBSCRIPTION_ID>" # Replace with your own Azure subscription ID
+ $APP_INSIGHTS_RESOURCE_ID="/subscriptions/$SUBSCRIPTION_ID/resourceGroups/my-resource-group/providers/microsoft.insights/components/my-app-insights"
+ $CONTAINER_REGISTRY_NAME="myacr"
+ $RESOURCE_GROUP="my-resource-group"
+ $ENVIRONMENT_NAME="my-environment"
+ $CONTAINER_APP_NAME="my-container-app"
+ $LOCATION="eastus"
+ ```
++1. Sign in to the Azure CLI. ++ # [Bash](#tab/bash) ++ ```bash
+ az login
+ az account set --subscription $SUBSCRIPTION_ID
+ ```
++ # [PowerShell](#tab/powershell) ++ ```powershell
+ az login
+ az account set --subscription $SUBSCRIPTION_ID
+ ```
++1. Ensure you have the latest version of Azure CLI extensions for Container Apps and Application Insights. ++ # [Bash](#tab/bash) ++ ```bash
+ az extension add -n containerapp --upgrade
+ az extension add -n application-insights --upgrade
+ ```
++ # [PowerShell](#tab/powershell) ++ ```powershell
+ az extension add -n containerapp --upgrade
+ az extension add -n application-insights --upgrade
+ ```
++1. Retrieve the connection string of Application Insights. 
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ CONNECTION_STRING=$(az monitor app-insights component show \
+ --ids $APP_INSIGHTS_RESOURCE_ID \
+ --query connectionString)
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ $CONNECTION_STRING=(az monitor app-insights component show `
+ --ids $APP_INSIGHTS_RESOURCE_ID `
+ --query connectionString)
+ ```
+
+## Prepare the container image
+
+1. Build the setup image for the Application Insights Java agent.
+
+ Save the Dockerfile along with the setup script, and run `docker build` in the same directory.
+ 
+ ```Dockerfile
+ FROM mcr.microsoft.com/cbl-mariner/base/core:2.0
+ 
+ ARG version="3.5.4"
+ 
+ RUN tdnf update -y && tdnf install -y curl ca-certificates
+ 
+ RUN curl -L "https://github.com/microsoft/ApplicationInsights-Java/releases/download/${version}/applicationinsights-agent-${version}.jar" > agent.jar
+ 
+ ADD setup.sh /setup.sh
+ 
+ ENTRYPOINT ["/bin/sh", "setup.sh"]
+ ```
+
+ 
+ ```setup.sh
+ #!/bin/sh
+
+ if [[ -z "$CONNECTION_STRING" ]]; then
+ echo "Environment variable CONNECTION_STRING is not found. Exiting..."
+ exit 1
+ else
+ echo "{\"connectionString\": \"$CONNECTION_STRING\"}" > /java-agent/applicationinsights.json
+ cp agent.jar /java-agent/agent.jar
+ fi
+ ```
+
+ 
+ # [Bash](#tab/bash)
+
+ ```bash
+ docker build . -t "$CONTAINER_REGISTRY_NAME.azurecr.io/samples/java-agent-setup:1.0.0"
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ docker build . -t "$CONTAINER_REGISTRY_NAME.azurecr.io/samples/java-agent-setup:1.0.0"
+ ```
+
+1. Push the image to Azure Container Registry or another container image registry.
+ 
+ # [Bash](#tab/bash)
+
+ ```bash
+ az acr login --name $CONTAINER_REGISTRY_NAME
+ docker push "$CONTAINER_REGISTRY_NAME.azurecr.io/samples/java-agent-setup:1.0.0"
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ az acr login --name $CONTAINER_REGISTRY_NAME
+ docker push "$CONTAINER_REGISTRY_NAME.azurecr.io/samples/java-agent-setup:1.0.0"
+ ```
+
+> [!TIP]
+> You can find the code related to this step in [Azure-Samples/azure-container-apps-java-samples](https://github.com/Azure-Samples/azure-container-apps-java-samples).
+
+## Create a Container Apps environment and a Container App as the target Java app
+
+1. Create a Container Apps environment.
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp env create \
+ --name $ENVIRONMENT_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location "$LOCATION" \
+ --query "properties.provisioningState"
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ az containerapp env create `
+ --name $ENVIRONMENT_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location "$LOCATION" `
+ --query "properties.provisioningState"
+ ```
+
+ Once created, the command returns a "Succeeded" message.
+
+1. Create a container app for further configuration.
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp create \
+ --name $CONTAINER_APP_NAME \
+ --environment $ENVIRONMENT_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --query "properties.provisioningState"
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ az containerapp create `
+ --name $CONTAINER_APP_NAME `
+ --environment $ENVIRONMENT_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --query "properties.provisioningState"
+ ```
+
+ Once created, the command returns a "Succeeded" message.
+
+## Configure init-container, secrets, environment variables, and volumes to set up Application Insights integration
+
+1. Get the current configuration of the running container app. 
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp show \
+ --name $CONTAINER_APP_NAME \
+ --resource-group $RESOURCE_GROUP \
+ -o yaml > app.yaml
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ az containerapp show `
+ --name $CONTAINER_APP_NAME `
+ --resource-group $RESOURCE_GROUP `
+ -o yaml > app.yaml
+ ```
+
+ The YAML file `app.yaml` is created in the current directory.
+
+1. Edit the app YAML file.
+
+ - Add a secret for the Application Insights connection string
+
+ ```yaml
+ properties:
+ configuration:
+ secrets:
+ - name: app-insights-connection-string
+ value: $CONNECTION_STRING
+ ```
+
+ Replace `$CONNECTION_STRING` with your Azure Application Insights connection string.
+
+ - Add an ephemeral storage volume for Java agent files
+ 
+ ```yaml
+ properties:
+ template:
+ volumes:
+ - name: java-agent-volume
+ storageType: EmptyDir
+ ```
+
+ - Add an init container with volume mounts and environment variables
+ 
+ ```yaml
+ properties:
+ template:
+ initContainers:
+ - image: <CONTAINER_REGISTRY_NAME>.azurecr.io/samples/java-agent-setup:1.0.0
+ name: java-agent-setup
+ resources:
+ cpu: 0.25
+ memory: 0.5Gi
+ env:
+ - name: CONNECTION_STRING
+ secretRef: app-insights-connection-string
+ volumeMounts:
+ - mountPath: /java-agent
+ volumeName: java-agent-volume
+ ```
+
+ Replace `<CONTAINER_REGISTRY_NAME>` with your Azure Container Registry name.
+
+ - Update the app container with volume mounts and environment variables
+ 
+ ```yaml
+ properties:
+ template:
+ containers:
+ - name: test-java-app
+ image: mcr.microsoft.com/azurespringapps/samples/hello-world:0.0.1
+ resources:
+ cpu: 0.5
+ memory: 1Gi
+ env:
+ - name: JAVA_TOOL_OPTIONS
+ value: -javaagent:/java-agent/agent.jar
+ volumeMounts:
+ - mountPath: /java-agent
+ volumeName: java-agent-volume
+ ```
+
+1. Update the container app with the modified YAML file.
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp update \
+ --name $CONTAINER_APP_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --yaml app.yaml \
+ --query "properties.provisioningState"
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ az containerapp update `
+ --name $CONTAINER_APP_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --yaml app.yaml `
+ --query "properties.provisioningState"
+ ```
+
+ Once updated, the command returns a "Succeeded" message. Then you can check Application Insights in the Azure portal to confirm that your container app is connected.
+
+## Clean up resources
+
+The resources created in this tutorial contribute to your Azure bill. If you aren't going to keep them in the long term, run the following commands to clean them up.
+
+# [Bash](#tab/bash)
+```bash
+az group delete --name $RESOURCE_GROUP
+```
+# [PowerShell](#tab/powershell)
+```powershell
+az group delete --name $RESOURCE_GROUP
+```
+
+
+
+## Other APM solutions
+
+Besides [Azure Application Insights](/azure/azure-monitor/app/java-standalone-config), there are other popular APM solutions in the community. To integrate your Azure Container App with another APM provider, replace the Java agent JAR and related configuration files, as shown in the sketch after the following list. 
+
+- [AppDynamics](https://docs.appdynamics.com/appd/21.x/21.4/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent)
+- [Dynatrace](https://docs.dynatrace.com/docs/setup-and-configuration/technology-support/application-software/java)
+- [Elastic](https://www.elastic.co/guide/en/apm/agent/java/index.html)
+- [NewRelic](https://docs.newrelic.com/docs/apm/agents/java-agent/getting-started/introduction-new-relic-java/) |
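One convenience of the setup image above: its Dockerfile exposes the agent version as a build argument, so swapping versions doesn't require editing the Dockerfile body. A sketch, assuming the Dockerfile shown earlier (the `1.0.1` tag is hypothetical); for another APM provider, you'd also change the download URL in the Dockerfile:

```bash
# Rebuild the setup image with a different agent version via the Dockerfile's ARG.
docker build . \
  --build-arg version=3.5.4 \
  -t "$CONTAINER_REGISTRY_NAME.azurecr.io/samples/java-agent-setup:1.0.1"
```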
container-apps | Java Dynamic Log Level | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-dynamic-log-level.md | The following Java logging frameworks are supported: ### Supported log levels by different logging frameworks -Different logging frameworks support different log levels. In the JVM diagnostics platform, some frameworks are better supported than others. Before changing logging levels, make sure the log levels you're using are supported by both the framework and platform. +Different logging frameworks support different log levels. In the JVM diagnostics platform, some frameworks are better supported than others. Before changing logging levels, make sure the framework and platform support the log levels you're using. -| Framework | OFF | FATAL | ERROR | WARN | INFO | DEBUG | TRACE | ALL | -||-|-|-|||-|-|--| -| Log4j2 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | -| Logback | Yes | No | Yes | Yes | Yes | Yes | Yes | Yes | -| jboss-logging | No | Yes | Yes | Yes | Yes | Yes | Yes | No | -| **Platform** | Yes | No | Yes | Yes | Yes | Yes | Yes | No | +| Framework | OFF | FATAL | ERROR | WARN | INFO | DEBUG | TRACE | +||-|-|-|||-|-| +| Log4j2 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| Logback | Yes | No | Yes | Yes | Yes | Yes | Yes | +| jboss-logging | No | Yes | Yes | Yes | Yes | Yes | Yes | +| **Platform** | Yes | No | Yes | Yes | Yes | Yes | Yes | ### General visibility of log levels -| Log Level | FATAL | ERROR | WARN | INFO | DEBUG | TRACE | ALL | -|--|-|-|||-|-|--| -| **OFF** | | | | | | | | -| **FATAL** | Yes | | | | | | | -| **ERROR** | Yes | Yes | | | | | | -| **WARN** | Yes | Yes | Yes | | | | | -| **INFO** | Yes | Yes | Yes | Yes | | | | -| **DEBUG** | Yes | Yes | Yes | Yes | Yes | | | -| **TRACE** | Yes | Yes | Yes | Yes | Yes | Yes | | -| **ALL** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | --For example, if you set log level to `DEBUG`, your app will print logs with level `FATAL`, `ERROR`, `WARN`, `INFO`, `DEBUG` and will NOT print logs with level `TRACE` AND `ALL`. +| Log Level | FATAL | ERROR | WARN | INFO | DEBUG | TRACE | +|--|-|-|||-|-| +| **OFF** | | | | | | | +| **FATAL** | Yes | | | | | | +| **ERROR** | Yes | Yes | | | | | +| **WARN** | Yes | Yes | Yes | | | | +| **INFO** | Yes | Yes | Yes | Yes | | | +| **DEBUG** | Yes | Yes | Yes | Yes | Yes | | +| **TRACE** | Yes | Yes | Yes | Yes | Yes | Yes | ++For example, if you set log level to `INFO`, your app prints logs with level `FATAL`, `ERROR`, `WARN`, and `INFO`, and does NOT print logs with level `DEBUG` or `TRACE`. ## Related content |
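To change a level at runtime as described above, the `containerapp` CLI extension exposes Java logger commands. A minimal sketch, assuming a recent extension version and hypothetical app and logger names:

```bash
# Raise org.springframework to debug without redeploying the app.
az containerapp java logger set \
  --logger-name "org.springframework" \
  --logger-level debug \
  --name my-container-app \
  --resource-group my-resource-group
```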
deployment-environments | Quickstart Create Access Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md | As a developer, you can create environments associated with a [project](concept- ## Prerequisites -- [Create and configure a dev center](quickstart-create-and-configure-devcenter.md).+- Your organization must configure Azure Deployment Environments with a dev center and at least one project before you can create a deployment environment. + - Platform engineers can follow the steps in [Quickstart: Configure Azure Deployment Environments](quickstart-create-and-configure-devcenter.md). +- You must have the Deployment Environments User role for a project. If you don't have permissions for a project, contact your administrator. ## Create an environment |
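For scripted scenarios, the same environment creation flow is available through the `devcenter` Azure CLI extension. A hedged sketch with placeholder names (exact parameter spellings can vary across extension versions, so treat them as assumptions):

```bash
# Create an environment from an environment definition in a catalog.
az devcenter dev environment create \
  --dev-center-name my-dev-center \
  --project-name my-project \
  --catalog-name my-catalog \
  --environment-definition-name my-definition \
  --environment-type my-environment-type \
  --name my-environment
```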
event-hubs | Event Hubs Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-data-explorer.md | Title: Overview of the Event Hubs Data Explorer description: This article provides an overview of the Event Hubs Data Explorer, which provides an easy way to send data to and receive data from Azure Event Hubs. Previously updated : 08/22/2024 Last updated : 11/11/2024 -# Use Event Hubs Data Explorer to run data operations on Event Hubs +# Use Event Hubs Data Explorer to run data operations on Event Hubs (Preview) Azure Event Hubs is a scalable event processing service that ingests and processes large volumes of events and data, with low latency and high reliability. For a high-level overview of the service, see [What is Event Hubs?](event-hubs-about.md). To download the event payload, select the specific event and select the **downlo ## Next steps * Learn more about [Event Hubs](event-hubs-about.md).- * Check out [Event Hubs features and terminology](event-hubs-features.md) + * Check out [Event Hubs features and terminology](event-hubs-features.md) |
event-hubs | Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/geo-replication.md | description: 'This article describes the Azure Event Hubs geo-replication featur Previously updated : 06/10/2024 Last updated : 11/11/2024 There are two features that provide geo-disaster recovery in Azure Event Hubs. - ***Geo-disaster recovery*** (Metadata DR), which provides replication of **only metadata**. - ***Geo-replication*** (public preview), which provides replication of **both metadata and the data**. +> [!NOTE] +> The Geo-replication feature is supported only in the dedicated tier. + These features shouldn't be confused with Availability Zones. Both geographic recovery features provide resilience between Azure regions such as East US and West US. Availability Zone support provides resilience within a specific geographic region, such as East US. For more information on Availability Zones, see [Event Hubs Availability Zone support](./event-hubs-availability-and-consistency.md). > [!IMPORTANT] |
governance | Control Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/ism-protected/control-mapping.md | assigning following Azure Policy definitions: - Deprecated accounts should be removed from your subscription - Deprecated accounts with owner permissions should be removed from your subscription -### 1490 An application whitelisting solution is implemented on all servers to restrict the execution of executables, software libraries, scripts and installers to an approved set +### 1490 An application allow listing solution is implemented on all servers to restrict the execution of executables, software libraries, scripts and installers to an approved set - Adaptive Application Controls should be enabled on virtual machines |
governance | Control Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/swift-2020/control-mapping.md | virtual machines where an application allowlist is recommended but has not yet b - Adaptive application controls for defining safe applications should be enabled on your machines -## 1.1 Least Functionality | Authorized Software / Whitelisting +## 1.1 Least Functionality | Authorized Software / Allow Listing Adaptive application control in Azure Security Center is an intelligent, automated end-to-end application filtering solution that can block or prevent specific software from running on your |
healthcare-apis | Quickstart Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/quickstart-arm.md | + + Title: Create an Azure Health Data Services de-identification service by using an Azure Resource Manager template (ARM template) +description: Learn how to create an Azure Health Data Services de-identification service by using an Azure Resource Manager template (ARM template). +++++++ Last updated : 11/11/2024++# Customer intent: As a cloud administrator, I want a quick method to deploy an Azure resource for production environments or to evaluate the service's functionality. +++# Quickstart: Deploy the de-identification service (preview) using an ARM template ++This quickstart describes how to use an Azure Resource Manager template (ARM template) to create +an Azure Health Data Services de-identification service (preview). +++If your environment meets the prerequisites and you're familiar with using ARM templates, select the +**Deploy to Azure** button. The template opens in the Azure portal. +++## Prerequisites ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +++## Review the template ++The template used in this quickstart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/deidentification-service-create/). +++One Azure resource is defined in the template: ++- [Microsoft.HealthDataAIServices/deidServices](/azure/templates): Create a de-identification service. ++## Deploy the template ++Deploy the template using any standard method to [deploy a local ARM template](/azure/azure-resource-manager/templates/deployment-tutorial-local-template), such as the following Azure CLI example.
+1. Save the template file as **azuredeploy.json** to your local computer.
+1. Create a resource group in one of the supported regions for the de-identification service, and then deploy the template, replacing **\<deid-service-name\>** with the name you choose for your de-identification service:
+ ```azurecli
+ az group create --name exampleRG --location eastus
++ az deployment group create --resource-group exampleRG --template-file azuredeploy.json --parameters deidServiceName="<deid-service-name>"
+ ```
++When the deployment finishes, you should see a message indicating the deployment succeeded. ++## Review deployed resources ++Review your resource with Azure CLI, replacing **\<deid-service-name\>** with the name you chose for your de-identification service:
+```azurecli
+az resource show -g exampleRG -n <deid-service-name> --resource-type "Microsoft.HealthDataAIServices/deidServices"
+```
++## Clean up resources ++When no longer needed, delete the resource group. The resource group and all the resources in the +resource group are deleted.
+```azurecli
+az group delete --name exampleRG
+```
++## Next steps ++For a step-by-step tutorial that guides you through the process of creating a template, see: ++> [!div class="nextstepaction"]
+> [Tutorial: Create and deploy your first ARM template](/azure/azure-resource-manager/templates/template-tutorial-create-first-template)
++- [Quickstart: Azure Health De-identification client library for .NET](quickstart-sdk-net.md) |
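Before running the deployment in the quickstart above, you can preview the changes with a what-if operation. A sketch that reuses the quickstart's own names:

```bash
# Dry run: show what the template would create or change, without deploying.
az deployment group what-if \
  --resource-group exampleRG \
  --template-file azuredeploy.json \
  --parameters deidServiceName="<deid-service-name>"
```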
iot | Tutorial Devkit Espressif Esp32 Freertos Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-devkit-espressif-esp32-freertos-iot-hub.md | |
iot | Tutorial Devkit Mxchip Az3166 Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-devkit-mxchip-az3166-iot-hub.md | |
logic-apps | Azure Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/azure-ai.md | ms.suite: integration Previously updated : 10/14/2024 Last updated : 11/11/2024 -# Connect to Azure AI services from Standard workflows in Azure Logic Apps (Preview) +# Connect to Azure AI services from Standard workflows in Azure Logic Apps [!INCLUDE [logic-apps-sku-standard](../../../includes/logic-apps-sku-standard.md)] -> [!NOTE] -> This capability is in preview and is subject to the -> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - To integrate enterprise data and services with AI technologies, you can use the **Azure OpenAI** and **Azure AI Search** built-in connectors in Standard logic app workflows. These connectors support multiple authentication types, such as API keys, Microsoft Entra ID, and managed identities. They also can connect to Azure OpenAI Service and Azure AI Search endpoints behind firewalls so that your workflows securely connect to your AI resources in Azure. This guide provides an overview and examples for how to use the **Azure OpenAI** and **Azure AI Search** connector operations in your workflow. The following pattern is only one example that shows how a chat workflow might l ## See also +[Azure OpenAI and Azure AI Search connectors are now generally available](https://techcommunity.microsoft.com/blog/integrationsonazureblog/%F0%9F%93%A2-announcement-azure-openai-and-azure-ai-search-connectors-are-now-generally-av/4163682) [Azure OpenAI and AI Search connectors for Azure Logic Apps (Standard)](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/public-preview-of-azure-openai-and-ai-search-in-app-connectors/ba-p/4049584) |
networking | Azure Network Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-network-latency.md | Use the following tabs to view latency statistics for each region. > [!NOTE]-> Round-trip latency to West India from other Azure regions is included in the table. However, West India is not a source region so roundtrips from West India are not included in the table.] +> Round-trip latency to West India from other Azure regions is included in the table. However, West India is not a source region so roundtrips from West India are not included in the table. #### [Asia](#tab/Asia/APAC) |
operational-excellence | Relocation Static Web Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-static-web-apps.md | Review the following prerequisites before you prepare for the relocation. - If using integrated API support provided by Azure Functions: - Determine the availability of Azure Functions in the target region. - Determine if Function API Keys are being used. For example, are you using Key Vault or do you deploy them as part of your application configuration files?- - Determine the deployment model for API support in the target region: [Distributed managed functions](../static-web-apps/distributed-functions.md) or [Bring Your own functions](../static-web-apps/functions-bring-your-own.md). Understand the differences between the two models. + - Determine the deployment model for API support in the target region: [Bring your own functions](../static-web-apps/functions-bring-your-own.md). - Ensure that the Standard Hosting Plan is used to host the Static Web App. For more information about hosting plans, see [Azure Static Web Apps hosting plans](../static-web-apps/plans.md). |
partner-solutions | Dynatrace How To Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md | Title: Manage your Azure Native Dynatrace Service integration description: This article describes how to manage Dynatrace on the Azure portal. Previously updated : 08/28/2024 Last updated : 11/07/2024 The details include: At the bottom, you see two tabs: -- **Get started tab** also provides links to Dynatrace dashboards, logs and Smartscape Topology.+- **Get started tab** also provides links to Dynatrace dashboards, logs, and Smartscape Topology. - **Monitoring tab** provides a summary of the resources sending logs to Dynatrace. If you select the **Monitoring** pane, you see a table with information about the Azure resources sending logs to Dynatrace. The column **Logs to Dynatrace** indicates whether the resource is sending logs - _Resource doesn't support sending logs_ - Only resource types with monitoring log categories can be configured to send logs. See [supported categories](/azure/azure-monitor/essentials/resource-logs-categories). - _Limit of five diagnostic settings reached_ - Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/cli/azure/monitor/diagnostic-settings).-- _Error_ - The resource is configured to send logs to Dynatrace, but is blocked by an error.+- _Error_ - The resource is configured to send logs to Dynatrace, but is blocked due to an error. - _Logs not configured_ - Only Azure resources that have the appropriate resource tags are configured to send logs to Dynatrace. - _Agent not configured_ - Virtual machines without the Dynatrace OneAgent installed don't emit logs to Dynatrace. -## Use one Dynatrace resource with multiple subscriptions --You can now monitor all your subscriptions through a single Dynatrace resource using **Monitored Subscriptions**. Your experience is simplified because you don't have to set up a Dynatrace resource in every subscription that you intend to monitor. You can monitor multiple subscriptions by linking them to a single Dynatrace resource that is tied to a Dynatrace environment. This provides a single pane view for all resources across multiple subscriptions. --To manage multiple subscriptions that you want to monitor, select **Monitored Subscriptions** in the **Dynatrace environment configurations** section of the Resource menu. ---From **Monitored Subscriptions** in the Resource menu, select **Add Subscriptions**. The **Add Subscriptions** experience that opens and shows the subscriptions you have *Owner* role assigned to and any Dynatrace resource created in those subscriptions that is already linked to the same Dynatrace environment as the current resource. --If the subscription you want to monitor has a resource already linked to the same Dynatrace org, we recommend that you delete the Dynatrace resources to avoid shipping duplicate data and incurring double the charges. --Select the subscriptions you want to monitor through the Dynatrace resource and select **Add**. ---If the list doesn't get updated automatically, select **Refresh** to view the subscriptions and their monitoring status. You might see an intermediate status of *In Progress* while a subscription gets added. When the subscription is successfully added, you see the status is updated to **Active**. If a subscription fails to get added, **Monitoring Status** shows as **Failed**. 
-
-
-The set of tag rules for metrics and logs defined for the Dynatrace resource applies to all subscriptions that are added for monitoring. Setting separate tag rules for different subscriptions isn't supported. Diagnostics settings are automatically added to resources in the added subscriptions that match the tag rules defined for the Dynatrace resource.
-
-If you have existing Dynatrace resources that are linked to the account for monitoring, you can end up with duplication of logs that can result in added charges. Ensure you delete redundant Dynatrace resources that are already linked to the account. You can view the list of connected resources and delete the redundant ones. We recommend consolidating subscriptions into the same Dynatrace resource where possible.
- ## Monitor virtual machines using Dynatrace OneAgent You can install Dynatrace OneAgent on virtual machines as an extension. Select **Virtual Machines** under **Dynatrace environment config** in the Resource menu. In the working pane, you see a list of all virtual machines in the subscription. For each virtual machine, the following info is displayed: | Column | Description | |||-| **Name** | Virtual machine name. | -| **Status** | Indicates whether the virtual machine is stopped or running. Dynatrace OneAgent can only be installed on virtual machines that are running. If the virtual machine is stopped, installing the Dynatrace OneAgent will be disabled. | +| **Name** | The name of the virtual machine. | +| **Status** | Indicates whether the virtual machine is stopped or running. Dynatrace OneAgent can only be installed on virtual machines that are running. If the virtual machine is stopped, installing the Dynatrace OneAgent is disabled. | | **OneAgent status** | Whether the Dynatrace OneAgent is running on the virtual machine. | | **OneAgent version** | The Dynatrace OneAgent version number. |-| **Auto-update** | Whether auto-update has been enabled for the OneAgent. | +| **Auto-update** | Whether autoupdate is enabled for the OneAgent. | | **Log monitoring** | Whether the log monitoring option was selected when OneAgent was installed. | | **Monitoring mode** | Whether the Dynatrace OneAgent is monitoring hosts in [full-stack monitoring mode or infrastructure monitoring mode](https://www.dynatrace.com/support/help/how-to-use-dynatrace/hosts/basic-concepts/get-started-with-infrastructure-monitoring). | If you would like to reconfigure single sign-on, select **Single sign-on** in th If single sign-on was already configured, you can disable it. -To establish single sign-on or change the application, select **Enable single sign-on through Microsoft Entra ID**. The portal retrieves Dynatrace application from Microsoft Entra ID. The app comes from the enterprise app name selected during the [pre-configuration steps](dynatrace-how-to-configure-prereqs.md). +To establish single sign-on or change the application, select **Enable single sign-on through Microsoft Entra ID**. The portal retrieves the Dynatrace application from Microsoft Entra ID. The app comes from the enterprise app name selected during the [preconfiguration steps](dynatrace-how-to-configure-prereqs.md). ## Delete Dynatrace resource |
role-based-access-control | Built In Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md | The following table provides a brief description of each built-in role. Click th > | | | | > | <a name='application-insights-component-contributor'></a>[Application Insights Component Contributor](./built-in-roles/monitor.md#application-insights-component-contributor) | Can manage Application Insights components | ae349356-3a1b-4a5e-921d-050484c6347e | > | <a name='application-insights-snapshot-debugger'></a>[Application Insights Snapshot Debugger](./built-in-roles/monitor.md#application-insights-snapshot-debugger) | Gives user permission to view and download debug snapshots collected with the Application Insights Snapshot Debugger. Note that these permissions are not included in the [Owner](/azure/role-based-access-control/built-in-roles#owner) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) roles. When giving users the Application Insights Snapshot Debugger role, you must grant the role directly to the user. The role is not recognized when it is added to a custom role. | 08954f03-6346-4c2e-81c0-ec3a5cfae23b |+> | <a name='azure-managed-grafana-workspace-contributor'></a>[Azure Managed Grafana Workspace Contributor](./built-in-roles/monitor.md#azure-managed-grafana-workspace-contributor) | Can manage Azure Managed Grafana resources, without providing access to the workspaces themselves. | 5c2d7e57-b7c2-4d8a-be4f-82afa42c6e95 | > | <a name='grafana-admin'></a>[Grafana Admin](./built-in-roles/monitor.md#grafana-admin) | Manage server-wide settings and manage access to resources such as organizations, users, and licenses. | 22926164-76b3-42b3-bc55-97df8dab3e41 | > | <a name='grafana-editor'></a>[Grafana Editor](./built-in-roles/monitor.md#grafana-editor) | Create, edit, delete, or view dashboards; create, edit, or delete folders; and edit or view playlists. | a79a5197-3a5c-4973-a920-486035ffd60f | > | <a name='grafana-limited-viewer'></a>[Grafana Limited Viewer](./built-in-roles/monitor.md#grafana-limited-viewer) | View home page. | 41e04612-9dac-4699-a02b-c82ff2cc3fb5 | |
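Assigning the new role works like any other built-in role. A minimal Azure CLI sketch, where the principal ID and scope values are placeholders:

```azurecli
# Assign the new Azure Managed Grafana Workspace Contributor role
# (role definition ID 5c2d7e57-b7c2-4d8a-be4f-82afa42c6e95) at resource group scope.
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Azure Managed Grafana Workspace Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```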
role-based-access-control | Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/monitor.md | Gives user permission to view and download debug snapshots collected with the Ap } ``` +## Azure Managed Grafana Workspace Contributor ++Can manage Azure Managed Grafana resources, without providing access to the workspaces themselves. ++> [!div class="mx-tableFixed"] +> | Actions | Description | +> | | | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/write | Write grafana | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/delete | Delete grafana | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/PrivateEndpointConnectionsApproval/action | Approve PrivateEndpointConnection | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/managedPrivateEndpoints/action | Operations on Private Endpoints | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/locations/operationStatuses/write | Write operation statuses | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/privateEndpointConnectionProxies/validate/action | Validate PrivateEndpointConnectionProxy | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/privateEndpointConnectionProxies/write | Create/Update PrivateEndpointConnectionProxy | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/privateEndpointConnectionProxies/delete | Delete PrivateEndpointConnectionProxy | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/privateEndpointConnections/write | Update PrivateEndpointConnection | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/privateEndpointConnections/delete | Delete PrivateEndpointConnection | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/managedPrivateEndpoints/write | Write Managed Private Endpoints | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/managedPrivateEndpoints/delete | Delete Managed Private Endpoints | +> | [Microsoft.Authorization](../permissions/management-and-governance.md#microsoftauthorization)/*/read | Read roles and role assignments | +> | [Microsoft.Insights](../permissions/monitor.md#microsoftinsights)/AlertRules/Write | Create or update a classic metric alert | +> | [Microsoft.Insights](../permissions/monitor.md#microsoftinsights)/AlertRules/Delete | Delete a classic metric alert | +> | [Microsoft.Insights](../permissions/monitor.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert | +> | [Microsoft.Insights](../permissions/monitor.md#microsoftinsights)/AlertRules/Activated/Action | Classic metric alert activated | +> | [Microsoft.Insights](../permissions/monitor.md#microsoftinsights)/AlertRules/Resolved/Action | Classic metric alert resolved | +> | [Microsoft.Insights](../permissions/monitor.md#microsoftinsights)/AlertRules/Throttled/Action | Classic metric alert rule throttled | +> | [Microsoft.Insights](../permissions/monitor.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/read | Gets or lists deployments. 
| +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/write | Creates or updates a deployment. | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/delete | Deletes a deployment. | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/cancel/action | Cancels a deployment. | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/validate/action | Validates a deployment. | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/whatIf/action | Predicts template deployment changes. | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/exportTemplate/action | Export template for a deployment | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/operationstatuses/read | Gets or lists deployment operation statuses. | +> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | +> | **NotActions** | | +> | *none* | | +> | **DataActions** | | +> | *none* | | +> | **NotDataActions** | | +> | *none* | | ++```json +{ + "assignableScopes": [ + "/" + ], + "description": "Can manage Azure Managed Grafana resources, without providing access to the workspaces themselves.", + "id": "/providers/Microsoft.Authorization/roleDefinitions/5c2d7e57-b7c2-4d8a-be4f-82afa42c6e95", + "name": "5c2d7e57-b7c2-4d8a-be4f-82afa42c6e95", + "permissions": [ + { + "actions": [ + "Microsoft.Dashboard/grafana/write", + "Microsoft.Dashboard/grafana/delete", + "Microsoft.Dashboard/grafana/PrivateEndpointConnectionsApproval/action", + "Microsoft.Dashboard/grafana/managedPrivateEndpoints/action", + "Microsoft.Dashboard/locations/operationStatuses/write", + "Microsoft.Dashboard/grafana/privateEndpointConnectionProxies/validate/action", + "Microsoft.Dashboard/grafana/privateEndpointConnectionProxies/write", + "Microsoft.Dashboard/grafana/privateEndpointConnectionProxies/delete", + "Microsoft.Dashboard/grafana/privateEndpointConnections/write", + "Microsoft.Dashboard/grafana/privateEndpointConnections/delete", + "Microsoft.Dashboard/grafana/managedPrivateEndpoints/write", + "Microsoft.Dashboard/grafana/managedPrivateEndpoints/delete", + "Microsoft.Authorization/*/read", + "Microsoft.Insights/AlertRules/Write", + "Microsoft.Insights/AlertRules/Delete", + "Microsoft.Insights/AlertRules/Read", + "Microsoft.Insights/AlertRules/Activated/Action", + "Microsoft.Insights/AlertRules/Resolved/Action", + "Microsoft.Insights/AlertRules/Throttled/Action", + "Microsoft.Insights/AlertRules/Incidents/Read", + "Microsoft.Resources/deployments/read", + "Microsoft.Resources/deployments/write", + "Microsoft.Resources/deployments/delete", + "Microsoft.Resources/deployments/cancel/action", + "Microsoft.Resources/deployments/validate/action", + "Microsoft.Resources/deployments/whatIf/action", + "Microsoft.Resources/deployments/exportTemplate/action", + "Microsoft.Resources/deployments/operations/read", + "Microsoft.Resources/deployments/operationstatuses/read", + "Microsoft.Resources/subscriptions/resourceGroups/read" + ], + "notActions": [], + "dataActions": 
[], + "notDataActions": [] + } + ], + "roleName": "Azure Managed Grafana Workspace Contributor", + "roleType": "BuiltInRole", + "type": "Microsoft.Authorization/roleDefinitions" +} +``` + ## Grafana Admin Manage server-wide settings and manage access to resources such as organizations, users, and licenses. |
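To confirm the definition above from the command line, the role can be fetched by its display name; a small sketch:

```azurecli
# Print the role definition, which should match the JSON shown above.
az role definition list --name "Azure Managed Grafana Workspace Contributor"
```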
role-based-access-control | Pim Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/pim-integration.md | + + Title: Eligible and time-bound role assignments in Azure RBAC +description: Learn about the integration of Azure role-based access control (Azure RBAC) and Microsoft Entra Privileged Identity Management (PIM) to create eligible and time-bound role assignments. +++ Last updated : 11/11/2024++++# Eligible and time-bound role assignments in Azure RBAC ++If you have a Microsoft Entra ID P2 or Microsoft Entra ID Governance license, [Microsoft Entra Privileged Identity Management (PIM)](/entra/id-governance/privileged-identity-management/pim-configure) is integrated into role assignment steps. For example, you can assign roles to users for a limited period of time. You can also make users eligible for role assignments so that they must activate the role before using it, such as by requesting approval. Eligible role assignments provide just-in-time access to a role for a limited period of time. ++This article describes the integration of Azure role-based access control (Azure RBAC) and Microsoft Entra Privileged Identity Management (PIM) to create eligible and time-bound role assignments. ++## PIM functionality ++If you have PIM, you can create eligible and time-bound role assignments using the **Access control (IAM)** page in the Azure portal. You can create eligible role assignments for users, but you can't create eligible role assignments for applications, service principals, or managed identities because they can't perform the activation steps. You can create eligible role assignments at management group, subscription, and resource group scope, but not at resource scope. ++Here's an example of the **Assignment type** tab when you add a role assignment using the **Access control (IAM)** page. This capability is being deployed in stages, so it might not be available yet in your tenant or your interface might look different. +++The assignment type options available to you might vary depending on your PIM policy. For example, PIM policy defines whether permanent assignments can be created, the maximum duration for time-bound assignments, role activation requirements (approval, multifactor authentication, or Conditional Access authentication context), and other settings. For more information, see [Configure Azure resource role settings in Privileged Identity Management](/entra/id-governance/privileged-identity-management/pim-resource-roles-configure-role-settings). ++Users with eligible and/or time-bound assignments must have a valid license. If you don't want to use the PIM functionality, select the **Active** assignment type and **Permanent** assignment duration options. These settings create a role assignment where the principal always has permissions in the role. ++To better understand PIM, you should review the following terms. ++| Term or concept | Role assignment category | Description | +| | | | +| eligible | Type | A role assignment that requires a user to perform one or more actions to use the role. If a user has been made eligible for a role, that means they can activate the role when they need to perform privileged tasks. There's no difference in the access given to someone with a permanent versus an eligible role assignment. The only difference is that some people don't need that access all the time. | +| active | Type | A role assignment that doesn't require a user to perform any action to use the role. 
Users assigned as active have the privileges assigned to the role. | +| activate | | The process of performing one or more actions to use a role that a user is eligible for. Actions might include performing a multifactor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers. | +| permanent eligible | Duration | A role assignment where a user is always eligible to activate the role. | +| permanent active | Duration | A role assignment where a user can always use the role without performing any actions. | +| time-bound eligible | Duration | A role assignment where a user is eligible to activate the role only within start and end dates. | +| time-bound active | Duration | A role assignment where a user can use the role only within start and end dates. | +| just-in-time (JIT) access | | A model in which users receive temporary permissions to perform privileged tasks, which prevents malicious or unauthorized users from gaining access after the permissions have expired. Access is granted only when users need it. | +| principle of least privilege access | | A recommended security practice in which every user is provided with only the minimum privileges needed to accomplish the tasks they're authorized to perform. This practice minimizes the number of Global Administrators and instead uses specific administrator roles for certain scenarios. | ++For more information, see [What is Microsoft Entra Privileged Identity Management?](/entra/id-governance/privileged-identity-management/pim-configure). ++## How to list eligible and time-bound role assignments ++If you want to see which users are using the PIM functionality, here are options for how to list eligible and time-bound role assignments. ++### Option 1: List using the Azure portal ++1. Sign in to the Azure portal, open the **Access control (IAM)** page, and select the **Role assignments** tab. ++1. Filter the eligible and time-bound role assignments. ++ You can group and sort by **State**, and look for role assignments that aren't the **Active permanent** type. ++ :::image type="content" source="./media/shared/sub-access-control-role-assignments-eligible.png" alt-text="Screenshot of Access control and Active assignments and Eligible assignments tabs." lightbox="./media/shared/sub-access-control-role-assignments-eligible.png"::: ++### Option 2: List using PowerShell ++There isn't a single PowerShell command that can list both the eligible and active time-bound role assignments. To list your eligible role assignments, use the [Get-AzRoleEligibilitySchedule](/powershell/module/az.resources/get-azroleeligibilityschedule) command. To list your active role assignments, use the [Get-AzRoleAssignmentSchedule](/powershell/module/az.resources/get-azroleassignmentschedule) command. ++This example shows how to list eligible and time-bound role assignments in a subscription, which includes these role assignment types: ++- Eligible permanent +- Eligible time-bound +- Active time-bound ++The `Where-Object` command filters out active permanent role assignments that are available with Azure RBAC functionality without PIM. ++```powershell +Get-AzRoleEligibilitySchedule -Scope /subscriptions/<subscriptionId> +Get-AzRoleAssignmentSchedule -Scope /subscriptions/<subscriptionId> | Where-Object {$_.EndDateTime -ne $null } +``` ++For information about how scopes are constructed, see [Understand scope for Azure RBAC](/azure/role-based-access-control/scope-overview). 
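For Azure CLI users, the same schedules should be reachable through the underlying Authorization REST APIs with `az rest`; a hedged sketch, where the subscription ID is a placeholder and the api-version should be verified against the role schedules REST reference:

```azurecli
# List eligible role assignments (role eligibility schedules) in a subscription.
az rest --method get --uri "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleEligibilitySchedules?api-version=2020-10-01"

# List active role assignments (role assignment schedules); time-bound ones
# are the entries that have an end date/time set.
az rest --method get --uri "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleAssignmentSchedules?api-version=2020-10-01"
```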
++## How to convert eligible and time-bound role assignments to active permanent ++If your organization has process or compliance reasons to limit the use of PIM, here are options for how to convert these role assignments to active permanent. ++### Option 1: Convert using the Azure portal ++1. In the Azure portal, on the **Role assignments** tab and **State** column, select the **Eligible permanent**, **Eligible time-bound**, and **Active time-bound** links for each role assignment you want to convert. ++1. In the **Edit assignment** pane, select **Active** for the assignment type and **Permanent** for the assignment duration. ++ For more information, see [Edit assignment](role-assignments-portal.yml#edit-assignment). ++ :::image type="content" source="./media/shared/assignment-type-edit.png" alt-text="Screenshot of Edit assignment pane with Assignment type options displayed." lightbox="./media/shared/assignment-type-edit.png"::: ++1. When finished, select **Save**. ++ Your updates might take a while to be processed and reflected in the portal. ++1. Repeat these steps for all role assignments at management group, subscription, and resource group scopes that you want to convert. ++ If you have role assignments at resource scope that you want to convert, you have to make changes directly in PIM. ++### Option 2: Convert using PowerShell ++There isn't a command or API to directly convert role assignments to a different state or type, so instead you can follow these steps. ++> [!IMPORTANT] +> Removing role assignments can potentially cause disruptions in your environment. Be sure that you understand the impact before you perform these steps. ++1. Retrieve and save the list of all of your eligible and time-bound role assignments in a secure location to prevent data loss. ++ > [!IMPORTANT] + > It is important that you save the list of eligible and time-bound role assignments because these steps require you to remove these role assignments before you create the same role assignments as active permanent. ++2. Use the [New-AzRoleEligibilityScheduleRequest](/powershell/module/az.resources/new-azroleeligibilityschedulerequest) command to remove your eligible role assignments. ++ This example shows how to remove an eligible role assignment. ++ ```powershell + $guid = New-Guid + New-AzRoleEligibilityScheduleRequest -Name $guid -Scope <Scope> -PrincipalId <PrincipalId> -RoleDefinitionId <RoleDefinitionId> -RequestType AdminRemove + ``` + +3. Use the [New-AzRoleAssignmentScheduleRequest](/powershell/module/az.resources/new-azroleassignmentschedulerequest) command to remove your active time-bound role assignments. ++ This example shows how to remove an active time-bound role assignment. ++ ```powershell + $guid = New-Guid + New-AzRoleAssignmentScheduleRequest -Name $guid -Scope <Scope> -PrincipalId <PrincipalId> -RoleDefinitionId <RoleDefinitionId> -RequestType AdminRemove + ``` ++4. Use the [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) command to check for an existing role assignment and use the [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) command to create an active permanent role assignment with Azure RBAC for each eligible and time-bound role assignment. ++ This example shows how to check for an existing role assignment and create an active permanent role assignment with Azure RBAC. 
++ ```powershell + $result = Get-AzRoleAssignment -ObjectId $RA.PrincipalId -RoleDefinitionName $RA.RoleDefinitionDisplayName -Scope $RA.Scope; + if($result -eq $null) { + New-AzRoleAssignment -ObjectId $RA.PrincipalId -RoleDefinitionName $RA.RoleDefinitionDisplayName -Scope $RA.Scope + } + ``` ++## How to limit the creation of eligible or time-bound role assignments ++If your organization has process or compliance reasons to limit the use of PIM, you can use Azure Policy to limit the creation of eligible or time-bound role assignments. For more information, see [What is Azure Policy?](/azure/governance/policy/overview). ++Here's an example policy that limits the creation of eligible and time-bound role assignments except for a specific list of identities. Additional parameters and checks can be added for other allow conditions. ++```json +{ + "properties": { + "displayName": "Limit eligible and active time-bound role assignments except for allowed principal IDs", + "policyType": "Custom", + "mode": "All", + "metadata": { + "createdBy": "aaaaaaaa-bbbb-cccc-1111-222222222222", + "createdOn": "2024-11-05T02:31:25.1246591Z", + "updatedBy": "aaaaaaaa-bbbb-cccc-1111-222222222222", + "updatedOn": "2024-11-06T07:58:17.1699721Z" + }, + "version": "1.0.0", + "parameters": { + "allowedPrincipalIds": { + "type": "Array", + "metadata": { + "displayName": "Allowed Principal IDs", + "description": "A list of principal IDs that can receive PIM role assignments." + }, + "defaultValue": [] + } + }, + "policyRule": { + "if": { + "anyof": [ + { + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Authorization/roleEligibilityScheduleRequests" + }, + { + "not": { + "field": "Microsoft.Authorization/roleEligibilityScheduleRequests/principalId", + "in": "[parameters('allowedPrincipalIds')]" + } + } + ] + }, + { + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Authorization/roleAssignmentScheduleRequests" + }, + { + "not": { + "field": "Microsoft.Authorization/roleAssignmentScheduleRequests/principalId", + "in": "[parameters('allowedPrincipalIds')]" + } + } + ] + } + ] + }, + "then": { + "effect": "deny" + } + }, + "versions": [ + "1.0.0" + ] + }, + "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4ef/providers/Microsoft.Authorization/policyDefinitions/1aaaaaa1-2bb2-3cc3-4dd4-5eeeeeeeeee5", + "type": "Microsoft.Authorization/policyDefinitions", + "name": "1aaaaaa1-2bb2-3cc3-4dd4-5eeeeeeeeee5", + "systemData": { + "createdBy": "test1@contoso.com", + "createdByType": "User", + "createdAt": "2024-11-05T02:31:25.0836273Z", + "lastModifiedBy": "test1@contoso.com", + "lastModifiedByType": "User", + "lastModifiedAt": "2024-11-06T07:58:17.1651655Z" + } +} +``` ++For information about PIM resource properties, see these REST API docs: ++- [RoleEligibilityScheduleRequest](/rest/api/authorization/role-eligibility-schedule-requests/get) +- [RoleAssignmentScheduleRequest](/rest/api/authorization/role-assignment-schedule-requests/get) ++For information about how to assign an Azure Policy with parameters, see [Tutorial: Create and manage policies to enforce compliance](/azure/governance/policy/tutorials/create-and-manage#assign-a-policy). ++## Next steps ++- [Assign Azure roles using the Azure portal](role-assignments-portal.yml) +- [What is Microsoft Entra Privileged Identity Management?](/entra/id-governance/privileged-identity-management/pim-configure) |
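Once a definition like the sample exists, assigning it with the `allowedPrincipalIds` parameter could look like the following Azure CLI sketch; the definition name, subscription ID, and principal ID are placeholders:

```azurecli
# Assign the sample deny policy at subscription scope, exempting one principal.
az policy assignment create \
  --name "limit-pim-role-assignments" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyDefinitions/<definition-name>" \
  --params '{"allowedPrincipalIds": {"value": ["<principal-object-id>"]}}'
```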
role-based-access-control | Role Assignments Eligible Activate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-eligible-activate.md | Title: Activate eligible Azure role assignments (Preview) - Azure RBAC + Title: Activate eligible Azure role assignments - Azure RBAC description: Learn how to activate eligible Azure role assignments in Azure role-based access control (Azure RBAC) using the Azure portal. Previously updated : 06/27/2024 Last updated : 11/11/2024 -# Activate eligible Azure role assignments (Preview) --> [!IMPORTANT] -> Azure role assignment integration with Privileged Identity Management is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# Activate eligible Azure role assignments Eligible Azure role assignments provide just-in-time access to a role for a limited period of time. Microsoft Entra Privileged Identity Management (PIM) role activation has been integrated into the Access control (IAM) page in the Azure portal. If you have been made eligible for an Azure role, you can activate that role using the Azure portal. This capability is being deployed in stages, so it might not be available yet in your tenant or your interface might look different. ## Prerequisites - Microsoft Entra ID P2 license or Microsoft Entra ID Governance license-- [Eligible role assignment](./role-assignments-portal.yml#step-6-select-assignment-type-(preview))+- [Eligible role assignment](./role-assignments-portal.yml#step-6-select-assignment-type) - `Microsoft.Authorization/roleAssignments/read` permission, such as [Reader](./built-in-roles/general.md#reader) ## Activate group membership (if needed) These steps describe how to activate an eligible role assignment using the Azure ## Next steps -- [Integration with Privileged Identity Management (Preview)](./role-assignments.md#integration-with-privileged-identity-management-preview)+- [Eligible and time-bound role assignments in Azure RBAC](./pim-integration.md) - [Activate my Azure resource roles in Privileged Identity Management](/entra/id-governance/privileged-identity-management/pim-resource-roles-activate-your-roles) |
role-based-access-control | Role Assignments Steps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-steps.md | If you are using a service principal to assign roles, you might get the error "I Once you know the security principal, role, and scope, you can assign the role. You can assign roles using the Azure portal, Azure PowerShell, Azure CLI, Azure SDKs, or REST APIs. -You can have up to **4000** role assignments in each subscription. This limit includes role assignments at the subscription, resource group, and resource scopes. [Eligible role assignments](./role-assignments-portal.yml#step-6-select-assignment-type-(preview)) and role assignments scheduled in the future do not count towards this limit. You can have up to **500** role assignments in each management group. For more information, see [Troubleshoot Azure RBAC limits](troubleshoot-limits.md). +You can have up to **4000** role assignments in each subscription. This limit includes role assignments at the subscription, resource group, and resource scopes. [Eligible role assignments](./role-assignments-portal.yml#step-6-select-assignment-type) and role assignments scheduled in the future do not count towards this limit. You can have up to **500** role assignments in each management group. For more information, see [Troubleshoot Azure RBAC limits](troubleshoot-limits.md). Check out the following articles for detailed steps for how to assign roles. |
role-based-access-control | Role Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments.md | description: Learn about Azure role assignments in Azure role-based access contr Previously updated : 08/30/2024 Last updated : 11/11/2024 # Understand Azure role assignments The preceding condition allows users to read blobs with a blob index tag key of For more information about conditions, see [What is Azure attribute-based access control (Azure ABAC)?](conditions-overview.md) -## Integration with Privileged Identity Management (Preview) --> [!IMPORTANT] -> Azure role assignment integration with Privileged Identity Management is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --If you have a Microsoft Entra ID P2 or Microsoft Entra ID Governance license, [Microsoft Entra Privileged Identity Management (PIM)](/entra/id-governance/privileged-identity-management/pim-configure) is integrated into role assignment steps. For example, you can assign roles to users for a limited period of time. You can also make users eligible for role assignments so that they must activate to use the role, such as request approval. Eligible role assignments provide just-in-time access to a role for a limited period of time. You can't create eligible role assignments for applications, service principals, or managed identities because they can't perform the activation steps. You can create eligible role assignments at management group, subscription, and resource group scope, but not at resource scope. This capability is being deployed in stages, so it might not be available yet in your tenant or your interface might look different. --The assignment type options available to you might vary depending or your PIM policy. For example, PIM policy defines whether permanent assignments can be created, maximum duration for time-bound assignments, roles activations requirements (approval, multifactor authentication, or Conditional Access authentication context), and other settings. For more information, see [Configure Azure resource role settings in Privileged Identity Management](/entra/id-governance/privileged-identity-management/pim-resource-roles-configure-role-settings). --If you don't want to use the PIM functionality, select the **Active** assignment type and **Permanent** assignment duration options. These settings create a role assignment where the principal always has permissions in the role. ---To better understand PIM, you should review the following terms. --| Term or concept | Role assignment category | Description | -| | | | -| eligible | Type | A role assignment that requires a user to perform one or more actions to use the role. If a user has been made eligible for a role, that means they can activate the role when they need to perform privileged tasks. There's no difference in the access given to someone with a permanent versus an eligible role assignment. The only difference is that some people don't need that access all the time. | -| active | Type | A role assignment that doesn't require a user to perform any action to use the role. Users assigned as active have the privileges assigned to the role. | -| activate | | The process of performing one or more actions to use a role that a user is eligible for. 
Actions might include performing a multifactor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers. | -| permanent eligible | Duration | A role assignment where a user is always eligible to activate the role. | -| permanent active | Duration | A role assignment where a user can always use the role without performing any actions. | -| time-bound eligible | Duration | A role assignment where a user is eligible to activate the role only within start and end dates. | -| time-bound active | Duration | A role assignment where a user can use the role only within start and end dates. | -| just-in-time (JIT) access | | A model in which users receive temporary permissions to perform privileged tasks, which prevents malicious or unauthorized users from gaining access after the permissions have expired. Access is granted only when users need it. | -| principle of least privilege access | | A recommended security practice in which every user is provided with only the minimum privileges needed to accomplish the tasks they're authorized to perform. This practice minimizes the number of Global Administrators and instead uses specific administrator roles for certain scenarios. | --For more information, see [What is Microsoft Entra Privileged Identity Management?](/entra/id-governance/privileged-identity-management/pim-configure). - ## Next steps - [Delegate Azure access management to others](delegate-role-assignments-overview.md) - [Steps to assign an Azure role](role-assignments-steps.md)+- [Eligible and time-bound role assignments in Azure RBAC](./pim-integration.md) |
role-based-access-control | Troubleshoot Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshoot-limits.md | When you try to assign a role, you get the following error message: ### Cause -Azure supports up to **4000** role assignments per subscription. This limit includes role assignments at the subscription, resource group, and resource scopes, but not at the management group scope. [Eligible role assignments](./role-assignments-portal.yml#step-6-select-assignment-type-(preview)) and role assignments scheduled in the future do not count towards this limit. You should try to reduce the number of role assignments in the subscription. +Azure supports up to **4000** role assignments per subscription. This limit includes role assignments at the subscription, resource group, and resource scopes, but not at the management group scope. [Eligible role assignments](./role-assignments-portal.yml#step-6-select-assignment-type) and role assignments scheduled in the future do not count towards this limit. You should try to reduce the number of role assignments in the subscription. > [!NOTE] > The **4000** role assignments limit per subscription is fixed and cannot be increased. |
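A quick way to see how close a subscription is to the limit is to count its role assignments; a minimal sketch (note the count can include assignments inherited from management group scope, which don't count toward the limit):

```azurecli
# Count role assignments visible in the current subscription, including
# those at resource group and resource scope.
az role assignment list --all --query "length(@)" --output tsv
```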
static-web-apps | Apis Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-functions.md | The following table contrasts the differences between using managed and existing | Supported Azure Functions [hosting plans](../azure-functions/functions-scale.md) | Consumption | Consumption<br>Premium<br>Dedicated | | [Integrated security](user-information.md) with direct access to user authentication and role-based authorization data | ✔ | ✔ | | [Routing integration](./configuration.md?#routes) that makes the `/api` route available to the web app securely without requiring custom CORS rules. | ✔ | ✔ |-| [Distributed functions (preview)](./distributed-functions.md) for dynamic global distribution of backend compute. | ✔ | ✕ | | [Durable Functions](../azure-functions/durable/durable-functions-overview.md) programming model | ✕ | ✔ | | [Managed identity](../app-service/overview-managed-identity.md) | ✕ | ✔ | | [Azure App Service Authentication and Authorization](../app-service/configure-authentication-provider-aad.md) token management | ✕ | ✔ | The following table contrasts the differences between using managed and existing [!INCLUDE [APIs overview](../../includes/static-web-apps-apis-overview.md)] -> [!NOTE] -> [Distributed functions](./distributed-functions.md) is available with managed functions. Distributed functions automatically distribute your managed functions to regions of high request loads. - ## Configuration API endpoints are available to the web app through the `api` route. |
static-web-apps | Distributed Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/distributed-functions.md | - Title: Distributed managed functions in Azure Static Web Apps (preview) -description: Configure dynamic distribution of your Static Web Apps managed functions to high request load regions. ---- Previously updated : 03/12/2024----# Distributed managed functions in Azure Static Web Apps (preview) --As requests to your APIs increase, you often want to distribute your APIs to the Azure regions getting the most demand. When you enable dynamic distribution, your API functions are automatically replicated to the regions closest to highest levels of incoming requests. For each request, Azure automatically directs traffic to the most appropriate region. Distributing your APIs reduces network latency and increases application performance and reliability of your static web app. --Distributed functions are only available on the [Standard hosting plan](plans.md). ---Distributed functions can help reduce your network latency by up to 70%. Decreased network latency leads to improved performance and responsiveness of web applications with global audiences. Distributed functions can also improve application performance when quick response times are needed for responsive personalization, routing or authorization. --Distributed functions only apply to the production environment of your static web app. --> [!NOTE] -> Distributed functions is not compatible with Next.js hybrid rendering applications. --## Enable distributed functions --Before enabling distributed functions, make sure your static web app is under the Standard hosting plan with managed functions. --Use the following steps to enable distributed functions. --1. Open your static web app in the Azure portal. - -1. From the *Settings* section, select **APIs**. --1. Check the box labeled **Distributed functions**. --1. Select **Confirm**. --## Next steps --> [!div class="nextstepaction"] -> [Use preview environments](preview-environments.md) |
storage | Container Storage Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-release-notes.md | -This article provides the release notes for Azure Container Storage. It's important to note that minor releases introduce new functionalities in a backward-compatible manner (for example, 1.1.0 GA). Patch releases focus on bug fixes, security updates, and smaller improvements (for example, 1.1.2). +This article provides the release notes for Azure Container Storage. It's important to note that minor releases introduce new functionalities in a backward-compatible manner (for example, 1.2.0 Minor Release). Patch releases focus on bug fixes, security updates, and smaller improvements (for example, 1.1.2). ## Supported versions The following Azure Container Storage versions are supported: | Milestone | Status | |-|-| +|1.2.0- Minor Release | Supported | |1.1.2- Patch Release | Supported | |1.1.1- Patch Release | Supported | |1.1.0- General Availability| Supported | The following Azure Container Storage versions are no longer supported: 1.0.6-pr ## Minor vs. patch versions -Minor versions introduce small improvements, performance enhancements, or minor new features without breaking existing functionality. For example, version 1.1.0 would move to 1.2.0. Patch versions are released more frequently than minor versions. They focus solely on bug fixes and security updates. For example, version 1.1.2 would be updated to 1.1.3. +Minor versions introduce small improvements, performance enhancements, or minor new features without breaking existing functionality. For example, version 1.2.0 would move to 1.3.0. Patch versions are released more frequently than minor versions. They focus solely on bug fixes and security updates. For example, version 1.1.2 would be updated to 1.1.3. ++## Version 1.2.0 ++### Improvements and issues that are fixed +- **Bug fixes and performance improvements**: General stability improvements have been made to address key recovery issues, especially during upgrade scenarios. These updates are designed to ensure more reliable recovery processes and prevent unexpected service interruptions, delivering a smoother and more consistent experience. +- **Ephemeral Disk Performance Enhancements**: We improved overall performance for Azure Container Storage with ephemeral NVMe disks as the backing storage option, delivering up to a 100% increase in write IOPS in setups with replication enabled. For more details, read about ephemeral disk performance [using local NVMe](/azure/storage/container-storage/use-container-storage-with-local-disk#optimize-performance-when-using-local-nvme) and [using local NVMe with replication](/azure/storage/container-storage/use-container-storage-with-local-nvme-replication#optimize-performance-when-using-local-nvme). + ## Version 1.1.2 |
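To check which Azure Container Storage version a cluster is running before or after an upgrade, the AKS extension can be queried; a sketch assuming the service is installed as a cluster extension named `azurecontainerstorage` (cluster and resource group names are placeholders):

```azurecli
# Show the installed Azure Container Storage version on an AKS cluster.
az k8s-extension show \
  --cluster-name "<aks-cluster-name>" \
  --resource-group "<resource-group>" \
  --cluster-type managedClusters \
  --name azurecontainerstorage \
  --query "version" --output tsv
```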
synapse-analytics | Synapse Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-service-identity.md | Title: Managed identity + Title: Managed service identity for Azure Synapse Analytics -description: Learn about using managed identities in Azure Synapse. +description: Learn about using and deploying managed identities for Azure Synapse Analytics. - Previously updated : 01/27/2022+ Last updated : 11/11/2024 -# Managed identity for Azure Synapse +# Managed identities for Azure Synapse Analytics This article helps you understand managed identity (formerly known as Managed Service Identity/MSI) and how it works in Azure Synapse. This article helps you understand managed identity (formerly known as Managed Se Managed identities eliminate the need to manage credentials. Managed identities provide an identity for the service instance when connecting to resources that support Microsoft Entra authentication. For example, the service can use a managed identity to access resources like [Azure Key Vault](/azure/key-vault/general/overview), where data admins can securely store credentials or access storage accounts. The service uses the managed identity to obtain Microsoft Entra tokens. -There are two types of supported managed identities: +There are two types of supported managed identities: - **System-assigned:** You can enable a managed identity directly on a service instance. When you allow a system-assigned managed identity during the creation of the service, an identity is created in Microsoft Entra tied to that service instance's lifecycle. By design, only that Azure resource can use this identity to request tokens from Microsoft Entra ID. So when the resource is deleted, Azure automatically deletes the identity for you. Azure Synapse Analytics requires that a system-assigned managed identity must be created along with the Synapse workspace.-- **User-assigned:** You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and assign it to one or more instances of a Synapse workspace. In user-assigned managed identities, the identity is managed separately from the resources that use it.+- **User-assigned:** You can also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and assign it to one or more instances of a Synapse workspace. In user-assigned managed identities, the identity is managed separately from the resources that use it. Managed identity provides the following benefits: Managed identity provides the following benefits: ## System-assigned managed identity >[!NOTE]-> System-assigned managed identity is also referred to as 'Managed identity' elsewhere in the documentation and in the Synapse Studio UI for backward compatibility purpose. We will explicitly mention 'User-assigned managed identity' when referring to it. +> System-assigned managed identity is also referred to as 'Managed identity' elsewhere in the documentation and in the Synapse Studio UI for backward compatibility purposes. We will explicitly mention 'User-assigned managed identity' when referring to it. ++### Retrieve system-assigned managed identity using Azure portal ++You can find the managed identity information from Azure portal -> your Synapse workspace -> Properties. 
+++The managed identity information will also show up when you create a linked service that supports managed identity authentication, such as Azure Blob, Azure Data Lake Storage, or Azure Key Vault. ++To grant permissions, follow these steps. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). ++1. Select **Access control (IAM)**. ++1. Select **Add** > **Add role assignment**. ++ :::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open."::: ++1. On the **Members** tab, select **Managed identity**, and then select **Select members**. ++1. Select your Azure subscription. ++1. Under **System-assigned managed identity**, select **Synapse workspace**, and then select a workspace. You can also use the object ID or workspace name (as the managed-identity name) to find this identity. To get the managed identity's application ID, use PowerShell. ++1. On the **Review + assign** tab, select **Review + assign** to assign the role. ++### Retrieve system-assigned managed identity using PowerShell ++The managed identity principal ID and tenant ID will be returned when you get a specific service instance as follows. Use the **PrincipalId** to grant access: ++```powershell +PS C:\> (Get-AzSynapseWorkspace -ResourceGroupName <resourceGroupName> -Name <workspaceName>).Identity ++IdentityType PrincipalId TenantId + -- -- +SystemAssigned aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb aaaabbbb-0000-cccc-1111-dddd2222eeee +``` ++You can get the application ID by copying the principal ID above, then running the following Microsoft Entra ID command with the principal ID as a parameter. ++```powershell +PS C:\> Get-AzADServicePrincipal -ObjectId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb ++ServicePrincipalNames : {00001111-aaaa-2222-bbbb-3333cccc4444, https://identity.azure.net/P86P8g6nt1QxfPJx22om8MOooMf/Ag0Qf/nnREppHkU=} +ApplicationId : 00001111-aaaa-2222-bbbb-3333cccc4444 +DisplayName : <workspaceName> +Id : aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb +Type : ServicePrincipal +``` ++### Retrieve managed identity using REST API ++The managed identity principal ID and tenant ID will be returned when you get a specific service instance as follows. ++Call the following API in the request: ++``` +GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}?api-version=2018-06-01 +``` ++**Response**: You'll get a response like the following example. The "identity" section is populated accordingly. 
++```json +{ + "properties": { + "defaultDataLakeStorage": { + "accountUrl": "https://exampledatalakeaccount.dfs.core.windows.net", + "filesystem": "examplefilesystem" + }, + "encryption": { + "doubleEncryptionEnabled": false + }, + "provisioningState": "Succeeded", + "connectivityEndpoints": { + "web": "https://web.azuresynapse.net?workspace=%2fsubscriptions%2{subscriptionId}%2fresourceGroups%2f{resourceGroupName}%2fproviders%2fMicrosoft.Synapse%2fworkspaces%2f{workspaceName}", + "dev": "https://{workspaceName}.dev.azuresynapse.net", + "sqlOnDemand": "{workspaceName}-ondemand.sql.azuresynapse.net", + "sql": "{workspaceName}.sql.azuresynapse.net" + }, + "managedResourceGroupName": "synapseworkspace-managedrg-f77f7cf2-XXXX-XXXX-XXXX-c4cb7ac3cf4f", + "sqlAdministratorLogin": "sqladminuser", + "privateEndpointConnections": [], + "workspaceUID": "e56f5773-XXXX-XXXX-XXXX-a0dc107af9ea", + "extraProperties": { + "WorkspaceType": "Normal", + "IsScopeEnabled": false + }, + "publicNetworkAccess": "Enabled", + "cspWorkspaceAdminProperties": { + "initialWorkspaceAdminObjectId": "3746a407-XXXX-XXXX-XXXX-842b6cf1fbcc" + }, + "trustedServiceBypassEnabled": false + }, + "type": "Microsoft.Synapse/workspaces", + "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}", + "location": "eastus", + "name": "{workspaceName}", + "identity": { + "type": "SystemAssigned", + "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee", + "principalId": "aaaaaaaa-bbbb-cccc-1111-222222222222" + }, + "tags": {} +} +``` ++> [!TIP] +> To retrieve the managed identity from an ARM template, add an **outputs** section in the ARM JSON: ++```json +{ + "outputs":{ + "managedIdentityObjectId":{ + "type":"string", + "value":"[reference(resourceId('Microsoft.Synapse/workspaces', parameters('<workspaceName>')), '2018-06-01', 'Full').identity.principalId]" + } + } +} +``` ### <a name="generate-managed-identity"></a> Generate system-assigned managed identity If you find your service instance doesn't have a managed identity associated fol Call **New-AzSynapseWorkspace** command, then you see "Identity" fields being newly generated: ```powershell+PS C:\> $password = ConvertTo-SecureString -String "****" -AsPlainText -Force PS C:\> $creds = New-Object System.Management.Automation.PSCredential ("ContosoUser", $password) PS C:\> New-AzSynapseWorkspace -ResourceGroupName <resourceGroupName> -Name <workspaceName> -Location <region> -DefaultDataLakeStorageAccountName <storageAccountName> -DefaultDataLakeStorageFileSystem <fileSystemName> -SqlAdministratorLoginCredential $creds You can retrieve the managed identity from Azure portal or programmatically. The >[!TIP] > If you don't see the managed identity, [generate managed identity](#generate-managed-identity) by updating your service instance. -#### Retrieve system-assigned managed identity using Azure portal --You can find the managed identity information from Azure portal -> your Synapse workspace -> Properties. ---- Managed Identity Object ID--The managed identity information will also show up when you create linked service, which supports managed identity authentication, like Azure Blob, Azure Data Lake Storage, Azure Key Vault, etc. --To grant permissions, follow these steps. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). --1. Select **Access control (IAM)**. --1. Select **Add** > **Add role assignment**. 
-- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open."::: --1. On the **Members** tab, select **Managed identity**, and then select **Select members**. --1. Select your Azure subscription. --1. Under **System-assigned managed identity**, select **Synapse workspace**, and then select a workspace. You can also use the object ID or workspace name (as the managed-identity name) to find this identity. To get the managed identity's application ID, use PowerShell. --1. On the **Review + assign** tab, select **Review + assign** to assign the role. --#### Retrieve system-assigned managed identity using PowerShell --The managed identity principal ID and tenant ID will be returned when you get a specific service instance as follows. Use the **PrincipalId** to grant access: --```powershell -PS C:\> (Get-AzSynapseWorkspace -ResourceGroupName <resourceGroupName> -Name <workspaceName>).Identity --IdentityType PrincipalId TenantId - -- -- -SystemAssigned aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb aaaabbbb-0000-cccc-1111-dddd2222eeee -``` --You can get the application ID by copying above principal ID, then running below Microsoft Entra ID command with principal ID as parameter. --```powershell -PS C:\> Get-AzADServicePrincipal -ObjectId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb --ServicePrincipalNames : {00001111-aaaa-2222-bbbb-3333cccc4444, https://identity.azure.net/P86P8g6nt1QxfPJx22om8MOooMf/Ag0Qf/nnREppHkU=} -ApplicationId : 00001111-aaaa-2222-bbbb-3333cccc4444 -DisplayName : <workspaceName> -Id : aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb -Type : ServicePrincipal -``` --#### Retrieve managed identity using REST API --The managed identity principal ID and tenant ID will be returned when you get a specific service instance as follows. --Call below API in the request: --``` -GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}?api-version=2018-06-01 -``` --**Response**: You will get response like shown in below example. The "identity" section is populated accordingly. 
--```json -{ - "properties": { - "defaultDataLakeStorage": { - "accountUrl": "https://exampledatalakeaccount.dfs.core.windows.net", - "filesystem": "examplefilesystem" - }, - "encryption": { - "doubleEncryptionEnabled": false - }, - "provisioningState": "Succeeded", - "connectivityEndpoints": { - "web": "https://web.azuresynapse.net?workspace=%2fsubscriptions%2{subscriptionId}%2fresourceGroups%2f{resourceGroupName}%2fproviders%2fMicrosoft.Synapse%2fworkspaces%2f{workspaceName}", - "dev": "https://{workspaceName}.dev.azuresynapse.net", - "sqlOnDemand": "{workspaceName}-ondemand.sql.azuresynapse.net", - "sql": "{workspaceName}.sql.azuresynapse.net" - }, - "managedResourceGroupName": "synapseworkspace-managedrg-f77f7cf2-XXXX-XXXX-XXXX-c4cb7ac3cf4f", - "sqlAdministratorLogin": "sqladminuser", - "privateEndpointConnections": [], - "workspaceUID": "e56f5773-XXXX-XXXX-XXXX-a0dc107af9ea", - "extraProperties": { - "WorkspaceType": "Normal", - "IsScopeEnabled": false - }, - "publicNetworkAccess": "Enabled", - "cspWorkspaceAdminProperties": { - "initialWorkspaceAdminObjectId": "3746a407-XXXX-XXXX-XXXX-842b6cf1fbcc" - }, - "trustedServiceBypassEnabled": false - }, - "type": "Microsoft.Synapse/workspaces", - "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}", - "location": "eastus", - "name": "{workspaceName}", - "identity": { - "type": "SystemAssigned", - "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee", - "principalId": "aaaaaaaa-bbbb-cccc-1111-222222222222" - }, - "tags": {} -} -``` --> [!TIP] -> To retrieve the managed identity from an ARM template, add an **outputs** section in the ARM JSON: --```json -{ - "outputs":{ - "managedIdentityObjectId":{ - "type":"string", - "value":"[reference(resourceId('Microsoft.Synapse/workspaces', parameters('<workspaceName>')), '2018-06-01', 'Full').identity.principalId]" - } - } -} -``` --### Execute Azure Synapse Spark Notebooks with system assigned managed identity +## Execute Azure Synapse Spark Notebooks with system assigned managed identity You can easily execute Synapse Spark Notebooks with the system assigned managed identity (or workspace managed identity) by enabling *Run as managed identity* from the *Configure session* menu. To execute Spark Notebooks with workspace managed identity, users need to have following RBAC roles: - Synapse Compute Operator on the workspace or selected Spark pool You can easily execute Synapse Spark Notebooks with the system assigned managed ## User-assigned managed identity -You can create, delete, manage user-assigned managed identities in Microsoft Entra ID. For more details refer to [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md). +You can create, delete, manage user-assigned managed identities in Microsoft Entra ID. For more information, see [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md). In order to use a user-assigned managed identity, you must first [create credentials](../data-factory/credentials.md) in your service instance for the UAMI. In order to use a user-assigned managed identity, you must first [create credent ## Next steps - [Create credentials](../data-factory/credentials.md). 
-See the following topics that introduce when and how to use managed identity: +See the following articles that introduce when and how to use managed identity: - [Store credential in Azure Key Vault](../data-factory/store-credentials-in-key-vault.md). - [Copy data from/to Azure Data Lake Store using managed identities for Azure resources authentication](../data-factory/connector-azure-data-lake-store.md). |
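Alongside the PowerShell and REST options shown above, the same identity block can be read with Azure CLI; a minimal sketch:

```azurecli
# Print the workspace's system-assigned managed identity (type, principal ID,
# and tenant ID), mirroring the PowerShell and REST examples.
az synapse workspace show \
  --name "<workspaceName>" \
  --resource-group "<resourceGroupName>" \
  --query "identity"
```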
virtual-network-manager | Concept Security Admins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md | By default, security admin rules aren't applied to a virtual network containing - [Azure SQL Managed Instances](/azure/azure-sql/managed-instance/connectivity-architecture-overview#mandatory-security-rules-with-service-aided-subnet-configuration) - Azure Databricks +You can request to enable your Azure Virtual Network Manager to apply security admin rules on virtual networks with these services by submitting a request using [this form](https://forms.office.com/r/MPUXZE2wMY). + When a virtual network contains these services, the security admin rules skip this virtual network. If you want *Allow* rules applied to this virtual network, you create your security configuration with the `AllowRulesOnly` field set in the [securityConfiguration.properties.applyOnNetworkIntentPolicyBasedServices](/dotnet/api/microsoft.azure.management.network.models.networkintentpolicybasedservice?view=azure-dotnet&preserve-view=true) .NET class. When set, only *Allow* rules in your security configuration are applied to this virtual network. *Deny* rules aren't applied to this virtual network. Virtual networks without these services can continue using *Allow* and *Deny* rules. You can create a security configuration with *Allow* rules only and deploy it to your virtual networks with [Azure PowerShell](/powershell/module/az.network/new-aznetworkmanagersecurityadminconfiguration#example-1) and [Azure CLI](/cli/azure/network/manager/security-admin-config#az-network-manager-security-admin-config-create-examples). |
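Because the article names the `applyOnNetworkIntentPolicyBasedServices` property, an `AllowRulesOnly` configuration can presumably also be written directly over REST with `az rest`; a hedged sketch where every name in the URI is a placeholder and the api-version should be checked against the current Network Manager REST reference:

```azurecli
# Create or update a security admin configuration that applies only Allow
# rules to virtual networks containing the listed services.
az rest --method put \
  --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkManagers/<network-manager>/securityAdminConfigurations/<config-name>?api-version=2022-05-01" \
  --body '{"properties": {"applyOnNetworkIntentPolicyBasedServices": ["AllowRulesOnly"]}}'
```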
virtual-network | Tutorial Create Route Table Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-cli.md | - Title: Route network traffic - Azure CLI -description: In this article, learn how to route network traffic with a route table using the Azure CLI. --- Previously updated : 08/08/2024---# Customer intent: I want to route traffic from one subnet, to a different subnet, through a network virtual appliance. ---# Route network traffic with a route table using the Azure CLI --Azure automatically routes traffic between all subnets within a virtual network, by default. You can create your own routes to override Azure's default routing. The ability to create custom routes is helpful if, for example, you want to route traffic between subnets through a network virtual appliance (NVA). In this article, you learn how to: --* Create a route table -* Create a route -* Create a virtual network with multiple subnets -* Associate a route table to a subnet -* Create a basic NVA that routes traffic from an Ubuntu VM -* Deploy virtual machines (VM) into different subnets -* Route traffic from one subnet to another through an NVA ----- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Create a route table --Before you can create a route table, create a resource group with [az group create](/cli/azure/group) for all resources created in this article. --```azurecli-interactive -# Create a resource group. -az group create \ - --name test-rg \ - --location westus2 -``` --Create a route table with [az network route-table create](/cli/azure/network/route-table#az-network-route-table-create). The following example creates a route table named *route-table-public*. --```azurecli-interactive -# Create a route table -az network route-table create \ - --resource-group test-rg \ - --name route-table-public -``` --## Create a route --Create a route in the route table with [az network route-table route create](/cli/azure/network/route-table/route#az-network-route-table-route-create). --```azurecli-interactive -az network route-table route create \ - --name to-private-subnet \ - --resource-group test-rg \ - --route-table-name route-table-public \ - --address-prefix 10.0.1.0/24 \ - --next-hop-type VirtualAppliance \ - --next-hop-ip-address 10.0.2.4 -``` --## Associate a route table to a subnet --Before you can associate a route table to a subnet, you have to create a virtual network and subnet. Create a virtual network with one subnet with [az network vnet create](/cli/azure/network/vnet). --```azurecli-interactive -az network vnet create \ - --name vnet-1 \ - --resource-group test-rg \ - --address-prefix 10.0.0.0/16 \ - --subnet-name subnet-public \ - --subnet-prefix 10.0.0.0/24 -``` --Create two more subnets with [az network vnet subnet create](/cli/azure/network/vnet/subnet). --```azurecli-interactive -# Create a private subnet. -az network vnet subnet create \ - --vnet-name vnet-1 \ - --resource-group test-rg \ - --name subnet-private \ - --address-prefix 10.0.1.0/24 --# Create a DMZ subnet. -az network vnet subnet create \ - --vnet-name vnet-1 \ - --resource-group test-rg \ - --name subnet-dmz \ - --address-prefix 10.0.2.0/24 -``` --Associate the *route-table-subnet-public* route table to the *subnet-public* subnet with [az network vnet subnet update](/cli/azure/network/vnet/subnet). 
--```azurecli-interactive -az network vnet subnet update \ - --vnet-name vnet-1 \ - --name subnet-public \ - --resource-group test-rg \ - --route-table route-table-public -``` --## Create an NVA --An NVA is a VM that performs a network function, such as routing, firewalling, or WAN optimization. We create a basic NVA from a general purpose Ubuntu VM, for demonstration purposes. --Create a VM to be used as the NVA in the *subnet-dmz* subnet with [az vm create](/cli/azure/vm). When you create a VM, Azure creates and assigns a network interface *vm-nvaVMNic* and a subnet-public IP address to the VM, by default. The `--public-ip-address ""` parameter instructs Azure not to create and assign a subnet-public IP address to the VM, since the VM doesn't need to be connected to from the internet. --The following example creates a VM and adds a user account. The `--generate-ssh-keys` parameter causes the CLI to look for an available ssh key in `~/.ssh`. If one is found, that key is used. If not, one is generated and stored in `~/.ssh`. Finally, we deploy the latest `Ubuntu 22.04` image. --```azurecli-interactive -az vm create \ - --resource-group test-rg \ - --name vm-nva \ - --image Ubuntu2204 \ - --public-ip-address "" \ - --subnet subnet-dmz \ - --vnet-name vnet-1 \ - --generate-ssh-keys -``` --The VM takes a few minutes to create. Don't continue to the next step until Azure finishes creating the VM and returns output about the VM. --For a network interface **vm-nvaVMNic** to be able to forward network traffic sent to it, that isn't destined for its own IP address, IP forwarding must be enabled for the network interface. Enable IP forwarding for the network interface with [az network nic update](/cli/azure/network/nic). --```azurecli-interactive -az network nic update \ - --name vm-nvaVMNic \ - --resource-group test-rg \ - --ip-forwarding true -``` --Within the VM, the operating system, or an application running within the VM, must also be able to forward network traffic. We use the `sysctl` command to enable the Linux kernel to forward packets. To run this command without logging onto the VM, we use the [Custom Script extension](/azure/virtual-machines/extensions/custom-script-linux) [az vm extension set](/cli/azure/vm/extension): --```azurecli-interactive -az vm extension set \ - --resource-group test-rg \ - --vm-name vm-nva \ - --name customScript \ - --publisher Microsoft.Azure.Extensions \ - --settings '{"commandToExecute":"sudo sysctl -w net.ipv4.ip_forward=1"}' -``` --The command might take up to a minute to execute. This change won't persist after a VM reboot, so if the NVA VM is rebooted for any reason, the script will need to be repeated. --## Create virtual machines --Create two VMs in the virtual network so you can validate that traffic from the *subnet-public* subnet is routed to the *subnet-private* subnet through the NVA in a later step. --Create a VM in the *subnet-public* subnet with [az vm create](/cli/azure/vm). The `--no-wait` parameter enables Azure to execute the command in the background so you can continue to the next command. --The following example creates a VM and adds a user account. The `--generate-ssh-keys` parameter causes the CLI to look for an available ssh key in `~/.ssh`. If one is found, that key is used. If not, one is generated and stored in `~/.ssh`. Finally, we deploy the latest `Ubuntu 22.04` image. 
--```azurecli-interactive -az vm create \ - --resource-group test-rg \ - --name vm-public \ - --image Ubuntu2204 \ - --vnet-name vnet-1 \ - --subnet subnet-public \ - --admin-username azureuser \ - --generate-ssh-keys \ - --no-wait -``` --Create a VM in the *subnet-private* subnet. --```azurecli-interactive -az vm create \ - --resource-group test-rg \ - --name vm-private \ - --image Ubuntu2204 \ - --vnet-name vnet-1 \ - --subnet subnet-private \ - --admin-username azureuser \ - --generate-ssh-keys -``` --The VM takes a few minutes to create. After the VM is created, the Azure CLI shows information similar to the following example: --```output -{ - "fqdns": "", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/Microsoft.Compute/virtualMachines/vm-private", - "location": "westus2", - "macAddress": "00-0D-3A-23-9A-49", - "powerState": "VM running", - "privateIpAddress": "10.0.1.4", - "publicIpAddress": "203.0.113.24", - "resourceGroup": "test-rg" -} -``` --## Enable Microsoft Entra ID sign in for the virtual machines --The following code example installs the extension to enable a Microsoft Entra ID sign-in for a Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines. --```bash -az vm extension set \ - --publisher Microsoft.Azure.ActiveDirectory \ - --name AADSSHsign-inForLinux \ - --resource-group test-rg \ - --vm-name vm-private -``` --```bash -az vm extension set \ - --publisher Microsoft.Azure.ActiveDirectory \ - --name AADSSHsign-inForLinux \ - --resource-group test-rg \ - --vm-name vm-public -``` --## Route traffic through an NVA --Using an SSH client of your choice, connect to the VMs created previously. For example, the following command can be used from a command line interface such as [Windows Subsystem for Linux](/windows/wsl/install) to create an SSH session with the *vm-private* VM. In the previous steps, we enabled Microsoft Entra ID sign-in for the VMs. You can sign-in to the virtual machines using your Microsoft Entra ID credentials or you can use the SSH key that you used to create the VMs. In the following example, we use the SSH key to sign-in to the VMs. --For more information about how to SSH to a Linux VM and sign in with Microsoft Entra ID, see [Sign in to a Linux virtual machine in Azure by using Microsoft Entra ID and OpenSSH](/entra/identity/devices/howto-vm-sign-in-azure-ad-linux). --```bash --### Store IP address of VM in order to SSH --Run the following command to store the IP address of the VM as an environment variable: --```bash -export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-private --query publicIps --output tsv) -``` --```bash -ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS -``` --Use the following command to install trace route on the *vm-private* VM: --```bash -sudo apt update -sudo apt install traceroute -``` --Use the following command to test routing for network traffic to the *vm-public* VM from the *vm-private* VM. --```bash -traceroute vm-public -``` --The response is similar to the following example: --```output -azureuser@vm-private:~$ traceroute vm-public -traceroute to vm-public (10.0.0.4), 30 hops max, 60 byte packets - 1 vm-public.internal.cloudapp.net (10.0.0.4) 2.613 ms 2.592 ms 2.553 ms -``` --You can see that traffic is routed directly from the *vm-private* VM to the *vm-public* VM. Azure's default routes, route traffic directly between subnets. 
Close the SSH session to the *vm-private* VM. --### Store IP address of VM in order to SSH --Run the following command to store the IP address of the VM as an environment variable: --```bash -export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-public --query publicIps --output tsv) -``` --```bash -ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS -``` --Use the following command to install trace route on the *vm-public* VM: --```bash -sudo apt update -sudo apt install traceroute -``` --Use the following command to test routing for network traffic to the *vm-private* VM from the *vm-public* VM. --```bash -traceroute vm-private -``` --The response is similar to the following example: --```output -azureuser@vm-public:~$ traceroute vm-private -traceroute to vm-private (10.0.1.4), 30 hops max, 60 byte packets - 1 vm-nva.internal.cloudapp.net (10.0.2.4) 1.010 ms 1.686 ms 1.144 ms - 2 vm-private.internal.cloudapp.net (10.0.1.4) 1.925 ms 1.911 ms 1.898 ms -``` --You can see that the first hop is 10.0.2.4, which is the NVA's private IP address. The second hop is 10.0.1.4, the private IP address of the *vm-private* VM. The route added to the *route-table--public* route table and associated to the *subnet-public* subnet caused Azure to route the traffic through the NVA, rather than directly to the *subnet-private* subnet. --Close the SSH session to the *vm-public* VM. --## Clean up resources --When no longer needed, use [az group delete](/cli/azure/group) to remove the resource group and all of the resources it contains. --```azurecli-interactive -az group delete \ - --name test-rg \ - --yes \ - --no-wait -``` --## Next steps --In this article, you created a route table and associated it to a subnet. You created a simple NVA that routed traffic from a subnet-public subnet to a private subnet. Deploy various preconfigured NVAs that perform network functions such as firewall and WAN optimization from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking). To learn more about routing, see [Routing overview](virtual-networks-udr-overview.md) and [Manage a route table](manage-route-table.yml). --While you can deploy many Azure resources within a virtual network, resources for some Azure PaaS services can't be deployed into a virtual network. You can still restrict access to the resources of some Azure PaaS services to traffic only from a virtual network subnet though. To learn how, see [Restrict network access to PaaS resources](tutorial-restrict-network-access-to-resources-cli.md). |
virtual-network | Tutorial Create Route Table Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-powershell.md | - Title: Route network traffic Azure PowerShell -description: In this article, learn how to route network traffic with a route table using PowerShell. ----- Previously updated : 03/13/2018---# Customer intent: I want to route traffic from one subnet, to a different subnet, through a network virtual appliance. ---# Route network traffic with a route table using PowerShell --Azure automatically routes traffic between all subnets within a virtual network, by default. You can create your own routes to override Azure's default routing. The ability to create custom routes is helpful if, for example, you want to route traffic between subnets through a network virtual appliance (NVA). In this article, you learn how to: --* Create a route table -* Create a route -* Create a virtual network with multiple subnets -* Associate a route table to a subnet -* Create an NVA that routes traffic -* Deploy virtual machines (VM) into different subnets -* Route traffic from one subnet to another through an NVA --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. --## Create a route table --Before you can create a route table, create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *myResourceGroup* for all resources created in this article. --```azurepowershell-interactive -New-AzResourceGroup -ResourceGroupName myResourceGroup -Location EastUS -``` --Create a route table with [New-AzRouteTable](/powershell/module/az.network/new-azroutetable). The following example creates a route table named *myRouteTablePublic*. --```azurepowershell-interactive -$routeTablePublic = New-AzRouteTable ` - -Name 'myRouteTablePublic' ` - -ResourceGroupName myResourceGroup ` - -location EastUS -``` --## Create a route --Create a route by retrieving the route table object with [Get-AzRouteTable](/powershell/module/az.network/get-azroutetable), create a route with [Add-AzRouteConfig](/powershell/module/az.network/add-azrouteconfig), then write the route configuration to the route table with [Set-AzRouteTable](/powershell/module/az.network/set-azroutetable). --```azurepowershell-interactive -Get-AzRouteTable ` - -ResourceGroupName "myResourceGroup" ` - -Name "myRouteTablePublic" ` - | Add-AzRouteConfig ` - -Name "ToPrivateSubnet" ` - -AddressPrefix 10.0.1.0/24 ` - -NextHopType "VirtualAppliance" ` - -NextHopIpAddress 10.0.2.4 ` - | Set-AzRouteTable -``` --## Associate a route table to a subnet --Before you can associate a route table to a subnet, you have to create a virtual network and subnet. Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named *myVirtualNetwork* with the address prefix *10.0.0.0/16*. 
--```azurepowershell-interactive -$virtualNetwork = New-AzVirtualNetwork ` - -ResourceGroupName myResourceGroup ` - -Location EastUS ` - -Name myVirtualNetwork ` - -AddressPrefix 10.0.0.0/16 -``` --Create three subnets by creating three subnet configurations with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig). The following example creates three subnet configurations for *Public*, *Private*, and *DMZ* subnets: --```azurepowershell-interactive -$subnetConfigPublic = Add-AzVirtualNetworkSubnetConfig ` - -Name Public ` - -AddressPrefix 10.0.0.0/24 ` - -VirtualNetwork $virtualNetwork --$subnetConfigPrivate = Add-AzVirtualNetworkSubnetConfig ` - -Name Private ` - -AddressPrefix 10.0.1.0/24 ` - -VirtualNetwork $virtualNetwork --$subnetConfigDmz = Add-AzVirtualNetworkSubnetConfig ` - -Name DMZ ` - -AddressPrefix 10.0.2.0/24 ` - -VirtualNetwork $virtualNetwork -``` --Write the subnet configurations to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork), which creates the subnets in the virtual network: --```azurepowershell-interactive -$virtualNetwork | Set-AzVirtualNetwork -``` --Associate the *myRouteTablePublic* route table to the *Public* subnet with [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) and then write the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork). --```azurepowershell-interactive -Set-AzVirtualNetworkSubnetConfig ` - -VirtualNetwork $virtualNetwork ` - -Name 'Public' ` - -AddressPrefix 10.0.0.0/24 ` - -RouteTable $myRouteTablePublic | ` -Set-AzVirtualNetwork -``` --## Create an NVA --An NVA is a VM that performs a network function, such as routing, firewalling, or WAN optimization. --Before creating a VM, create a network interface. --### Create a network interface --Before creating a network interface, you have to retrieve the virtual network Id with [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork), then the subnet Id with [Get-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/get-azvirtualnetworksubnetconfig). Create a network interface with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) in the *DMZ* subnet with IP forwarding enabled: --```azurepowershell-interactive -# Retrieve the virtual network object into a variable. -$virtualNetwork=Get-AzVirtualNetwork ` - -Name myVirtualNetwork ` - -ResourceGroupName myResourceGroup --# Retrieve the subnet configuration into a variable. -$subnetConfigDmz = Get-AzVirtualNetworkSubnetConfig ` - -Name DMZ ` - -VirtualNetwork $virtualNetwork --# Create the network interface. -$nic = New-AzNetworkInterface ` - -ResourceGroupName myResourceGroup ` - -Location EastUS ` - -Name 'myVmNva' ` - -SubnetId $subnetConfigDmz.Id ` - -EnableIPForwarding -``` --### Create a VM --To create a VM and attach an existing network interface to it, you must first create a VM configuration with [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig). The configuration includes the network interface created in the previous step. When prompted for a username and password, select the user name and password you want to log into the VM with. --```azurepowershell-interactive -# Create a credential object. -$cred = Get-Credential -Message "Enter a username and password for the VM." --# Create a VM configuration. 
-$vmConfig = New-AzVMConfig ` - -VMName 'myVmNva' ` - -VMSize Standard_DS2 | ` - Set-AzVMOperatingSystem -Windows ` - -ComputerName 'myVmNva' ` - -Credential $cred | ` - Set-AzVMSourceImage ` - -PublisherName MicrosoftWindowsServer ` - -Offer WindowsServer ` - -Skus 2016-Datacenter ` - -Version latest | ` - Add-AzVMNetworkInterface -Id $nic.Id -``` --Create the VM using the VM configuration with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named *myVmNva*. --```azurepowershell-interactive -$vmNva = New-AzVM ` - -ResourceGroupName myResourceGroup ` - -Location EastUS ` - -VM $vmConfig ` - -AsJob -``` --The `-AsJob` option creates the VM in the background, so you can continue to the next step. --## Create virtual machines --Create two VMs in the virtual network so you can validate that traffic from the *Public* subnet is routed to the *Private* subnet through the network virtual appliance in a later step. --Create a VM in the *Public* subnet with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named *myVmPublic* in the *Public* subnet of the *myVirtualNetwork* virtual network. --```azurepowershell-interactive -New-AzVm ` - -ResourceGroupName "myResourceGroup" ` - -Location "East US" ` - -VirtualNetworkName "myVirtualNetwork" ` - -SubnetName "Public" ` - -ImageName "Win2016Datacenter" ` - -Name "myVmPublic" ` - -AsJob -``` --Create a VM in the *Private* subnet. --```azurepowershell-interactive -New-AzVm ` - -ResourceGroupName "myResourceGroup" ` - -Location "East US" ` - -VirtualNetworkName "myVirtualNetwork" ` - -SubnetName "Private" ` - -ImageName "Win2016Datacenter" ` - -Name "myVmPrivate" -``` --The VM takes a few minutes to create. Don't continue with the next step until the VM is created and Azure returns output to PowerShell. --## Route traffic through an NVA --Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to return the public IP address of the *myVmPrivate* VM. The following example returns the public IP address of the *myVmPrivate* VM: --```azurepowershell-interactive -Get-AzPublicIpAddress ` - -Name myVmPrivate ` - -ResourceGroupName myResourceGroup ` - | Select IpAddress -``` --Use the following command to create a remote desktop session with the *myVmPrivate* VM from your local computer. Replace `<publicIpAddress>` with the IP address returned from the previous command. --``` -mstsc /v:<publicIpAddress> -``` --Open the downloaded RDP file. If prompted, select **Connect**. --Enter the user name and password you specified when creating the VM (you may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM), then select **OK**. You may receive a certificate warning during the sign-in process. Select **Yes** to proceed with the connection. --In a later step, the `tracert.exe` command is used to test routing. Tracert uses the Internet Control Message Protocol (ICMP), which is denied through the Windows Firewall. Enable ICMP through the Windows firewall by entering the following command from PowerShell on the *myVmPrivate* VM: --```powershell -New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4 -``` --Though trace route is used to test routing in this article, allowing ICMP through the Windows Firewall for production deployments is not recommended. --You enabled IP forwarding within Azure for the VM's network interface in Enable IP forwarding. 
Within the VM, the operating system, or an application running within the VM, must also be able to forward network traffic. Enable IP forwarding within the operating system of the *myVmNva*. --From a command prompt on the *myVmPrivate* VM, remote desktop to the *myVmNva*: --``` -mstsc /v:myvmnva -``` --To enable IP forwarding within the operating system, enter the following command in PowerShell from the *myVmNva* VM: --```powershell -Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters -Name IpEnableRouter -Value 1 -``` --Restart the *myVmNva* VM, which also disconnects the remote desktop session. --While still connected to the *myVmPrivate* VM, create a remote desktop session to the *myVmPublic* VM, after the *myVmNva* VM restarts: --``` -mstsc /v:myVmPublic -``` --Enable ICMP through the Windows firewall by entering the following command from PowerShell on the *myVmPublic* VM: --```powershell -New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4 -``` --To test routing of network traffic to the *myVmPrivate* VM from the *myVmPublic* VM, enter the following command from PowerShell on the *myVmPublic* VM: --``` -tracert myVmPrivate -``` --The response is similar to the following example: --``` -Tracing route to myVmPrivate.vpgub4nqnocezhjgurw44dnxrc.bx.internal.cloudapp.net [10.0.1.4] -over a maximum of 30 hops: --1 <1 ms * 1 ms 10.0.2.4 -2 1 ms 1 ms 1 ms 10.0.1.4 --Trace complete. -``` --You can see that the first hop is 10.0.2.4, which is the NVA's private IP address. The second hop is 10.0.1.4, the private IP address of the *myVmPrivate* VM. The route added to the *myRouteTablePublic* route table and associated to the *Public* subnet caused Azure to route the traffic through the NVA, rather than directly to the *Private* subnet. --Close the remote desktop session to the *myVmPublic* VM, which leaves you still connected to the *myVmPrivate* VM. --To test routing of network traffic to the *myVmPublic* VM from the *myVmPrivate* VM, enter the following command from a command prompt on the *myVmPrivate* VM: --``` -tracert myVmPublic -``` --The response is similar to the following example: --``` -Tracing route to myVmPublic.vpgub4nqnocezhjgurw44dnxrc.bx.internal.cloudapp.net [10.0.0.4] -over a maximum of 30 hops: --1 1 ms 1 ms 1 ms 10.0.0.4 --Trace complete. -``` --You can see that traffic is routed directly from the *myVmPrivate* VM to the *myVmPublic* VM. By default, Azure routes traffic directly between subnets. --Close the remote desktop session to the *myVmPrivate* VM. --## Clean up resources --When no longer needed, use [Remove-AzResourcegroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains. --```azurepowershell-interactive -Remove-AzResourceGroup -Name myResourceGroup -Force -``` --## Next steps --In this article, you created a route table and associated it to a subnet. You created a simple network virtual appliance that routed traffic from a public subnet to a private subnet. Deploy a variety of pre-configured network virtual appliances that perform network functions such as firewall and WAN optimization from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking). To learn more about routing, see [Routing overview](virtual-networks-udr-overview.md) and [Manage a route table](manage-route-table.yml). 
--While you can deploy many Azure resources within a virtual network, resources for some Azure PaaS services cannot be deployed into a virtual network. You can still restrict access to the resources of some Azure PaaS services to traffic only from a virtual network subnet though. To learn how, see [Restrict network access to PaaS resources](tutorial-restrict-network-access-to-resources-powershell.md). |
virtual-network | Tutorial Create Route Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table.md | + + Title: 'Tutorial: Route network traffic with a route table' ++description: In this tutorial, learn how to route network traffic with a route table. ++ Last updated : 10/31/2024++++ - template-tutorial + - devx-track-azurecli + - devx-track-azurepowershell +content_well_notification: + - AI-contribution +ai-usage: ai-assisted +# Customer intent: I want to route traffic from one subnet, to a different subnet, through a network virtual appliance. +++# Tutorial: Route network traffic with a route table ++Azure routes traffic between all subnets within a virtual network, by default. You can create your own routes to override Azure's default routing. Custom routes are helpful when, for example, you want to route traffic between subnets through a network virtual appliance (NVA). +++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Create a virtual network and subnets +> * Create an NVA that routes traffic +> * Deploy virtual machines (VMs) into different subnets +> * Create a route table +> * Create a route +> * Associate a route table to a subnet +> * Route traffic from one subnet to another through an NVA ++## Prerequisites ++### [Portal](#tab/portal) ++- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++### [PowerShell](#tab/powershell) ++- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +++If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. ++### [CLI](#tab/cli) ++++- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ++++## Create subnets ++**DMZ** and **Private** subnets are needed for this tutorial. The **DMZ** subnet is where you deploy the NVA, and the **Private** subnet is where you deploy the virtual machines that you want to route traffic to. **subnet-1** is the subnet created in the previous steps. Use **subnet-1** for the public virtual machine. ++### [Portal](#tab/portal) +++1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. ++1. In **Virtual networks**, select **vnet-1**. ++1. In **vnet-1**, select **Subnets** from the **Settings** section. ++1. In the virtual network's subnet list, select **+ Subnet**. ++1. In **Add subnet**, enter or select the following information: ++ | Setting | Value | + | - | -- | + | Subnet purpose | Leave the default of **Default**. | + | Name | Enter **subnet-private**. | + | **IPv4** | + | IPv4 address range | Leave the default of **10.0.0.0/16**. | + | Starting address | Enter **10.0.2.0**. | + | Size | Leave the default of **/24 (256 addresses)**. | ++ :::image type="content" source="./media/tutorial-create-route-table-portal/create-private-subnet.png" alt-text="Screenshot of private subnet creation in virtual network."::: ++1. Select **Add**. ++1. Select **+ Subnet**. ++1. 
In **Add subnet**, enter or select the following information: ++ | Setting | Value | + | - | -- | + | Subnet purpose | Leave the default of **Default**. | + | Name | Enter **subnet-dmz**. | + | **IPv4** | + | IPv4 address range | Leave the default of **10.0.0.0/16**. | + | Starting address | Enter **10.0.3.0**. | + | Size | Leave the default of **/24 (256 addresses)**. | ++ :::image type="content" source="./media/tutorial-create-route-table-portal/create-dmz-subnet.png" alt-text="Screenshot of DMZ subnet creation in virtual network."::: ++1. Select **Add**. ++### [PowerShell](#tab/powershell) ++Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *test-rg* for all resources created in this article. ++```azurepowershell-interactive +$rg = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" +} +New-AzResourceGroup @rg +``` ++Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named *vnet-1* with the address prefix *10.0.0.0/16*. ++```azurepowershell-interactive +$vnet = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" + Name = "vnet-1" + AddressPrefix = "10.0.0.0/16" +} ++$virtualNetwork = New-AzVirtualNetwork @vnet +``` ++Create four subnets by creating four subnet configurations with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig). The following example creates four subnet configurations for *Public*, *Private*, *DMZ*, and Azure Bastion subnets. ++```azurepowershell-interactive +$subnetConfigPublicParams = @{ + Name = "subnet-1" + AddressPrefix = "10.0.0.0/24" + VirtualNetwork = $virtualNetwork +} ++$subnetConfigBastionParams = @{ + Name = "AzureBastionSubnet" + AddressPrefix = "10.0.1.0/24" + VirtualNetwork = $virtualNetwork +} ++$subnetConfigPrivateParams = @{ + Name = "subnet-private" + AddressPrefix = "10.0.2.0/24" + VirtualNetwork = $virtualNetwork +} ++$subnetConfigDmzParams = @{ + Name = "subnet-dmz" + AddressPrefix = "10.0.3.0/24" + VirtualNetwork = $virtualNetwork +} ++$subnetConfigPublic = Add-AzVirtualNetworkSubnetConfig @subnetConfigPublicParams +$subnetConfigBastion = Add-AzVirtualNetworkSubnetConfig @subnetConfigBastionParams +$subnetConfigPrivate = Add-AzVirtualNetworkSubnetConfig @subnetConfigPrivateParams +$subnetConfigDmz = Add-AzVirtualNetworkSubnetConfig @subnetConfigDmzParams +``` ++Write the subnet configurations to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork), which creates the subnets in the virtual network: ++```azurepowershell-interactive +$virtualNetwork | Set-AzVirtualNetwork +``` ++### Create Azure Bastion ++Create a public IP address for the Azure Bastion host with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress). The following example creates a public IP address named *public-ip-bastion* in the *vnet-1* virtual network. ++```azurepowershell-interactive +$publicIpParams = @{ + ResourceGroupName = "test-rg" + Name = "public-ip-bastion" + Location = "EastUS2" + AllocationMethod = "Static" + Sku = "Standard" +} +New-AzPublicIpAddress @publicIpParams +``` ++Create an Azure Bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion). The following example creates an Azure Bastion host named *bastion* in the *AzureBastionSubnet* subnet of the *vnet-1* virtual network. 
Azure Bastion is used to securely connect Azure virtual machines without exposing them to the public internet. ++```azurepowershell-interactive +$bastionParams = @{ + ResourceGroupName = "test-rg" + Name = "bastion" + VirtualNetworkName = "vnet-1" + PublicIpAddressName = "public-ip-bastion" + PublicIpAddressRgName = "test-rg" + VirtualNetworkRgName = "test-rg" +} +New-AzBastion @bastionParams -AsJob +``` ++### [CLI](#tab/cli) ++Create a resource group with [az group create](/cli/azure/group) for all resources created in this article. ++```azurecli-interactive +# Create a resource group. +az group create \ + --name test-rg \ + --location eastus2 +``` ++Create a virtual network with one subnet with [az network vnet create](/cli/azure/network/vnet). ++```azurecli-interactive +az network vnet create \ + --name vnet-1 \ + --resource-group test-rg \ + --address-prefix 10.0.0.0/16 \ + --subnet-name subnet-1 \ + --subnet-prefix 10.0.0.0/24 +``` ++Create three more subnets with [az network vnet subnet create](/cli/azure/network/vnet/subnet). ++```azurecli-interactive +# Create a bastion subnet. +az network vnet subnet create \ + --vnet-name vnet-1 \ + --resource-group test-rg \ + --name AzureBastionSubnet \ + --address-prefix 10.0.1.0/24 ++# Create a private subnet. +az network vnet subnet create \ + --vnet-name vnet-1 \ + --resource-group test-rg \ + --name subnet-private \ + --address-prefix 10.0.2.0/24 ++# Create a DMZ subnet. +az network vnet subnet create \ + --vnet-name vnet-1 \ + --resource-group test-rg \ + --name subnet-dmz \ + --address-prefix 10.0.3.0/24 +``` ++### Create Azure Bastion ++Create a public IP address for the Azure Bastion host with [az network public-ip create](/cli/azure/network/public-ip). The following example creates a public IP address named *public-ip-bastion* in the *vnet-1* virtual network. ++```azurecli-interactive +az network public-ip create \ + --resource-group test-rg \ + --name public-ip-bastion \ + --location eastus2 \ + --allocation-method Static \ + --sku Standard +``` ++Create an Azure Bastion host with [az network bastion create](/cli/azure/network/bastion). The following example creates an Azure Bastion host named *bastion* in the *AzureBastionSubnet* subnet of the *vnet-1* virtual network. Azure Bastion is used to securely connect Azure virtual machines without exposing them to the public internet. ++```azurecli-interactive +az network bastion create \ + --resource-group test-rg \ + --name bastion \ + --vnet-name vnet-1 \ + --public-ip-address public-ip-bastion \ + --location eastus2 \ + --no-wait +``` ++++## Create an NVA virtual machine ++Network virtual appliances (NVAs) are virtual machines that help with network functions, such as routing and firewall optimization. In this section, create an NVA using an **Ubuntu 24.04** virtual machine. ++### [Portal](#tab/portal) ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **+ Create** then **Azure virtual machine**. ++1. In **Create a virtual machine** enter or select the following information in the **Basics** tab: ++ | Setting | Value | + | - | -- | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Virtual machine name | Enter **vm-nva**. | + | Region | Select **(US) East US 2**. | + | Availability options | Select **No infrastructure redundancy required**. | + | Security type | Select **Standard**. 
| + | Image | Select **Ubuntu Server 24.04 LTS - x64 Gen2**. | + | VM architecture | Leave the default of **x64**. | + | Size | Select a size. | + | **Administrator account** | | + | Authentication type | Select **Password**. | + | Username | Enter a username. | + | Password | Enter a password. | + | Confirm password | Reenter password. | + | **Inbound port rules** | | + | Public inbound ports | Select **None**. | ++1. Select **Next: Disks** then **Next: Networking**. ++1. In the Networking tab, enter or select the following information: ++ | Setting | Value | + | - | -- | + | **Network interface** | | + | Virtual network | Select **vnet-1**. | + | Subnet | Select **subnet-dmz (10.0.3.0/24)**. | + | Public IP | Select **None**. | + | NIC network security group | Select **Advanced**. | + | Configure network security group | Select **Create new**. </br> In **Name** enter **nsg-nva**. </br> Select **OK**. | ++1. Leave the rest of the options at the defaults and select **Review + create**. ++1. Select **Create**. ++### [PowerShell](#tab/powershell) ++Create the VM with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named *vm-nva*. ++```azurepowershell-interactive +# Create a credential object +$cred = Get-Credential ++# Define the VM parameters +$vmParams = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" + Name = "vm-nva" + ImageName = "Canonical:ubuntu-24_04-lts:server-gen1:latest" + Size = "Standard_DS1_v2" + Credential = $cred + VirtualNetworkName = "vnet-1" + SubnetName = "subnet-dmz" + PublicIpAddressName = $null # No public IP address +} ++# Create the VM +New-AzVM @vmParams +``` ++### [CLI](#tab/cli) ++Create a VM to be used as the NVA in the *subnet-dmz* subnet with [az vm create](/cli/azure/vm). ++```azurecli-interactive +az vm create \ + --resource-group test-rg \ + --name vm-nva \ + --image Ubuntu2204 \ + --public-ip-address "" \ + --subnet subnet-dmz \ + --vnet-name vnet-1 \ + --admin-username azureuser \ + --authentication-type password +``` ++The VM takes a few minutes to create. Don't continue to the next step until Azure finishes creating the VM and returns output about the VM. ++++## Create public and private virtual machines ++Create two virtual machines in the **vnet-1** virtual network. One virtual machine is in the **subnet-1** subnet, and the other virtual machine is in the **subnet-private** subnet. Use the same virtual machine image for both virtual machines. ++### Create public virtual machine ++The public virtual machine is used to simulate a machine in the public internet. The public and private virtual machine are used to test the routing of network traffic through the NVA virtual machine. ++### [Portal](#tab/portal) ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **+ Create** then **Azure virtual machine**. ++1. In **Create a virtual machine** enter or select the following information in the **Basics** tab: ++ | Setting | Value | + | - | -- | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Virtual machine name | Enter **vm-public**. | + | Region | Select **(US) East US 2**. | + | Availability options | Select **No infrastructure redundancy required**. | + | Security type | Select **Standard**. | + | Image | Select **Ubuntu Server 24.04 LTS - x64 Gen2**. | + | VM architecture | Leave the default of **x64**. 
| + | Size | Select a size. | + | **Administrator account** | | + | Authentication type | Select **Password**. | + | Username | Enter a username. | + | Password | Enter a password. | + | Confirm password | Reenter password. | + | **Inbound port rules** | | + | Public inbound ports | Select **None**. | ++1. Select **Next: Disks** then **Next: Networking**. ++1. In the Networking tab, enter or select the following information: ++ | Setting | Value | + | - | -- | + | **Network interface** | | + | Virtual network | Select **vnet-1**. | + | Subnet | Select **subnet-1 (10.0.0.0/24)**. | + | Public IP | Select **None**. | + | NIC network security group | Select **None**. | ++1. Leave the rest of the options at the defaults and select **Review + create**. ++1. Select **Create**. ++### Create private virtual machine ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **+ Create** then **Azure virtual machine**. ++1. In **Create a virtual machine** enter or select the following information in the **Basics** tab: ++ | Setting | Value | + | - | -- | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Virtual machine name | Enter **vm-private**. | + | Region | Select **(US) East US 2**. | + | Availability options | Select **No infrastructure redundancy required**. | + | Security type | Select **Standard**. | + | Image | Select **Ubuntu Server 24.04 LTS - x64 Gen2**. | + | VM architecture | Leave the default of **x64**. | + | Size | Select a size. | + | **Administrator account** | | + | Authentication type | Select **Password**. | + | Username | Enter a username. | + | Password | Enter a password. | + | Confirm password | Reenter password. | + | **Inbound port rules** | | + | Public inbound ports | Select **None**. | ++1. Select **Next: Disks** then **Next: Networking**. ++1. In the Networking tab, enter or select the following information: ++ | Setting | Value | + | - | -- | + | **Network interface** | | + | Virtual network | Select **vnet-1**. | + | Subnet | Select **subnet-private (10.0.2.0/24)**. | + | Public IP | Select **None**. | + | NIC network security group | Select **None**. | ++1. Leave the rest of the options at the defaults and select **Review + create**. ++1. Select **Create**. ++### [PowerShell](#tab/powershell) ++Create a VM in the *subnet-1* subnet with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named *vm-public* in the *subnet-1* subnet of the *vnet-1* virtual network. ++```azurepowershell-interactive +# Create a credential object +$cred = Get-Credential ++# Define the VM parameters +$vmParams = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" + Name = "vm-public" + ImageName = "Canonical:ubuntu-24_04-lts:server-gen1:latest" + Size = "Standard_DS1_v2" + Credential = $cred + VirtualNetworkName = "vnet-1" + SubnetName = "subnet-1" + PublicIpAddressName = $null # No public IP address +} ++# Create the VM +New-AzVM @vmParams +``` ++Create a VM in the *subnet-private* subnet. 
++```azurepowershell-interactive +# Create a credential object +$cred = Get-Credential ++# Define the VM parameters +$vmParams = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" + Name = "vm-private" + ImageName = "Canonical:ubuntu-24_04-lts:server-gen1:latest" + Size = "Standard_DS1_v2" + Credential = $cred + VirtualNetworkName = "vnet-1" + SubnetName = "subnet-private" + PublicIpAddressName = $null # No public IP address +} ++# Create the VM +New-AzVM @vmParams +``` ++The VM takes a few minutes to create. Don't continue with the next step until the VM is created and Azure returns output to PowerShell. ++### [CLI](#tab/cli) ++Create a VM in the *subnet-1* subnet with [az vm create](/cli/azure/vm). The `--no-wait` parameter enables Azure to execute the command in the background so you can continue to the next command. ++```azurecli-interactive +az vm create \ + --resource-group test-rg \ + --name vm-public \ + --image Ubuntu2204 \ + --vnet-name vnet-1 \ + --subnet subnet-1 \ + --public-ip-address "" \ + --admin-username azureuser \ + --authentication-type password \ + --no-wait +``` ++Create a VM in the *subnet-private* subnet. ++```azurecli-interactive +az vm create \ + --resource-group test-rg \ + --name vm-private \ + --image Ubuntu2204 \ + --vnet-name vnet-1 \ + --subnet subnet-private \ + --public-ip-address "" \ + --admin-username azureuser \ + --authentication-type password +``` +++## Enable IP forwarding ++To route traffic through the NVA, turn on IP forwarding in Azure and in the operating system of **vm-nva**. When IP forwarding is enabled, any traffic received by **vm-nva** that's destined for a different IP address isn't dropped and is forwarded to the correct destination. ++### Enable IP forwarding in Azure ++In this section, you turn on IP forwarding for the network interface of the **vm-nva** virtual machine. ++### [Portal](#tab/portal) ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. In **Virtual machines**, select **vm-nva**. ++1. In **vm-nva**, expand **Networking** then select **Network settings**. ++1. Select the name of the interface next to **Network Interface:**. The name begins with **vm-nva** and has a random number assigned to the interface. The name of the interface in this example is **vm-nva313**. ++ :::image type="content" source="./media/tutorial-create-route-table-portal/nva-network-interface.png" alt-text="Screenshot of network interface of NVA virtual machine."::: ++1. In the network interface overview page, select **IP configurations** from the **Settings** section. ++1. In **IP configurations**, select the box next to **Enable IP forwarding**. ++ :::image type="content" source="./media/tutorial-create-route-table-portal/enable-ip-forwarding.png" alt-text="Screenshot of enablement of IP forwarding."::: ++1. Select **Apply**. ++### [PowerShell](#tab/powershell) ++Enable IP forwarding for the network interface of the **vm-nva** virtual machine with [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface). The following example enables IP forwarding for the network interface named *vm-nva*. 
++```azurepowershell-interactive +$nicParams = @{ + Name = "vm-nva" + ResourceGroupName = "test-rg" +} +$nic = Get-AzNetworkInterface @nicParams ++$nic.EnableIPForwarding = $true ++Set-AzNetworkInterface -NetworkInterface $nic +``` ++### [CLI](#tab/cli) ++Enable IP forwarding for the network interface of the **vm-nva** virtual machine with [az network nic update](/cli/azure/network/nic). The following example enables IP forwarding for the network interface named *vm-nvaVMNic*. ++```azurecli-interactive +az network nic update \ + --name vm-nvaVMNic \ + --resource-group test-rg \ + --ip-forwarding true +``` ++++## Enable IP forwarding in the operating system ++In this section, turn on IP forwarding for the operating system of the **vm-nva** virtual machine to forward network traffic. Use the Azure Bastion service to connect to the **vm-nva** virtual machine. ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. In **Virtual machines**, select **vm-nva**. ++1. Select **Connect**, then **Connect via Bastion** in the **Overview** section. ++1. Enter the username and password you entered when the virtual machine was created. ++1. Select **Connect**. ++1. Enter the following information at the prompt of the virtual machine to enable IP forwarding: ++ ```bash + sudo vim /etc/sysctl.conf + ``` ++1. In the Vim editor, remove the **`#`** from the line **`net.ipv4.ip_forward=1`**: ++ Press the **Insert** key. ++ ```bash + # Uncomment the next line to enable packet forwarding for IPv4 + net.ipv4.ip_forward=1 + ``` ++ Press the **Esc** key. ++ Enter **`:wq`** and press **Enter**. ++1. Close the Bastion session. ++1. Restart the virtual machine. ++## Create a route table ++In this section, create a route table to define the route of the traffic through the NVA virtual machine. The route table is associated to the **subnet-1** subnet where the **vm-public** virtual machine is deployed. ++### [Portal](#tab/portal) ++1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. ++1. Select **+ Create**. ++1. In **Create Route table** enter or select the following information: ++ | Setting | Value | + | - | -- | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Region | Select **East US 2**. | + | Name | Enter **route-table-public**. | + | Propagate gateway routes | Leave the default of **Yes**. | ++1. Select **Review + create**. ++1. Select **Create**. ++## Create a route ++In this section, create a route in the route table that you created in the previous steps. ++1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. ++1. Select **route-table-public**. ++1. Expand **Settings** then select **Routes**. ++1. Select **+ Add** in **Routes**. ++1. Enter or select the following information in **Add route**: ++ | Setting | Value | + | - | -- | + | Route name | Enter **to-private-subnet**. | + | Destination type | Select **IP Addresses**. | + | Destination IP addresses/CIDR ranges | Enter **10.0.2.0/24**. | + | Next hop type | Select **Virtual appliance**. | + | Next hop address | Enter **10.0.3.4**. </br> **_This is the IP address of the vm-nva you created in the earlier steps._**. 
| ++ :::image type="content" source="./media/tutorial-create-route-table-portal/add-route.png" alt-text="Screenshot of route creation in route table."::: ++1. Select **Add**. ++1. Select **Subnets** in **Settings**. ++1. Select **+ Associate**. ++1. Enter or select the following information in **Associate subnet**: ++ | Setting | Value | + | - | -- | + | Virtual network | Select **vnet-1 (test-rg)**. | + | Subnet | Select **subnet-1**. | ++1. Select **OK**. ++### [PowerShell](#tab/powershell) ++Create a route table with [New-AzRouteTable](/powershell/module/az.network/new-azroutetable). The following example creates a route table named *route-table-public*. ++```azurepowershell-interactive +$routeTableParams = @{ + Name = 'route-table-public' + ResourceGroupName = 'test-rg' + Location = 'eastus2' +} +$routeTablePublic = New-AzRouteTable @routeTableParams +``` ++Create a route by retrieving the route table object with [Get-AzRouteTable](/powershell/module/az.network/get-azroutetable), create a route with [Add-AzRouteConfig](/powershell/module/az.network/add-azrouteconfig), then write the route configuration to the route table with [Set-AzRouteTable](/powershell/module/az.network/set-azroutetable). ++```azurepowershell-interactive +$routeTableParams = @{ + ResourceGroupName = "test-rg" + Name = "route-table-public" +} ++$routeConfigParams = @{ + Name = "to-private-subnet" + AddressPrefix = "10.0.2.0/24" + NextHopType = "VirtualAppliance" + NextHopIpAddress = "10.0.3.4" +} ++$routeTable = Get-AzRouteTable @routeTableParams +$routeTable | Add-AzRouteConfig @routeConfigParams | Set-AzRouteTable +``` ++Associate the route table with the **subnet-1** subnet with [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig). The following example associates the *route-table-public* route table with the *subnet-1* subnet. ++```azurepowershell-interactive +$vnetParams = @{ + Name = 'vnet-1' + ResourceGroupName = 'test-rg' +} +$virtualNetwork = Get-AzVirtualNetwork @vnetParams ++$subnetParams = @{ + VirtualNetwork = $virtualNetwork + Name = 'subnet-1' + AddressPrefix = '10.0.0.0/24' + RouteTable = $routeTablePublic +} +Set-AzVirtualNetworkSubnetConfig @subnetParams | Set-AzVirtualNetwork +``` ++### [CLI](#tab/cli) ++Create a route table with [az network route-table create](/cli/azure/network/route-table#az-network-route-table-create). The following example creates a route table named *route-table-public*. ++```azurecli-interactive +# Create a route table +az network route-table create \ + --resource-group test-rg \ + --name route-table-public +``` ++Create a route in the route table with [az network route-table route create](/cli/azure/network/route-table/route#az-network-route-table-route-create). ++```azurecli-interactive +az network route-table route create \ + --name to-private-subnet \ + --resource-group test-rg \ + --route-table-name route-table-public \ + --address-prefix 10.0.2.0/24 \ + --next-hop-type VirtualAppliance \ + --next-hop-ip-address 10.0.3.4 +``` ++Associate the *route-table-public* route table to the *subnet-1* subnet with [az network vnet subnet update](/cli/azure/network/vnet/subnet). ++```azurecli-interactive +az network vnet subnet update \ + --vnet-name vnet-1 \ + --name subnet-1 \ + --resource-group test-rg \ + --route-table route-table-public +``` ++++## Test the routing of network traffic ++Test the routing of network traffic in both directions: from **vm-public** to **vm-private**, and from **vm-private** to **vm-public**. 
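The tests in the next sections use `tracepath`, which is preinstalled on most Ubuntu images. If it's missing from your VMs, the following sketch installs it; the package name is an assumption based on standard Ubuntu packaging, not a value from this tutorial:

```bash
# tracepath ships in the iputils-tracepath package on Ubuntu.
sudo apt update
sudo apt install -y iputils-tracepath
```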
++### Test network traffic from vm-public to vm-private ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. In **Virtual machines**, select **vm-public**. ++1. Select **Connect** then **Connect via Bastion** in the **Overview** section. ++1. Enter the username and password you entered when the virtual machine was created. ++1. Select **Connect**. ++1. In the prompt, enter the following command to trace the routing of network traffic from **vm-public** to **vm-private**: ++ ```bash + tracepath vm-private + ``` ++ The response is similar to the following example: ++ ```output + azureuser@vm-public:~$ tracepath vm-private + 1?: [LOCALHOST] pmtu 1500 + 1: vm-nva.internal.cloudapp.net 1.766ms + 1: vm-nva.internal.cloudapp.net 1.259ms + 2: vm-private.internal.cloudapp.net 2.202ms reached + Resume: pmtu 1500 hops 2 back 1 + ``` + + You can see that there are two hops in the above response for **`tracepath`** ICMP traffic from **vm-public** to **vm-private**. The first hop is **vm-nva**. The second hop is the destination **vm-private**. ++ Azure sent the traffic from **subnet-1** through the NVA and not directly to **subnet-private** because you previously added the **to-private-subnet** route to **route-table-public** and associated it to **subnet-1**. ++1. Close the Bastion session. ++### Test network traffic from vm-private to vm-public ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. In **Virtual machines**, select **vm-private**. ++1. Select **Connect** then **Connect via Bastion** in the **Overview** section. ++1. Enter the username and password you entered when the virtual machine was created. ++1. Select **Connect**. ++1. In the prompt, enter the following command to trace the routing of network traffic from **vm-private** to **vm-public**: ++ ```bash + tracepath vm-public + ``` ++ The response is similar to the following example: ++ ```output + azureuser@vm-private:~$ tracepath vm-public + 1?: [LOCALHOST] pmtu 1500 + 1: vm-public.internal.cloudapp.net 2.584ms reached + 1: vm-public.internal.cloudapp.net 2.147ms reached + Resume: pmtu 1500 hops 1 back 2 + ``` ++ You can see that there's one hop in the above response, which is the destination **vm-public**. ++ Azure sent the traffic directly from **subnet-private** to **subnet-1**. By default, Azure routes traffic directly between subnets. ++1. Close the Bastion session. +++### [Portal](#tab/portal) +++### [PowerShell](#tab/powershell) ++When no longer needed, use [Remove-AzResourcegroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains. ++```azurepowershell-interactive +$rgParams = @{ + Name = "test-rg" +} +Remove-AzResourceGroup @rgParams -Force +``` ++### [CLI](#tab/cli) ++When no longer needed, use [az group delete](/cli/azure/group) to remove the resource group and all of the resources it contains. ++```azurecli-interactive +az group delete \ + --name test-rg \ + --yes \ + --no-wait +``` ++++## Next steps ++In this tutorial, you: ++* Created a route table and associated it to a subnet. ++* Created a simple NVA that routed traffic from a public subnet to a private subnet. ++You can deploy different preconfigured NVAs from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking), which provide many useful network functions. 
++To learn more about routing, see [Routing overview](virtual-networks-udr-overview.md) and [Manage a route table](manage-route-table.yml). ++To learn how to restrict network access to PaaS resources with virtual network service endpoints, advance to the next tutorial. ++> [!div class="nextstepaction"] +> [Restrict network access using service endpoints](tutorial-restrict-network-access-to-resources.md) |
vpn-gateway | Openvpn Azure Ad Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md | -You can also create this type of P2S VPN Gateway configuration using the steps for the new [Microsoft-registered VPN Client app](point-to-site-entra-gateway.md). Using the newer version bypasses the steps to register the Azure VPN Client with your Microsoft Entra tenant. It also supports more client operating systems. However, it might not yet support certain audience values. For more information about point-to-site protocols and authentication, see [About VPN Gateway point-to-site VPN](point-to-site-about.md). +You can also create this type of P2S VPN Gateway configuration using the steps for the new [Microsoft-registered VPN Client app](point-to-site-entra-gateway.md). Using the newer version bypasses the steps to register the Azure VPN Client with your Microsoft Entra tenant. It also supports more client operating systems. However, not all audience values are supported. For more information about point-to-site protocols and authentication, see [About VPN Gateway point-to-site VPN](point-to-site-about.md). For information about creating and modifying custom audiences, see [Create or modify a custom audience](point-to-site-entra-register-custom-app.md). ++> [!NOTE] +> When possible, we recommend that you use the new [Microsoft-registered VPN Client app](point-to-site-entra-gateway.md) instructions instead. ## Prerequisites In this section, you generate and download the Azure VPN Client profile configuration. ## Next steps * To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections - Windows](point-to-site-entra-vpn-client-windows.md) or [Configure a VPN client for P2S VPN connections - macOS](point-to-site-entra-vpn-client-mac.md).-* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S). +* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S). |
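The preceding entry references generating and downloading the Azure VPN Client profile configuration package. As a minimal Azure PowerShell sketch of that step, you can use [New-AzVpnClientConfiguration](/powershell/module/az.network/new-azvpnclientconfiguration) to produce a download URL; the resource group and gateway names below are placeholders, and the returned SAS URL is short-lived.

```azurepowershell-interactive
# Placeholder names; substitute your own resource group and gateway.
$clientProfileParams = @{
    ResourceGroupName = '<resource-group-name>'
    Name              = '<gateway-name>'
}

# Returns an object whose VpnProfileSASUrl property is a time-limited
# download link for the VPN client profile configuration package (zip).
$clientProfile = New-AzVpnClientConfiguration @clientProfileParams
$clientProfile.VpnProfileSASUrl
```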
vpn-gateway | Vpn Gateway Classic Resource Manager Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md | VPN gateways can now be migrated from the classic deployment model to [Resource > [!IMPORTANT] > [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)] -VPN gateways begins with a VNet migration from classic to Resource Manager. This migration is done by customers one VNet at a time. There aren't additional requirements in terms of tools or prerequisites to begin the VNet migration. Migration steps are identical to the existing VNet migration and are documented at [IaaS resources migration page](/azure/virtual-machines/migration-classic-resource-manager-ps). +VPN gateway migration begins with a VNet migration from classic to Resource Manager. This migration is done by customers one VNet at a time. There are no additional tooling requirements or prerequisites to begin the VNet migration. The migration steps are identical to the existing VNet migration and are documented on the [IaaS resources migration page](/azure/virtual-machines/migration-classic-resource-manager-ps). There's no data path downtime during VNet migration, so existing workloads continue to function without loss of on-premises connectivity during the migration. The public IP address associated with the VPN gateway doesn't change during the migration process. This means that you don't need to reconfigure your on-premises router after the migration is completed. Once the VNet migration is completed, Azure will attempt to complete the remaind The Resource Manager model is different from the classic model and is composed of virtual network gateways, local network gateways, and connection resources. These represent the VPN gateway itself, the local site representing the on-premises address space, and the connectivity between the two, respectively. Once migration is completed, your gateways won't be available in the classic model, and all management operations on virtual network gateways, local network gateways, and connection objects must be performed using the Resource Manager model. +## Locating a classic VPN gateway +To locate a classic VPN gateway by using PowerShell, first install the Azure PowerShell Service Management module. For installation steps, see [Installing the Azure PowerShell Service Management module](https://learn.microsoft.com/powershell/azure/servicemanagement/install-azure-ps). To view classic resources, you need co-administrator or owner permissions. You can't use Az cmdlets to access classic resources. A minimal PowerShell sketch follows this entry. +++To locate a classic VPN gateway by using the Azure portal, search for **Virtual networks (classic)** in the portal. Select your classic virtual network, and then go to the gateway settings to find your classic virtual network gateway. ++ ## Supported scenarios The most common VPN connectivity scenarios are covered by classic to Resource Manager migration. The supported scenarios include: |
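For the "Locating a classic VPN gateway" steps in the preceding entry, the following is a minimal sketch of the PowerShell approach. It assumes the legacy Azure Service Management module is installed; the cmdlets shown (`Add-AzureAccount`, `Select-AzureSubscription`, `Get-AzureVNetSite`, `Get-AzureVNetGateway`) come from that legacy module rather than Az, and the subscription and VNet names are placeholders.

```azurepowershell
# Legacy Service Management (ASM) cmdlets from the Azure module, not Az.
# Requires co-administrator or owner permissions on the subscription.
Add-AzureAccount
Select-AzureSubscription -SubscriptionName '<subscription-name>'

# List classic virtual networks to find the one that contains the gateway.
Get-AzureVNetSite | Select-Object Name, AddressSpacePrefixes

# Show the classic VPN gateway for a specific classic VNet.
Get-AzureVNetGateway -VNetName '<classic-vnet-name>'
```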