Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
api-management | Api Management Howto Add Products | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-add-products.md | In this tutorial, you learn how to: |--|-| | Display name | The name as you want it to be shown in the [developer portal](api-management-howto-developer-portal.md). | | Description | Provide information about the product such as its purpose, the APIs it provides access to, and other details. |- | State | Select **Published** if you want to publish the product. Before the APIs in a product can be called, the product must be published. By default, new products are unpublished, and are visible only to the **Administrators** group. | + | State | Select **Published** if you want to publish the product to the developer portal. Before the APIs in a product can be discovered by developers, the product must be published. By default, new products are unpublished. | | Requires subscription | Select if a user is required to subscribe to use the product (the product is *protected*) and a subscription key must be used to access the product's APIs. If a subscription isn't required (the product is *open*), a subscription key isn't required to access the product's APIs. See [Access to product APIs](#access-to-product-apis) later in this article. | | Requires approval | Select if you want an administrator to review and accept or reject subscription attempts to this product. If not selected, subscription attempts are auto-approved. | | Subscription count limit | Optionally limit the count of multiple simultaneous subscriptions. | You can specify various values for your product: |--|-| | `--product-name` | The name as you want it to be shown in the [developer portal](api-management-howto-developer-portal.md). | | `--description` | Provide information about the product such as its purpose, the APIs it provides access to, and other details. |- | `--state` | Select **published** if you want to publish the product. Before the APIs in a product can be called, the product must be published. By default, new products are unpublished, and are visible only to the **Administrators** group. | + | `--state` | Select **published** if you want to publish the product to the developer portal. Before the APIs in a product can be discovered by developers, the product must be published. By default, new products are unpublished. | | `--subscription-required` | Select if a user is required to subscribe to use the product (the product is *protected*) or a subscription isn't required (the product is *open*). See [Access to product APIs](#access-to-product-apis) later in this article. | | `--approval-required` | Select if you want an administrator to review and accept or reject subscription attempts to this product. If not selected, subscription attempts are auto-approved. | | `--subscriptions-limit` | Optionally, limit the count of multiple simultaneous subscriptions.| |
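For the CLI path this row describes, all of these values can be set in a single `az apim product create` call. A minimal sketch, assuming an existing API Management instance named `apim-contoso` in resource group `rg-apim` (both names are placeholders):

```bash
# Create and publish a protected product that requires subscription approval.
az apim product create \
    --resource-group rg-apim \
    --service-name apim-contoso \
    --product-id best-sellers \
    --product-name "Best Sellers" \
    --description "Our most popular APIs" \
    --state published \
    --subscription-required true \
    --approval-required true \
    --subscriptions-limit 10
```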
api-management | Api Management Howto Create Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-groups.md | Title: Manage developer accounts using groups in Azure API Management + Title: Manage developer accounts using groups - Azure API Management description: Learn how to manage developer accounts using groups in Azure API Management. Create groups, and then associate them with products or developers. - Previously updated : 03/17/2023+ Last updated : 09/03/2024 -In API Management, groups are used to manage the visibility of products to developers. Products are first made visible to groups, and then developers in those groups can view and subscribe to the products that are associated with the groups. +In API Management, groups are used to manage the visibility of products to developers in the developer portal. Products are first made visible to groups, and then developers in those groups can view and subscribe to the products that are associated with the groups. -API Management has the following immutable system groups: +API Management has the following immutable groups: -* **Administrators** - Azure subscription administrators are members of this group. Administrators manage API Management service instances, creating the APIs, operations, and products that are used by developers. You can't add users to this group. +* **Administrators** - Built-in group containing only the administrator email account provided at the time of service creation. Its membership is managed by the system; users can't be added to or removed from the group. The primary purpose of the administrator account is to access the developer portal's administrative interface to [customize and publish](api-management-howto-developer-portal-customize.md) the portal content. Any user that has [Azure RBAC permissions](/azure/api-management/developer-portal-faq#what-permissions-do-i-need-to-edit-the-developer-portal) to customize the developer portal can authenticate as the administrator to customize the portal. > [!NOTE]- > You can change the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that are used in notifications sent to developers from your API Management instance. + > At any time, a service owner can update the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that are used in notifications from your API Management instance. * **Developers** - Authenticated developer portal users fall into this group. Developers are the customers that build applications using your APIs. Developers are granted access to the developer portal and build applications that call the operations of an API. * **Guests** - Unauthenticated developer portal users, such as prospective customers visiting the developer portal of an API Management instance fall into this group. They can be granted certain read-only access, such as the ability to view APIs but not call them. This section shows how to add a new group to your API Management account. Once the group is created, it's added to the **Groups** list. * To edit the **Name** or **Description** of the group, click the name of the group and select **Settings** - * To delete the group, click the name of the group and press **Delete**. + * To delete the group, select the name of the group and press **Delete**. Now that the group is created, it can be associated with products and developers. 
This section shows how to associate groups with members. Once the association is added between the developer and the group, you can view it in the **Users** tab. -## <a name="next-steps"> </a>Next steps +## <a name="next-steps"> </a>Related content * Once a developer is added to a group, they can view and subscribe to the products associated with that group. For more information, see [How to create and publish a product in Azure API Management][How create and publish a product in Azure API Management]. * You can control how the developer portal content appears to different users and groups you've configured. Learn more about [visibility and access controls in the developer portal](developer-portal-overview.md#content-visibility-and-access). |
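Custom groups like the ones this row describes can also be created outside the portal through the API Management REST API. A hedged sketch using `az rest`; the subscription ID, resource group, service name, and group ID are placeholders, and the `api-version` value is an assumption:

```bash
# Create a custom group named "partner-developers" on an existing instance.
az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/rg-apim/providers/Microsoft.ApiManagement/service/apim-contoso/groups/partner-developers?api-version=2022-08-01" \
    --body '{"properties": {"displayName": "Partner developers", "description": "Developers from partner organizations"}}'
```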
api-management | Api Management Key Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md | When a product is ready for use by developers, it can be published. Once publish ### Groups -Groups are used to manage the visibility of products to developers. API Management has the following built-in groups: --* **Administrators** - Manage API Management service instances and create the APIs, operations, and products that are used by developers. -- Azure subscription administrators are members of this group. +Groups are used to manage the visibility of products to developers. API Management has the following built-in groups for developers: * **Developers** - Authenticated developer portal users that build applications using your APIs. Developers are granted access to the developer portal and build applications that call the operations of an API. * **Guests** - Unauthenticated developer portal users, such as prospective customers visiting the developer portal. They can be granted certain read-only access, such as the ability to view APIs but not call them. -Administrators can also create custom groups or use external groups in an [associated Microsoft Entra tenant](api-management-howto-aad.md) to give developers visibility and access to API products. For example, create a custom group for developers in a partner organization to access a specific subset of APIs in a product. A user can belong to more than one group. +API Management service owners can also create custom groups or use external groups in an [associated Microsoft Entra tenant](api-management-howto-aad.md) to give developers visibility and access to API products. For example, create a custom group for developers in a partner organization to access a specific subset of APIs in a product. A user can belong to more than one group. **More information**: * [How to create and use groups][How to create and use groups]+* [How to manage user accounts](api-management-howto-create-or-invite-developers.md) ### Developers When developers subscribe to a product, they're granted the primary and secondar ### Workspaces -Workspaces allow decentralized API development teams to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. Each workspace contains APIs, products, subscriptions, and related entities that are accessible only to the workspace collaborators. Access is controlled through Azure role-based access control (RBAC). +Workspaces allow decentralized API development teams to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. Each workspace contains APIs, products, subscriptions, and related entities that are accessible only to the workspace collaborators. Access is controlled through Azure role-based access control (RBAC). Each workspace is associated with a workspace gateway that routes API traffic to its backend services. **More information**: |
api-management | Get Started Create Service Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance.md | Sign in to the [Azure portal](https://portal.azure.com). | **Region** | Select a geographic region near you from the available API Management service locations. | | **Resource name** | A unique name for your API Management instance. The name can't be changed later. The service name refers to both the service and the corresponding Azure resource. <br/><br/> The service name is used to generate a default domain name: *\<name\>.azure-api.net.* If you would like to configure a custom domain name later, see [Configure a custom domain](configure-custom-domain.md). | | **Organization name** | The name of your organization. This name is used in many places, including the title of the developer portal and sender of notification emails. | - | **Administrator email** | The email address to which all the notifications from **API Management** will be sent. | + | **Administrator email** | The email address to which all system notifications from **API Management** will be sent. | | **Pricing tier** | Select **Developer** tier to evaluate the service. This tier isn't for production use. For more information about scaling the API Management tiers, see [upgrade and scale](upgrade-and-scale.md). | 1. Select **Review + create**. |
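The portal fields in this row map directly onto `az apim create` parameters (**Organization name** to `--publisher-name`, **Administrator email** to `--publisher-email`, **Pricing tier** to `--sku-name`). A minimal sketch with placeholder names; provisioning a Developer-tier instance can take 30 minutes or more:

```bash
az apim create \
    --name msdocs-apim-xyz \
    --resource-group rg-apim \
    --location eastus \
    --publisher-name "Contoso" \
    --publisher-email admin@contoso.com \
    --sku-name Developer
```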
app-service | Tutorial Dotnetcore Sqldb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md | In this tutorial, you learn how to: You can quickly deploy the sample app in this tutorial and see it running in Azure. Just run the following commands in the [Azure Cloud Shell](https://shell.azure.com), and follow the prompt: ```bash+dotnet tool install --global dotnet-ef mkdir msdocs-app-service-sqldb-dotnetcore cd msdocs-app-service-sqldb-dotnetcore azd init --template msdocs-app-service-sqldb-dotnetcore First, you set up a sample data-driven app as a starting point. For your conveni :::column span="2"::: **Step 2:** In the GitHub fork: 1. Select **main** > **starter-no-infra** for the starter branch. This branch contains just the sample project and no Azure-related files or configuration.- 1. Select **Code** > **Create codespace on main**. + 1. Select **Code** > **Create codespace on starter-no-infra**. The codespace takes a few minutes to set up. :::column-end::: :::column::: Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps 1. *Region*: Any Azure region near you. 1. *Name*: **msdocs-core-sql-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. 1. *Runtime stack*: **.NET 8 (LTS)**.+ 1. *Engine*: **SQLAzure**. Azure SQL Database is a fully managed platform as a service (PaaS) database engine that's always running on the latest stable version of the SQL Server. 1. *Add Azure Cache for Redis?*: **Yes**. 1. *Hosting plan*: **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.- 1. Select **SQLAzure** as the database engine. Azure SQL Database is a fully managed platform as a service (PaaS) database engine that's always running on the latest stable version of the SQL Server. 1. Select **Review + create**. 1. After validation completes, select **Create**. :::column-end::: The creation wizard generated the connectivity string for you already as [.NET c :::row-end::: :::row::: :::column span="2":::- **Step 8:** To verify that you secured the secrets: + **Step 8:** To verify your changes: 1. From the left menu, select **Environment variables > Connection strings** again. 1. Next to **AZURE_SQL_CONNECTIONSTRING**, select **Show value**. The value should be `@Microsoft.KeyVault(...)`, which means that it's a [key vault reference](app-service-key-vault-references.md) because the secret is now managed in the key vault. 1. To verify the Redis connection string, select the **App setting** tab. Next to **AZURE_REDIS_CONNECTIONSTRING**, select **Show value**. The value should be `@Microsoft.KeyVault(...)` too. In this step, you configure GitHub deployment using GitHub Actions. It's just on :::column span="2"::: **Step 7:** Back in the Deployment Center page in the Azure portal:- 1. Select **Logs**. A new deployment run is already started from your committed changes. You might need to select **Refresh** to see it. + 1. Select the **Logs** tab, then select **Refresh** to see the new deployment run. 1. In the log item for the deployment run, select the **Build/Deploy Logs** entry with the latest timestamp. :::column-end::: :::column::: |
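After the `azd init` commands quoted in this row, the sample is typically provisioned and deployed with the standard Azure Developer CLI flow. This is a sketch of that usual follow-on, not part of the quoted tutorial:

```bash
# Sign in, then provision the Azure resources and deploy the app in one step.
azd auth login
azd up
```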
automation | Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/modules.md | Title: Manage modules in Azure Automation description: This article tells how to use PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. Previously updated : 08/09/2024 Last updated : 09/08/2024 +>[!NOTE] +> The AzureRM PowerShell module has been officially deprecated as of **February 29, 2024**. We recommend that you migrate from AzureRM module to the Az PowerShell module to ensure continued support and updates. While the AzureRM module may still work, it is no longer maintained or supported and continued use of AzureRM is at the user's own risk. For more information, see [migration resources](https://aka.ms/azpsmigrate) for guidance on transitioning to the Az module. + Azure Automation uses a number of PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. Supported modules include: * [Azure PowerShell Az.Automation](/powershell/azure/new-azureps-module-az). |
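For readers acting on the deprecation note in this row, the migration itself is a module swap. A hedged sketch, run from a shell with PowerShell 7 (`pwsh`) installed:

```bash
# Install the supported Az module, then remove the deprecated AzureRM module.
pwsh -Command "Install-Module -Name Az -Repository PSGallery -Scope CurrentUser -Force"
pwsh -Command "Uninstall-Module -Name AzureRM -AllVersions"
```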
azure-cache-for-redis | Cache Nodejs Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-nodejs-get-started.md | In this quickstart, you incorporate Azure Cache for Redis into a Node.js app to ## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- [node_redis](https://github.com/mranney/node_redis), which you can install with the command `npm install redis`.+- Node.js installed, if you haven't done so already. See [Install Node.js on Windows](/windows/dev-environment/javascript/nodejs-on-windows) for instructions on how to install Node and npm on a Windows computer. -For examples of using other Node.js clients, see the individual documentation for the Node.js clients listed at [Node.js Redis clients](https://redis.io/docs/connect/clients/nodejs/). --## Create a cache +## Create a cache instance [!INCLUDE [redis-cache-create](~/reusable-content/ce-skilling/azure/includes/azure-cache-for-redis/includes/redis-cache-create.md)] +## Install the node-redis client library ++The [node-redis](https://github.com/redis/node-redis) library is the primary Node.js client for Redis. You can install the client with [npm](https://docs.npmjs.com/about-npm) by using the following command: ++```bash +npm install redis +``` ++## Create a Node.js app to access a cache ++Create a Node.js app that uses either Microsoft Entra ID or access keys to connect to an Azure Cache for Redis. We recommend you use Microsoft Entra ID. ++## [Microsoft Entra ID Authentication (recommended)](#tab/entraid) +++### Install the JavaScript Azure Identity client library ++The [Microsoft Authentication Library (MSAL)](/entra/identity-platform/msal-overview) allows you to acquire security tokens from Microsoft identity to authenticate users. There's a [JavaScript Azure identity client library](/javascript/api/overview/azure/identity-readme) available that uses MSAL to provide token authentication support. Install this library using `npm`: ++```bash +npm install @azure/identity +``` ++### Create a new Node.js app using Microsoft Entra ID ++1. Add environment variables for your **Host name** and **Service Principal ID**, which is the object ID of your Microsoft Entra ID service principal or user. In the Azure portal, this is shown as the _Username_. ++ ```cmd + set AZURE_CACHE_FOR_REDIS_HOST_NAME=contosoCache + set REDIS_SERVICE_PRINCIPAL_ID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX + ``` ++1. Create a new script file named _redistest.js_. ++1. Add the following example JavaScript to the file. This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed. For more examples of using Redis with the [node-redis](https://github.com/redis/node-redis) client, see [https://redis.js.org/](https://redis.js.org/). ++ ```javascript + const { createClient } = require("redis"); + const { DefaultAzureCredential } = require("@azure/identity"); + + async function main() { + // Construct a Token Credential from Identity library, e.g. ClientSecretCredential / ClientCertificateCredential / ManagedIdentityCredential, etc. + const credential = new DefaultAzureCredential(); + const redisScope = "https://redis.azure.com/.default"; + + // Fetch a Microsoft Entra token to be used for authentication. This token will be used as the password. 
+ let accessToken = await credential.getToken(redisScope); + console.log("access Token", accessToken); + + // Create redis client and connect to the Azure Cache for Redis over the TLS port using the access token as password. + const cacheConnection = createClient({ + username: process.env.REDIS_SERVICE_PRINCIPAL_ID, + password: accessToken.token, + url: `redis://${process.env.AZURE_CACHE_FOR_REDIS_HOST_NAME}:6380`, + pingInterval: 100000, + socket: { + tls: true, + keepAlive: 0 + }, + }); + + cacheConnection.on("error", (err) => console.log("Redis Client Error", err)); + await cacheConnection.connect(); + + // PING command + console.log("\nCache command: PING"); + console.log("Cache response : " + await cacheConnection.ping()); + + // SET + console.log("\nCache command: SET Message"); + console.log("Cache response : " + await cacheConnection.set("Message", + "Hello! The cache is working from Node.js!")); + + // GET + console.log("\nCache command: GET Message"); + console.log("Cache response : " + await cacheConnection.get("Message")); + + // Client list, useful to see if connection list is growing... + console.log("\nCache command: CLIENT LIST"); + console.log("Cache response : " + await cacheConnection.sendCommand(["CLIENT", "LIST"])); + + cacheConnection.disconnect(); + + return "Done" + } + + main().then((result) => console.log(result)).catch(ex => console.log(ex)); + ``` ++1. Run the script with Node.js. ++ ```bash + node redistest.js + ``` ++1. The output of your code looks like this. ++ ```bash + Cache command: PING + Cache response : PONG + + Cache command: GET Message + Cache response : Hello! The cache is working from Node.js! + + Cache command: SET Message + Cache response : OK + + Cache command: GET Message + Cache response : Hello! The cache is working from Node.js! + + Cache command: CLIENT LIST + Cache response : id=10017364 addr=76.22.73.183:59380 fd=221 name= age=1 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 argv-mem=10 obl=0 oll=0 omem=0 tot-mem=61466 ow=0 owmem=0 events=r cmd=client user=default numops=6 + + Done + ``` ++### Create a sample JavaScript app with reauthentication ++Microsoft Entra ID access tokens have a limited lifespan, [averaging 75 minutes](/entra/identity-platform/configurable-token-lifetimes#token-lifetime-policies-for-access-saml-and-id-tokens). In order to maintain a connection to your cache, you need to refresh the token. This example demonstrates how to do this using JavaScript. ++1. Create a new script file named _redistestreauth.js_. ++1. Add the following example JavaScript to the file. ++ ```javascript + const { createClient } = require("redis"); + const { DefaultAzureCredential } = require("@azure/identity"); + + async function returnPassword(credential) { + const redisScope = "https://redis.azure.com/.default"; + + // Fetch a Microsoft Entra token to be used for authentication. This token will be used as the password. + return credential.getToken(redisScope); + } + + async function main() { + // Construct a Token Credential from Identity library, e.g. ClientSecretCredential / ClientCertificateCredential / ManagedIdentityCredential, etc. + const credential = new DefaultAzureCredential(); + let accessToken = await returnPassword(credential); + + // Create redis client and connect to the Azure Cache for Redis over the TLS port using the access token as password. 
+    let cacheConnection = createClient({
+        username: process.env.REDIS_SERVICE_PRINCIPAL_ID,
+        password: accessToken.token,
+        url: `redis://${process.env.AZURE_CACHE_FOR_REDIS_HOST_NAME}:6380`,
+        pingInterval: 100000,
+        socket: {
+            tls: true,
+            keepAlive: 0
+        },
+    });
+
+    cacheConnection.on("error", (err) => console.log("Redis Client Error", err));
+    await cacheConnection.connect();
+
+    for (let i = 0; i < 3; i++) {
+        try {
+            // PING command
+            console.log("\nCache command: PING");
+            console.log("Cache response : " + await cacheConnection.ping());
+
+            // SET
+            console.log("\nCache command: SET Message");
+            console.log("Cache response : " + await cacheConnection.set("Message",
+                "Hello! The cache is working from Node.js!"));
+
+            // GET
+            console.log("\nCache command: GET Message");
+            console.log("Cache response : " + await cacheConnection.get("Message"));
+
+            // Client list, useful to see if connection list is growing...
+            console.log("\nCache command: CLIENT LIST");
+            console.log("Cache response : " + await cacheConnection.sendCommand(["CLIENT", "LIST"]));
+            break;
+        } catch (e) {
+            console.log("error during redis get", e.toString());
+            // When the token has expired or the client connection is closed,
+            // fetch a fresh token, rebuild the client, and reconnect.
+            if (accessToken.expiresOnTimestamp <= Date.now() || !cacheConnection.isOpen) {
+                if (cacheConnection.isOpen) {
+                    await cacheConnection.disconnect();
+                }
+                accessToken = await returnPassword(credential);
+                cacheConnection = createClient({
+                    username: process.env.REDIS_SERVICE_PRINCIPAL_ID,
+                    password: accessToken.token,
+                    url: `redis://${process.env.AZURE_CACHE_FOR_REDIS_HOST_NAME}:6380`,
+                    pingInterval: 100000,
+                    socket: {
+                        tls: true,
+                        keepAlive: 0
+                    },
+                });
+                cacheConnection.on("error", (err) => console.log("Redis Client Error", err));
+                await cacheConnection.connect();
+            }
+        }
+    }
+  }
+  
+  main().then((result) => console.log(result)).catch(ex => console.log(ex));
+  ```
++1. Run the script with Node.js.
++   ```bash
+   node redistestreauth.js
+   ```
++1. The output of your code looks like this.
++   ```bash
+   Cache command: PING
+   Cache response : PONG
+   
+   Cache command: GET Message
+   Cache response : Hello! The cache is working from Node.js!
+   
+   Cache command: SET Message
+   Cache response : OK
+   
+   Cache command: GET Message
+   Cache response : Hello! The cache is working from Node.js!
+   
+   Cache command: CLIENT LIST
+   Cache response : id=10017364 addr=76.22.73.183:59380 fd=221 name= age=1 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 argv-mem=10 obl=0 oll=0 omem=0 tot-mem=61466 ow=0 owmem=0 events=r cmd=client user=default numops=6
++   ```
++>[!NOTE]
+>For additional examples of using Microsoft Entra ID to authenticate to Redis using the node-redis library, please see [this GitHub repo](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/samples/AzureCacheForRedis/node-redis.md)
+>
++## [Access Key Authentication](#tab/accesskey)
+ [!INCLUDE [redis-cache-access-keys](includes/redis-cache-access-keys.md)] Add environment variables for your **HOST NAME** and **Primary** access key. Use these variables from your code instead of including the sensitive information directly in your code. set AZURE_CACHE_FOR_REDIS_HOST_NAME=contosoCache set AZURE_CACHE_FOR_REDIS_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ``` -## Connect to the cache +### Connect to the cache -The latest builds of [node_redis](https://github.com/mranney/node_redis) provide support several connection options. Don't create a new connection for each operation in your code. Instead, reuse connections as much as possible. +>[!NOTE]
+> Don't create a new connection for each operation in your code. Instead, reuse connections as much as possible. 
+> -## Create a new Node.js app +### Create a new Node.js app -1. Create a new script file named *redistest.js*. -1. Use the command to install a redis package. -- ```bash - `npm install redis` - ``` +1. Create a new script file named _redistest.js_. 1. Add the following example JavaScript to the file. The latest builds of [node_redis](https://github.com/mranney/node_redis) provide testCache().then((result) => console.log(result)).catch(ex => console.log(ex)); ``` - This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed. For more examples of using Redis with the [node_redis](https://github.com/mranney/node_redis) client, see [https://redis.js.org/](https://redis.js.org/). + This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed. For more examples of using Redis with the [node-redis](https://github.com/redis/node-redis) client, see [https://redis.js.org/](https://redis.js.org/). 1. Run the script with Node.js. The latest builds of [node_redis](https://github.com/mranney/node_redis) provide 1. Examine the output. - ```console + ```bash Cache command: PING Cache response : PONG The latest builds of [node_redis](https://github.com/mranney/node_redis) provide Done ``` -## Clean up resources --If you continue to the next tutorial, can keep the resources created in this quickstart and reuse them. Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges. --> [!IMPORTANT] -> Deleting a resource group is irreversible and that the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually instead of deleting the resource group. -> --1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**. --1. In the **Filter by name** text box, enter the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**. -- ![Delete Azure Resource group](./media/cache-nodejs-get-started/redis-cache-delete-resource-group.png) --1. Confirm the deletion of the resource group. Enter the name of your resource group to confirm, and select **Delete**. + -1. After a few moments, the resource group and all of its contained resources are deleted. ## Get the sample code Get the [Node.js quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/nodejs) on GitHub. -## Next steps +## Related content In this quickstart, you learned how to use Azure Cache for Redis from a Node.js application. Continue to the next quickstart to use Azure Cache for Redis with an ASP.NET web app. -> [!div class="nextstepaction"] -> [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md) +- [Create an ASP.NET web app that uses an Azure Cache for Redis.](cache-web-app-howto.md) |
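The `set` commands quoted in this row are cmd-style. A sketch of the bash equivalents for running the same scripts from a Linux or macOS shell; the host name and principal ID values are placeholders:

```bash
export AZURE_CACHE_FOR_REDIS_HOST_NAME=contosoCache
export REDIS_SERVICE_PRINCIPAL_ID=<service-principal-object-id>
node redistest.js
```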
azure-maps | How To Use Image Templates Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md | Title: Image templates in the Azure Maps Web SDK | Microsoft Azure Maps description: Learn how to add image icons and pattern-filled polygons to maps by using the Azure Maps Web SDK. View available image and fill pattern templates. Previously updated : 8/6/2019 Last updated : 08/30/2024 Once an image template is loaded into the map image sprite, it can be rendered a The [Symbol layer with built-in icon template] sample demonstrates how to do this by rendering a symbol layer using the `marker-flat` image template with a teal primary color and a white secondary color, as shown in the following screenshot. For the source code for this sample, see [Symbol layer with built-in icon template sample code]. Once an image template is loaded into the map image sprite, it can be rendered a The [Line layer with built-in icon template] demonstrates how to do this. As shown in the following screenshot, it renders a red line on the map and uses a symbol layer using the `car` image template with a dodger blue primary color and a white secondary color. For the source code for this sample, see [Line layer with built-in icon template sample code]. <!-- <br/> Once an image template is loaded into the map image sprite, it can be rendered a The [Fill polygon with built-in icon template] sample demonstrates how to render a polygon layer using the `dot` image template with a red primary color and a transparent secondary color, as shown in the following screenshot. For the source code for this sample, see [Fill polygon with built-in icon template sample code]. <!-- <br/> An image template can be retrieved using the `atlas.getImageTemplate` function a The [HTML Marker with built-in icon template] sample demonstrates this using the `marker-arrow` template with a red primary color, a pink secondary color, and a text value of "00", as shown in the following screenshot. For the source code for this sample, see [HTML Marker with built-in icon template sample code]. <!-- <br/> SVG image templates support the following placeholder values: The [Add custom icon template to atlas namespace] sample demonstrates how to take an SVG template, and add it to the Azure Maps web SDK as a reusable icon template, as shown in the following screenshot. For the source code for this sample, see [Add custom icon template to atlas namespace sample code]. <!-- <br/> |
azure-monitor | Autoscale Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md | Autoscale supports the following services. | Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/enterprise/how-to-setup-autoscale.md) | | Azure Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) | | Azure Service Bus | [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md) |-| Azure Logic Apps - Integration service environment (ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) | ## Next steps |
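For the services in this row that use Azure Monitor autoscale settings, enabling autoscale follows the same CLI pattern. A hedged sketch against a virtual machine scale set; all resource names are placeholders, and the per-service articles linked above cover service-specific options:

```bash
# Create an autoscale setting that keeps instance count between 2 and 10.
az monitor autoscale create \
    --resource-group rg-autoscale \
    --resource vmss-contoso \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name autoscale-vmss \
    --min-count 2 \
    --max-count 10 \
    --count 2
```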
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Before starting your move operation, review the [checklist](./move-resource-grou > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |-> | actionrules | **Yes** | **Yes** | No | +> | alertprocessingrules | No | Yes | No | > | alerts | No | No | No | > | alertslist | No | No | No | > | alertsmetadata | No | No | No | |
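Per the updated table row, alert processing rules support subscription moves but not resource group moves. A hedged sketch of a cross-subscription move with `az resource move`; the IDs and names are placeholders, and the `Microsoft.AlertsManagement/actionRules` resource type for alert processing rules is an assumption:

```bash
az resource move \
    --destination-subscription-id <target-subscription-id> \
    --destination-group rg-alerts \
    --ids "/subscriptions/<source-subscription-id>/resourceGroups/rg-alerts/providers/Microsoft.AlertsManagement/actionRules/my-processing-rule"
```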
communication-services | Program Brief Guidelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/program-brief-guidelines.md | -Azure Communication Services allows you to apply for a short code for SMS programs. In this document, we'll review the guidelines on how to fill out a program brief for short code registration. A program brief application consists of 4 sections: +Azure Communication Services enables you to apply for a short code for SMS programs. In this article, we review the guidelines on how to fill out a program brief for short code registration. A program brief application consists of four sections: - Program details - Contact details - Volume Azure Communication Services allows you to apply for a short code for SMS progra ## Program details ### Program Name-All Short Code programs are required to disclose program names, product description, or both, in messages, on the call-to-action, and in the terms and conditions. The program name is generally the sponsor of the Short Code program, often the brand name or company name associated with the Short Code. +All short code programs are required to disclose program names, product description (or both) in messages, on the call-to-action, and in the terms and conditions. The program name is generally the sponsor of the short code program, often the brand name or company name associated with the short code. As a best practice, the program name is composed of two pieces -- Name of the sponsor (company or brand name)+- Name of the sponsor (company or brand name) <br /> Example: Contoso-- Product description+- Product description <br /> Example: Promo alerts, SMS banking, Appointment reminders **Examples of program name:** As a best practice, the program name is composed of two pieces Communication Services offers two types of short codes: random and vanity. #### Random short code-A random short code is a 5–6-digit phone number that is randomly selected and assigned by the U.S. Common Short Codes Association (CSCA). +A random short code is a 5–6-digit phone number randomly selected and assigned by the U.S. Common Short Codes Association (CSCA). #### Vanity short code-A vanity short code is a 5–6-digit phone number that is chosen by you for your program. You can look up the list of available short codes in the [US Short Codes directory](https://usshortcodedirectory.com/). -Additionally, you may pick a number that the customer can spell out on their phone dial pad as an alphanumeric equivalent – for example, Contoso can use OFFERS (633377). +A vanity short code is a 5–6-digit phone number that you choose for your program. You can look up the list of available short codes in the [US Short Codes directory](https://usshortcodedirectory.com/). -When you select a vanity short code, you are required to input a prioritized list of vanity short codes that you'd like to use for your program. The alternatives in the list will be used if the first short code in your list isn't available to lease. +You can also pick a number that the customer can spell out on their phone dial pad as an alphanumeric equivalent – for example, Contoso can use OFFERS (633377). ++When you select a vanity short code, you need to enter a prioritized list of vanity short codes that you'd like to use for your program. The alternatives in the list are used if the first short code in your list isn't available to lease. Example: 234567, 234578, 234589. > [!Note] > Short codes in the US cannot start with 1. 
### Directionality -This field captures the directionality of the SMS program – 1-way or 2-way SMS. In United States, 2-way SMS is mandated to honor opt-out requests from customers. +This field captures the directionality of the SMS program – 1-way or 2-way SMS. In the United States, 2-way SMS is mandated to honor opt-out requests from customers. ### Recurrence Programs are classified as Transactional programs or Subscription programs: Programs are classified as Transactional programs or Subscription programs: This field captures the time duration in days until full production volume is reached. ### Political Campaign-Short Code programs that solicit political donations are subject to [additional best practices](https://www.ctia.org/the-wireless-industry/industry-commitments/guidelines-for-federal-political-campaign-contributions-via-wireless-carriers-bill). +Short Code programs that solicit political donations are subject to [wireless industry best practices](https://www.ctia.org/the-wireless-industry/industry-commitments/guidelines-for-federal-political-campaign-contributions-via-wireless-carriers-bill). ### Privacy Policy and Terms and Conditions URL-Message Senders are required to maintain a privacy policy and terms and conditions that are specific to all short code programs and make it accessible to customers from the initial call-to-action. +Message Senders are required to maintain a privacy policy and terms and conditions specific to all short code programs and make it accessible to customers from the initial call-to-action. -In this field, you can provide a URL of the privacy policy and terms and conditions where customers can access it. If you don't have the short code program specific privacy policy or terms of service URL yet, you can provide the URL of screenshots of what the short code program policies will look like (a design mockup of the website that will go live once the campaign is launched). +In this field, you can provide a URL of the privacy policy and terms and conditions where customers can access it. If you don't have the short code program specific privacy policy or terms of service URL yet, you can provide the URL of screenshots of what you expect the short code program policies to look like. You can use a design mockup of the website planned to go live once the campaign is launched. -Your terms of service must include terms specific to the short code program brief and must contain ALL of the following: -- Program Name and Description-- Message Frequency, it can be either listed as Message Frequency Varies or the accurate frequency, it also needs to match with what is listed in the call-to-action-- The disclaimer: "Message and data rates may apply" written verbatim-- Customer care information, for example: "For help call [phone number] or send an email to [email]"-- Opt-Out message: "Text STOP to cancel"-- A link to the Privacy Policy or the whole Privacy policy+Your terms of service must include terms specific to the short code program brief and must contain ALL of the following information: +- Program Name and Description. +- Message Frequency can either be listed as Message Frequency Varies or the accurate frequency. Message Frequency also needs to match the value listed in the call-to-action. +- The disclaimer: "Message and data rates may apply" written verbatim. +- Customer care information, for example: "For help call \[phone number\] or send an email to \[email\]". +- Opt-Out message: "Text STOP to cancel". 
+- A link to the Privacy Policy or the whole Privacy policy. ##### Example: **Terms of Service** :::image type="content" source= "../media/short-code-terms.png" alt-text="Screenshot showing the terms of service mock up."::: -Your terms of service must contain ALL of the following: -- Program Name and Description-- Message Frequency, it can be either listed as Message Frequency Varies or the accurate frequency, it also needs to match with what is listed in the CTA (Call-To-Action)-- The disclaimer: "Message and data rates may apply" written verbatim-- Customer care information, for example: "For help call [phone number] or send an email to [email]"-- Opt-Out message: "Text STOP to cancel"+Your terms of service must contain ALL of the following information: +- Program Name and Description. +- Message Frequency can be either listed as Message Frequency Varies or the accurate frequency. Message Frequency also needs to match the value listed in the CTA (Call-To-Action). +- The disclaimer: "Message and data rates may apply" written verbatim. +- Customer care information, for example: "For help call \[phone number\] or send an email to \[email\]". +- Opt-Out message: "Text STOP to cancel". - A link to the Privacy Policy or the whole Privacy policy. > [!Note]-> If you don't have a URL of the website, mockups, or design, please send an email with the screenshots to phone@microsoft.com with "[CompanyName - ProgramName] Short Code Request". +> If you don't have a URL of the website, mockups, or design, please send an email with the screenshots to [phone@microsoft.com](mailto:phone@microsoft.com) with "\[CompanyName - ProgramName\] Short Code Request". ### Program Sign-up type and URL This field captures the call-to-action, an instruction for the customers to take action for ensuring that the customer consents to receive text messages, and understands the nature of the program. Call-to-action can be over SMS, Interactive Voice Response (IVR), website, or point of sale. Carriers require that all short code program brief applications are submitted with mock ups for the call-to-action. -In these fields, you must provide a URL of the website where customers will discover the program, URL for screenshots of the website, URL of mockup of the website, or URL with the design. If the program sign-up type is SMS, then you must provide the keywords the customer will send to the short code for opting in. +In these fields, you must provide a URL of the website where customers discover the program, URL for screenshots of the website, URL of mockup of the website, or URL with the design. If the program sign-up type is SMS, then you must provide the keywords the customer sends to the short code for opting in. > [!Note] > If you don't have a URL of the website, mockups, or design, please send the screenshots to phone@microsoft.com with Subject "[CompanyName - ProgramName] Short Code Request". In these fields, you must provide a URL of the website where customers will disc #### Guidelines for designing the call-to-action: 1. The call-to-action needs to be clear as to what program the customer is joining or agreeing to. - Call-to-action must be clear and accurate; consent must not be obtained through deceptive means- - Enrolling a user in multiple programs based on a single opt-in is prohibited, even when all programs operate on the same short code. Please refer to the [CTIA monitoring handbook](https://www.wmcglobal.com/hubfs/CTIA%20Short%20Code%20Monitoring%20Handbook%20-%20v1.8.pdf) for best practices. 
+ - Enrolling a user in multiple programs based on a single opt-in is prohibited, even when all programs operate on the same short code. Refer to the [CTIA monitoring handbook](https://www.wmcglobal.com/hubfs/CTIA%20Short%20Code%20Monitoring%20Handbook%20-%20v1.8.pdf) for best practices. 2. The call-to-action needs to include the abbreviated terms and conditions, which include:- - Program Name – as described above + - Program Name – as previously described - Message frequency (recurring message/subscriptions) - Message and Data rates may apply - Link to comprehensive Terms and Conditions (to a static website, or complete text) Contoso.com: Announcing our Holiday Sale. Reply YES to save 5% on your next Cont ## Contact Details ### Point of contact email address-You need to provide information about your company and point of contact. Status updates for your short code application will be sent to the point of contact email address. +You need to provide information about your company and point of contact. Status updates for your short code application are sent to the point of contact email address. ### Customer care-Customer care contact information must be clear and readily available to help customers understand program details as well as their status with the program. Customer care information should result in customers receiving help. +Customer care contact information must be clear and readily available to help customers understand program details and their status with the program. Customer care information should result in customers receiving help. In these fields, you're required to provide the customer care email address and a customer care phone number where customers can reach out to receive help. Example: Traffic spikes are expected for delivery notifications program around h ## Templates -Azure communication service offers an opt-out management service for short codes that allows customers to configure responses to mandatory keywords STOP/START/HELP. Prior to provisioning your short code, you'll be asked for your preference to manage opt-outs. If you opt-in, the opt-out management service will automatically use your responses in the program brief for Opt-in/ Opt-out/ Help keywords in response to STOP/START/HELP keyword. +Azure communication service offers an opt-out management service for short codes that allows customers to configure responses to mandatory keywords STOP/START/HELP. Before provisioning your short code, you're asked for your preference to manage opt-outs. If you opt-in, the opt-out management service automatically uses your responses in the program brief for Opt-in/ Opt-out/ Help keywords in response to STOP/START/HELP keyword. ### Opt-in confirmation message-CTIA requires that the customer must actively opt into short code programs by sending a keyword from their mobile device to the short code, providing consent on website, IVR, etc. +CTIA requires that the customer must actively opt into short code programs by sending a keyword from their mobile device to the short code, providing consent on website, IVR, and so on. In this field, you're required to provide a sample of the confirmation message that is sent to the customer upon receiving their consent. In this field, you're required to provide a sample of the response message that **Example:** Contoso Appointment reminders: Get help at support@contoso.com or 1-800 123 4567.Msg&Data Rates May Apply. Txt HELP for help. Txt STOP to opt out. 
### Opt-out message-Message senders are required to have mechanisms to opt customers out of the program and respond to messages containing the STOP keyword with the program name and confirmation that no additional messages will be sent. +Message senders are required to have mechanisms for customers to opt out of the program. You need to respond to customer messages containing the STOP keyword with the program name and confirmation that they won't receive any more messages. In this field, you're required to provide a sample of the response message that is sent to the customer upon receiving the STOP keyword. -**Example:** Contoso Appointment reminders: You're opted out and will receive no further messages. +**Example:** Contoso Appointment reminders: You're opted out and won't receive any more messages. -Please see our [guide on opt-outs](./sms-faq.md#opt-out-handling) to learn about how Azure Communication Services handles opt-outs. +See our [guide on opt-outs](./sms-faq.md#opt-out-handling) to learn about how Azure Communication Services handles opt-outs. ### Example messages Message senders are required to disclose all the types/categories of messages with samples that will be sent over the short code. In this field, you're required to provide a sample message for each content type - **User**: 030322 - **Contoso**: Reply with time (HHMM) when you would like to reschedule on 030322. - **User**: 1200-- **Contoso**: Your reservation has been confirmed for 3rd March 2022 at 12:00 pm. Txt R to reschedule. Txt HELP or STOP. Msg&Data rates may apply.+- **Contoso**: Your reservation is confirmed for 3rd March 2022 at 12:00 pm. Txt R to reschedule. Txt HELP or STOP. Msg&Data rates may apply. ## Next steps > [!div class="nextstepaction"] > [Apply for a short code](../../quickstarts/sms/apply-for-short-code.md) -The following documents may be interesting to you: +## Related articles - Familiarize yourself with the [SMS SDK](../sms/sdk-features.md) |
communication-services | Apply For Short Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/apply-for-short-code.md | +- [An active Communication Services resource](../create-communication-resource.md). ## Get a short code To begin provisioning a short code, go to your Communication Services resource on the [Azure portal](https://portal.azure.com). |
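For the prerequisite added in this row, the Communication Services resource can also be created from the CLI. A hedged sketch; it requires the Azure CLI `communication` extension, and the names and data location below are placeholders:

```bash
az extension add --name communication
az communication create \
    --name contoso-acs \
    --resource-group rg-acs \
    --location global \
    --data-location "United States"
```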
connectors | Connectors Create Api Azure Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azure-event-hubs.md | In Azure Logic Apps, an [action](../logic-apps/logic-apps-overview.md#logic-app- For all the operations and other technical information, such as properties, limits, and so on, review the [Event Hubs connector's reference page](/connectors/eventhubs/). -> [!NOTE] -> For logic apps hosted in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), -> the connector's ISE version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead. - ## Next steps * [Managed connectors for Azure Logic Apps](managed.md) |
connectors | Connectors Create Api Azureblobstorage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md | The steps to add and use an Azure Blob action differ based on whether you want t You can add network security to an Azure storage account by [restricting access with a firewall and firewall rules](../storage/common/storage-network-security.md). However, this setup creates a challenge for Azure and other Microsoft services that need access to the storage account. Local communication in the data center abstracts the internal IP addresses, so just permitting traffic through IP addresses might not be enough to successfully allow communication across the firewall. Based on which Azure Blob Storage connector you use, the following options are available: -- To access storage accounts behind firewalls using the Azure Blob Storage managed connector in Consumption and ISE-based logic apps, review the following documentation:+- To access storage accounts behind firewalls using the Azure Blob Storage managed connector in Consumption logic apps, see the following documentation: - [Access storage accounts in same region with system-managed identities](#access-blob-storage-in-same-region-with-system-managed-identities) - [Access storage accounts in other regions](#access-storage-accounts-in-other-regions) -- To access storage accounts behind firewalls using the ISE-versioned Azure Blob Storage connector that's only available in an ISE-based logic app, review [Access storage accounts through trusted virtual network](#access-storage-accounts-through-trusted-virtual-network).- - To access storage accounts behind firewalls in Standard logic apps, review the following documentation: - Azure Blob Storage *built-in* connector: [Access storage accounts through virtual network integration](#access-storage-accounts-through-virtual-network-integration) You can add network security to an Azure storage account by [restricting access If you don't use managed identity authentication, logic app workflows can't directly access storage accounts behind firewalls when both the logic app resource and storage account exist in the same region. As a workaround, put your logic app resource in a different region than your storage account. Then, give access to the [outbound IP addresses for the managed connectors in your region](/connectors/common/outbound-ip-addresses#azure-logic-apps). > [!NOTE]+> > This solution doesn't apply to the Azure Table Storage connector and Azure Queue Storage connector. > Instead, to access your Table Storage or Queue Storage, [use the built-in HTTP trigger and action](../logic-apps/logic-apps-http-endpoint.md). To add your outbound IP addresses to the storage account firewall, follow these - Your logic app and storage account exist in different regions. - You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account. + Create a private endpoint on your storage account for access. ### Access storage accounts through virtual network integration To add your outbound IP addresses to the storage account firewall, follow these - Your logic app and storage account exist in different regions. - You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account. + Create a private endpoint on your storage account for access. ### Access Blob Storage in same region with system-managed identities |
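For the firewall workaround this row describes (permitting the managed connector's outbound IP addresses), each address is added as a storage account network rule. A minimal sketch with placeholder names; `<connector-outbound-ip>` stands in for one of the published outbound IPs for your region:

```bash
az storage account network-rule add \
    --resource-group rg-storage \
    --account-name contosostorage \
    --ip-address <connector-outbound-ip>
```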
connectors | Connectors Create Api Sqlazure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md | For more information, review the [SQL Server managed connector reference](/conne * Consumption workflow - * In multitenant Azure Logic Apps, you need the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) installed on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md). -- * In an ISE, you don't need the on-premises data gateway for SQL Server Authentication and non-Windows Authentication connections, and you can use the ISE-versioned SQL Server connector. For Windows Authentication, you need the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md). The ISE-version connector doesn't support Windows Authentication, so you have to use the regular SQL Server managed connector. + In multitenant Azure Logic Apps, you need the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) installed on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md). * Standard workflow In the connection information box, complete the following steps: |-|-| | **Connection string** | - Supported only in Standard workflows with the SQL Server built-in connector. <br><br>- Requires the connection string to your SQL server and database. | | **Active Directory OAuth** | - Supported only in Standard workflows with the SQL Server built-in connector. For more information, see the following documentation: <br><br>- [Authentication for SQL Server connector](/connectors/sql/#authentication) <br>- [Enable Open Authorization with Microsoft Entra ID (Microsoft Entra ID OAuth)](../logic-apps/logic-apps-securing-a-logic-app.md#enable-oauth) <br>- [OAuth with Microsoft Entra ID](../logic-apps/logic-apps-securing-a-logic-app.md#oauth-microsoft-entra) |- | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector and ISE-versioned connector. In Standard workflows, this authentication type is available for the SQL Server built-in connector, but the option is named **Managed identity** instead. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see the following documentation: <br><br>- [Managed identity authentication for SQL Server connector](/connectors/sql/#managed-identity-authentication) <br>- [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles) | + | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector. In Standard workflows, this authentication type is available for the SQL Server built-in connector, but the option is named **Managed identity** instead. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. 
<br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see the following documentation: <br><br>- [Managed identity authentication for SQL Server connector](/connectors/sql/#managed-identity-authentication) <br>- [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles) | | **Service principal (Microsoft Entra application)** | - Supported with the SQL Server managed connector. <br><br>- Requires a Microsoft Entra application and service principal. For more information, see [Create a Microsoft Entra application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). |- | [**Microsoft Entra integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires a valid managed identity in Microsoft Entra that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Microsoft Entra integrated authentication](/azure/azure-sql/database/authentication-aad-overview) | - | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multitenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) | + | [**Microsoft Entra integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Supported with the SQL Server managed connector. <br><br>- Requires a valid managed identity in Microsoft Entra that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Microsoft Entra integrated authentication](/azure/azure-sql/database/authentication-aad-overview) | + | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector. 
<br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection when your logic app runs in multitenant Azure Logic Apps. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) | The following examples show how the connection information box might appear if you use the SQL Server *managed* connector and select **Microsoft Entra integrated** authentication: In the connection information box, complete the following steps: | Authentication | Description | |-|-|- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector, SQL Server built-in connector, and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multitenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). | - | [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Supported with the SQL Server managed connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multitenant Azure Logic Apps or an ISE. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. <br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). | + | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and SQL Server built-in connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection when your logic app runs in multitenant Azure Logic Apps. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). | + | [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Supported with the SQL Server managed connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection when your logic app runs in multitenant Azure Logic Apps. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. 
<br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). | 1. Select or provide the following values for your SQL database: |
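To make the managed identity requirements in the preceding tables concrete, here's a hedged Azure CLI sketch. It assumes a Standard logic app (a `Microsoft.Web/sites` resource) whose system-assigned identity is already enabled; all resource names are hypothetical.

```bash
# Sketch only: resource names are placeholders.
principalId=$(az resource show \
    --resource-group MyResourceGroup \
    --name my-logic-app \
    --resource-type Microsoft.Web/sites \
    --query identity.principalId --output tsv)

sqlServerId=$(az sql server show \
    --resource-group MyResourceGroup \
    --name my-sql-server \
    --query id --output tsv)

# SQL DB Contributor role access to the SQL Server resource.
az role assignment create \
    --assignee "$principalId" \
    --role "SQL DB Contributor" \
    --scope "$sqlServerId"

# Contributor access to the resource group that includes the SQL Server resource.
az role assignment create \
    --assignee "$principalId" \
    --role "Contributor" \
    --scope "$(az group show --name MyResourceGroup --query id --output tsv)"
```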
connectors | File System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/file-system.md | The File System connector has different versions, based on [logic app type and h 1. After you add a File System managed connector trigger or action to your workflow, select the data gateway resource that you previously created so you can connect to your file system. - - In an ISE, you don't need the on-premises data gateway. Instead, you can use the ISE-versioned File System connector. - - Standard logic app workflows You can use the File System built-in connector or managed connector. The File System connector has different versions, based on [logic app type and h | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |- | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. | | **Password** | Yes | <*password*> | The password for the computer where you have your file system | | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | The File System connector has different versions, based on [logic app type and h ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/file-system-connection-consumption.png) - The following example shows the connection information for the File System ISE-based trigger: -- ![Screenshot showing Consumption workflow designer and connection information for File System ISE-based connector trigger.](media/connect-file-systems/file-system-connection-ise.png) - 1. When you're done, select **Create**. 
Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger. If successful, your workflow sends an email about the new file. | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |- | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. | | **Password** | Yes | <*password*> | The password for the computer where you have your file system | | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | The example logic app workflow starts with the [Dropbox trigger](/connectors/dro | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |- | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. 
<br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. | | **Password** | Yes | <*password*> | The password for the computer where you have your file system | | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | The example logic app workflow starts with the [Dropbox trigger](/connectors/dro ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/file-system-connection-consumption.png) - The following example shows the connection information for the File System ISE-based connector action: -- ![Screenshot showing connection information for File System ISE-based connector action.](media/connect-file-systems/file-system-connection-ise.png) - 1. When you're done, select **Create**. Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action. If successful, your workflow creates a file on your file system server, based on | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |- | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. 
<br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. | | **Password** | Yes | <*password*> | The password for the computer where you have your file system | | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | |
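Because the backslash-formatted username values in the table are easy to get wrong, the following snippet restates them with hypothetical account and machine names; the single quotes keep the backslashes literal.

```bash
# Hypothetical values, shown only to illustrate the formats from the table.
ROOT_FOLDER='\\PublicShare\MyFileSystem'  # network share reachable from the gateway machine
USERNAME='CONTOSO\jsmith'                 # <domain>\<username> for a domain account
# USERNAME='GATEWAY-PC01\jsmith'          # <local-computer>\<username> for a local account
```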
connectors | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/introduction.md | In Standard workflows for single-tenant Azure Logic Apps, you can create nativel * [Create service provider-based custom built-in connectors for Standard workflows](../logic-apps/create-custom-built-in-connector-standard.md) -## ISE and connectors --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource retires, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps to avoid service disruption. Standard logic app workflows run in single-tenant Azure Logic Apps -> and provide the same capabilities plus more. For example, Standard workflows support using private endpoints -> for inbound traffic so that your workflows can communicate privately and securely with virtual networks. -> Standard workflows also support virtual network integration for outbound traffic. For more information, -> review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](/azure/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint). --If you use a dedicated [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) where workflows can directly access to resources in an Azure virtual network, you can build, deploy, and run your workflows on dedicated resources. --Custom connectors created within an ISE don't work with the on-premises data gateway. However, these connectors can directly access on-premises data sources that are connected to an Azure virtual network hosting the ISE. So, logic app workflows in an ISE most likely don't need the data gateway when communicating with those resources. If you have custom connectors that you created outside an ISE that require the on-premises data gateway, workflows in an ISE can use those connectors. --In the workflow designer, when you browse the built-in connectors or managed connectors that you want to use for workflows in an ISE, the **CORE** label appears on built-in connectors, while the **ISE** label appears on managed connectors that are designed to work with an ISE. -- :::column::: - ![Example CORE connector](./media/apis-list/example-core-connector.png) - <br><br>**CORE** - <br><br>Built-in connectors with this label run in the same ISE as your workflows. - :::column-end::: - :::column::: - ![Example ISE connector](./media/apis-list/example-ise-connector.png) - <br><br>**ISE** - <br><br>Managed connectors with this label run in the same ISE as your workflows. - <br><br>If you have an on-premises system that's connected to an Azure virtual network, an ISE lets your workflows directly access that system without using the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). Instead, you can either use that system's **ISE** connector if available, an HTTP action, or a [custom connector](#custom-connectors-and-apis). - <br><br>For on-premises systems that don't have **ISE** connectors, use the on-premises data gateway. To find available ISE connectors, review [ISE connectors](#ise-and-connectors). - :::column-end::: - :::column::: - ![Example non-ISE connector](./media/apis-list/example-multitenant-connector.png) - <br><br>No label - <br><br>All other connectors without a label, which you can continue to use, run in the global, multitenant Logic Apps service. 
- :::column-end::: - :::column::: - :::column-end::: - ## Known issues The following table includes known issues for connectors in Azure Logic Apps: |
connectors | Managed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md | For more information, review the following documentation: :::column-end::: :::row-end::: -## ISE connectors --In an integration service environment (ISE), these managed connectors also have [ISE versions](introduction.md#ise-and-connectors), which have different capabilities than their multitenant versions: --> [!NOTE] -> -> Workflows that run in an ISE and their connectors, regardless where those connectors run, follow a fixed pricing plan versus the Consumption pricing plan. For more information, review [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md) and [Azure Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/). -- :::column::: - [![AS2 ISE icon][as2-icon]][as2-doc] - <br><br>[**AS2** ISE][as2-doc] - :::column-end::: - :::column::: - [![Azure Automation ISE icon][azure-automation-icon]][azure-automation-doc] - <br><br>[**Azure Automation** ISE][azure-automation-doc] - :::column-end::: - :::column::: - [![Azure Blob Storage ISE icon][azure-blob-storage-icon]][azure-blob-storage-doc] - <br><br>[**Azure Blob Storage** ISE][azure-blob-storage-doc] - :::column-end::: - :::column::: - [![Azure Cosmos DB ISE icon][azure-cosmos-db-icon]][azure-cosmos-db-doc] - <br><br>[**Azure Cosmos DB** ISE][azure-cosmos-db-doc] - :::column-end::: - :::column::: - [![Azure Event Hubs ISE icon][azure-event-hubs-icon]][azure-event-hubs-doc] - <br><br>[**Azure Event Hubs** ISE][azure-event-hubs-doc] - :::column-end::: - :::column::: - [![Azure Event Grid ISE icon][azure-event-grid-icon]][azure-event-grid-doc] - <br><br>[**Azure Event Grid** ISE][azure-event-grid-doc] - :::column-end::: - :::column::: - [![Azure Files ISE icon][azure-file-storage-icon]][azure-file-storage-doc] - <br><br>[**Azure Files** ISE][azure-file-storage-doc] - :::column-end::: - :::column::: - [![Azure Key Vault ISE icon][azure-key-vault-icon]][azure-key-vault-doc] - <br><br>[**Azure Key Vault** ISE][azure-key-vault-doc] - :::column-end::: - :::column::: - [![Azure Monitor Logs ISE icon][azure-monitor-logs-icon]][azure-monitor-logs-doc] - <br><br>[**Azure Monitor Logs** ISE][azure-monitor-logs-doc] - :::column-end::: - :::column::: - [![Azure Service Bus ISE icon][azure-service-bus-icon]][azure-service-bus-doc] - <br><br>[**Azure Service Bus** ISE][azure-service-bus-doc] - :::column-end::: - :::column::: - [![Azure Synapse Analytics ISE icon][azure-sql-data-warehouse-icon]][azure-sql-data-warehouse-doc] - <br><br>[**Azure Synapse Analytics** ISE][azure-sql-data-warehouse-doc] - :::column-end::: - :::column::: - [![Azure Table Storage ISE icon][azure-table-storage-icon]][azure-table-storage-doc] - <br><br>[**Azure Table Storage** ISE][azure-table-storage-doc] - :::column-end::: - :::column::: - [![Azure Queues ISE icon][azure-queues-icon]][azure-queues-doc] - <br><br>[**Azure Queues** ISE][azure-queues-doc] - :::column-end::: - :::column::: - [![EDIFACT ISE icon][edifact-icon]][edifact-doc] - <br><br>[**EDIFACT** ISE][edifact-doc] - :::column-end::: - :::column::: - [![File System ISE icon][file-system-icon]][file-system-doc] - <br><br>[**File System** ISE][file-system-doc] - :::column-end::: - :::column::: - [![FTP ISE icon][ftp-icon]][ftp-doc] - <br><br>[**FTP** ISE][ftp-doc] - :::column-end::: - :::column::: - [![IBM 3270 ISE icon][ibm-3270-icon]][ibm-3270-doc] - <br><br>[**IBM 3270** ISE][ibm-3270-doc] - :::column-end::: - :::column::: - [![IBM DB2 ISE 
icon][ibm-db2-icon]][ibm-db2-doc] - <br><br>[**IBM DB2** ISE][ibm-db2-doc] - :::column-end::: - :::column::: - [![IBM MQ ISE icon][ibm-mq-icon]][ibm-mq-doc] - <br><br>[**IBM MQ** ISE][ibm-mq-doc] - :::column-end::: - :::column::: - [![SAP ISE icon][sap-icon]][sap-connector-doc] - <br><br>[**SAP** ISE][sap-connector-doc] - :::column-end::: - :::column::: - [![SFTP-SSH ISE icon][sftp-ssh-icon]][sftp-ssh-doc] - <br><br>[**SFTP-SSH** ISE][sftp-ssh-doc] - :::column-end::: - :::column::: - [![SMTP ISE icon][smtp-icon]][smtp-doc] - <br><br>[**SMTP** ISE][smtp-doc] - :::column-end::: - :::column::: - [![SQL Server ISE icon][sql-server-icon]][sql-server-doc] - <br><br>[**SQL Server** ISE][sql-server-doc] - :::column-end::: - :::column::: - [![X12 ISE icon][x12-icon]][x12-doc] - <br><br>[**X12** ISE][x12-doc] - :::column-end::: --For more information, see these topics: --* [Access to Azure virtual network resources from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) -* [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md) ## Next steps |
container-registry | Container Registry Tutorial Sign Build Push | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md | In this tutorial: > * Build and push a container image with [ACR Tasks](container-registry-tasks-overview.md) > * Sign a container image with Notation CLI and AKV plugin > * Validate a container image against the signature with Notation CLI+> * Use timestamping ## Prerequisites To verify the container image, add the root certificate that signs the leaf cert Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message. +## Timestamping ++Since the Notation v1.2.0 release, Notation supports [RFC 3161](https://www.rfc-editor.org/rfc/rfc3161)-compliant timestamping. This enhancement extends the trust of signatures created within the certificates' validity period, enabling successful signature verification even after the certificates have expired. Timestamping reduces costs by eliminating the need to periodically re-sign images due to certificate expiry, which is especially important when using short-lived certificates. For detailed instructions on how to sign and verify using timestamping, see the [Notary Project timestamping guide](https://v1-2.notaryproject.dev/docs/user-guides/how-to/timestamping/). + ## Next steps Notation also provides CI/CD solutions on Azure Pipelines and GitHub Actions workflows: |
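As a sketch of what signing with a timestamp might look like, assuming Notation v1.2.0 or later: the image reference, TSA endpoint, and TSA root certificate path are placeholders, and any AKV plugin flags used earlier in the tutorial are omitted for brevity.

```bash
# Sketch only: the image digest, TSA URL, and certificate path are placeholders.
IMAGE='myregistry.azurecr.io/net-monitor@sha256:<digest>'

# Request an RFC 3161 countersignature from a timestamping authority while signing.
notation sign "$IMAGE" \
    --timestamp-url 'https://timestamp.example.com' \
    --timestamp-root-cert ./tsa-root-ca.crt
```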
container-registry | Container Registry Tutorial Sign Trusted Ca | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-trusted-ca.md | In this article: > * Build and push a container image with ACR task > * Sign a container image with Notation CLI and AKV plugin > * Verify a container image signature with Notation CLI+> * Use timestamping ## Prerequisites To learn more about assigning policy to a principal, see [Assign Access Policy]( If the certificate is revoked, it invalidates the signature. The most common reason for revoking a certificate is when the certificate's private key has been compromised. To resolve this issue, you should obtain a new certificate from a trusted CA vendor and sign container images again. +## Timestamping ++Since the Notation v1.2.0 release, Notation supports [RFC 3161](https://www.rfc-editor.org/rfc/rfc3161)-compliant timestamping. This enhancement extends the trust of signatures created within the certificates' validity period, enabling successful signature verification even after the certificates have expired. Timestamping reduces costs by eliminating the need to periodically re-sign images due to certificate expiry, which is especially important when using short-lived certificates. For detailed instructions on how to sign and verify using timestamping, see the [Notary Project timestamping guide](https://v1-2.notaryproject.dev/docs/user-guides/how-to/timestamping/). + ## Next steps Notation also provides CI/CD solutions on Azure Pipelines and GitHub Actions workflows: |
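On the verification side, here's a hedged sketch of a trust policy that opts into timestamp checking. The store names, scopes, and identities are placeholders, and the `verifyTimestamp` setting is assumed from the Notation v1.2 trust policy; confirm the exact fields against the linked Notary Project guide.

```bash
# Sketch only: placeholders throughout; verify the schema against the linked guide.
cat > trustpolicy.json <<'EOF'
{
  "version": "1.0",
  "trustPolicies": [
    {
      "name": "acr-images",
      "registryScopes": [ "*" ],
      "signatureVerification": {
        "level": "strict",
        "verifyTimestamp": "afterCertExpiry"
      },
      "trustStores": [ "ca:example-ca", "tsa:example-tsa" ],
      "trustedIdentities": [ "*" ]
    }
  ]
}
EOF

notation policy import trustpolicy.json
notation verify 'myregistry.azurecr.io/net-monitor@sha256:<digest>'
```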
databox-gateway | Data Box Gateway 1905 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-1905-release-notes.md | -The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your Data Box Edge/Data Box Gateway, carefully review the information contained in the release notes. +The release notes are continuously updated, and critical issues requiring a workaround are added as they're discovered. Before you deploy your Data Box Edge/Data Box Gateway, carefully review the information contained in the release notes. This release corresponds to the software versions: This release corresponds to the software versions: ## What's new -- **Field Programmable Gate Array (FPGA) logging improvements** - In this release, we have made logging and alert enhancements related to FPGA. This is a required update for Data Box Edge if you are using the Edge compute feature with the FPGA. For more information, see how to [transform data with Edge compute on your Data Box Edge](../databox-online/azure-stack-edge-deploy-configure-compute-advanced.md).+- **Field Programmable Gate Array (FPGA) logging improvements** - This release includes logging and alert enhancements related to field-programmable gate arrays (FPGAs). This update is required for Data Box Edge if you're using the Edge compute feature with the FPGA. For more information, see how to [transform data with Edge compute on your Data Box Edge](../databox-online/azure-stack-edge-deploy-configure-compute-advanced.md). ## Known issues in GA release -No new issues are release noted for this release. All the release noted issues have carried over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). +No new issues are release noted for this release. All the release noted issues are carried over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). ## Next steps |
databox-gateway | Data Box Gateway 1906 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-1906-release-notes.md | -The following release notes identify the critical open issues and the resolved issues for the 1906 release for Azure Data Box Edge and Azure Data Box Gateway. +The following release notes identify the critical open issues and the resolved issues for the 1906 release for Azure Data Box Edge and Azure Data Box Gateway. -The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your Data Box Edge/Data Box Gateway, carefully review the information contained in the release notes. +The release notes are continuously updated, and critical issues requiring a workaround are added as they're discovered. Before you deploy your Data Box Edge/Data Box Gateway, carefully review the information contained in the release notes. This release corresponds to the software versions: This release corresponds to the software versions: ## What's new -- **Bug fix in the recovery key management workflow** - In the earlier release, there was a bug owing to which the recovery key was not getting applied. This bug is fixed in this release. We strongly recommend that you apply this update as the recovery key allows you to recover the data on the device, in the event the device doesn't boot up. For more information, see how to [save the recovery key when deploying Data Box Edge or Data Box Gateway](../databox-online/azure-stack-edge-deploy-connect-setup-activate.md#set-up-and-activate-the-physical-device).-- **Field Programmable Gate Array (FPGA) logging improvements** - Starting 1905 release, logging and alert enhancements related to FPGA were made. This continues to be a required update for Data Box Edge if you are using the Edge compute feature with the FPGA. For more information, see how to [transform data with Edge compute on your Data Box Edge](../databox-online/azure-stack-edge-deploy-configure-compute-advanced.md).+- **Bug fix in the recovery key management workflow** - In the earlier release, a bug prevented the recovery key from being applied. This bug is fixed in this release. We strongly recommend that you apply this update because the recovery key allows you to recover the data on the device if the device doesn't boot. For more information, see how to [save the recovery key when deploying Data Box Edge or Data Box Gateway](../databox-online/azure-stack-edge-deploy-connect-setup-activate.md#set-up-and-activate-the-physical-device). +- **Field Programmable Gate Array (FPGA) logging improvements** - Starting with the 1905 release, logging and alert enhancements related to field-programmable gate arrays (FPGAs) were made. This update continues to be required for Data Box Edge if you're using the Edge compute feature with the FPGA. For more information, see how to [transform data with Edge compute on your Data Box Edge](../databox-online/azure-stack-edge-deploy-configure-compute-advanced.md). ## Known issues in GA release -No new issues are release noted for this release. All the release noted issues have carried over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). -+No new issues are release noted for this release. All the release noted issues carry over from the previous releases. 
To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). ## Next steps |
databox-gateway | Data Box Gateway 1911 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-1911-release-notes.md | |
databox-gateway | Data Box Gateway 2007 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2007-release-notes.md | |
databox-gateway | Data Box Gateway 2101 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2101-release-notes.md | -The release notes are continuously updated. As critical issues that require a workaround are discovered, they are added. Before you deploy your Azure Data Box Gateway, carefully review the information in the release notes. +The release notes are continuously updated. Critical issues that require a workaround are added as they're discovered. Before you deploy your Azure Data Box Gateway, carefully review the information in the release notes. This release corresponds to the software versions: This release corresponds to the software versions: This release contains the following bug fix: -- **Upload issue** - This release fixes an upload problem where upload restarts because of failures can slow the rate of upload completion. This problem can occur when uploading a dataset that primarily consists of files that are large in size relative to available bandwidth, particularly, but not limited to, when bandwidth throttling is active. This change ensures sufficient opportunity for upload completion before restarting upload for a given file.+- **Upload issue** - This release fixes an upload problem where upload restarts caused by failures can slow the rate of upload completion. This problem can occur when uploading datasets consisting primarily of files that are large in size relative to available bandwidth, particularly, but not limited to, when bandwidth throttling is active. This change ensures sufficient opportunity for upload completion before restarting upload for a given file. This release also contains the following updates: -- All cumulative Windows updates and .NET framework updates released through October 2020.+- All cumulative Windows and .NET Framework updates released through October 2020. - The static IP address for Azure Data Box Gateway is retained across software updates. ## Known issues in this release -No new issues are release noted for this release. All the release noted issues have carried over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). +No new issues are release noted for this release. All the release noted issues are carried over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). ## Next steps |
databox-gateway | Data Box Gateway 2105 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2105-release-notes.md | -The release notes are continuously updated. As critical issues that require a workaround are discovered, they are added. Before you deploy your Azure Data Box Gateway, carefully review the information in the release notes. +The release notes are continuously updated. Critical issues that require a workaround are added as they're discovered. Before you deploy your Azure Data Box Gateway, carefully review the information in the release notes. This release corresponds to the software version: Update 2105 can be applied to all prior releases of Data Box Gateway. This release contains the following bug fix: -- **Buffer overrun results in abrupt reboot of gateway** - This release fixes a bug that can cause a buffer overrun resulting in access of invalid memory, leading to an abrupt, unexpected reboot of the gateway device. The error can occur when a client accesses the last several bytes of a file whose data needs to be read back by the appliance from Azure, and the file size isn't a multiple of 4096 bytes.+- **Buffer overrun results in abrupt reboot of gateway** - This release fixes a bug that can cause a buffer overrun resulting in access of invalid memory, leading to an abrupt, unexpected reboot of the gateway device. The error can occur when a client accesses the last several bytes of a file whose data needs to be read back by the appliance from Azure, and the file size isn't a multiple of 4,096 bytes. This release also contains the following updates: This release also contains the following updates: ## Known issues in this release -No new issues are release noted for this release. All the release noted issues have carried over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). +No new issues are release noted for this release. All release noted issues carry over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). ## Next steps |
databox-gateway | Data Box Gateway 2301 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2301-release-notes.md | |
databox-gateway | Data Box Gateway Apply Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-apply-updates.md | |
databox-gateway | Data Box Gateway Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-limits.md | |
databox-gateway | Data Box Gateway Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-release-notes.md | -The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your Data Box Edge/Data Box Gateway, carefully review the information contained in the release notes. +The release notes are continuously updated, and critical issues requiring a workaround are added as they're discovered. Before you deploy your Data Box Edge/Data Box Gateway, carefully review the information contained in the release notes. The GA release corresponds to the software versions: The GA release corresponds to the software versions: ## What's new -- **New virtual disk images** - New VHDX and VMDK are now available in the Azure portal. Download these images to provision, configure, and deploy new Data Box Gateway GA devices. The Data Box Gateway devices created in the earlier preview releases cannot be updated to this version. For more information, go to [Prepare to deploy Azure Data Box Gateway](data-box-gateway-deploy-prep.md).+- **New virtual disk images** - New VHDX and VMDK are now available in the Azure portal. Download these images to provision, configure, and deploy new Data Box Gateway GA devices. The Data Box Gateway devices created in the earlier preview releases can't be updated to this version. For more information, go to [Prepare to deploy Azure Data Box Gateway](data-box-gateway-deploy-prep.md). - **NFS support** - NFS support is currently in preview and available for v3.0 and v4.1 clients that access the Data Box Edge and Data Box Gateway devices. - **Storage resiliency** - Your Data Box Edge device can withstand the failure of one data disk with the Storage resiliency feature. This feature is currently in preview. You can enable storage resiliency by selecting the **Resilient** option in the **Storage settings** in the local web UI. The following table provides a summary of known issues for the Data Box Gateway | No. | Feature | Issue | Workaround/comments | | | | | |-| **1.** |File types | The following file types are not supported: character files, block files, sockets, pipes, symbolic links. |Copying these files results in 0-length files getting created on the NFS share. These files remain in an error state and are also reported in *error.xml*. <br> Symbolic links to directories result in directories never getting marked offline. As a result, you may not see the gray cross on the directories that indicates that the directories are offline and all the associated content was completely uploaded to Azure. | -| **2.** |Deletion | Due to a bug in this release, if an NFS share is deleted, then the share may not be deleted. The share status will display *Deleting*. |This occurs only when the share is using an unsupported file name. | -| **3.** |Copy | Data copy fails with Error: The requested operation could not be completed due to a file system limitation. |The Alternate Data Stream (ADS) associated with file size greater than 128 KB is not supported. | +| **1.** |File types | The following file types aren't supported: character files, block files, sockets, pipes, symbolic links. |Copying these files results in 0-length files getting created on the NFS share. These files remain in an error state and are also reported in *error.xml*. <br> Symbolic links to directories result in directories never getting marked offline. 
As a result, you may not see the gray cross on the directories that indicates that the directories are offline and all the associated content was uploaded to Azure. | +| **2.** |Deletion | Due to a bug in this release, if you delete an NFS share, the share might not be deleted. The share status displays *Deleting*. |This occurs only when the share is using an unsupported file name. | +| **3.** |Copy | Data copy fails with Error: The requested operation could not be completed due to a file system limitation. |An alternate data stream (ADS) with a size greater than 128 KB isn't supported. | ## Next steps |
databox-gateway | Data Box Gateway Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-security.md | |
databox-gateway | Data Box Gateway System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-system-requirements.md | |
databox-gateway | Data Box Gateway Use Cases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-use-cases.md | |
databox | Data Box Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-audit-logs.md | |
databox | Data Box Bring Your Own Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-bring-your-own-certificates.md | |
databox | Data Box Disk Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-limits.md | |
databox | Data Box Disk System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-system-requirements.md | |
databox | Data Box Export Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-export-logs.md | |
databox | Data Box Hardware Additional Terms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-hardware-additional-terms.md | This article documents additional terms for Azure Data Box hardware. ## Availability of Data Box Device -The Data Box Device may not be offered in all regions or jurisdictions, and even where it is offered, it may be subject to availability. Microsoft is not responsible for delays related to the Service outside of its direct control. Microsoft reserves the right to refuse to offer the Service and corresponding Data Box Device to anyone in its sole discretion and judgment. +The Data Box Device might not be offered in all regions or jurisdictions, and even where offered, might be subject to availability. Microsoft isn't responsible for delays related to the Service outside of its direct control. Microsoft reserves the right to refuse to offer the Service and corresponding Data Box Device to anyone in its sole discretion and judgment. ## Possession and Return of the Data Box Device -As part of the Service, Microsoft allows Customer to retain the Data Box Device for limited periods of time which may vary based on the Data Box Device type. If Customer retains the Data Box Device beyond the specified time period, Microsoft may charge Customer additional daily fees as described at https://go.microsoft.com/fwlink/?linkid=2052173. +As part of the Service, Microsoft allows Customer to retain the Data Box Device for limited periods of time which might vary based on the Data Box Device type. If Customer retains the Data Box Device beyond the specified time period, Microsoft might charge Customer additional daily fees as described at https://go.microsoft.com/fwlink/?linkid=2052173. ## Shipment and Title; Fees ### Title and Risk of Loss -All right, title and interest in each Data Box Device is and shall remain the property of Microsoft, and except as described in the Additional Terms, no rights are granted to any Data Box Device (including under any patent, copyright, trade secret, trademark or other proprietary rights). Customer will compensate Microsoft for any loss, material damage or destruction to or of any Data Box Device while it is at any of Customer’s locations as described in Shipment and Title; Fees, Table 1. Customer is responsible for inspecting the Data Box Device upon receipt from the carrier and for promptly reporting any damage to Microsoft Support at databoxsupport@microsoft.com. Customer is responsible for the entire risk of loss of, or any damage to, the Data Box Device once it has been delivered by the carrier to Customer’s designated address until the Microsoft-designated carrier accepts the Data Box Device for delivery back to the Designated Azure Data Center. +All right, title, and interest in each Data Box Device is and shall remain the property of Microsoft, and except as described in the Additional Terms, no rights are granted to any Data Box Device (including under any patent, copyright, trade secret, trademark, or other proprietary rights). Customer compensates Microsoft for any loss, material damage, or destruction to or of any Data Box Device while it is at any of Customer’s locations as described in Shipment and Title; Fees, Table 1. Customer is responsible for inspecting the Data Box Device upon receipt from the carrier and for promptly reporting any damage to Microsoft Support at databoxsupport@microsoft.com. 
Customer is responsible for the entire risk of loss of, or any damage to, the Data Box Device once delivered by the carrier to Customer’s designated address until the Microsoft-designated carrier accepts the Data Box Device for delivery back to the Designated Azure Data Center. ### Fees -Microsoft may charge Customer specified fees in connection with its use of the Data Box Device as part of the Service, as described at https://go.microsoft.com/fwlink/?linkid=2052173. For clarity, Azure Storage and Azure IoT Hub are separate Azure Services, and if used (even in connection with its use of the Service), separate Azure metered fees will apply. Additional Azure services Customer uses after completing a transfer of data using the Azure Data Box Service are also subject to separate usage fees. For Data Box Devices, Microsoft may charge Customer a lost device fee, as provided in Table 1 below, if the Data Box Device is lost or materially damaged while it is in Customer’s care. Microsoft reserves the right to change the fees charged for Data Box Device types, including charging different amounts for different device form factors. +Microsoft might charge Customer specified fees in connection with its use of the Data Box Device as part of the Service, as described at https://go.microsoft.com/fwlink/?linkid=2052173. For clarity, Azure Storage and Azure IoT Hub are separate Azure Services, and if used (even in connection with its use of the Service), separate Azure metered fees apply. Additional Azure services Customer uses after completing a transfer of data using the Azure Data Box Service are also subject to separate usage fees. For Data Box Devices, Microsoft might charge Customer a lost device fee, as provided in Table 1 below, if the Data Box Device is lost or materially damaged while it is in Customer’s care. Microsoft reserves the right to change the fees charged for Data Box Device types, including charging different amounts for different device form factors. Table 1: Table 1: ### Shipment and Return of Data Box Device -Microsoft will designate a carrier for shipping and delivery of Data Box Devices that are transported or delivered between Customer and a Designated Azure Data Center or a Microsoft entity. Customer will be responsible for costs of shipping a Data Box Device from Microsoft or a Designated Azure Data Center to Customer and return shipping of the Data Box Device, including any metered amounts for carrier charges, any taxes, or applicable customs fees. When returning a Data Box Device to Microsoft, Customer will package and ship the Data Box Device in accordance with Microsoft’s instructions, including using a carrier designated by Microsoft and the packaging materials provided by Microsoft. +Microsoft designates a carrier for shipping and delivery of Data Box Devices that are transported or delivered between Customer and a Designated Azure Data Center or a Microsoft entity. Customer is responsible for costs of shipping a Data Box Device from Microsoft or a Designated Azure Data Center to Customer and return shipping of the Data Box Device, including any metered amounts for carrier charges, any taxes, or applicable customs fees. When returning a Data Box Device to Microsoft, Customer will package and ship the Data Box Device in accordance with Microsoft’s instructions, including using a carrier designated by Microsoft and the packaging materials provided by Microsoft. 
### Transit Risks -Although data on a Data Box Device is encrypted, Customer acknowledges that there are inherent risks in shipping data on and in connection with the Data Box Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to a Data Box Device or any data stored on one, including during transit. +Although data on a Data Box Device is encrypted, Customer acknowledges that there are inherent risks in shipping data on and in connection with the Data Box Device, and that Microsoft has no liability to Customer for any damage, theft, or loss occurring to a Data Box Device or any data stored on one, including during transit. ### Self-Managed Shipment -Alternatively, Customer may elect to use Customer’s designated carrier or Customer itself to ship and return the Data Box Device by selecting this option in the Service portal. Once selected, (i) Microsoft will inform Customer about Data Box Device availability; (ii) Microsoft will prepare the Data Box Device for pick-up by Customer’s designated carrier or Customer itself; and (iii) Customer will coordinate with Microsoft and Designated Azure Data Center personnel for pick-up and return of the Data Box Device by Customer’s designated carrier or Customer directly. Customer’s election for self-managed shipment is subject to the following: (i) Customer abides by all other applicable terms and conditions related to the Service and Data Box Device, including the Product Terms and the Azure Data Box Hardware Terms; (ii) Customer is responsible for the entire risk of loss of, or any damage to, the Data Box Device (as described in the “Shipment and Title; Fees” section, under subsection (a) “Title and Risk of Loss”) from the time that Microsoft makes the Data Box Device available for pick-up by Customer’s designated carrier or Customer, to the time Microsoft has accepted the Data Box Device from Customer’s designated carrier or Customer at the Designated Azure Data Center; (iii) Customer is fully responsible for the costs of shipping a Data Box Device from Microsoft or a Designated Azure Data Center to Customer and return shipping of the same, including carrier charges, any taxes, or applicable customs fees; (iv) When returning a Data Box Device to Microsoft or a Designated Azure Data Center, Customer will package and ship the Data Box Device in accordance with Microsoft’s instructions and any packaging materials provided by Microsoft; (v) Customer will be charged applicable fees (as described in the “Shipment and Title; Fees” section, under subsection (b) “Fees”) which commence from the time the Data Box Device is ready for pick-up at the agreed upon time and location, and will cease once the Data Box Device has been delivered to Microsoft or the Designated Azure Data Center; and (vi) Customer acknowledges that there are inherent risks in shipping data on and in connection with the Data Box Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to a Data Box Device or any data stored on one, including during transit when shipped by Customer’s designated carrier. +Alternatively, Customer might elect to use Customer’s designated carrier or Customer itself to ship and return the Data Box Device by selecting this option in the Service portal. 
Once selected, (i) Microsoft informs Customer about Data Box Device availability; (ii) Microsoft prepares the Data Box Device for pick-up by Customer's designated carrier or Customer itself; and (iii) Customer coordinates with Microsoft and Designated Azure Data Center personnel for pick-up and return of the Data Box Device by Customer's designated carrier or Customer directly. Customer's election for self-managed shipment is subject to the following: (i) Customer abides by all other applicable terms and conditions related to the Service and Data Box Device, including the Product Terms and the Azure Data Box Hardware Terms; (ii) Customer is responsible for the entire risk of loss of, or any damage to, the Data Box Device (as described in the "Shipment and Title; Fees" section, under subsection (a) "Title and Risk of Loss") from the time that Microsoft makes the Data Box Device available for pick-up by Customer's designated carrier or Customer, to the time Microsoft has accepted the Data Box Device from Customer's designated carrier or Customer at the Designated Azure Data Center; (iii) Customer is fully responsible for the costs of shipping a Data Box Device from Microsoft or a Designated Azure Data Center to Customer and return shipping of the same, including carrier charges, any taxes, or applicable customs fees; (iv) When returning a Data Box Device to Microsoft or a Designated Azure Data Center, Customer will package and ship the Data Box Device in accordance with Microsoft's instructions and any packaging materials provided by Microsoft; (v) Customer will be charged applicable fees (as described in the "Shipment and Title; Fees" section, under subsection (b) "Fees") which commence from the time the Data Box Device is ready for pick-up at the agreed upon time and location, and will cease once the Data Box Device has been delivered to Microsoft or the Designated Azure Data Center; and (vi) Customer acknowledges that there are inherent risks in shipping data on and in connection with the Data Box Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to a Data Box Device or any data stored on one, including during transit when shipped by Customer's designated carrier. ## Responsibilities if Customer Moves a Data Box Device between Locations While Customer is in possession of a Data Box Device, Customer may, at its sole risk and expense, transport the Data Box Device to its domestic locations, and international locations as permitted by Microsoft in writing, for use to upload its data in accordance with this section and the requirements of the Additional Terms. -If Customer wishes to move a Data Box Device to another country/region, then Customer must be the exporter of record from the country/region of export and importer of record into the country/region where the Data Box Device is being imported. Customer is responsible for obtaining, at its own risk and expense, any export license, import license and other official authorization for the exportation and importation of the Data Box Device and Customer's data to any such different Customer location. Customer shall also be responsible for customs clearance at any such different Customer location, and will bear all duties, taxes, fines, penalties (if applicable) and all charges payable for exporting and importing the Data Box Device, as well as any and all costs and risks of carrying out customs formalities in a timely manner.
Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Data Box Device beyond the country/region border in which Customer receives the Data Box Device. Additionally, if Customer transports the Data Box Device to a different country/region, prior to shipping the Data Box Device back to the original point of origin, whether a specified Microsoft entity or a Designated Azure Data Center, Customer agrees to return the Data Box Device to the country/region location where Customer initially received the Data Box Device. If requested, Microsoft may provide Microsoft's estimated value of the Data Box Device as supplied by Microsoft to Customer and share available product certifications for the Data Box Device. +If Customer wishes to move a Data Box Device to another country/region, then Customer must be the exporter of record from the country/region of export and importer of record into the country/region where the Data Box Device is being imported. Customer is responsible for obtaining, at its own risk and expense, any export license, import license and other official authorization for the exportation and importation of the Data Box Device and Customer's data to any such different Customer location. Customer shall also be responsible for customs clearance at any such different Customer location, and will bear all duties, taxes, fines, penalties (if applicable) and all charges payable for exporting and importing the Data Box Device, as well as any and all costs and risks of carrying out customs formalities in a timely manner. Customer agrees to comply with and be responsible for all applicable import, export, and general trade laws and regulations should Customer decide to transport the Data Box Device beyond the country/region border in which Customer receives the Data Box Device. Additionally, if Customer transports the Data Box Device to a different country/region, prior to shipping the Data Box Device back to the original point of origin, whether a specified Microsoft entity or a Designated Azure Data Center, Customer agrees to return the Data Box Device to the country/region location where Customer initially received the Data Box Device. If requested, Microsoft might provide Microsoft's estimated value of the Data Box Device as supplied by Microsoft to Customer and share available product certifications for the Data Box Device. ## Next steps |
databox | Data Box Heavy Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-limits.md | |
databox | Data Box Heavy Safety | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-safety.md | This article contains the safety information for your Azure Data Box Heavy. Read all the safety information in this article before you use your Azure Data Box Heavy. Failure to follow instructions could result in fire, electric shock, or other injuries, or damage to your property. ## Safety icon conventions-Here are the icons that you will find when you review the safety precautions to be observed when setting up and running your Data Box. +Here are the icons that you find when you review the safety precautions to be observed when setting up and running your Data Box. | Icon | Description | |: |: |-| ![Danger icon](./media/data-box-heavy-safety/warning-icon.png) **DANGER!** |Indicates a hazardous situation that, if not avoided, will result in death or serious injury. This signal word is to be limited to the most extreme situations. | -| ![Warning icon](./media/data-box-heavy-safety/warning-icon.png) **WARNING!** |Indicates a hazardous situation that, if not avoided, could result in death or serious injury. | -| ![Warning icon](./media/data-box-heavy-safety/warning-icon.png) **CAUTION!** |Indicates a hazardous situation that, if not avoided, could result in minor or moderate injury. | +| ![Danger icon](./media/data-box-heavy-safety/warning-icon.png) **DANGER!** |Indicates a hazardous situation that, if not avoided, might result in death or serious injury. This signal word is to be limited to the most extreme situations. | +| ![Warning icon](./media/data-box-heavy-safety/warning-icon.png) **WARNING!** |Indicates a hazardous situation that, if not avoided, might result in death or serious injury. | +| ![Warning icon](./media/data-box-heavy-safety/warning-icon.png) **CAUTION!** |Indicates a hazardous situation that, if not avoided, might result in minor or moderate injury. | | ![Notice icon](./media/data-box-heavy-safety/notice-icon.png) **NOTICE:** |Indicates information considered important, but not hazard-related. | | ![Electrical shock icon](./media/data-box-heavy-safety/electrical-shock-hazard-icon.png) **Electrical shock hazard** |High voltage. | | ![Heavy weight icon](./media/data-box-heavy-safety/heavy-weight-hazard-icon.png) **Heavy weight** | |-| ![No user serviceable parts icon](./media/data-box-heavy-safety/no-user-serviceable-parts-icon.png) **No user serviceable parts** |Do not access unless properly trained. | +| ![No user serviceable parts icon](./media/data-box-heavy-safety/no-user-serviceable-parts-icon.png) **No user serviceable parts** |Don't access unless properly trained. | | ![Read safety notice icon](./media/data-box-heavy-safety/read-safety-and-health-information-icon.png) **Read all instructions first** | | | ![Tip hazard icon](./media/data-box-heavy-safety/tip-hazard-icon.png) **Tip hazard** | | | ![Overload tip hazard icon](./media/data-box-heavy-safety/overload-tip-hazard-icon.png) **Overload tip hazard** | | ![Warning Icon](./media/data-box-heavy-safety/warning-icon.png) ![Tip Hazard Icon](./media/data-box-heavy-safety/tip-hazard-icon.png) **Tip hazard** -* Place the equipment on a flat, hard and stable surface to avoid a potential tip or crushing hazard. +* Place the equipment on a flat, hard, and stable surface to avoid a potential tip or crushing hazard. * Verify the casters are locked before you inspect, turn on, and operate the equipment.
![Warning icon](./media/data-box-heavy-safety/warning-icon.png) ![Electrical shock icon](./media/data-box-heavy-safety/electrical-shock-hazard-icon.png)![No user serviceable parts icon](./media/data-box-heavy-safety/no-user-serviceable-parts-icon.png) **CAUTION!** -* Inspect the *as-received* device for damages. If the device enclosure is damaged, contact [Microsoft Support](data-box-disk-contact-microsoft-support.md) to obtain a replacement. Do not attempt to operate the device. -* The device is equipped with tamper-proof screws. If you suspect the device is malfunctioning, [Microsoft Support](data-box-disk-contact-microsoft-support.md) to obtain a replacement. Do not attempt to service the device. -* The device contains no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Do not open. Return the device to Microsoft for servicing. +* Inspect the *as-received* device for damages. If the device enclosure is damaged, contact [Microsoft Support](data-box-disk-contact-microsoft-support.md) to obtain a replacement. Don't attempt to operate the device. +* The device is equipped with tamper-proof screws. If you suspect the device is malfunctioning, contact [Microsoft Support](data-box-disk-contact-microsoft-support.md) to obtain a replacement. Don't attempt to service the device. +* The device contains no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Don't open. Return the device to Microsoft for servicing. ![Warning icon](./media/data-box-heavy-safety/warning-icon.png) ![Heavy weight icon](./media/data-box-heavy-safety/heavy-weight-hazard-icon.png) **WARNING!** -* A fully configured enclosure can weigh up to 326 kg (719 lbs); do not try to lift it by yourself. -* Do not attempt to lift the equipment without proper mechanical aid. Be aware that any attempts to lift this weight can cause severe injuries. +* A fully configured enclosure can weigh up to 326 kg (719 lbs); don't try to lift it by yourself. +* Don't attempt to lift the equipment without proper mechanical aid. Be aware that any attempts to lift this weight can cause severe injuries. * Conform to local occupational health and safety requirements when moving and lifting this equipment. * Use mechanical assistance or other suitable assistance when moving and lifting equipment. ![Warning icon](./media/data-box-heavy-safety/warning-icon.png) ![Overload tip hazard icon](./media/data-box-heavy-safety/overload-tip-hazard-icon.png) ![Tip hazard icon](./media/data-box-heavy-safety/tip-hazard-icon.png)![Heavy weight icon](./media/data-box-heavy-safety/heavy-weight-hazard-icon.png) **WARNING!**-* Data Box Heavy is not to be used as a table or workspace. Adding any type of load can create a potential hazard which could lead to injury or property damage. -* Rack-mounted equipment is not to be used as shelves or work spaces. Do not place the Data Box Heavy on top of rack-mounted equipment. Adding any type of load to an extended rack-mounted unit can create a potential tip hazard that could lead to injury, death, or product damage. +* Data Box Heavy isn't to be used as a table or workspace. Adding any type of load can create a potential hazard that could lead to injury or property damage. +* Rack-mounted equipment isn't to be used as shelves or work spaces. Don't place the Data Box Heavy on top of rack-mounted equipment. Adding any type of load to an extended rack-mounted unit can create a potential tip hazard that could lead to injury, death, or product damage.
![Warning icon](./media/data-box-heavy-safety/warning-icon.png) **WARNING!** - Away from sources of vibration or physical shock. - Isolated from strong electromagnetic fields produced by electrical devices. - Provided with properly grounded wall outlets.- - Provided with sufficient space to access the power supply cord(s), because they serve as the product's main power disconnect. + - Provided with sufficient space to access the power supply cords, because they serve as the product's main power disconnect. * Set up the device in a work area allowing for adequate air circulation around the device. * Install the device in a temperature-controlled indoor area free of conductive contaminants and allow for adequate air circulation around the device. ![Warning icon](./media/data-box-heavy-safety/warning-icon.png) ![Electrical shock icon](./media/data-box-heavy-safety/electrical-shock-hazard-icon.png) **WARNING!** -* Provide a safe electrical earth connection to the power supply cord. The AC cord has a three-wire grounding plug (a plug that has a grounding pin). This plug fits only a grounded AC outlet. Do not defeat the purpose of the grounding pin. +* Provide a safe electrical earth connection to the power supply cord. The AC cord has a three-wire grounding plug (a plug that has a grounding pin). This plug fits only a grounded AC outlet. Don't defeat the purpose of the grounding pin. * Given that the plug on the power supply cord is the main disconnect device, ensure that the socket outlets are located near the device and are easily accessible. * Unplug the power cord (by pulling the plug, not the cord) and disconnect all cables if any of the following conditions exist: - The device is exposed to rain or excess moisture. - The device was dropped and the device casing is damaged. - You suspect the device needs service or repair.-* Permanently unplug the unit before you move it or if you think it has become damaged in any way. +* Permanently unplug the unit before you move it or if you think it is damaged in any way. * Provide a suitable power source with electrical overload protection to meet the following power specifications: - Voltage: 100 V AC to 240 V AC - Current: 6 A to 10 A, maximum per power cord. Four power cords are provided. - Frequency: 50 Hz to 60 Hz-* Do not attempt to modify or use AC power cord(s) other than the ones provided with the equipment. The power cord(s) must meet the following criteria: +* Don't attempt to modify or use AC power cords other than the ones provided with the equipment. The power cords must meet the following criteria: - The power cord must have an electrical rating that is greater than that of the electrical current rating marked on the product. - The power cord must have a safety ground pin or contact that is suitable for the electrical outlet. ![Warning icon](./media/data-box-heavy-safety/warning-icon.png) ![Electrical shock icon](./media/data-box-heavy-safety/electrical-shock-hazard-icon.png) ![Multiple power sources icon](./media/data-box-heavy-safety/multiple-power-sources-icon.png) **WARNING!** -* Unplug all AC power cord(s) to completely remove the AC power from the equipment. +* Unplug all AC power cords to completely remove the AC power from the equipment.
![Warning icon](./media/data-box-heavy-safety/warning-icon.png) **CAUTION!** -* This device contains coin cell batteries. Do not attempt to service the device. Batteries in this device are not user serviceable. +* This device contains coin cell batteries. Don't attempt to service the device. Batteries in this device aren't user serviceable. * **For service personnel only**: Risk of explosion if battery is replaced by an incorrect type. Dispose of the used batteries according to instructions.-* Laser peripherals or devices are present. To avoid risk or radiation exposure and/or personal injury, do not open the enclosure of any laser peripheral or device. Laser peripherals or devices are not serviceable. Only use certified and rated Laser Class I for optical transceiver product. +* Laser peripherals or devices are present. To avoid risk of radiation exposure and/or personal injury, don't open the enclosure of any laser peripheral or device. Laser peripherals or devices aren't serviceable. Use only certified and rated Laser Class I optical transceiver products. ![Notice icon](./media/data-box-heavy-safety/notice-icon.png) **NOTICE:** This device is: - Operating temperature: 41° to 95° F (5° to 35° C) - Storage temperature: -40° to 149° F (-40° to 65° C) - Relative humidity: 20% to 85% (noncondensing) - - Operating altitude: Tested up to 6,560 feet (up to 2000 meters) + - Operating altitude: Tested up to 6,560 feet (up to 2,000 meters) For electrical supply ratings, refer to the device rating label provided with the unit. ![Notice icon](./media/data-box-heavy-safety/notice-icon.png) **NOTICE:** -This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference in which case the user will be required to correct the interference at their own expense. +This equipment is tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference in which case the user will be required to correct the interference at their own expense. This device complies with part 15 of the FCC Rules and Industry Canada license-exempt RSS standard(s). Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation of the device. This product contains coin cell battery(ies). -Microsoft Ireland Sandyford Ind Est Dublin D18 KX32 IRL +Microsoft Ireland Sandyford Industrial Estate Dublin D18 KX32 IRL Telephone number: +353 1 295 3826 Fax number: +353 1 706 4110 |
databox | Data Box Heavy System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-system-requirements.md | |
databox | Data Box Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-limits.md | Consider these limits as you deploy and operate your Microsoft Azure Data Box. - Data Box can store a maximum of 500 million files for both import and export. - Data Box supports a maximum of 512 containers or shares in the cloud. The top-level directories within the user share become containers or Azure file shares in the cloud. -- Data Box usage capacity may be less than 80 TiB because of ReFS metadata space consumption.-- Data Box supports a maximum of 10 client connections at a time on an NFS share.+- Data Box usage capacity might be less than 80 TiB because of ReFS metadata space consumption. +- Data Box supports a maximum of 10 client connections at a time on a Network File System (NFS) share. ## Azure storage limits Data Box caveats for an export order include: -- Data Box is a Windows-based device and doesn't support case-sensitive file names. For example, you may have two different files in Azure with names that just differ in casing. Don't use Data Box to export such files as the files will be overwritten on the device.+- Data Box is a Windows-based device and doesn't support case-sensitive file names. For example, you might have two different files in Azure with names that just differ in casing. Don't use Data Box to export such files as the files are overwritten on the device. - If you have duplicate tags in input files or tags referring to the same data, the Data Box export might skip or overwrite the files. The number of files and size of data that the Azure portal displays might differ from the actual size of data on the device. -- Data Box exports data to Windows-based system over SMB and is limited by SMB limitations for files and folders. Files and folders with unsupported names aren't exported.-- There is a 1:1 mapping from prefix to container.-- Maximum filename size is 1024 characters. Filenames that exceed this length aren't exported. -- Duplicate prefixes in the *xml* file (uploaded during order creation) are exported. Duplicate prefixes are not ignored.-- Page blobs and container names are case-sensitive. If the casing is mismatched, the blob and/or container will not be found.+- Data Box exports data to a Windows-based system over the Server Message Block (SMB) protocol and is limited by SMB limitations for files and folders. Files and folders with unsupported names aren't exported. +- There's a 1:1 mapping from prefix to container. +- Maximum filename size is 1,024 characters. Filenames that exceed this length aren't exported. +- Duplicate prefixes in the *xml* file (uploaded during order creation) are exported. Duplicate prefixes aren't ignored. +- Page blobs and container names are case-sensitive. If the casing is mismatched, the blob and/or container won't be found. ## Azure storage account size limits |
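The filename-length and case-sensitivity caveats are easy to screen for before you place an export order. The following Python sketch is a hypothetical pre-flight check, not part of any official Data Box tooling; the 1,024-character limit and the case-insensitive collision rule come from the caveats above, and the function and variable names are illustrative:

```python
# Hypothetical pre-flight check for two Data Box export caveats:
# blob names longer than 1,024 characters aren't exported, and names
# that differ only in casing overwrite each other on the Windows-based device.
from collections import defaultdict

MAX_NAME_LENGTH = 1024  # per the export caveat above

def preflight_check(blob_names):
    """Return (too_long, collisions) for a list of blob names."""
    too_long = [name for name in blob_names if len(name) > MAX_NAME_LENGTH]

    groups = defaultdict(list)
    for name in blob_names:
        groups[name.lower()].append(name)
    # Any group with more than one member differs only in casing.
    collisions = [group for group in groups.values() if len(group) > 1]
    return too_long, collisions

# Example: two blobs whose names differ only in casing would collide.
too_long, collisions = preflight_check(["data/Report.csv", "data/report.csv"])
print(collisions)  # [['data/Report.csv', 'data/report.csv']]
```

Names flagged by either check would be skipped or overwritten on the device, so rename them in Azure before creating the export order.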
databox | Data Box Local Web Ui Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-local-web-ui-admin.md | |
databox | Data Box Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-logs.md | |
databox | Data Box Portal Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-portal-admin.md | |
databox | Data Box Safety | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-safety.md | -Here are the icons that you will find when you review the safety precautions to be observed when setting up and running your Data Box. +Here are the icons that you'll find when you review the safety precautions to be observed when setting up and running your Data Box. | Icon | Description | |: |: |-| ![Danger Icon](./media/data-box-safety/warning_icon.png) **DANGER!** |Indicates a hazardous situation that, if not avoided, will result in death or serious injury. This signal word is to be limited to the most extreme situations. | | ![Warning Icon](./media/data-box-safety/warning_icon.png) **WARNING!** |Indicates a hazardous situation that, if not avoided, could result in death or serious injury. | | ![Warning Icon](./media/data-box-safety/warning_icon.png) **CAUTION!** |Indicates a hazardous situation that, if not avoided, could result in minor or moderate injury. | | ![Notice Icon](./media/data-box-safety/notice_icon.png) **NOTICE:** |Indicates information considered important, but not hazard-related. | | ![Electrical Shock Icon](./media/data-box-safety/electrical_shock_hazard_icon.png) **Electrical Shock Hazard** |High voltage. | | ![Heavy Weight Icon](./media/data-box-safety/heavy_weight_hazard_icon.png) **Heavy Weight** | |-| ![No User Serviceable Parts Icon](./media/data-box-safety/no_user_serviceable_parts_icon.png) **No User Serviceable Parts** |Do not access unless properly trained. | +| ![No User Serviceable Parts Icon](./media/data-box-safety/no_user_serviceable_parts_icon.png) **No User Serviceable Parts** | Don't access unless properly trained. | | ![Read Safety Notice Icon](./media/data-box-safety/read_safety_and_health_information_icon.png) **Read All Instructions First** | | | ![Tip Hazard Icon](./media/data-box-safety/tip_hazard_icon.png) **Tip Hazard** | | ![Warning Icon](./media/data-box-safety/warning_icon.png) ![Electrical Shock Icon](./media/data-box-safety/electrical_shock_hazard_icon.png)![No User Serviceable Parts Icon](./media/data-box-safety/no_user_serviceable_parts_icon.png) **CAUTION** -* Inspect the *as-received* device for damages. If the device enclosure is damaged, [contact Microsoft Support](data-box-disk-contact-microsoft-support.md) to obtain a replacement. Do not attempt to operate the device. -* The device is equipped with tamper-proof screws. If you suspect the device is malfunctioning, [contact Microsoft Support](data-box-disk-contact-microsoft-support.md) to obtain a replacement. Do not attempt to service the device. -* The device contains no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Do not open. Return the device to Microsoft for servicing. +* Inspect the *as-received* device for damages. If the device enclosure is damaged, [contact Microsoft Support](data-box-disk-contact-microsoft-support.md) to obtain a replacement. Don't attempt to operate the device. +* The device is equipped with tamper-proof screws. If you suspect the device is malfunctioning, [contact Microsoft Support](data-box-disk-contact-microsoft-support.md) to obtain a replacement.
Don't attempt to service the device. +* The device contains no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Don't open. Return the device to Microsoft for servicing. ![Warning Icon](./media/data-box-safety/warning_icon.png) ![Heavy Weight Icon](./media/data-box-safety/heavy_weight_hazard_icon.png) **WARNING!** -* A fully configured enclosure can weigh up to 22.7 kg (50 lbs); do not try to lift it by yourself. +* A fully configured enclosure can weigh up to 22.7 kg (50 lbs); don't try to lift it by yourself. * Before moving the enclosure, always ensure that two people are available to handle the weight. Be aware that one person attempting to lift this weight can sustain injuries. ![Warning Icon](./media/data-box-safety/warning_icon.png) ![Tip Hazard Icon](./media/data-box-safety/tip_hazard_icon.png) **WARNING!** * Place the device on a flat, hard, and stable surface to avoid a potential tip hazard.-* Rack-mounted equipment is not to be used as shelves or work spaces. Do not place the Data Box on top of rack-mounted equipment. Adding any type of load to an extended rack-mounted unit can create a potential tip hazard that could lead to injury, death, or product damage. +* Rack-mounted equipment isn't to be used as shelves or work spaces. Don't place the Data Box on top of rack-mounted equipment. Adding any type of load to an extended rack-mounted unit can create a potential tip hazard that could lead to injury, death, or product damage. ![Warning Icon](./media/data-box-safety/warning_icon.png) **WARNING!** ![Warning Icon](./media/data-box-safety/warning_icon.png) ![Electrical Shock Icon](./media/data-box-safety/electrical_shock_hazard_icon.png) **WARNING!** -* Provide a safe electrical earth connection to the power supply cord. The AC cord has a three-wire grounding plug (a plug that has a grounding pin). This plug fits only a grounded AC outlet. Do not defeat the purpose of the grounding pin. +* Provide a safe electrical earth connection to the power supply cord. The AC cord has a three-wire grounding plug (a plug that has a grounding pin). This plug fits only a grounded AC outlet. Don't defeat the purpose of the grounding pin. * Given that the plug on the power supply cord is the main disconnect device, ensure that the socket outlets are located near the device and are easily accessible. * Unplug the power cord (by pulling the plug, not the cord) and disconnect all cables if any of the following conditions exist: - The device is exposed to rain or excess moisture. - The device was dropped and the device casing is damaged. - You suspect the device needs service or repair.-* Permanently unplug the unit before you move it or if you think it has become damaged in any way. +* Permanently unplug the unit before you move it or if you think it is damaged in any way. * Provide a suitable power source with electrical overload protection to meet the following power specifications: - Voltage: 100 V AC to 240 V AC ![Warning Icon](./media/data-box-safety/warning_icon.png) **CAUTION:** -* This device contains coin cell batteries. Do not attempt to service the device. Batteries in this device are not user serviceable. +* This device contains coin cell batteries. Don't attempt to service the device.
Batteries in this device aren't user serviceable. * **For service personnel only**: Risk of explosion if battery is replaced by an incorrect type. Dispose of the used batteries according to instructions. ![Notice Icon](./media/data-box-safety/notice_icon.png) **NOTICE:** This device is: - Operating temperature: 50° to 95° F (10° to 35° C) - Storage temperature: -4° to 122° F (-20° to 50° C) - Relative humidity: 15% to 85% (noncondensing) - - Operating altitude: Tested up to 6500 feet (0 meters to 2000 meters) + - Operating altitude: Tested up to 6,500 feet (0 meters to 2,000 meters) For electrical supply ratings, refer to the device rating label provided with the unit. ![Notice Icon](./media/data-box-safety/notice_icon.png) **NOTICE:** -This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference in which case the user will be required to correct the interference at their own expense. +This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference in which case the user is required to correct the interference at their own expense. -This device complies with part 15 of the FCC Rules and Industry Canada license-exempt RSS standard(s). Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation of the device. +This device complies with part 15 of the FCC Rules and Industry Canada license-exempt RSS standards. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation of the device. ![Screenshot shows a notification required for Canada.](./media/data-box-safety/canada.png) Canada: (800) 933-4750 ![Warning Icon](./media/data-box-safety/warning_icon.png) **WARNING:** -This is a class A product. In a domestic environment, this product may cause radio interference in which case the user may be required to take adequate measures. +This device is a class A product. In a domestic environment, this product may cause radio interference in which case the user may be required to take adequate measures.
**Disposal of waste batteries and electrical and electronic equipment:** ![Battery disposal icon](./media/data-box-safety/battery_disposal_icon.png) -This symbol on the product or its batteries or its packaging means that this product and any batteries it contains must not be disposed of with your household waste. Instead, it is your responsibility to hand this over to an applicable collection point for the recycling of batteries and electrical and electronic equipment. This separate collection and recycling will help to conserve natural resources and prevent potential negative consequences for human health and the environment due to the possible presence of hazardous substances in batteries and electrical and electronic equipment, which could be caused by inappropriate disposal. For more information about where to drop off your batteries and electrical and electronic waste, please contact your local city/municipality office, your household waste disposal service, or the shop where you purchased this product. Contact *erecycle\@microsoft.com* for additional information on WEEE. +This symbol on the product or its batteries or its packaging means that this product and any batteries it contains must not be disposed of with your household waste. Instead, it is your responsibility to hand this over to an applicable collection point for the recycling of batteries and electrical and electronic equipment. This separate collection and recycling will help to conserve natural resources and prevent potential negative consequences for human health and the environment due to the possible presence of hazardous substances in batteries and electrical and electronic equipment, which could be caused by inappropriate disposal. For more information about where to drop off your batteries and electrical and electronic waste, contact your local city/municipality office, your household waste disposal service, or the shop where you purchased this product. Contact *erecycle\@microsoft.com* for additional information on WEEE. This product contains coin cell battery(ies). -Microsoft Ireland Sandyford Ind Est Dublin D18 KX32 IRL +Microsoft Ireland Sandyford Industrial Estate Dublin D18 KX32 IRL Telephone number: +353 1 295 3826 Fax number: +353 1 706 4110 ![Taiwan](./media/data-box-safety/taiwan.png) -After you have reviewed these safety notices, you can set up and cable your device. +After you review these safety notices, you can set up and cable your device. ## Next steps |
databox | Data Box Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-security.md | |
databox | Data Box System Requirements Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements-rest.md | |
databox | Data Box System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md | |
logic-apps | Add Artifacts Integration Service Environment Ise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/add-artifacts-integration-service-environment-ise.md | - Title: Add resources to integration service environments -description: Add logic apps, integration accounts, custom connectors, and managed connectors to your integration service environment (ISE). --- Previously updated : 08/23/2023---# Add resources to your integration service environment (ISE) in Azure Logic Apps --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. -> -> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing -> before this date are supported through August 31, 2024. For more information, see the following resources: -> -> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) -> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) -> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) -> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) -> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) -> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) --After you create an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), you can add resources such as **Consumption** logic apps, integration accounts, and connectors so that they can access the resources in your Azure virtual network. For example, managed ISE connectors that become available after you create your ISE don't automatically appear in the Logic App Designer. Before you can use these ISE connectors, you have to manually [add and deploy those connectors to your ISE](#add-ise-connectors-environment) so that they appear in the Logic App Designer. --> [!IMPORTANT] -> For logic apps and integration accounts to work together in an ISE, both must use the *same ISE* as their location. --## Prerequisites --* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). --* The ISE that you created to run your Consumption logic app workflows --* To create, add, or update resources that are deployed to an ISE, you need to be assigned the Owner or Contributor role on that ISE, or you have permissions inherited through the Azure subscription or Azure resource group associated with the ISE. For people who don't have owner, contributor, or inherited permissions, they can be assigned the Integration Service Environment Contributor role or Integration Service Environment Developer role. 
For more information, see [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)? --<a name="create-logic-apps-environment"></a> --## Create logic apps --To develop logic apps that run in your integration service environment (ISE), follow these steps: --1. Find and open your ISE, if not already open. From the ISE menu, under **Settings**, select **Logic apps** > **Add**. -- ![Add new logic app to ISE](./media/add-artifacts-integration-service-environment-ise/add-logic-app-to-ise.png) --1. Provide information about the logic app that you want to create, for example: -- ![Screenshot that shows the "Create a logic app" pane with example information entered.](./media/add-artifacts-integration-service-environment-ise/create-logic-app-integration-service-environment.png) -- | Property | Required | Description | - |-|-|-| - | **Logic app name** | Yes | The name for the logic app to create | - | **Subscription** | Yes | The name for the Azure subscription to use | - | **Resource group** | Yes | The name for the new or existing Azure resource group to use | - | **Region** | Yes | The Azure region for your logic app, which matches the location for the ISE that you later select | - | **Associate with integration service environment*** | Yes | Select this option so you can choose an ISE to use. | - | **Integration service environment** | Yes | From the list, select the ISE that you want to use, if not already selected. <p><p>**Important**: To use an integration account with your logic app, both must use the same ISE. | - |||| --1. When you're done, select **Create**. --1. Continue [creating your logic app in the usual way](../logic-apps/quickstart-create-example-consumption-workflow.md). -- For differences in how triggers and actions work and how they're labeled when you use an ISE compared to the multi-tenant Logic Apps service, see [Isolated versus multi-tenant in the ISE overview](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#difference). --1. To manage logic apps and API connections in your ISE, see [Manage your integration service environment](../logic-apps/ise-manage-integration-service-environment.md). --<a name="create-integration-account-environment"></a> --## Create integration accounts --Based on the [ISE SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) selected at creation, your ISE includes specific integration account usage at no additional cost. Logic apps that exist in an integration service environment (ISE) can reference only integration accounts that exist in the same ISE. So, for an integration account to work with logic apps in an ISE, both the integration account and logic apps must use the *same environment* as their location. For more information about integration accounts and ISEs, see [Integration accounts with ISE](connect-virtual-network-vnet-isolated-environment-overview.md#create-integration-account-environment). --To create an integration account that uses an ISE, follow these steps: --1. Find and open your ISE, if not already open. From the ISE menu, under **Settings**, select **Integration accounts** > **Add**. -- ![Add new integration account to ISE](./media/add-artifacts-integration-service-environment-ise/add-integration-account-to-ise.png) --1. 
Provide information about the logic app that you want to create, for example: -- ![Select integration service environment](./media/add-artifacts-integration-service-environment-ise/create-integration-account-integration-service-environment.png) -- | Property | Required | Description | - |-|-|-| - | **Name** | Yes | The name for the integration account that you want to create | - | **Subscription** | Yes | The name for the Azure subscription that you want to use | - | **Resource group** | Yes | The name for the Azure resource group (new or existing) to use | - | **Pricing Tier** | Yes | The pricing tier to use for the integration account | - | **Location** | Yes | From the list, under **Integration service environments**, select the same ISE that your logic apps use, if not already selected. <p><p>**Important**: To use an integration account with your logic app, both must use the same ISE. | - |||| --1. When you're done, select **Create**. --1. [Link your logic app to your integration account in the usual way](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md#link-account). --1. Continue by adding resources to your integration account, such as [trading partners](../logic-apps/logic-apps-enterprise-integration-partners.md) and [agreements](../logic-apps/logic-apps-enterprise-integration-agreements.md). --1. To manage integration accounts in your ISE, see [Manage your integration service environment](../logic-apps/ise-manage-integration-service-environment.md). --<a name="add-ise-connectors-environment"></a> --## Add ISE connectors --After you create your ISE, managed ISE connectors don't automatically appear in the connector picker on the Logic App Designer. Before you can use these ISE connectors, you have to manually add and deploy these connectors to your ISE so that they appear in the Logic App Designer. --> [!IMPORTANT] -> Managed ISE connectors currently don't support [tags](../azure-resource-manager/management/tag-support.md). -> If you set up a policy that enforces tagging, trying to add ISE connectors might fail with an error similar to this example: -> -> ```json -> { -> "error": { -> "code": "IntergrationServiceEnvironmentManagedApiDefinitionTagsNotSupported", -> "message": "The tags are not supported in the managed API 'azureblob'." -> } -> } -> ``` -> -> So, to add ISE connectors, you have to either disable or remove your policy. --1. On your ISE menu, under **Settings**, select **Managed connectors**. On the toolbar, select **Add**. -- ![View managed connectors](./media/add-artifacts-integration-service-environment-ise/ise-view-managed-connectors.png) --1. On the **Add a new managed connector** pane, open the **Find connector** list. Find and select the ISE connector that you want to use but isn't yet deployed in your ISE. When you're done, select **Create**. -- ![Select the ISE connector that you want to deploy in your ISE](./media/add-artifacts-integration-service-environment-ise/add-managed-connector.png) -- Only ISE connectors that are eligible but not yet deployed to your ISE appear available for you to select. Connectors that are already deployed in your ISE appear unavailable for selection. --<a name="create-custom-connectors-environment"></a> --## Create custom connectors --To use custom connectors in your ISE, create those custom connectors from directly inside your ISE. --1. Find and open your ISE, if not already open. From the ISE menu, under **Settings**, select **Custom connectors** > **Add**. 
-- ![Create custom connector](./media/add-artifacts-integration-service-environment-ise/add-custom-connector-to-ise.png) --1. Provide the name, Azure subscription, and Azure resource group (new or existing) to use for your custom connector. --1. From the **Location** list, under the **Integration service environments** section, select the same ISE that your logic apps use, and select **Create**, for example: -- ![Screenshot that shows the "Create Logic Apps Custom Connector" window with example information selected.](./media/add-artifacts-integration-service-environment-ise/create-custom-connector-integration-service-environment.png) --1. Select your new custom connector, and then select **Edit**, for example: -- ![Select and edit custom connector](./media/add-artifacts-integration-service-environment-ise/edit-custom-connectors.png) --1. Continue by creating the connector in the usual way from an [OpenAPI definition](/connectors/custom-connectors/define-openapi-definition#import-the-openapi-definition) or [SOAP](/connectors/custom-connectors/create-register-logic-apps-soap-connector#2-define-your-connector). --1. To manage custom connectors in your ISE, see [Manage your integration service environment](../logic-apps/ise-manage-integration-service-environment.md). --## Next steps --* [Manage integration service environments](../logic-apps/ise-manage-integration-service-environment.md) |
logic-apps | Authenticate With Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/authenticate-with-managed-identity.md | Based on your logic app resource type, you can enable either the system-assigned | Logic app | Environment | Managed identity support | |--|-|--|-| Consumption | - Multitenant Azure Logic Apps <br><br>- Integration service environment (ISE) | - You can enable *either* the system-assigned identity or the user-assigned identity, but not both on your logic app. <br><br>- You can use the managed identity at the logic app resource level and at the connection level. <br><br>- If you create and enable the user-assigned identity, your logic app can have *only one* user-assigned identity at a time. | +| Consumption | - Multitenant Azure Logic Apps | - You can enable *either* the system-assigned identity or the user-assigned identity, but not both on your logic app. <br><br>- You can use the managed identity at the logic app resource level and at the connection level. <br><br>- If you create and enable the user-assigned identity, your logic app can have *only one* user-assigned identity at a time. | | Standard | - Single-tenant Azure Logic Apps <br><br>- App Service Environment v3 (ASEv3) <br><br>- Azure Arc enabled Logic Apps | - You can enable *both* the system-assigned identity, which is enabled by default, and the user-assigned identity at the same time. You can also add multiple user-assigned identities to your logic app. However, your logic app can use only one managed identity at a time. <br><br>- You can use the managed identity at the logic app resource level and at the connection level. | For information about managed identity limits in Azure Logic Apps, see [Limits on managed identities for logic apps](logic-apps-limits-and-config.md#managed-identity). For more information about the Consumption and Standard logic app resource types and environments, see the following documentation: |
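The either/or constraint for Consumption logic apps is visible in the `identity` property of the resource definition. The following Python sketch shows the two mutually exclusive shapes as plain dictionaries; the subscription ID, resource group, and identity name are placeholders, and this fragment isn't a complete workflow definition:

```python
# Sketch of the two mutually exclusive "identity" blocks that an ARM
# resource definition for a Consumption logic app can carry.
# All IDs below are placeholders.

# Option 1: system-assigned identity, created and managed by Azure.
system_assigned = {"type": "SystemAssigned"}

# Option 2: one user-assigned identity, referenced by its resource ID.
user_assigned = {
    "type": "UserAssigned",
    "userAssignedIdentities": {
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>": {}
    },
}
```

For a Standard logic app, which supports both kinds at the same time, ARM's combined `"SystemAssigned, UserAssigned"` type applies instead, matching the table above.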
logic-apps | Business Continuity Disaster Recovery Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/business-continuity-disaster-recovery-guidance.md | Each logic app needs to specify the location that you want to use for deployment > If your logic app also works with B2B artifacts, such as trading partners, agreements, schemas, maps, and certificates, > which are stored in an integration account, both your integration account and logic apps must use the same location. -If you follow good DevOps practices, you already use [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) to define and deploy your logic apps and their dependent resources. Resource Manager templates give you the capability to use a single deployment definition and then use parameter files to provide the configuration values to use for each deployment destination. This capability means that you can deploy the same logic app to different environments, for example, development, test, and production. You can also deploy the same logic app to different Azure regions or ISEs, which support disaster recovery strategies that use [paired-regions](../availability-zones/cross-region-replication-azure.md). +If you follow good DevOps practices, you already use [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) to define and deploy your logic apps and their dependent resources. Resource Manager templates give you the capability to use a single deployment definition and then use parameter files to provide the configuration values to use for each deployment destination. This capability means that you can deploy the same logic app to different environments, for example, development, test, and production. You can also deploy the same logic app to different Azure regions, which support disaster recovery strategies that use [paired-regions](../availability-zones/cross-region-replication-azure.md). For the failover strategy, your logic apps and locations must meet these requirements: * The secondary logic app instance has access to the same apps, services, and systems as the primary logic app instance. -* Both logic app instances have the same host type. So, either both instances are deployed to regions in global multitenant Azure, or both instances are deployed to ISEs, which let your logic apps directly access resources in an Azure virtual network. For best practices and more information about paired regions for BCDR, see [Cross-region replication in Azure: Business continuity and disaster recovery](../availability-zones/cross-region-replication-azure.md). -- For example, both the primary and secondary locations must be ISEs when the primary logic app runs in an ISE and uses [ISE-versioned connectors](../connectors/managed.md#ise-connectors), HTTP actions to call resources in the Azure virtual network, or both. In this scenario, your secondary logic app must also have a similar setup in the secondary location as the primary logic app. -- > [!NOTE] - > For more advanced scenarios, you can mix both multitenant Azure and an - > ISE as locations. However, make sure that you consider and understand the - > [differences between how logic apps run in an ISE versus multitenant Azure](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#difference). --* If you use ISEs, [make sure that they are scaled out or have enough capacity](../logic-apps/ise-manage-integration-service-environment.md#add-capacity) to handle the load. 
+* Both logic app instances have the same host type. So, both instances are deployed to regions in global multitenant Azure Logic Apps or regions in single-tenant Azure Logic Apps. For best practices and more information about paired regions for BCDR, see [Cross-region replication in Azure: Business continuity and disaster recovery](../availability-zones/cross-region-replication-azure.md). #### Example: Multitenant Azure This example shows primary and secondary logic app instances, which are deployed ![Primary and secondary logic app instances in separate locations](./media/business-continuity-disaster-recovery-guidance/primary-secondary-locations.png) -#### Example: Integration service environment --This example shows the previous primary and secondary logic app instances but deployed to separate ISEs. A single Resource Manager template defines both logic app instances, the dependent resources required by those logic apps, and the ISEs as the deployment locations. Separate parameter files define the configuration values to use for deployment in each location: --![Primary and secondary logic apps in different locations](./media/business-continuity-disaster-recovery-guidance/primary-secondary-locations-ise.png) - <a name="resource-connections"></a> ## Connections to resources If your logic app runs in multitenant Azure and needs access to on-premises reso The data gateway resource is associated with a location or Azure region, just like your logic app resource. In your disaster recovery strategy, make sure that the data gateway remains available for your logic app to use. You can [enable high availability for your gateway](../logic-apps/logic-apps-gateway-install.md#high-availability) when you have multiple gateway installations. -> [!NOTE] -> If your logic app runs in an integration service environment (ISE) and uses only -> ISE-versioned connectors for on-premises data sources, you don't need the data -> gateway because ISE connectors provide direct access to the the on-premises resource. -> -> If no ISE-versioned connector is available for the on-premises resource that you want, -> your logic app can still create the connection by using a non-ISE connector, -> which runs in the global multitenant Azure, not your ISE. However, this connection -> requires the on-premises data gateway. - <a name="roles"></a> ## Active-active versus active-passive roles |
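To make the single-template, per-region parameter-file pattern concrete, here's a minimal Python sketch; the region names, file names, and the assumption that the template exposes a `location` parameter are all illustrative. Each generated file would then be passed to the same template, for example with `az deployment group create --resource-group <resource-group> --template-file template.json --parameters @primary.parameters.json`:

```python
# Minimal sketch: one deployment definition, two parameter files, so the
# same logic app template deploys to a primary region and its paired
# secondary region. Regions and file names are placeholders.
import json

targets = {
    "primary.parameters.json": "westus",
    "secondary.parameters.json": "eastus",  # paired region for West US
}

for filename, location in targets.items():
    parameter_file = {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
        "contentVersion": "1.0.0.0",
        # Assumes the shared template exposes a "location" parameter.
        "parameters": {"location": {"value": location}},
    }
    with open(filename, "w") as f:
        json.dump(parameter_file, f, indent=2)
```

Keeping the definition identical and varying only the parameters is what lets the secondary instance stand in for the primary during a regional outage.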
logic-apps | Connect Virtual Network Vnet Isolated Environment Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md | - Title: Overview - Access to Azure virtual networks -description: Learn about accessing Azure virtual networks (VNETs) from Azure Logic Apps using an integration service environment (ISE). --- Previously updated : 08/23/2023---# Access to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE) --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. For example Standard workflows support using private -> endpoints for inbound traffic so that your workflows can communicate privately and securely with virtual -> networks. Standard workflows also support virtual network integration for outbound traffic. For more information, -> review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). --Since November 1, 2022, the capability to create new ISE resources is no longer available, which also means that capability to set up your own encryption keys, known as "Bring Your Own Key" (BYOK), during ISE creation using the Logic Apps REST API is also no longer available. However, ISE resources existing before this date are supported through August 31, 2024. --For more information, see the following resources: --- [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)-- [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)-- [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)-- [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md)-- [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)-- [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)--This overview provides more information about [how an ISE works with a virtual network](#how-ise-works), the [benefits to using an ISE](#benefits), the [differences between the dedicated and multi-tenant Logic Apps service](#difference), and how you can directly access resources that are inside or connected your Azure virtual network. --<a name="how-ise-works"></a> --## How an ISE works with a virtual network --At ISE creation, you select the Azure virtual network where you want Azure to *inject* or deploy your ISE. When you create logic apps and integration accounts that need access to this virtual network, you can select your ISE as the host location for those logic apps and integration accounts. Inside the ISE, logic apps run on dedicated resources separately from others in the multi-tenant Azure Logic Apps environment. 
Data in an ISE stays in the [same region where you create and deploy that ISE](https://azure.microsoft.com/global-infrastructure/data-residency/). --![Screenshot shows Azure portal with integration service environment selected.](./media/connect-virtual-network-vnet-isolated-environment-overview/select-logic-app-integration-service-environment.png) --<a name="benefits"></a> --## Why use an ISE --Running logic app workflows in your own separate dedicated instance helps reduce the impact that other Azure tenants might have on your apps' performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). An ISE also provides these benefits: --* Direct access to resources that are inside or connected to your virtual network -- Logic apps that you create and run in an ISE can use [specifically designed connectors that run in your ISE](../connectors/managed.md#ise-connectors). If an ISE connector exists for an on-premises system or data source, you can connect directly without having to use the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). For more information, see [Dedicated versus multi-tenant](#difference) and [Access to on-premises systems](#on-premises) later in this topic. --* Continued access to resources that are outside or not connected to your virtual network -- Logic apps that you create and run in an ISE can still use connectors that run in the multi-tenant Logic Apps service when an ISE-specific connector isn't available. For more information, see [Dedicated versus multi-tenant](#difference). --* Your own static IP addresses, which are separate from the static IP addresses that are shared by the logic apps in the multi-tenant service. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up additional firewall openings at those destination systems for each ISE. --* Increased limits on run duration, storage retention, throughput, HTTP request and response timeouts, message sizes, and custom connector requests. For more information, see [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md). --<a name="difference"></a> --## Dedicated versus multi-tenant --When you create and run logic apps in an ISE, you get the same user experiences and similar capabilities as the multi-tenant Logic Apps service. You can use all the same built-in triggers, actions, and managed connectors that are available in the multi-tenant Logic Apps service. Some managed connectors offer additional ISE versions. The difference between ISE connectors and non-ISE connectors lies in where they run and in the labels that they display in the Logic App Designer when you work within an ISE. --![Connectors with and without labels in an ISE](./media/connect-virtual-network-vnet-isolated-environment-overview/labeled-trigger-actions-integration-service-environment.png) --* Built-in triggers and actions, such as HTTP, display the **CORE** label and run in the same ISE as your logic app. --* Managed connectors that display the **ISE** label are specially designed for ISEs and *always run in the same ISE as your logic app*.
For example, here are some [connectors that offer ISE versions](../connectors/managed.md#ise-connectors):<p> -- * Azure Blob Storage, File Storage, and Table Storage - * Azure Service Bus, Azure Queues, Azure Event Hubs - * Azure Automation, Azure Key Vault, Azure Event Grid, and Azure Monitor Logs - * FTP, SFTP-SSH, File System, and SMTP - * SAP, IBM MQ, IBM DB2, and IBM 3270 - * SQL Server, Azure Synapse Analytics, Azure Cosmos DB - * AS2, X12, and EDIFACT -- With rare exceptions, if an ISE connector is available for an on-premises system or data source, you can connect directly without using the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). For more information, see [Access to on-premises systems](#on-premises) later in this topic. --* Managed connectors that don't display the **ISE** label continue to work for logic apps inside an ISE. These connectors *always run in the multi-tenant Logic Apps service*, not in the ISE. --* Custom connectors that you create *outside an ISE*, whether or not they require the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md), continue to work for logic apps inside an ISE. However, custom connectors that you create *inside an ISE* won't work with the on-premises data gateway. For more information, see [Access to on-premises systems](#on-premises). --<a name="on-premises"></a> --## Access to on-premises systems --Logic app workflows that run inside an ISE can directly access on-premises systems and data sources that are inside or connected to an Azure virtual network by using these items:<p> --* The HTTP trigger or action, which displays the **CORE** label --* The **ISE** connector, if available, for an on-premises system or data source -- If an ISE connector is available, you can directly access the system or data source without the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). However, if you need to access SQL Server from an ISE and use Windows authentication, you must use the connector's non-ISE version and the on-premises data gateway. The connector's ISE version doesn't support Windows authentication. For more information, see [ISE connectors](../connectors/managed.md#ise-connectors) and [Connect from an integration service environment](../connectors/managed.md#integration-account-connectors). --* A custom connector -- * Custom connectors that you create *outside an ISE*, whether or not they require the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md), continue to work for logic apps inside an ISE. -- * Custom connectors that you create *inside an ISE* don't work with the on-premises data gateway. However, these connectors can directly access on-premises systems and data sources that are inside or connected to the virtual network that hosts your ISE. So, logic apps that are inside an ISE usually don't need the data gateway when accessing those resources. --To access on-premises systems and data sources that don't have ISE connectors, are outside your virtual network, or aren't connected to your virtual network, you still have to use the on-premises data gateway. Logic apps within an ISE can continue using connectors that don't have the **CORE** or **ISE** label. Those connectors run in the multi-tenant Logic Apps service, rather than in your ISE. --<a name="data-at-rest"></a> --## Encrypted data at rest --By default, Azure Storage uses Microsoft-managed keys to encrypt your data. 
Azure Logic Apps relies on Azure Storage to store and automatically [encrypt data at rest](../storage/common/storage-service-encryption.md). This encryption protects your data and helps you meet your organizational security and compliance commitments. For more information about how Azure Storage encryption works, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md) and [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md). --For more control over the encryption keys used by Azure Storage, ISE supports using and managing your own key through [Azure Key Vault](/azure/key-vault/general/overview). This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". However, this capability is available *only when you create your ISE*, not afterwards. You can't disable this key after your ISE is created. Currently, no support exists for rotating a customer-managed key for an ISE. --* Customer-managed key support for an ISE is available only in the following regions: -- * Azure: West US 2, East US, and South Central US. -- * Azure Government: Arizona, Virginia, and Texas. --* The key vault that stores your customer-managed key must exist in the same Azure region as your ISE. --* To support customer-managed keys, your ISE requires that you enable either the [system-assigned or user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). This identity lets your ISE authenticate access to secured resources, such as virtual machines and other systems or services, that are in or connected to an Azure virtual network. That way, you don't have to sign in with your credentials. --* You must give your key vault access to your ISE's managed identity, but the timing depends on which managed identity you use. -- * **System-assigned managed identity**: Within *30 minutes after* you send the HTTPS PUT request that creates your ISE. Otherwise, ISE creation fails, and you get a permissions error. -- * **User-assigned managed identity**: Before you send the HTTPS PUT request that creates your ISE. --<a name="ise-level"></a> --## ISE SKUs --When you create your ISE, you can select the Developer SKU or Premium SKU. This SKU option is available only at ISE creation and can't be changed later. Here are the differences between these SKUs: --* **Developer** -- Provides a lower-cost ISE that you can use for exploration, experiments, development, and testing, but not for production or performance testing. The Developer SKU includes built-in triggers and actions, Standard connectors, Enterprise connectors, and a single [Free tier](../logic-apps/logic-apps-limits-and-config.md#artifact-number-limits) integration account for a [fixed monthly price](https://azure.microsoft.com/pricing/details/logic-apps). -- > [!IMPORTANT] - > This SKU has no service-level agreement (SLA), scale-up capability, - > or redundancy during recycling, which means that you might experience - > delays or downtime. Backend updates might intermittently interrupt service. -- For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). To learn how billing works for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). --* **Premium** -- Provides an ISE that you can use for production and performance testing.
The Premium SKU includes SLA support, built-in triggers and actions, Standard connectors, Enterprise connectors, a single [Standard tier](../logic-apps/logic-apps-limits-and-config.md#artifact-number-limits) integration account, scale-up capability, and redundancy during recycling for a [fixed monthly price](https://azure.microsoft.com/pricing/details/logic-apps). -- For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). To learn how billing works for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). --<a name="endpoint-access"></a> --## ISE endpoint access --During ISE creation, you can choose to use either internal or external access endpoints. Your selection determines whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. These endpoints also affect the way that you can access the inputs and outputs from your logic apps' run history. --> [!IMPORTANT] -> You can select the access endpoint only during ISE creation and can't change this option later. --* **Internal**: Private endpoints permit calls to logic apps in your ISE where you can view and access inputs and outputs from logic app workflow run history *only from inside your virtual network*. -- > [!IMPORTANT] - > If you need to use these webhook-based triggers, and the service is outside your virtual network and - > peered virtual networks, use external endpoints, *not* internal endpoints, when you create your ISE: - > - > * Azure DevOps - > * Azure Event Grid - > * Common Data Service - > * Office 365 - > * SAP (multi-tenant version) - > - > Also, make sure that you have network connectivity between the private endpoints and the computer from - > where you want to access the run history. Otherwise, when you try to view your workflow's run history, - > you get an error that says "Unexpected error. Failed to fetch". - > - > ![Screenshot shows Azure portal and Azure Storage action error resulting from inability to send traffic through firewall.](./media/connect-virtual-network-vnet-isolated-environment-overview/integration-service-environment-error.png) - > - > For example, your client computer can exist inside the ISE's virtual network or inside a virtual network that's connected to the ISE's virtual network through peering or a virtual private network. --* **External**: Public endpoints permit calls to logic app workflows in your ISE where you can view and access inputs and outputs from logic apps' run history *from outside your virtual network*. If you use network security groups (NSGs), make sure they're set up with inbound rules to allow access to the run history's inputs and outputs. --To determine whether your ISE uses an internal or external access endpoint, on your ISE's menu, under **Settings**, select **Properties**, and find the **Access endpoint** property: --![Screenshot shows Azure portal, ISE menu, with the options selected for Settings, Properties, and Access endpoint.](./media/connect-virtual-network-vnet-isolated-environment-overview/find-ise-access-endpoint.png) --<a name="pricing-model"></a> --## Pricing model --Logic apps, built-in triggers, built-in actions, and connectors that run in your ISE use a fixed pricing plan that differs from the Consumption pricing plan. For more information, see [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing).
For pricing rates, see [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). --<a name="create-integration-account-environment"></a> --## Integration accounts with ISE --You can use integration accounts with logic apps inside an integration service environment (ISE). However, those integration accounts must use the *same ISE* as the linked logic apps. Logic apps in an ISE can reference only those integration accounts that are in the same ISE. When you create an integration account, you can select your ISE as the location for your integration account. To learn how pricing and billing work for integration accounts with an ISE, see the [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For limits information, see [Integration account limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits). --## Next steps --* [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) |
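For the customer-managed key (BYOK) capability summarized above, the key vault must grant access to the ISE's managed identity within the stated timing windows. A hedged sketch of a template that adds such an access policy follows; the vault name, parameter, and key permissions are illustrative assumptions rather than values from the article:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "isePrincipalId": {
      "type": "string",
      "metadata": { "description": "Hypothetical principal ID of the ISE's system-assigned or user-assigned managed identity." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults/accessPolicies",
      "apiVersion": "2022-07-01",
      "name": "contoso-byok-vault/add",
      "properties": {
        "accessPolicies": [
          {
            "tenantId": "[subscription().tenantId]",
            "objectId": "[parameters('isePrincipalId')]",
            "permissions": { "keys": [ "get", "wrapKey", "unwrapKey" ] }
          }
        ]
      }
    }
  ]
}
```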
logic-apps | Connect Virtual Network Vnet Set Up Single Ip Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-set-up-single-ip-address.md | - Title: Set up a public outbound IP address for ISEs -description: Learn how to set up a single public outbound IP address for integration service environments (ISEs) in Azure Logic Apps. --- Previously updated : 01/04/2024---# Set up a single IP address for one or more integration service environments in Azure Logic Apps --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. -> -> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing -> before this date are supported through August 31, 2024. For more information, see the following resources: -> -> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) -> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) -> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) -> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) -> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) -> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) --When you work with Azure Logic Apps, you can use an integration service environment (ISE) for hosting logic apps that need access to resources in an [Azure virtual network](../virtual-network/virtual-networks-overview.md). When you have multiple ISE instances that need access to other endpoints that have IP restrictions, deploy an [Azure Firewall](../firewall/overview.md) or a [network virtual appliance](../virtual-network/virtual-networks-overview.md#filter-network-traffic) into your virtual network and route outbound traffic through that firewall or network virtual appliance. You can then have all the ISE instances in your virtual network use a single, public, static, and predictable IP address to communicate with the destination systems that you want. That way, you don't have to set up additional firewall openings at your destination systems for each ISE. --This topic shows how to route outbound traffic through an Azure Firewall, but you can apply similar concepts to a network virtual appliance such as a third-party firewall from the Azure Marketplace. While this topic focuses on setup for multiple ISE instances, you can also use this approach for a single ISE when your scenario requires limiting the number of IP addresses that need access. Consider whether the additional costs for the firewall or virtual network appliance make sense for your scenario. Learn more about [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/). 
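In this design, the single outbound address that destination systems see is the firewall's public IP. As a sketch under the assumption that the firewall uses a standard static public IP resource, such an address might be declared as follows; the resource name and apiVersion are illustrative:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2023-04-01",
      "name": "firewall-outbound-ip",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard" },
      "properties": { "publicIPAllocationMethod": "Static" }
    }
  ]
}
```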
--## Prerequisites --* An Azure firewall that runs in the same virtual network as your ISE. If you don't have a firewall, first [add a subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) that's named `AzureFirewallSubnet` to your virtual network. You can then [create and deploy a firewall](../firewall/tutorial-firewall-deploy-portal.md#create-a-virtual-network) in your virtual network. --* An Azure [route table](../virtual-network/manage-route-table.yml). If you don't have one, first [create a route table](../virtual-network/manage-route-table.yml#create-a-route-table). For more information about routing, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). --## Set up route table --1. In the [Azure portal](https://portal.azure.com), select the route table, for example: -- ![Select route table with rule for directing outbound traffic](./media/connect-virtual-network-vnet-set-up-single-ip-address/select-route-table-for-virtual-network.png) --1. To [add a new route](../virtual-network/manage-route-table.yml#create-a-route), on the route table menu, select **Routes** > **Add**. -- ![Add route for directing outbound traffic](./media/connect-virtual-network-vnet-set-up-single-ip-address/add-route-to-route-table.png) --1. On the **Add route** pane, [set up the new route](../virtual-network/manage-route-table.yml#create-a-route) with a rule that specifies that all the outbound traffic to the destination system follows this behavior: -- * Uses the [**Virtual appliance**](../virtual-network/virtual-networks-udr-overview.md#user-defined) as the next hop type. -- * Goes to the private IP address for the firewall instance as the next hop address. -- To find this IP address, on your firewall menu, select **Overview**, and then find the address under **Private IP address**, for example: -- ![Find firewall private IP address](./media/connect-virtual-network-vnet-set-up-single-ip-address/find-firewall-private-ip-address.png) -- Here's an example that shows how such a rule might look: -- ![Set up rule for directing outbound traffic](./media/connect-virtual-network-vnet-set-up-single-ip-address/add-rule-to-route-table.png) -- | Property | Value | Description | - |-|-|-| - | **Route name** | <*unique-route-name*> | A unique name for the route in the route table | - | **Address prefix** | <*destination-address*> | The address prefix for your destination system where you want outbound traffic to go. Make sure that you use [Classless Inter-Domain Routing (CIDR) notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) for this address. In this example, this address prefix is for an SFTP server, which is described in the section, [Set up network rule](#set-up-network-rule). | - | **Next hop type** | **Virtual appliance** | The [hop type](../virtual-network/virtual-networks-udr-overview.md#next-hop-types-across-azure-tools) that's used by outbound traffic | - | **Next hop address** | <*firewall-private-IP-address*> | The private IP address for your firewall | --<a name="set-up-network-rule"></a> --## Set up network rule --1. In the Azure portal, find and select your firewall. On the firewall menu, under **Settings**, select **Rules**. On the rules pane, select **Network rule collection** > **Add network rule collection**. -- ![Add network rule collection to firewall](./media/connect-virtual-network-vnet-set-up-single-ip-address/add-network-rule-collection.png) --1. In the collection, add a rule that permits traffic to the destination system.
-- For example, suppose that you have a logic app that runs in an ISE and needs to communicate with an SFTP server. You create a network rule collection that's named `LogicApp_ISE_SFTP_Outbound`, which contains a network rule named `ISE_SFTP_Outbound`. This rule permits traffic from the IP address of any subnet where your ISE runs in your virtual network to the destination SFTP server by using your firewall's private IP address. -- ![Set up network rule for firewall](./media/connect-virtual-network-vnet-set-up-single-ip-address/set-up-network-rule-for-firewall.png) -- **Network rule collection properties** -- | Property | Value | Description | - |-|-|-| - | **Name** | <*network-rule-collection-name*> | The name for your network rule collection | - | **Priority** | <*priority-level*> | The order of priority to use for running the rule collection. For more information, see [What are some Azure Firewall concepts](../firewall/firewall-faq.yml#what-are-some-azure-firewall-concepts)? | - | **Action** | **Allow** | The action type to perform for this rule | -- **Network rule properties** -- | Property | Value | Description | - |-|-|-| - | **Name** | <*network-rule-name*> | The name for your network rule | - | **Protocol** | <*connection-protocols*> | The connection protocols to use. For example, if you're using NSG rules, select both **TCP** and **UDP**, not only **TCP**. | - | **Source addresses** | <*ISE-subnet-addresses*> | The subnet IP addresses where your ISE runs and where traffic from your logic app originates | - | **Destination addresses** | <*destination-IP-address*> | The IP address for your destination system where you want outbound traffic to go. In this example, this IP address is for the SFTP server. | - | **Destination ports** | <*destination-ports*> | Any ports that your destination system uses for inbound communication | - - For more information about network rules, see these articles: -- * [Configure a network rule](../firewall/tutorial-firewall-deploy-portal.md#configure-a-network-rule) - * [Azure Firewall rule processing logic](../firewall/rule-processing.md#network-rules-and-applications-rules) - * [Azure PowerShell: New-AzFirewallNetworkRule](/powershell/module/az.network/new-azfirewallnetworkrule) - * [Azure CLI: az network firewall network-rule](/cli/azure/network/firewall/network-rule#az-network-firewall-network-rule-create) --## Related content --* [Azure Firewall FAQ](../firewall/firewall-faq.yml) |
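The route that the preceding steps create in the portal can also be expressed in a template. A minimal sketch of the user-defined route follows; the route table name, address prefix, and firewall private IP are illustrative placeholders, and the network rule collection would still be added to the firewall as described above:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "firewallPrivateIp": { "type": "string", "defaultValue": "10.0.1.4" },
    "destinationPrefix": { "type": "string", "defaultValue": "203.0.113.10/32" }
  },
  "resources": [
    {
      "type": "Microsoft.Network/routeTables",
      "apiVersion": "2023-04-01",
      "name": "ise-outbound-routes",
      "location": "[resourceGroup().location]",
      "properties": {
        "routes": [
          {
            "name": "route-sftp-through-firewall",
            "properties": {
              "addressPrefix": "[parameters('destinationPrefix')]",
              "nextHopType": "VirtualAppliance",
              "nextHopIpAddress": "[parameters('firewallPrivateIp')]"
            }
          }
        ]
      }
    }
  ]
}
```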
logic-apps | Sap Create Example Scenario Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap-create-example-scenario-workflows.md | The following example workflow shows how to extract individual IDocs from a pack ## Filter received messages with SAP actions -If you use the SAP managed connector or ISE-versioned SAP connector, under the trigger in your workflow, set up a way to explicitly filter out any unwanted actions from your SAP server, based on the root node namespace in the received XML payload. You can provide a list (array) with a single or multiple SAP actions. By default, this array is empty, which means that your workflow receives all the messages from your SAP server without filtering. When you set up the array filter, the trigger receives messages only from the specified SAP action types and rejects all other messages from your SAP server. However, this filter doesn't affect whether the typing of the received payload is weak or strong. Any SAP action filtering happens at the level of the SAP Adapter for your on-premises data gateway. For more information, review [how to test sending IDocs to Azure Logic Apps from SAP](sap.md#test-sending-idocs-from-sap). +If you use the SAP managed connector, under the trigger in your workflow, set up a way to explicitly filter out any unwanted actions from your SAP server, based on the root node namespace in the received XML payload. You can provide a list (array) with one or more SAP actions. By default, this array is empty, which means that your workflow receives all the messages from your SAP server without filtering. When you set up the array filter, the trigger receives messages only from the specified SAP action types and rejects all other messages from your SAP server. However, this filter doesn't affect whether the typing of the received payload is weak or strong. Any SAP action filtering happens at the level of the SAP Adapter for your on-premises data gateway. For more information, review [how to test sending IDocs to Azure Logic Apps from SAP](sap.md#test-sending-idocs-from-sap). ## Set up asynchronous request-reply pattern for triggers Now, set up your workflow to return the results from your SAP server to the orig ### Create a remote function call (RFC) request-response pattern -For the Consumption workflows that use the SAP managed connector and ISE-versioned SAP connector, if you have to receive replies by using a remote function call (RFC) to Azure Logic Apps from SAP ABAP, you must implement a request and response pattern. To receive IDocs in your workflow when you use the [**Request** trigger](../../connectors/connectors-native-reqres.md), make sure that the workflow's first action is a [Response action](../../connectors/connectors-native-reqres.md#add-response) that uses the **200 OK** status code without any content. This recommended step immediately completes the SAP Logical Unit of Work (LUW) asynchronous transfer over tRFC, which leaves the SAP CPIC conversation available again. You can then add more actions to your workflow for processing the received IDoc without blocking later transfers. +For Consumption workflows that use the SAP managed connector, if you have to receive replies by using a remote function call (RFC) to Azure Logic Apps from SAP ABAP, you must implement a request and response pattern.
To receive IDocs in your workflow when you use the [**Request** trigger](../../connectors/connectors-native-reqres.md), make sure that the workflow's first action is a [Response action](../../connectors/connectors-native-reqres.md#add-response) that uses the **200 OK** status code without any content. This recommended step immediately completes the SAP Logical Unit of Work (LUW) asynchronous transfer over tRFC, which leaves the SAP CPIC conversation available again. You can then add more actions to your workflow for processing the received IDoc without blocking later transfers. > [!NOTE] > If you receive a **500 Bad Gateway** or **400 Bad Request** error with a message such as **service 'sapgw00' unknown**, review the following options: -* **Option 1:** In your API connection and trigger configuration, replace your gateway service name with its port number. In the example error, `sapgw00` needs to be replaced with a real port number, for example, `3300`. This is the only available option for ISE. +* **Option 1:** In your API connection and trigger configuration, replace your gateway service name with its port number. In the example error, `sapgw00` needs to be replaced with a real port number, for example, `3300`. * **Option 2:** If you're using the on-premises data gateway, you can add the gateway service name to the port mapping in `%windir%\System32\drivers\etc\services` and then restart the on-premises data gateway service, for example: ```text sapgw00 3300/tcp ``` -You might get a similar error when SAP Application server or Message server name resolves to the IP address. For ISE, you must specify the IP address for your SAP Application server or Message server. For the on-premises data gateway, you can instead add the name to the IP address mapping in `%windir%\System32\drivers\etc\hosts`, for example: +You might get a similar error when the SAP Application server or Message server name resolves to the IP address. For the on-premises data gateway, you can instead add the name to the IP address mapping in `%windir%\System32\drivers\etc\hosts`, for example: ```text 10.0.1.9 SAPDBSERVER01 # SAP System Server VPN IP by computer name ``` To have these segments released by SAP, contact the ABAP engineer for your SAP s ### The RequestContext on the IReplyChannel was closed without a reply being sent -For SAP managed connector and ISE-versioned SAP connector, this error message means unexpected failures happen when the catch-all handler for the channel terminates the channel due to an error, and rebuilds the channel to process other messages. +For the SAP managed connector, this error message means that unexpected failures happen when the catch-all handler for the channel terminates the channel due to an error, and rebuilds the channel to process other messages. > [!NOTE] >-> The SAP managed trigger and ISE-versioned SAP triggers are webhooks that use the SOAP-based SAP adapter. However, the SAP built-in trigger is an Azure Functions-based trigger that doesn't use a SOAP SAP adapter and doesn't get this error message. +> The SAP managed trigger is a webhook trigger that uses the SOAP-based SAP adapter. However, the SAP built-in trigger is an Azure Functions-based trigger that doesn't use a SOAP SAP adapter and doesn't get this error message. - To acknowledge that your workflow received the IDoc, [add a Response action](../../connectors/connectors-native-reqres.md#add-a-response-action) that returns a **200 OK** status code. Leave the body empty and don't change or add to the headers.
The IDoc is transported through tRFC, which doesn't allow for a response payload. |
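A minimal workflow definition sketch of this request-response pattern follows, with a Request trigger whose first action immediately returns an empty-body **200 OK** before any IDoc processing; the trigger and action names are illustrative:

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "manual": {
      "type": "Request",
      "kind": "Http",
      "inputs": { "schema": {} }
    }
  },
  "actions": {
    "Acknowledge_IDoc": {
      "type": "Response",
      "kind": "Http",
      "runAfter": {},
      "inputs": { "statusCode": 200 }
    }
  },
  "outputs": {}
}
```

Any further IDoc-processing actions would run after the response, so the tRFC transfer completes without waiting on them.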
logic-apps | Sap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap.md | To use the SAP connector operations, you have to first authenticate your connect * The SAP connector supports authentication with [SAP Secure Network Communications (SNC)](https://help.sap.com/docs/r/e73bba71770e4c0ca5fb2a3c17e8e229/7.31.25/en-US/e656f466e99a11d1a5b00000e835363f.html). -You can use SNC for SAP NetWeaver single sign-on (SSO) or for security capabilities from external products. If you choose to use SNC, review the [SNC prerequisites](#snc-prerequisites) and the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise). +You can use SNC for SAP NetWeaver single sign-on (SSO) or for security capabilities from external products. If you choose to use SNC, review the [SNC prerequisites](#snc-prerequisites). ## Connector technical reference The SAP connector has different versions, based on [logic app type and host envi | Logic app | Environment | Connector version | |--|-|-|-| **Consumption** | Multitenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../../connectors/managed.md) | -| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label, and the ISE-native version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. <br><br>**Note**: Make sure to use the ISE-native version, not the managed version. <br><br>For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [ISE message limits](../logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../../connectors/managed.md) | +| **Consumption** | Multitenant Azure Logic Apps | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../../connectors/managed.md) | | **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. 
For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../../connectors/built-in.md) | ## Connector differences -The SAP built-in connector significantly differs from the SAP managed connector and SAP ISE-versioned connector in the following ways: +The SAP built-in connector significantly differs from the SAP managed connector in the following ways: * On-premises connections don't require the on-premises data gateway. The SAP built-in connector significantly differs from the SAP managed connector The **Call BAPI** action includes up to two responses with the returned JSON: the XML response from the called BAPI and, if you use auto-commit, the BAPI commit or BAPI rollback response as well. This capability addresses the problem with the SAP managed connector where the outcome from the auto-commit is silent and observable only through logs. -* Longer timeout at 5 minutes compared to managed connector and ISE-versioned connector. +* Longer timeout at 5 minutes compared to the managed connector. - The SAP built-in connector doesn't use the shared or global connector infrastructure, which means timeouts are longer at 5 minutes compared to the SAP managed connector (two minutes) and the SAP ISE-versioned connector (four minutes). Long-running requests work without you having to implement the long-running webhook-based request action pattern. + The SAP built-in connector doesn't use the shared or global connector infrastructure, which means timeouts are longer at 5 minutes compared to the SAP managed connector (two minutes). Long-running requests work without you having to implement the long-running webhook-based request action pattern. * By default, the SAP built-in connector operations are *stateless*. However, you can [enable stateful mode (affinity) for these operations](../../connectors/enable-stateful-affinity-built-in-connectors.md). If you specify an IP address to connect to an SAP Message Server, for example, a For this problem, the following workarounds or solutions exist: -- Make sure that the client making the connection, such as the computer with the on-premises data gateway for the SAP connector or the ISE connector host for the ISE-based SAP connector, can resolve the hostnames returned by the message server.+- Make sure that the client making the connection, such as the computer with the on-premises data gateway for the SAP connector, can resolve the hostnames returned by the message server. - In the transaction named **RZ11**, change or add the SAP setting named **ms/lg_with_hostname=0**. SAP upgraded their .NET connector (NCo) to version 3.1, which changed the way th * For a Standard workflow in single-tenant Azure Logic Apps, see [Single-tenant prerequisites](#single-tenant-prerequisites).
To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../../connectors/enable-stateful-affinity-built-in-connectors.md). * To use either the SAP managed or built-in connector trigger named **When a message is received**, complete the following tasks: SAP upgraded their .NET connector (NCo) to version 3.1, which changed the way th This requirement is necessary because the flat file IDoc data record that's sent by SAP on the tRFC call `IDOC_INBOUND_ASYNCHRONOUS` isn't padded to the full SDATA field length. Azure Logic Apps provides the flat file IDoc original data without padding as received from SAP. Also, when you combine this SAP trigger with the **Flat File Decode** action, the schema that's provided to the action must match. - > [!NOTE] - > - > In Consumption and Standard workflows, the SAP managed trigger named **When a message is received** - > uses the same URI location to both renew and unsubscribe from a webhook subscription. The renewal operation - > uses the HTTP `PATCH` method, while the unsubscribe operation uses the HTTP `DELETE` method. This behavior - > might make a renewal operation appear as an unsubscribe operation in your trigger's history, but the operation - > is still a renewal because the trigger uses `PATCH` as the HTTP method, not `DELETE`. - > - > In Standard workflows, the SAP built-in trigger named **When a message is received** uses the Azure - > Functions trigger instead, and shows only the actual callbacks from SAP. + * In Consumption and Standard workflows, the SAP managed trigger named **When a message is received** uses the same URI location to both renew and unsubscribe from a webhook subscription. The renewal operation uses the HTTP `PATCH` method, while the unsubscribe operation uses the HTTP `DELETE` method. This behavior might make a renewal operation appear as an unsubscribe operation in your trigger's history, but the operation is still a renewal because the trigger uses `PATCH` as the HTTP method, not `DELETE`. ++ In Standard workflows, the SAP built-in trigger named **When a message is received** uses the Azure Functions trigger instead, and shows only the actual callbacks from SAP. * For the SAP built-in connector trigger named **When a message is received**, you have to enable virtual network integration and private ports by following the article at [Enabling Service Bus and SAP built-in connectors for stateful Logic Apps in Standard](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/enabling-service-bus-and-sap-built-in-connectors-for-stateful/ba-p/3820381). You can also run the workflow in Visual Studio Code to fire the trigger locally. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](../create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code: The SAP system requires network connectivity from the host of the SAP .NET Conne * For Standard logic app workflows in single-tenant Azure Logic Apps, the logic app resource hosts the SAP .NET Connector (NCo) library. So, the logic app resource itself must enable virtual network integration, and that virtual network must have network connectivity to the SAP system. -* For Consumption logic app workflows in an ISE, the ISE virtual network hosts the SAP .NET Connector (NCo) library. 
- The SAP system-required network connectivity includes the following servers and * SAP Application Server, Dispatcher service (for all Logon types) To use the SAP connector, you have to install the SAP Connector NCo client libra 1. Make sure to install the version of the [SAP Connector (NCo 3.1) for Microsoft .NET 3.1.3.0 compiled with .NET Framework 4.6.2](https://support.sap.com/en/product/connectors/msnet.html) that matches your platform configuration. -* From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows. Or, optionally, if you're using only the SAP managed connector, when you install the SAP NCo client library, select **Global Assembly Cache registration**. The ISE zip archive and SAP built-in connector currently doesn't support GAC registration. +* From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows. Or, optionally, if you're using only the SAP managed connector, when you install the SAP NCo client library, select **Global Assembly Cache registration**. The SAP built-in connector currently doesn't support GAC registration. * For a Consumption workflow that runs in multitenant Azure Logic Apps and uses your on-premises data gateway, copy the following assembly (.dll) files to the on-premises data gateway installation folder, for example, **C:\Program Files\On-Premises Data Gateway**. The SAP NCo 3.0 client library contains the following assemblies: To use the SAP connector, you have to install the SAP Connector NCo client libra - **sapnco.dll** - **sapnco_utils.dll** - * For a Consumption workflow in an ISE, follow the [ISE prerequisites](#ise-prerequisites) instead. - The following relationships exist between the SAP NCo client library, the .NET Framework, the .NET runtime, and the data gateway: * The Microsoft SAP Adapter and the gateway host service both use .NET Framework 4.7.2. To download the current **CommonCryptoLib** package, follow these steps: - **sapgenpse.exe** - **slcryptokernal.dll** -### [ISE](#tab/ise) --<a name="snc-prerequisites-ise"></a> --The ISE-versioned SAP connector supports SNC X.509, not single sign-on (SSO) authentication. If you previously used the SAP connector without SNC, you can enable SNC for ISE-native SAP connections. --Before you redeploy an SAP connector to use SNC, or if you deployed the SAP connector without the SNC or SAPGENPSE libraries, you must delete all previously existing SAP connections and then the SAP connector. Multiple logic app workflows can use the same SAP connection. So, make sure that you delete any previously existing SAP connections from all logic app workflows in your ISE. --After you delete the SAP connections, you must delete the SAP connector from your ISE. You can still keep the logic app workflows that use this connector. After you redeploy the connector, you can then authenticate the new connection in your workflows' SAP operations. --1. To delete existing SAP connections, follow either path: -- * In the [Azure portal](https://portal.azure.com), open each logic app resource and workflow to delete the SAP connections. -- 1. Open your logic app workflow in the designer. -- 1. On your logic app menu, under **Development Tools**, select **API connections**. -- 1. On the **API connections** page, select your SAP connection. -- 1. On the connection's page menu, select **Delete**. -- 1. Accept the confirmation prompt to delete the connection. -- 1. 
Wait for the portal notification that the connection has been deleted. -- * In the [Azure portal](https://portal.azure.com), open your ISE resource to delete the SAP connections. -- 1. On your ISE menu, under **Settings**, select **API connections**. -- 1. On the **API connections** page, select your SAP connection. -- 1. On the connection's page menu, select **Delete**. -- 1. Accept the confirmation prompt to delete the connection. -- 1. Wait for the portal notification that the connection has been deleted. --1. Delete the SAP connector from your ISE. You must delete all connections to this connector in all your logic app workflows before you can delete the connector. If you haven't already deleted all connections, review the previous steps. -- 1. In the [Azure portal](https://portal.azure.com), open your ISE resource. -- 1. On your ISE menu, under **Settings**, select **Managed connectors**. -- 1. On the **Managed connectors** page, select the checkbox for the SAP connector. -- 1. On the toolbar, select **Delete**. -- 1. Accept the confirmation prompt to delete the connector. -- 1. Wait for the portal notification that the connector has been deleted. --1. Deploy or redeploy the SAP connector in your ISE. -- 1. Prepare a new zip archive file to use in your SAP connector deployment. You must include the SNC library and the SAPGENPSE utility. -- * You must use the 64-bit SNC library. There's no 32-bit support. -- * Your SNC library and dependencies must be compatible with your SAP environment. For how to check compatibility, the [ISE prerequisites](#ise-prerequisites). -- 1. Copy all SNC, SAPGENPSE, and NCo libraries to the root folder of your zip archive. Don't put these binaries in a subfolder. -- 1. For your new zip archive, follow the deployment steps in [ISE prerequisites](#ise-prerequisites). --1. For each workflow that uses the ISE-native SAP connector, [create a new SAP connection that enables SNC](#enable-secure-network-communications). --#### Certificate rotation --1. For all connections that use SAP ISE X.509 in your ISE, update the base64-encoded binary PSE. --1. In your copy of the PSE, import the new certificates. --1. Encode the PSE file as a base64-encoded binary. --1. Edit your SAP connection information, and save the new PSE file there. -- The connector detects the PSE change and updates its own copy during the next connection request. - ### Azure Logic Apps environment prerequisites For a Standard workflow in single-tenant Azure Logic Apps, use the SAP *built-in 1. In the **net472** folder, upload the assembly files larger than 4 MB. -### [ISE](#tab/ise) --<a name="ise-prerequisites"></a> --For a Consumption workflow in an ISE, the ISE provides access to resources that are protected by an Azure virtual network and offers other ISE-native connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway. --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. -> -> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing -> before this date are supported through August 31, 2024. 
For more information, see the following resources: -> -> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) -> - [Single-tenant versus multitenant and integration service environment for Azure Logic Apps](../single-tenant-overview-compare.md) -> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) -> - [Export ISE workflows to a Standard logic app](../export-from-ise-to-standard-logic-app.md) -> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) -> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) --1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../../storage/blobs/quickstart-storage-explorer.md). --1. On your local computer, [download and install the latest SAP NCo client library](#sap-client-library-prerequisites). You should have the following assembly (.dll) files: -- * **libicudecnumber.dll** - * **rscp4n.dll** - * **sapnco.dll** - * **sapnco_utils.dll** --1. From the root folder, create a .zip file that includes these assembly files. Upload the package to your blob container in Azure Storage. -- > [!NOTE] - > - > Don't use a subfolder inside the .zip file. Only assemblies in the archive's root folder - > are deployed with the SAP connector in your ISE. - > - > If you use SNC, also include the SNC assemblies and binaries in the same .zip file at the root. - > For more information, review the [SNC prerequisites for ISE](#snc-prerequisites-ise). --1. In either the Azure portal or Azure Storage Explorer, browse to the container location where you uploaded the .zip file. --1. Copy the URL for the container location. Make sure to include the Shared Access Signature (SAS) token, so the SAS token is authorized. Otherwise, deployment for the SAP ISE connector fails. --1. In your ISE, install and deploy the SAP connector. For more information, review [Add ISE connectors](../add-artifacts-integration-service-environment-ise.md#add-ise-connectors-environment). -- 1. In the [Azure portal](https://portal.azure.com), find and open your ISE. -- 1. On the ISE menu, select **Managed connectors** > **Add**. From the connectors list, find and select **SAP**. -- 1. On the **Add a new managed connector** pane, in the **SAP package** box, paste the URL for the .zip file that has the SAP assemblies. Again, make sure to include the SAS token. -- 1. Select **Create** to finish creating your ISE connector. --1. If your SAP instance and ISE are in different virtual networks, you also need to [peer those networks](../../virtual-network/tutorial-connect-virtual-networks-portal.md) so they're connected. Review the [SNC prerequisites for ISE](#snc-prerequisites-ise). --1. Get the IP addresses for the SAP Application, Message, and Gateway servers that you plan to use for connecting from your workflow. Network name resolution isn't available for SAP connections in an ISE. --1. Get the port numbers for the SAP Application, Message, and Gateway services that you plan to use for connecting from your workflow. 
Service name resolution isn't available for SAP connections in an ISE. - <a name="enable-secure-network-communications"></a> For a Standard workflow that runs in single-tenant Azure Logic Apps, you can ena | Name | Value | Description | ||-|-|- | **SAP_PSE** | <*PSE-value*> | Enter your SNC Personal Security Environment (PSE) as a base64-encoded binary. <br><br>- Your PSE must contain the private key for the client certificate where the thumbprint matches the public key for the client certificate in the **SNC Certificate** parameter. <br><br>- Although your PSE might contain multiple client certificates, to use different client certificates, create separate workflows instead. <br><br>- The PSE must have no PIN. If necessary, set the PIN to empty using the SAPGENPSE utility. <br><br>- If you're using more than one SNC client certificate for your ISE, you must provide the same PSE for all connections. Your PSE must contain the matching private key for the client certificate for each and all the connections. You must set the **SNC Certificate** to match the specific private certificate for each connection. | + | **SAP_PSE** | <*PSE-value*> | Enter your SNC Personal Security Environment (PSE) as a base64-encoded binary. <br><br>- Your PSE must contain the private key for the client certificate where the thumbprint matches the public key for the client certificate in the **SNC Certificate** parameter. <br><br>- Although your PSE might contain multiple client certificates, to use different client certificates, create separate workflows instead. <br><br>- The PSE must have no PIN. If necessary, set the PIN to empty using the SAPGENPSE utility. | | **SAP_PSE_Password** | <*PSE-password*> | The password, also known as PIN, for your PSE | 1. Now, either create or open the workflow you want to use in the designer. On your logic app resource menu, under **Workflows**, select **Workflows**. For a Standard workflow that runs in single-tenant Azure Logic Apps, you can ena 1. To finish creating your connection, select **Create**. -### [ISE](#tab/ise) --For a Consumption workflow that runs in an ISE, you can enable SNC for authentication. Before you start, make sure that you met all the necessary [prerequisites](#prerequisites) and [SNC prerequisites for ISE](#snc-prerequisites-ise). --1. In the [Azure portal](https://portal.azure.com), open your ISE resource and logic app workflow in the designer. --1. Add or edit an *ISE-versioned* SAP connector operation. Make sure that the SAP connector operation displays the **ISE** label. --1. In the SAP connection information box, provide the following [required information](/connectors/sap/#default-connection). The **Authentication Type** that you select changes the available options. -- ![Screenshot showing SAP connection settings for ISE.](./media/sap/sap-connection-ise.png) -- > [!NOTE] - > - > The **SAP Username** and **SAP Password** fields are optional. If you don't provide a username - > and password, the connector uses the client certificate provided in a later step for authentication. --1. To enable SNC, in the SAP connection information box, provide the following required information instead: -- ![Screenshot showing SAP connection settings with SNC enabled for ISE.](./media/sap/sap-connection-snc-ise.png) -- | Parameter | Description | - |--|-| - | **Use SNC** | Select the checkbox. | - | **SNC Library** | Enter the name for your SNC library, for example, **sapcrypto.dll**. 
| - | **SNC Partner Name** | Enter the name for the backend SNC, for example, **p:CN=DV3, OU=LA, O=MS, C=US**. | - | **SNC My Name** and **SNC Quality of Protection** | Optional: Enter these values, as necessary. | - | **SNC Certificate** | Enter your SNC client's public certificate in base64-encoded format with the following guidance: <br><br>- Don't include the PEM header or footer. <br><br>- Don't enter the private certificate here because the Personal Security Environment (PSE) might contain multiple private certificates. However, this **SNC Certificate** parameter identifies the certificates that this connection must use. For more information, review the next parameter. | - | **PSE** (Personal Security Environment) | Enter your SNC PSE as a base64-encoded binary with the following guidance: <br><br>- The PSE must contain the private client certificate where the thumbprint matches the public client certificate that you provided in the previous step. <br><br>- Although the PSE might contain multiple client certificates. to use different client certificates, create separate workflow apps instead. <br><br>- The PSE must have no PIN. If necessary, set the PIN to empty using the SAPGENPSE utility. <br><br>- If you're using more than one SNC client certificate for your ISE, you must provide the same PSE for all connections. The PSE must contain the client private certificate for each and all the connections. You must set the client public certificate parameter to match the specific private certificate for each connection used in your ISE. | --1. To finish creating your connection, select **Create**. -- If the parameters are correct, the connection is created. If there's a problem with the parameters, the connection creation dialog displays an error message. To troubleshoot connection parameter issues, you can use an on-premises data gateway and the gateway's local logs. - ### Convert a binary PSE file into base64-encoded format Based on whether you have a Consumption workflow in multitenant Azure Logic Apps 1. Open the most recent run, which shows a manual run. Find and review the trigger outputs section. -### [ISE](#tab/ise) --See the steps for [SAP logging for Consumption logic apps in multitenant workflows](?tabs=consumption#test-workflow-logging). - ## Enable SAP client library (NCo) logging and tracing (built-in connector only) |
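For the Standard workflow SNC configuration described above, the **SAP_PSE** and **SAP_PSE_Password** settings are plain app settings on the logic app resource. During local development they might appear in **local.settings.json** as follows; this is a sketch with placeholder values, not a file from the article:

```json
{
  "IsEncrypted": false,
  "Values": {
    "SAP_PSE": "<base64-encoded-PSE-binary>",
    "SAP_PSE_Password": "<PSE-password-if-any>"
  }
}
```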
logic-apps | Create Integration Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/enterprise-integration/create-integration-account.md | For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az- | **Subscription** | Yes | <*Azure-subscription-name*> | The name for your Azure subscription | | **Resource group** | Yes | <*Azure-resource-group-name*> | The name for the [Azure resource group](../../azure-resource-manager/management/overview.md) to use for organizing related resources. For this example, create a new resource group named **FabrikamIntegration-RG**. | | **Integration account name** | Yes | <*integration-account-name*> | Your integration account's name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`()`), and periods (`.`). This example uses **Fabrikam-Integration**. |- | **Pricing Tier** | Yes | <*pricing-level*> | The pricing tier for the integration account, which you can change later. For this example, select **Free**. For more information, review the following documentation: <br><br>- [Logic Apps pricing model](../logic-apps-pricing.md#integration-accounts) <br>- [Logic Apps limits and configuration](../logic-apps-limits-and-config.md#integration-account-limits) <br>- [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) | + | **Pricing Tier** | Yes | <*pricing-level*> | The pricing tier for the integration account, which you can change later. For this example, select **Free**. For more information, see the following documentation: <br><br>- [Logic Apps pricing model](../logic-apps-pricing.md#integration-accounts) <br>- [Logic Apps limits and configuration](../logic-apps-limits-and-config.md#integration-account-limits) <br>- [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) | | **Storage account** | Available only for the Premium (preview) integration account | None | The name for an existing [Azure storage account](../../storage/common/storage-account-create.md). For the example in this guide, this option doesn't apply. |- | **Region** | Yes | <*Azure-region*> | The Azure region where to store your integration account metadata. Either select the same location as your logic app resource, or create your logic apps in the same location as your integration account. For this example, use **West US**. <br><br>To use your integration account with an [integration service environment (ISE)](../connect-virtual-network-vnet-isolated-environment-overview.md), select **Associate with integration service environment**, and then select your ISE as the location. To create an integration account from inside an ISE, see [Create integration accounts from inside an ISE](../add-artifacts-integration-service-environment-ise.md#create-integration-account-environment). <br><br>**Note**: The ISE resource will retire on August 31, 2024, due to its dependency on Azure Cloud Services (classic), which retires at the same time. Currently in preview, the capability is available for you to [export a Standard integration account for an ISE to a Premium integration account](../ise-manage-integration-service-environment.md#export-integration-account). | + | **Region** | Yes | <*Azure-region*> | The Azure region where you want to store your integration account metadata. Either select the same location as your logic app resource, or create your logic apps in the same location as your integration account. For this example, use **West US**. 
| | **Enable log analytics** | No | Unselected | For this example, don't select this option. | 1. When you're done, select **Review + create**. |
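As an alternative to the portal steps in this entry, you can create the same integration account from the command line. A minimal sketch, assuming the Azure CLI `logic` extension is available (an assumption; check your CLI version), using the example values from the table:

```bash
# Add the Logic Apps CLI extension (skip if already installed).
az extension add --name logic

# Create the example integration account on the Free tier in West US.
az logic integration-account create \
  --resource-group FabrikamIntegration-RG \
  --name Fabrikam-Integration \
  --location westus \
  --sku Free
```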
logic-apps | Export From Consumption To Standard Logic App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-consumption-to-standard-logic-app.md | You can now export Consumption logic apps to a Standard logic app. Using Visual > exported workflows to your satisfaction with the destination environment. You choose when > to disable or delete your source logic apps. -This article provides information about the export process and shows how to export your logic app workflows from an ISE to a local Standard logic app project in Visual Studio Code. +This article provides information about the export process and shows how to export your workflows from a Consumption logic app to a local Standard logic app project in Visual Studio Code. ## Known issues and limitations |
logic-apps | Ise Manage Integration Service Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/ise-manage-integration-service-environment.md | - Title: Manage integration service environments in Azure Logic Apps -description: Check network health and manage logic apps, connections, custom connectors, and integration accounts in your integration service environment (ISE) for Azure Logic Apps. --- Previously updated : 09/27/2023---# Manage your integration service environment (ISE) in Azure Logic Apps --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. -> -> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing -> before this date are supported through August 31, 2024. For more information, see the following resources: -> -> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) -> - [Single-tenant versus multitenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) -> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) -> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) -> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) -> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) --This article shows how to perform management tasks for your [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), for example: --* Find and view your ISE. --* Enable access for your ISE. --* Check your ISE's network health. --* Manage resources such as multitenant-based logic apps, connections, integration accounts, and connectors in your ISE. --* To add capacity, restart your ISE, or delete your ISE, follow the steps in this topic. To add artifacts to your ISE, see [Add artifacts to your integration service environment](../logic-apps/add-artifacts-integration-service-environment-ise.md). --## View your ISE --1. In the [Azure portal](https://portal.azure.com) search box, enter **integration service environments**, and select **Integration Service Environments**. -- ![Find integration service environments](./media/ise-manage-integration-service-environment/find-integration-service-environment.png) --1. From the results list, select your integration service environment. -- ![Select integration service environment](./media/ise-manage-integration-service-environment/select-integration-service-environment.png) --1. Continue to the next sections to find logic apps, connections, connectors, or integration accounts in your ISE. --## Enable access for your ISE --When you use an ISE with an Azure virtual network, a common setup problem is having one or more blocked ports. 
The connectors that you use for creating connections between your ISE and destination systems might also have their own port requirements. For example, if you communicate with an FTP system by using the FTP connector, the port that you use on your FTP system needs to be available, for example, port 21 for sending commands. --To make sure that your ISE is accessible and that the logic apps in that ISE can communicate across each subnet in your virtual network, [open the ports described in this table for each subnet](#network-ports-for-ise). If any required ports are unavailable, your ISE won't work correctly. --* If you have multiple ISE instances that need access to other endpoints that have IP restrictions, deploy an [Azure Firewall](../firewall/overview.md) or a [network virtual appliance](../virtual-network/virtual-networks-overview.md#filter-network-traffic) into your virtual network and route outbound traffic through that firewall or network virtual appliance. You can then [set up a single, outbound, public, static, and predictable IP address](connect-virtual-network-vnet-set-up-single-ip-address.md) that all the ISE instances in your virtual network can use to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE. -- > [!NOTE] - > You can use this approach for a single ISE when your scenario requires limiting the - > number of IP addresses that need access. Consider whether the extra costs for - > the firewall or virtual network appliance make sense for your scenario. Learn more about - > [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/). --* If you created a new Azure virtual network and subnets without any constraints, you don't need to set up [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#network-security-groups) in your virtual network to control traffic across subnets. --* For an existing virtual network, you can *optionally* set up [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#network-security-groups) to [filter network traffic across subnets](../virtual-network/tutorial-filter-network-traffic.md). If you want to go this route, or if you're already using NSGs, make sure that you [open the ports described in this table](#network-ports-for-ise) for those NSGs. -- When you set up [NSG security rules](../virtual-network/network-security-groups-overview.md#security-rules), you need to use *both* the **TCP** and **UDP** protocols, or you can select **Any** instead so you don't have to create separate rules for each protocol. NSG security rules describe the ports that you must open for the IP addresses that need access to those ports. Make sure that any firewalls, routers, or other items that exist between these endpoints also keep those ports accessible to those IP addresses. --* For an ISE that has *external* endpoint access, you must create a network security group (NSG), if you don't have one already. You need to add an inbound security rule to the NSG to allow traffic from managed connector outbound IP addresses. To set up this rule, follow these steps: -- 1. On your ISE menu, under **Settings**, select **Properties**. -- 1. Under **Connector outgoing IP addresses**, copy the public IP address ranges, which also appear in this article, [Limits and configuration - Outbound IP addresses](logic-apps-limits-and-config.md#outbound). -- 1. 
Create a network security group, if you don't have one already. - - 1. Based on the following information, add an inbound security rule for the public outbound IP addresses that you copied. For more information, review [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md#create-a-network-security-group). -- | Purpose | Source service tag or IP addresses | Source ports | Destination service tag or IP addresses | Destination ports | Notes | - |||--|--|-|-| - | Permit traffic from connector outbound IP addresses | <*connector-public-outbound-IP-addresses*> | * | Address space for the virtual network with ISE subnets | * | | --* If you set up forced tunneling through your firewall to redirect Internet-bound traffic, review the [forced tunneling requirements](#forced-tunneling). --<a name="network-ports-for-ise"></a> --### Network ports used by your ISE --This table describes the ports that your ISE requires to be accessible and the purpose for those ports. To help reduce complexity when you set up security rules, the table uses [service tags](../virtual-network/service-tags-overview.md) that represent groups of IP address prefixes for a specific Azure service. Where noted, *internal ISE* and *external ISE* refer to the access endpoint that's selected during ISE creation. For more information, see [Endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access). --> [!IMPORTANT] -> -> For all rules, make sure that you set source ports to `*` because source ports are ephemeral. --#### Inbound security rules --| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes | -|--|-||--||-| -| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network. | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. | -| * | 443 | Internal ISE: <br>**VirtualNetwork** <br><br>External ISE: **Internet** or see **Notes** | **VirtualNetwork** | - Communication to your logic app <br><br>- Runs history for your logic app | Rather than use the **Internet** service tag, you can specify the source IP address for these items: <br><br>- The computer or service that calls any request triggers or webhooks in your logic app <br><br>- The computer or service from where you want to access logic app runs history <br><br>**Important**: Closing or blocking this port prevents calls to logic apps that have request triggers or webhooks. You're also prevented from accessing inputs and outputs for each step in runs history. However, you're not prevented from accessing logic app runs history. | -| * | 454 | **LogicAppsManagement** |**VirtualNetwork** | Azure Logic Apps designer - dynamic properties| Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. 
| -| * | 454 | **LogicApps** | **VirtualNetwork** | Network health check | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#outbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. | -| * | 454 | **AzureConnectors** | **VirtualNetwork** | Connector deployment | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <br><br>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. | -| * | 454, 455 | **AppServiceManagement** | **VirtualNetwork** | App Service Management dependency || -| * | Internal ISE: 454 <br><br>External ISE: 443 | **AzureTrafficManager** | **VirtualNetwork** | Communication from Azure Traffic Manager || -| * | 3443 | **APIManagement** | **VirtualNetwork** | Connector policy deployment <br><br>API Management - management endpoint | For connector policy deployment, port access is required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. | -| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). | --#### Outbound security rules --| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes | -|--|-||--||-| -| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. | -| * | 443, 80 | **VirtualNetwork** | Internet | Communication from your logic app | This rule is required for Secure Socket Layer (SSL) certificate verification. This check is for various internal and external sites, which is the reason that the Internet is required as the destination. | -| * | Varies based on destination | **VirtualNetwork** | Varies based on destination | Communication from your logic app | Destination ports vary based on the endpoints for the external services with which your logic app needs to communicate. <br><br>For example, the destination port is port 25 for an SMTP service, port 22 for an SFTP service, and so on. 
| -| * | 80, 443 | **VirtualNetwork** | **AzureActiveDirectory** | Microsoft Entra ID || -| * | 80, 443, 445 | **VirtualNetwork** | **Storage** | Azure Storage dependency || -| * | 443 | **VirtualNetwork** | **AppService** | Connection management || -| * | 443 | **VirtualNetwork** | **AzureMonitor** | Publish diagnostic logs & metrics || -| * | 1433 | **VirtualNetwork** | **SQL** | Azure SQL dependency || -| * | 1886 | **VirtualNetwork** | **AzureMonitor** | Azure Resource Health | Required for publishing health status to Resource Health. | -| * | 5672 | **VirtualNetwork** | **EventHub** | Dependency from Log to Event Hubs policy and monitoring agent || -| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). | -| * | 53 | **VirtualNetwork** | IP addresses for any custom Domain Name System (DNS) servers on your virtual network | DNS name resolution | Required only when you use custom DNS servers on your virtual network | --In addition, you need to add outbound rules for [App Service Environment (ASE)](../app-service/environment/intro.md): --* If you use Azure Firewall, you need to set up your firewall with the App Service Environment (ASE) [fully qualified domain name (FQDN) tag](../firewall/fqdn-tags.md#current-fqdn-tags), which permits outbound access to ASE platform traffic. --* If you use a firewall appliance other than Azure Firewall, you need to set up your firewall with *all* the rules listed in the [firewall integration dependencies](../app-service/environment/firewall-integration.md#dependencies) that are required for App Service Environment. --<a name="forced-tunneling"></a> --#### Forced tunneling requirements --If you set up or use [forced tunneling](../firewall/forced-tunneling.md) through your firewall, you have to permit extra external dependencies for your ISE. Forced tunneling lets you redirect Internet-bound traffic to a designated next hop, such as your virtual private network (VPN) or to a virtual appliance, rather than to the Internet so that you can inspect and audit outbound network traffic. --If you don't permit access for these dependencies, your ISE deployment fails and your deployed ISE stops working. --* User-defined routes -- To prevent asymmetric routing, you must define a route for each and every IP address that's listed below with **Internet** as the next hop. -- * [Azure Logic Apps inbound and outbound addresses for the ISE region](logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags) - * [Azure IP addresses for connectors in the ISE region, available in this download file](https://www.microsoft.com/download/details.aspx?id=56519) - * [App Service Environment management addresses](../app-service/environment/management-addresses.md) - * [Azure Traffic Manager management addresses](/azure/traffic-manager/traffic-manager-faqs#what-are-the-ip-addresses-from-which-the-health-checks-originate) - * [Azure API Management Control Plane IP addresses](../api-management/virtual-network-reference.md#control-plane-ip-addresses) --* Service endpoints -- You need to enable service endpoints for Azure SQL, Storage, Service Bus, KeyVault, and Event Hubs because you can't send traffic through a firewall to these services. 
--* Other inbound and outbound dependencies -- Your firewall *must* allow the following inbound and outbound dependencies: -- * [Azure App Service Dependencies](../app-service/environment/firewall-integration.md#deploying-your-ase-behind-a-firewall) - * [Azure Cache Service Dependencies](../azure-cache-for-redis/cache-how-to-premium-vnet.md#what-are-some-common-misconfiguration-issues-with-azure-cache-for-redis-and-virtual-networks) - * [Azure API Management Dependencies](../api-management/virtual-network-reference.md) --<a name="check-network-health"></a> --## Check network health --On your ISE menu, under **Settings**, select **Network health**. This pane shows the health status for your subnets and outbound dependencies on other services. --![Check network health](./media/ise-manage-integration-service-environment/ise-check-network-health.png) --> [!CAUTION] -> If your ISE's network becomes unhealthy, the internal App Service Environment (ASE) that's used by your ISE can also become unhealthy. -> If the ASE is unhealthy for more than seven days, the ASE is suspended. To resolve this state, check your virtual network setup. -> Resolve any problems that you find, and then restart your ISE. Otherwise, after 90 days, the suspended ASE is deleted, and your -> ISE becomes unusable. So, make sure that you keep your ISE healthy to permit the necessary traffic. -> -> For more information, see these topics: -> -> * [Azure App Service diagnostics overview](../app-service/overview-diagnostics.md) -> * [Message logging for Azure App Service Environment](../app-service/environment/using-an-ase.md#logging) --<a name="find-logic-apps"></a> --## Manage your logic apps --You can view and manage the logic apps that are in your ISE. --1. On your ISE menu, under **Settings**, select **Logic apps**. -- ![View logic apps](./media/ise-manage-integration-service-environment/ise-find-logic-apps.png) --1. To remove logic apps that you no longer need in your ISE, select those logic apps, and then select **Delete**. To confirm that you want to delete, select **Yes**. --> [!NOTE] -> If you delete and recreate a child logic app, you must resave the parent logic app. The recreated child app will have different metadata. -> If you don't resave the parent logic app after recreating its child, your calls to the child logic app will fail with an error of "unauthorized". -> This behavior applies to parent-child logic apps, for example, those that use artifacts in integration accounts or call Azure functions. --<a name="find-api-connections"></a> --## Manage API connections --You can view and manage the connections that were created by the logic apps running in your ISE. --1. On your ISE menu, under **Settings**, select **API connections**. -- ![View API connections](./media/ise-manage-integration-service-environment/ise-find-api-connections.png) --1. To remove connections that you no longer need in your ISE, select those connections, and then select **Delete**. To confirm that you want to delete, select **Yes**. --<a name="manage-api-connectors"></a> --## Manage ISE connectors --You can view and manage the API connectors that are deployed to your ISE. --1. On your ISE menu, under **Settings**, select **Managed connectors**. -- ![View managed connectors](./media/ise-manage-integration-service-environment/ise-view-managed-connectors.png) --1. To remove connectors that you don't want available in your ISE, select those connectors, and then select **Delete**. To confirm that you want to delete, select **Yes**. 
--<a name="find-custom-connectors"></a> --## Manage custom connectors --You can view and manage the custom connectors that you deployed to your ISE. --1. On your ISE menu, under **Settings**, select **Custom connectors**. -- ![Find custom connectors](./media/ise-manage-integration-service-environment/ise-find-custom-connectors.png) --1. To remove custom connectors that you no longer need in your ISE, select those connectors, and then select **Delete**. To confirm that you want to delete, select **Yes**. --<a name="find-integration-accounts"></a> --## Manage integration accounts --1. On your ISE menu, under **Settings**, select **Integration accounts**. -- ![Find integration accounts](./media/ise-manage-integration-service-environment/ise-find-integration-accounts.png) --1. To remove integration accounts from your ISE when no longer needed, select those integration accounts, and then select **Delete**. --<a name="export-integration-account"></a> --## Export integration account (preview) --For a Standard integration account created from inside an ISE, you can export that integration account to an existing Premium integration account. The export process has two steps: export the artifacts, and then export the agreement states. Artifacts include partners, agreements, certificates, schemas, and maps. However, the export process currently doesn't support assemblies and RosettaNet PIPs. --Your integration account also stores the runtime states for specific B2B actions and EDI standards, such as the MIC number for AS2 actions and the control numbers for X12 actions. If you configured your agreements to update these states every time that a transaction is processed and to use these states for message reconciliation and duplicate detection, make sure that you also export these states. You can export either all agreement states or one agreement state at a time. --> [!IMPORTANT] -> -> Make sure to choose a time window when your source integration account doesn't have any activity in your agreements to avoid state inconsistencies. --### Prerequisites --If you don't have a Premium integration account, [create a Premium integration account](./enterprise-integration/create-integration-account.md). --### Export artifacts --This process copies artifacts from the source to the destination. --1. In the [Azure portal](https://portal.azure.com), open your Standard integration account. --1. On the integration account menu, under **Settings**, select **Export**. -- > [!NOTE] - > - > If the **Export** option doesn't appear, make sure that you selected a Standard integration account that was created from inside an ISE. --1. On the **Export** page toolbar, select **Export Artifacts**. --1. Open the **Target integration account** list, which contains all the Premium accounts in your Azure subscription, select the Premium integration account that you want, and then select **OK**. -- The **Export** page now shows the export status for your artifacts. --1. To confirm the exported artifacts, open your destination Premium integration account. --### Export agreement state (optional) --1. On the **Export** page toolbar, select **Export Agreement State**. --1. On the **Export Agreement State** pane, open the **Target integration account** list, and select the Premium integration account that you want. --1. To export all agreement states, don't select any agreement from the **Agreement** list. To export an individual agreement state, select an agreement from the **Agreement** list. --1. When you're done, select **OK**. 
-- The **Export** page now shows the export status for your agreement states. --<a name="add-capacity"></a> --## Add ISE capacity --The Premium ISE base unit has fixed capacity, so if you need more throughput, you can add more scale units, either during creation or afterwards. The Developer SKU doesn't include the capability to add scale units. --> [!IMPORTANT] -> Scaling out an ISE can take 20-30 minutes on average. ---1. In the [Azure portal](https://portal.azure.com), go to your ISE. --1. To review usage and performance metrics for your ISE, on your ISE menu, select **Overview**. -- ![View usage for ISE](./media/ise-manage-integration-service-environment/integration-service-environment-usage.png) --1. Under **Settings**, select **Scale out**. On the **Configure** pane, select from these options: -- * [**Manual scale**](#manual-scale): Scale based on the number of processing units that you want to use. - * [**Custom autoscale**](#custom-autoscale): Scale based on performance metrics by selecting from various criteria and specifying the threshold conditions for meeting that criteria. -- ![Screenshot that shows the "Scale out" page with "Manual scale" selected.](./media/ise-manage-integration-service-environment/select-scale-out-options.png) --<a name="manual-scale"></a> --### Manual scale --1. After you select **Manual scale**, for **Additional capacity**, select the number of scaling units that you want to use. -- ![Select the scaling type that you want](./media/ise-manage-integration-service-environment/select-manual-scale-out-units.png) --1. When you're done, select **Save**. --<a name="custom-autoscale"></a> --### Custom autoscale --1. After you select **Custom autoscale**, for **Autoscale setting name**, provide a name for your setting and optionally, select the Azure resource group where the setting belongs. -- ![Provide name for autoscale setting and select resource group](./media/ise-manage-integration-service-environment/select-custom-autoscale.png) --1. For the **Default** condition, select either **Scale based on a metric** or **Scale to a specific instance count**. -- * If you choose instance-based, enter the number for the processing units, which is a value from 0 to 10. -- * If you choose metric-based, follow these steps: -- 1. In the **Rules** section, select **Add a rule**. -- 1. On the **Scale rule** pane, set up your criteria and action to take when the rule triggers. -- 1. For **Instance limits**, specify these values: -- * **Minimum**: The minimum number of processing units to use - * **Maximum**: The maximum number of processing units to use - * **Default**: If any problems happen while reading the resource metrics, and the current capacity is below the default capacity, autoscaling scales out to the default number of processing units. However, if the current capacity exceeds the default capacity, autoscaling doesn't scale in. --1. To add another condition, select **Add scale condition**. --1. When you're finished with your autoscale settings, save your changes. --<a name="restart-ISE"></a> --## Restart ISE --If you change your DNS server or DNS server settings, you have to restart your ISE so that the ISE can pick up those changes. Restarting a Premium SKU ISE doesn't result in downtime due to redundancy and components that restart one at a time during recycling. However, a Developer SKU ISE experiences downtime because no redundancy exists. For more information, see [ISE SKUs](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level). 
--1. In the [Azure portal](https://portal.azure.com), go to your ISE. --1. On the ISE menu, select **Overview**. On the **Overview** toolbar, select **Restart**. --<a name="delete-ise"></a> --## Delete ISE --Before you delete an ISE that you no longer need or an Azure resource group that contains an ISE, check that you have no policies or locks on the Azure resource group that contains these resources or on your Azure virtual network, because these items can block deletion. --After you delete your ISE, you might have to wait up to 9 hours before you try to delete your Azure virtual network or subnets. --## Next steps --* [Add resources to integration service environments](../logic-apps/add-artifacts-integration-service-environment-ise.md) |
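Much of the network setup that this entry describes can be scripted. The following is a minimal sketch with the Azure CLI for two of the steps covered earlier: the inbound NSG rule that permits traffic from the managed connector outbound IP addresses, and the service endpoints that the forced tunneling requirements call out. All resource names and address ranges are placeholders:

```bash
# Placeholders: replace with your values. For the source ranges, use the
# published connector outbound IP addresses for your region (see
# "Limits and configuration - Outbound IP addresses"); 203.0.113.0/24 is
# only a documentation-range stand-in.
RG=MyResourceGroup
CONNECTOR_IPS="203.0.113.0/24"
VNET_SPACE="10.0.0.0/16"

# Inbound NSG rule that permits traffic from the connector outbound IP addresses.
az network nsg rule create \
  --resource-group "$RG" \
  --nsg-name MyIseNsg \
  --name AllowConnectorOutboundIPs \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes $CONNECTOR_IPS \
  --source-port-ranges '*' \
  --destination-address-prefixes "$VNET_SPACE" \
  --destination-port-ranges '*'

# Service endpoints that each ISE subnet needs when traffic can't pass
# through a firewall to these services.
az network vnet subnet update \
  --resource-group "$RG" \
  --vnet-name MyIseVnet \
  --name MyIseSubnet \
  --service-endpoints Microsoft.Sql Microsoft.Storage Microsoft.ServiceBus Microsoft.KeyVault Microsoft.EventHub
```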
logic-apps | Logic Apps Enterprise Integration As2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2.md | The **AS2** connector has different versions, based on [logic app type and host | Logic app | Environment | Connector version | |--|-|-|-| **Consumption** | multitenant Azure Logic Apps | **AS2 (v2)** and **AS2** managed connectors (Standard class). The **AS2 (v2)** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) managed connector operations](#as2-v2-operations) <br>- [AS2 message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) | -| **Consumption** | Integration service environment (ISE) | **AS2 (v2)** and **AS2** managed connectors (Standard class) and **AS2** ISE version, which has different message limits than the Standard class. The **AS2 (v2)** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) managed connector operations](#as2-v2-operations) <br>- [AS2 message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) | +| **Consumption** | Multitenant Azure Logic Apps | **AS2 (v2)** and **AS2** managed connectors (Standard class). The **AS2 (v2)** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) managed connector operations](#as2-v2-operations) <br>- [AS2 message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) | | **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | **AS2 (v2)** built-in connector and **AS2** managed connector. The built-in version differs in the following ways: <br><br>- The built-in version provides only actions, but you can use any trigger that works for your scenario. <br><br>- The built-in version can directly access Azure virtual networks. You don't need an on-premises data gateway.<br><br>For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) built-in connector operations](#as2-v2-operations) <br>- [AS2 message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) | <a name="as-v2-operations"></a> |
logic-apps | Logic Apps Enterprise Integration Edifact | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact.md | The **EDIFACT** connector has different versions, based on [logic app type and h | Logic app | Environment | Connector version | |--|-|-|-| **Consumption** | multitenant Azure Logic Apps | **EDIFACT** managed connector (Standard class). The **EDIFACT** connector provides only actions, but you can use any trigger that works for your scenario. For more information, see the following documentation: <br><br>- [EDIFACT managed connector reference](/connectors/edifact/) <br>- [EDIFACT message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) | -| **Consumption** | Integration service environment (ISE) | **EDIFACT** managed connector (Standard class) and **EDIFACT** ISE version, which has different message limits than the Standard class. The **EDIFACT** connector provides only actions, but you can use any trigger that works for your scenario. For more information, see the following documentation: <br><br>- [EDIFACT managed connector reference](/connectors/edifact/) <br>- [EDIFACT message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) | +| **Consumption** | Multitenant Azure Logic Apps | **EDIFACT** managed connector (Standard class). The **EDIFACT** connector provides only actions, but you can use any trigger that works for your scenario. For more information, see the following documentation: <br><br>- [EDIFACT managed connector reference](/connectors/edifact/) <br>- [EDIFACT message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) | | **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | **EDIFACT** built-in connector (preview) and **EDIFACT** managed connector. The built-in version differs in the following ways: <br><br>- The built-in version provides only actions, but you can use any trigger that works for your scenario. <br><br>- The built-in version can directly access Azure virtual networks. You don't need an on-premises data gateway.<br><br>For more information, see the following documentation: <br><br>- [EDIFACT managed connector reference](/connectors/edifact/) <br>- [EDIFACT built-in connector operations](#edifact-built-in-operations) <br>- [EDIFACT message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) | <a name="edifact-built-in-operations"></a> |
logic-apps | Logic Apps Enterprise Integration Rosettanet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-rosettanet.md | The RosettaNet connector is available only for Consumption logic app workflows. | Logic app | Environment | Connector version | |--|-|-| | **Consumption** | Multitenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. The **RosettaNet** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [RosettaNet connector operations](#rosettanet-operations) <br>- [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |-| **Consumption** | Integration service environment (ISE) | Built-in connector, which appears in the designer with the **CORE** label. The **RosettaNet** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [RosettaNet connector operations](#rosettanet-operations) <br>- [ISE message limits](logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | <a name="rosettanet-operations"></a> |
logic-apps | Logic Apps Enterprise Integration X12 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12.md | This how-to guide shows how to add the X12 encoding and decoding actions to an e ## Connector technical reference -The **X12** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **X12** connector, see the following documentation: +The **X12** connector has one version across workflows in [multitenant Azure Logic Apps and single-tenant Azure Logic Apps](logic-apps-overview.md#resource-environment-differences). For technical information about the **X12** connector, see the following documentation: * [Connector reference page](/connectors/x12/), which describes the triggers, actions, and limits as documented by the connector's Swagger file * [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) - For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits). - ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). |
logic-apps | Logic Apps Enterprise Integration Xml Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-xml-validation.md | If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi 1. Under the step in your workflow where you want to add the **XML Validation** action, choose one of the following steps: - For a Consumption or ISE plan-based logic app, choose a step: + For a Consumption logic app, choose one of the following steps: * To add the **XML Validation** action at the end of your workflow, select **New step**. If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi The dynamic content list shows property tokens that represent the outputs from the previous steps in the workflow. If the list doesn't show an expected property, check the trigger or action heading in the list and whether you can select **See more**. - For a Consumption or ISE plan-based logic app, the designer looks like this example: + For a Consumption logic app, the designer looks like this example: ![Screenshot showing multi-tenant designer with opened dynamic content list, cursor in "Content" box, and opened dynamic content list.](./media/logic-apps-enterprise-integration-xml-validation/open-dynamic-content-list-multi-tenant.png) |
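Before you configure the **XML Validation** action, it can help to confirm locally that a sample payload actually validates against the schema you plan to upload to your integration account. A minimal sketch, assuming libxml2's `xmllint` tool and hypothetical file names:

```bash
# Validate a sample message against the XSD; --noout suppresses the parsed output.
xmllint --noout --schema Order.xsd sample-order.xml && echo "sample-order.xml validates"
```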
logic-apps | Logic Apps Limits And Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md | Last updated 05/20/2024 > For Power Automate, review [Limits and configuration in Power Automate](/power-automate/limits-and-config). -This reference guide describes the limits and configuration information for Azure Logic Apps and related resources. Based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows, you choose whether to create a Consumption logic app workflow that runs in *multitenant* Azure Logic Apps or an integration service environment (ISE). Or, create a Standard logic app workflow that runs in *single-tenant* Azure Logic Apps or an App Service Environment (v3 - Windows plans only). +This reference guide describes the limits and configuration information for Azure Logic Apps and related resources. Based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows, you choose whether to create a Consumption logic app workflow that runs in *multitenant* Azure Logic Apps or a Standard logic app workflow that runs in *single-tenant* Azure Logic Apps or an App Service Environment (v3 - Windows plans only). > [!NOTE] > > Many limits are the same across the available environments where Azure Logic Apps runs, but differences are noted where they exist. -The following table briefly summarizes differences between a Consumption logic app and a Standard logic app. You'll also learn how single-tenant Azure Logic Apps compares to multitenant Azure Logic Apps and an ISE for deploying, hosting, and running your logic app workflows. +The following table briefly summarizes differences between a Consumption logic app and a Standard logic app. [!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)] The following tables list the values for a single workflow definition: The following table lists the values for a single workflow run: -| Name | Multitenant | Single-tenant | Integration service environment | Notes | -||--|||-| -| Run history retention in storage | 90 days | 90 days <br>(Default) | 366 days | The amount of time to keep a workflow's run history in storage after a run starts. <br><br>**Note**: If the workflow's run duration exceeds the retention limit, this run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <br><br>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <br><br>For more information, review [Change duration and run history retention in storage](#change-retention). | -| Run duration | 90 days | - Stateful workflow: 90 days <br>(Default) <br><br>- Stateless workflow: 5 min <br>(Default) | 366 days | The amount of time that a workflow can continue running before forcing a timeout. The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. 
<br><br>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <br><br>For more information, review [Change run duration and history retention in storage](#change-duration). | -| Recurrence interval | - Min: 1 sec <br><br>- Max: 500 days | - Min: 1 sec <br><br>- Max: 500 days | - Min: 1 sec <br><br>- Max: 500 days || +| Name | Multitenant | Single-tenant | Notes | +||-||-| +| Run history retention in storage | 90 days | 90 days <br>(Default) | The amount of time to keep a workflow's run history in storage after a run starts. <br><br>**Note**: If the workflow's run duration exceeds the retention limit, this run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <br><br>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <br><br>For more information, review [Change duration and run history retention in storage](#change-retention). | +| Run duration | 90 days | - Stateful workflow: 90 days <br>(Default) <br><br>- Stateless workflow: 5 min <br>(Default) | The amount of time that a workflow can continue running before forcing a timeout. The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <br><br>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <br><br>For more information, review [Change run duration and history retention in storage](#change-duration). | +| Recurrence interval | - Min: 1 sec <br><br>- Max: 500 days | - Min: 1 sec <br><br>- Max: 500 days || <a name="change-duration"></a> <a name="change-retention"></a> For Consumption logic app workflows, the same setting controls the maximum numbe * In multitenant Azure Logic Apps, the 90-day default limit is the same as the maximum limit. You can only decrease this value. -* In an ISE, you can decrease or increase the 90-day default limit. - For example, suppose that you reduce the retention limit from 90 days to 30 days. A 60-day-old run is removed from the runs history. If you increase the retention period from 30 days to 60 days, a 20-day-old run stays in the runs history for another 40 days. #### Portal The following table lists the values for a single workflow run: The following table lists the values for a **For each** loop: -| Name | Multitenant | Single-tenant | Integration service environment | Notes | -||--|||-| -| Array items | 100,000 items | - Stateful workflow: 100,000 items <br>(Default) <br><br>- Stateless workflow: 100 items <br>(Default) | 100,000 items | The number of array items that a **For each** loop can process. <br><br>To filter larger arrays, you can use the [query action](logic-apps-perform-data-operations.md#filter-array-action). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). 
| -| Concurrent iterations | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br>(Default) <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | The number of **For each** loop iterations that can run at the same time, or in parallel. <br><br>To change this value in multitenant Azure Logic Apps, see [Change **For each** concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run **For each** loops sequentially](logic-apps-workflow-actions-triggers.md#sequential-for-each). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Name | Multitenant | Single-tenant | Notes | +||-||-| +| Array items | 100,000 items | - Stateful workflow: 100,000 items <br>(Default) <br><br>- Stateless workflow: 100 items <br>(Default) | The number of array items that a **For each** loop can process. <br><br>To filter larger arrays, you can use the [query action](logic-apps-perform-data-operations.md#filter-array-action). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Concurrent iterations | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br>(Default) <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | The number of **For each** loop iterations that can run at the same time, or in parallel. <br><br>To change this value in multitenant Azure Logic Apps, see [Change **For each** concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run **For each** loops sequentially](logic-apps-workflow-actions-triggers.md#sequential-for-each). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | <a name="until-loop"></a> The following table lists the values for a **For each** loop: The following table lists the values for an **Until** loop: -| Name | Multitenant | Single-tenant | Integration service environment | Notes | -||--|||-| -| Iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | Stateful workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 5,000 <br><br>Stateless workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 100 | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | The number of cycles that an **Until** loop can have during a workflow run. <br><br>To change this value in multitenant Azure Logic Apps, in the **Until** loop shape, select **Change limits**, and specify the value for the **Count** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | -| Timeout | Default: PT1H (1 hour) | Stateful workflow: PT1H (1 hour) <br><br>Stateless workflow: PT5M (5 min) | Default: PT1H (1 hour) | The amount of time that the **Until** loop can run before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. 
If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <br><br>To change this value in multitenant Azure Logic Apps, in the **Until** loop shape, select **Change limits**, and specify the value for the **Timeout** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Name | Multitenant | Single-tenant | Notes | +||-||-| +| Iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | Stateful workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 5,000 <br><br>Stateless workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 100 | The number of cycles that an **Until** loop can have during a workflow run. <br><br>To change this value in multitenant Azure Logic Apps, in the **Until** loop shape, select **Change limits**, and specify the value for the **Count** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Timeout | Default: PT1H (1 hour) | Stateful workflow: PT1H (1 hour) <br><br>Stateless workflow: PT5M (5 min) | The amount of time that the **Until** loop can run before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <br><br>To change this value in multitenant Azure Logic Apps, in the **Until** loop shape, select **Change limits**, and specify the value for the **Timeout** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | <a name="concurrency-debatching"></a> ### Concurrency and debatching -| Name | Multitenant | Single-tenant | Integration service environment | Notes | -||--|||-| -| Trigger - concurrent runs | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 100 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <br><br>To change this value in multitenant Azure Logic Apps, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). 
| -| Maximum waiting runs | Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br>(Default)<br>- Max: 100 runs | Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br>(Default)<br>- Max: 200 runs <br> | Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br>(Default)<br>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. This setting takes effect only if concurrency is turned on. <br><br>To change this value in multitenant Azure Logic Apps, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | -| **SplitOn** items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br>(Default) <br><br>Concurrency on: 100 items <br>(Default) | For triggers that return an array, you can specify an expression that uses a **SplitOn** property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a **For each** loop. This expression references the array to use for creating and running a workflow instance for each array item. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items. | +| Name | Multitenant | Single-tenant | Notes | +||-||-| +| Trigger - concurrent runs | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 100 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](logic-apps-workflow-actions-triggers.md#split-on-debatch). <br><br>To change this value in multitenant Azure Logic Apps, see [Change trigger concurrency limit](logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](logic-apps-workflow-actions-triggers.md#sequential-trigger). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Maximum waiting runs | Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br>(Default)<br>- Max: 100 runs | Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br>(Default)<br>- Max: 200 runs <br> | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. This setting takes effect only if concurrency is turned on. <br><br>To change this value in multitenant Azure Logic Apps, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). 
| +| **SplitOn** items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | For triggers that return an array, you can specify an expression that uses a **SplitOn** property that [splits or debatches array items into multiple workflow instances](logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a **For each** loop. This expression references the array to use for creating and running a workflow instance for each array item. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items. | <a name="throughput-limits"></a> To enable this setting in an ARM template for deploying your logic app, in the ` } ``` -For more information about your logic app resource definition, review [Overview: Automate deployment for Azure Logic Apps by using Azure Resource Manager templates](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#logic-app-resource-definition). +For more information about your logic app resource definition, review [Overview: Automate deployment for Azure Logic Apps by using Azure Resource Manager templates](logic-apps-azure-resource-manager-templates-overview.md#logic-app-resource-definition). -### Integration service environment (ISE) --* [Developer ISE SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level): Provides up to 500 executions per minute, but note these considerations: -- * Make sure that you use this SKU only for exploration, experiments, development, or testing - not for production or performance testing. This SKU has no service-level agreement (SLA), scale up capability, or redundancy during recycling, which means that you might experience delays or downtime. -- * Backend updates might intermittently interrupt service. --* [Premium ISE SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level): The following table describes this SKU's throughput limits, but to exceed these limits in normal processing, or run load testing that might go above these limits, [contact the Logic Apps team](mailto://logicappsemail@microsoft.com) for help with your requirements. -- | Name | Limit | Notes | - ||-|-| - | Base unit execution limit | System-throttled when infrastructure capacity reaches 80% | Provides ~4,000 action executions per minute, which is ~160 million action executions per month | - | Scale unit execution limit | System-throttled when infrastructure capacity reaches 80% | Each scale unit can provide ~2,000 more action executions per minute, which is ~80 million more action executions per month | - | Maximum scale units that you can add | 10 scale units | | - <a name="gateway-limits"></a> ## Data gateway limits The following table lists the retry policy limits for a trigger or action, based The following table lists the values for a single workflow definition: -| Name | Multitenant | Single-tenant | Integration service environment | Notes | -||-|||-| -| Variables per workflow | 250 variables | 250 variables <br>(Default) | 250 variables || -| Variable - Maximum content size | 104,857,600 characters | Stateful workflow: 104,857,600 characters <br>(Default) <br><br>Stateless workflow: 1,024 characters <br>(Default) | 104,857,600 characters | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). 
| -| Variable (Array type) - Maximum number of array items | 100,000 items | 100,000 items <br>(Default) | Premium SKU: 100,000 items <br><br>Developer SKU: 5,000 items | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Name | Multitenant | Single-tenant | Notes | +||-||-| +| Variables per workflow | 250 variables | 250 variables <br>(Default) || +| Variable - Maximum content size | 104,857,600 characters | Stateful workflow: 104,857,600 characters <br>(Default) <br><br>Stateless workflow: 1,024 characters <br>(Default) | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Variable (Array type) - Maximum number of array items | 100,000 items | 100,000 items <br>(Default) | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | <a name="http-limits"></a> The following tables list the values for a single inbound or outbound call: By default, the HTTP action and APIConnection actions follow the [standard asynchronous operation pattern](/azure/architecture/patterns/async-request-reply), while the Response action follows the *synchronous operation pattern*. Some managed connector operations make asynchronous calls or listen for webhook requests, so the timeout for these operations might be longer than the following limits. For more information, review [each connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors) and also the [Workflow triggers and actions](../logic-apps/logic-apps-workflow-actions-triggers.md#http-action) documentation. > [!NOTE]-> For the **Logic App (Standard)** resource type in the single-tenant service, stateless workflows can only run *synchronously*. +> +> For a Standard logic app resource in single-tenant Azure Logic Apps, stateless workflows can only run *synchronously*. -| Name | Multitenant | Single-tenant | Integration service environment | Notes | -||-|||-| -| Outbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of outbound requests include calls made by the HTTP trigger or action. <br><br>**Tip**: For longer running operations, use an [asynchronous polling pattern](../logic-apps/logic-apps-create-api-app.md#async-pattern) or an ["Until" loop](../logic-apps/logic-apps-workflow-actions-triggers.md#until-action). To work around timeout limits when you call another workflow that has a [callable endpoint](logic-apps-http-endpoint.md), you can use the built-in Azure Logic Apps action instead, which you can find in the designer's operation picker under **Built-in**. <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | -| Inbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. <br><br>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another nested workflow. 
For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Name | Multitenant | Single-tenant | Notes | +||-||-| +| Outbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | Examples of outbound requests include calls made by the HTTP trigger or action. <br><br>**Tip**: For longer running operations, use an [asynchronous polling pattern](logic-apps-create-api-app.md#async-pattern) or an ["Until" loop](logic-apps-workflow-actions-triggers.md#until-action). To work around timeout limits when you call another workflow that has a [callable endpoint](logic-apps-http-endpoint.md), you can use the built-in Azure Logic Apps action instead, which you can find in the designer's operation picker under **Built-in**. <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Inbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. <br><br>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another nested workflow. For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | <a name="content-storage-size-limits"></a> By default, the HTTP action and APIConnection actions follow the [standard async ### Messages -| Name | Chunking enabled | Multitenant | Single-tenant | Integration service environment | Notes | -|||-|-||-| -| Content download - Maximum number of requests | Yes | 1,000 requests | 1,000 requests <br>(Default) | 1,000 requests || -| Message size | No | 100 MB | 100 MB | 200 MB | To work around this limit, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). However, some connectors and APIs don't support chunking or even the default limit. <br><br>- Connectors such as AS2, X12, and EDIFACT have their own [B2B message limits](#b2b-protocol-limits). <br><br>- ISE connectors use the ISE limit, not the non-ISE connector limits. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | -| Message size per action | Yes | 1 GB | 1,073,741,824 bytes <br>(1 GB) <br>(Default) | 5 GB | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <br><br>If you're using an ISE, the Azure Logic Apps engine supports this limit, but connectors have their own chunking limits up to the engine limit, for example, see the [Azure Blob Storage connector's API reference](/connectors/azureblob/). For more information about chunking, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). 
<br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | -| Content chunk size per action | Yes | Varies per connector | 52,428,800 bytes (52 MB) <br>(Default) | Varies per connector | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Name | Chunking enabled | Multitenant | Single-tenant | Notes | +|||-||-| +| Content download - Maximum number of requests | Yes | 1,000 requests | 1,000 requests <br>(Default) || +| Message size | No | 100 MB | 100 MB | To work around this limit, see [Handle large messages with chunking](logic-apps-handle-large-messages.md). However, some connectors and APIs don't support chunking or even the default limit. <br><br>- Connectors such as AS2, X12, and EDIFACT have their own [B2B message limits](#b2b-protocol-limits). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Message size per action | Yes | 1 GB | 1,073,741,824 bytes <br>(1 GB) <br>(Default) | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. For more information about chunking, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | +| Content chunk size per action | Yes | Varies per connector | 52,428,800 bytes (52 MB) <br>(Default) | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | ### Character limits The following table lists the values for a single workflow definition: The following table lists the values for a single workflow definition: -| Name | Multitenant | Single-tenant | Integration service environment | Notes | -||-|||-| -| Maximum number of code characters | 1,024 characters | 100,000 characters | 1,024 characters | To use the higher limit, create a **Logic App (Standard)** resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). | -| Maximum duration for running code | 5 sec | 15 sec | 1,024 characters | To use the higher limit, create a **Logic App (Standard)** resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). 
| +| Name | Multitenant | Single-tenant | Notes | +||-||-| +| Maximum number of code characters | 1,024 characters | 100,000 characters | To use the higher limit, create a Standard logic app resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). | +| Maximum duration for running code | 5 sec | 15 sec | To use the higher limit, create a Standard logic app resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). | <a name="custom-connector-limits"></a> ## Custom connector limits -In multitenant Azure Logic Apps and the integration service environment only, you can create and use [custom managed connectors](/connectors/custom-connectors), which are wrappers around an existing REST API or SOAP API. In single-tenant Azure Logic Apps, you can create and use only [custom built-in connectors](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). +In multitenant Azure Logic Apps only, you can create and use [custom managed connectors](/connectors/custom-connectors), which are wrappers around an existing REST API or SOAP API. In single-tenant Azure Logic Apps, you can create and use only [custom built-in connectors](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). The following table lists the values for custom connectors: -| Name | Multitenant | Single-tenant | Integration service environment | Notes | -||-|||-| -| Custom connectors | 1,000 per Azure subscription | Unlimited | 1,000 per Azure subscription || -| APIs per service | SOAP-based: 50 | Not applicable | SOAP-based: 50 || -| Parameters per API | SOAP-based: 50 | Not applicable | SOAP-based: 50 || -| Requests per minute for a custom connector | 500 requests per minute per connection | Based on your implementation | 2,000 requests per minute per *custom connector* || -| Connection timeout | 2 min | Idle connection: <br>4 min <br><br>Active connection: <br>10 min | 2 min || +| Name | Multitenant | Single-tenant | Notes | +||-||-| +| Custom connectors | 1,000 per Azure subscription | Unlimited || +| APIs per service | SOAP-based: 50 | Not applicable || +| Parameters per API | SOAP-based: 50 | Not applicable || +| Requests per minute for a custom connector | 500 requests per minute per connection | Based on your implementation || +| Connection timeout | 2 min | Idle connection: <br>4 min <br><br>Active connection: <br>10 min || For more information, review the following documentation: For more information, review the following documentation: Each Azure subscription has these integration account limits: -* One [Free tier](../logic-apps/logic-apps-pricing.md#integration-accounts) integration account per Azure region. This tier is available only for public regions in Azure, for example, West US or Southeast Asia, but not for [Microsoft Azure operated by 21Vianet](/azure/chin). 
--* 1,000 total integration accounts, including integration accounts in any [integration service environments (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) across both [Developer and Premium SKUs](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level). +* One [Free tier](logic-apps-pricing.md#integration-accounts) integration account per Azure region. This tier is available only for public regions in Azure, for example, West US or Southeast Asia, but not for [Microsoft Azure operated by 21Vianet](/azure/chin). -* Each ISE, whether [Developer or Premium](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level), can use a single integration account at no extra cost, although the included account type varies by ISE SKU. You can create more integration accounts for your ISE up to the total limit for an [extra cost](logic-apps-pricing.md#ise-pricing). +* 1,000 total integration accounts - | ISE SKU | Integration account limits | - ||-| - | **Premium** | 20 total accounts, including one Standard account at no extra cost. With this SKU, you can have only [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. No Free or Basic accounts are permitted. | - | **Developer** | 20 total accounts, including one [Free](../logic-apps/logic-apps-pricing.md#integration-accounts) account (limited to 1). With this SKU, you can have either combination: <br><br>- A Free account and up to 19 [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. <br>- No Free account and up to 20 Standard accounts. <br><br>No Basic or more Free accounts are permitted. <br><br>**Important**: Use the [Developer SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) for experimenting, development, and testing, but not for production or performance testing. | --To learn how pricing and billing work for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). +To learn how pricing and billing work, see the [Logic Apps pricing model](logic-apps-pricing.md). For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <a name="artifact-number-limits"></a> ### Artifact limits per integration account -The following tables list the values for the number of artifacts limited to each integration account tier. For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). To learn how pricing and billing work for integration accounts, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#integration-accounts). +The following tables list the values for the number of artifacts limited to each integration account tier. For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). To learn how pricing and billing work for integration accounts, see the [Logic Apps pricing model](logic-apps-pricing.md#integration-accounts). > [!NOTE] > Use the Free tier only for exploratory scenarios, not production scenarios. 
The following tables list the values for the number of artifacts limited to each | EDI trading partners | 25 | 2 | 1,000 | Unlimited |
| Maps | 25 | 500 | 1,000 | Unlimited |
| Schemas | 25 | 500 | 1,000 | Unlimited |-| Assemblies | 10 | 25 | 1,000 | Unlimited, but currently unsupported for export from an ISE. |
+| Assemblies | 10 | 25 | 1,000 | Unlimited, but currently unsupported for export from an integration service environment. |
| Certificates | 25 | 2 | 1,000 | Unlimited |
| Batch configurations | 5 | 1 | 50 | Unlimited |-| RosettaNet partner interface process (PIP) | 10 | 1 | 500 | Unlimited, but currently unsupported for export from an ISE. |
+| RosettaNet partner interface process (PIP) | 10 | 1 | 500 | Unlimited, but currently unsupported for export from an integration service environment. |

<a name="artifact-capacity-limits"></a>

The following tables list the values for the number of artifacts limited to each | Artifact | Limit | Notes |
| -- | -- | -- |-| Assembly | 8 MB | To upload files larger than 2 MB, use an [Azure storage account and blob container](../logic-apps/logic-apps-enterprise-integration-schemas.md). |
+| Assembly | 8 MB | To upload files larger than 2 MB, use an [Azure storage account and blob container](logic-apps-enterprise-integration-schemas.md). |
| Map (XSLT file) | 8 MB | To upload files larger than 2 MB, use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate). <br><br>**Note**: The amount of data or records that a map can successfully process is based on the message size and action timeout limits in Azure Logic Apps. For example, if you use an HTTP action, based on [HTTP message size and timeout limits](#http-limits), a map can process data up to the HTTP message size limit if the operation completes within the HTTP timeout limit. |-| Schema | 8 MB | To upload files larger than 2 MB, use an [Azure storage account and blob container](../logic-apps/logic-apps-enterprise-integration-schemas.md). |
+| Schema | 8 MB | To upload files larger than 2 MB, use an [Azure storage account and blob container](logic-apps-enterprise-integration-schemas.md). |

<a name="integration-account-throughput-limits"></a>

The following tables list the values for the number of artifacts limited to each The following table lists the message size limits that apply to B2B protocols:

-| Name | Multitenant | Single-tenant | Integration service environment | Notes |
-||-|||-|
-| AS2 | v2 - 100 MB<br>v1 - 25 MB | Unavailable | v2 - 200 MB <br>v1 - 25 MB | Applies to decode and encode |
-| X12 | 50 MB | Unavailable | 50 MB | Applies to decode and encode |
-| EDIFACT | 50 MB | Unavailable | 50 MB | Applies to decode and encode |
+| Name | Multitenant | Single-tenant | Notes |
+||-||-|
+| AS2 | v2 - 100 MB<br>v1 - 25 MB | Unavailable | Applies to decode and encode |
+| X12 | 50 MB | Unavailable | Applies to decode and encode |
+| EDIFACT | 50 MB | Unavailable | Applies to decode and encode |

<a name="configuration"></a>
<a name="firewall-ip-configuration"></a>

For Azure Logic Apps to receive incoming communication through your firewall, yo > - **Office 365**: The return caller is actually the Office 365 connector. You can specify the managed connector outbound
> IP address prefixes for each region, or optionally, you can use the **AzureConnectors** service tag for these managed connectors.
>-> - **SAP**: The return caller depends on whether the deployment environment is multitenant Azure.
+> - **SAP**: The return caller depends on whether the deployment environment is either multitenant Azure. > In the multitenant environment, the on-premises data gateway makes the call back to the Azure Logic Apps service. -> In an ISE, the SAP connector makes the call back to Azure Logic Apps. <a name="multitenant-inbound"></a> |
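The **Until** limits in the preceding entry correspond to the `limit` object on an **Until** action in the workflow definition language. The following sketch is a minimal, hypothetical loop that waits until a `done` variable (assumed to be initialized elsewhere in the workflow) becomes true; the `count` and `timeout` values shown are the documented multitenant defaults. Because JSON doesn't allow comments, the shape is annotated here instead: `limit.count` caps the number of loop cycles, and `limit.timeout` is the ISO 8601 duration evaluated for each cycle.

```json
{
  "Until_done": {
    "type": "Until",
    "expression": "@equals(variables('done'), true)",
    "limit": {
      "count": 60,
      "timeout": "PT1H"
    },
    "actions": {
      "Wait_30_seconds": {
        "type": "Wait",
        "inputs": {
          "interval": {
            "count": 30,
            "unit": "Second"
          }
        }
      }
    }
  }
}
```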
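Trigger concurrency and **SplitOn** debatching are likewise set on the trigger itself, through `runtimeConfiguration` and `splitOn`. A sketch under stated assumptions: the trigger is a polling managed connector trigger whose response body carries a `value` array (the property name is illustrative), the connection details in `inputs` are omitted, and the waiting-runs value follows the documented minimum of 10 plus the concurrent-run count.

```json
{
  "When_items_are_available": {
    "type": "ApiConnection",
    "recurrence": {
      "frequency": "Minute",
      "interval": 1
    },
    "splitOn": "@triggerBody()?['value']",
    "runtimeConfiguration": {
      "concurrency": {
        "runs": 25,
        "maximumWaitingRuns": 35
      }
    },
    "inputs": {}
  }
}
```

With concurrency turned on as shown, the **SplitOn** debatching limit drops to 100 items, matching the concurrency table in the entry above.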
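For payloads above the per-message size limit, actions that support chunking expose it through `runtimeConfiguration.contentTransfer`. A minimal sketch of that shape, with the connector-specific `inputs` left empty because they vary per connector:

```json
{
  "Upload_large_content": {
    "type": "ApiConnection",
    "runtimeConfiguration": {
      "contentTransfer": {
        "transferMode": "Chunked"
      }
    },
    "inputs": {}
  }
}
```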
logic-apps | Logic Apps Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md | Hosting and running logic app workflows in your own dedicated instance helps red Azure Logic Apps (Standard) provides the following benefits: -* Your own static IP addresses, which are separate from the static IP addresses that logic apps share in multitenant Azure Logic Apps. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE. +* Your own static IP addresses, which are separate from the static IP addresses that logic apps share in multitenant Azure Logic Apps. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems. * Increased limits on run duration, storage retention, throughput, HTTP request and response timeouts, message sizes, and custom connector requests. For more information, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md). |
logic-apps | Logic Apps Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md | Last updated 01/10/2024 * [Pricing rates for Azure Logic Apps](https://azure.microsoft.com/pricing/details/logic-apps) * [Plan and manage costs for Azure Logic Apps](plan-manage-costs.md)-* [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md) +* [Single-tenant versus multitenant](single-tenant-overview-compare.md) <a name="consumption-pricing"></a> -## Consumption (multi-tenant) +## Consumption (multitenant) -In multi-tenant Azure Logic Apps, a logic app and its workflow follow the [**Consumption** plan](https://azure.microsoft.com/pricing/details/logic-apps) for pricing and billing. You create such logic apps in various ways, for example, when you choose the **Logic App (Consumption)** resource type, use the **Azure Logic Apps (Consumption)** extension in Visual Studio Code, or when you create [automation tasks](create-automation-tasks-azure-resources.md). +In multitenant Azure Logic Apps, a logic app and its workflow follow the [**Consumption** plan](https://azure.microsoft.com/pricing/details/logic-apps) for pricing and billing. You create such logic apps in various ways, for example, when you choose the **Logic App (Consumption)** resource type, use the **Azure Logic Apps (Consumption)** extension in Visual Studio Code, or when you create [automation tasks](create-automation-tasks-azure-resources.md). -The following table summarizes how the Consumption model handles metering and billing for the following components when used with a logic app and a workflow in multi-tenant Azure Logic Apps: +The following table summarizes how the Consumption model handles metering and billing for the following components when used with a logic app and a workflow in multitenant Azure Logic Apps: | Component | Metering and billing | | -|-| | Trigger and action operations | The Consumption model includes an *initial number* of free built-in operations, per Azure subscription, that a workflow can run. Above this number, metering applies to *each execution*, and billing follows the [*Actions* pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For other operation types, such as managed connectors, billing follows the [*Standard* or *Enterprise* connector pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For more information, review [Trigger and action operations in the Consumption model](#consumption-operations). | | Storage operations | Metering applies *only to data retention-related storage consumption* such as saving inputs and outputs from your workflow's run history. Billing follows the [data retention pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps/). For more information, review [Storage operations](#storage-operations). |-| Integration accounts | Metering applies based on the integration account type that you create and use with your logic app. Billing follows [*Integration Account* pricing](https://azure.microsoft.com/pricing/details/logic-apps/) unless your logic app is deployed and hosted in an [integration service environment (ISE)](#ise-pricing). For more information, review [Integration accounts](#integration-accounts). | +| Integration accounts | Metering applies based on the integration account type that you create and use with your logic app. 
Billing follows [*Integration Account* pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For more information, review [Integration accounts](#integration-accounts). | <a name="consumption-operations"></a> The Consumption model meters and bills an operation *per execution, not per call > where the triggers don't instantiate and start the workflow, but the trigger state is **Succeeded**, > **Failed**, or **Skipped**. -The following table summarizes how the Consumption model handles metering and billing for these operation types when used with a logic app and workflow in multi-tenant Azure Logic Apps: +The following table summarizes how the Consumption model handles metering and billing for these operation types when used with a logic app and workflow in multitenant Azure Logic Apps: | Operation type | Description | Metering and billing | |-|-|-| The following table summarizes how the Standard model handles metering and billi For more information about how the Standard model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior). -<a name="ise-pricing"></a> --## Integration service environment (ISE) --When you create a logic app using the **Logic App (Consumption)** resource type, and you deploy to a dedicated [*integration service environment (ISE)*](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), the logic app and its workflow follow the [Integration Service Environment plan](https://azure.microsoft.com/pricing/details/logic-apps) for pricing and billing. This pricing model depends on your [ISE level or *SKU*](connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) and differs from the Consumption plan in that you're billed for reserved capacity and dedicated resources whether or not you use them. --The following table summarizes how the ISE model handles metering and billing for capacity and other dedicated resources based on your ISE level or SKU: --| ISE SKU | Metering and billing | -||-| -| **Premium** | The base unit has [fixed capacity](logic-apps-limits-and-config.md#integration-service-environment-ise) and is [billed at an hourly rate for the Premium SKU](https://azure.microsoft.com/pricing/details/logic-apps). If you need more throughput, you can [add more scale units](../logic-apps/ise-manage-integration-service-environment.md#add-capacity) when you create your ISE or afterwards. Each scale unit is billed at an [hourly rate that's roughly half the base unit rate](https://azure.microsoft.com/pricing/details/logic-apps). <p><p>For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). | -| **Developer** | The base unit has [fixed capacity](logic-apps-limits-and-config.md#integration-service-environment-ise) and is [billed at an hourly rate for the Developer SKU](https://azure.microsoft.com/pricing/details/logic-apps). However, this SKU has no service-level agreement (SLA), scale up capability, or redundancy during recycling, which means that you might experience delays or downtime. Backend updates might intermittently interrupt service. <p><p>**Important**: Make sure that you use this SKU only for exploration, experiments, development, and testing - not for production or performance testing. 
<p><p>For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). | --The following table summarizes how the ISE model handles the following components when used with a logic app and a workflow in an ISE: --| Component | Description | -|--|-| -| Trigger and action operations | The ISE model includes free built-in, managed connector, and custom connector operations that your workflow can run, but subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise) and [custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). For more information, review [Trigger and action operations in the ISE model](#integration-service-environment-operations). | -| Storage operations | The ISE model includes free storage consumption, such as data retention. For more information, review [Storage operations](#storage-operations). | -| Integration accounts | The ISE model includes a single free integration account tier, based on your selected ISE SKU. For an [extra cost](https://azure.microsoft.com/pricing/details/logic-apps/), you can create more integration accounts for your ISE to use up to the [total ISE limit](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits). For more information, review [Integration accounts](#integration-accounts). | --<a name="integration-service-environment-operations"></a> --### Trigger and action operations in the ISE model --The following table summarizes how the ISE model handles the following operation types when used with a logic app and workflow in an ISE: --| Operation type | Description | Metering and billing | -|-|-|-| -| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime and in the same ISE as your logic app workflow. In the designer, you can find these operations under the **Built-in** label, but each operation also displays the **CORE** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The ISE model includes these operations *for free*, but are subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). | -| [*Managed connector*](../connectors/managed.md) | Whether *Standard* or *Enterprise*, managed connector operations run in either your ISE or multi-tenant Azure, based on whether the connector or operation displays the **ISE** label. <p><p>- **ISE** label: These operations run in the same ISE as your logic app and work without requiring the [on-premises data gateway](#data-gateway). <p><p>- No **ISE** label: These operations run in multi-tenant Azure. | The ISE model includes both **ISE** and no **ISE** labeled operations *for free*, but are subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). | -| [*Custom connector*](../connectors/introduction.md#custom-connectors-and-apis) | In the designer, you can find these operations under the **Custom** label. | The ISE model includes these operations *for free*, but are subject to [custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). 
| --For more information about how the ISE model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior). - <a name="other-operation-behavior"></a> ## Other operation behavior -The following table summarizes how the Consumption, Standard, and ISE models handle operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies: +The following table summarizes how the Consumption and Standard models handle operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies: -| Operation | Description | Consumption | Standard | ISE | -|--|-|-|-|--| -| [Loop actions](logic-apps-control-flow-loops.md) | A loop action, such as the **For each** or **Until** loop, can include other actions that run during each loop cycle. | Except for the initial number of included built-in operations, the loop action and each action in the loop are metered each time the loop cycle runs. If an action processes any items in a collection, such as a list or array, the number of items is also used in the metering calculation. <p><p>For example, suppose you have a **For each** loop with actions that process a list. The service multiplies the number of list items against the number of actions in the loop, and adds the action that starts the loop. So, the calculation for a 10-item list is (10 * 1) + 1, which results in 11 action executions. <p><p>Pricing is based on whether the operation types are built-in, Standard, or Enterprise. | Except for the included built-in operations, same as the Consumption model. | Not metered or billed. | -| [Retry policies](logic-apps-exception-handling.md#retry-policies) | On supported operations, you can implement basic exception and error handling by setting up a [retry policy](logic-apps-exception-handling.md#retry-policies). | Except for the initial number of built-in operations, the original execution plus each retried execution are metered. For example, an action that executes with 5 retries is metered and billed as 6 executions. <p><p>Pricing is based on whether the operation types are built-in, Standard, or Enterprise. | Except for the built-in included operations, same as the Consumption model. | Not metered or billed. | +| Operation | Description | Consumption | Standard | +|--|-|-|-| +| [Loop actions](logic-apps-control-flow-loops.md) | A loop action, such as the **For each** or **Until** loop, can include other actions that run during each loop cycle. | Except for the initial number of included built-in operations, the loop action and each action in the loop are metered each time the loop cycle runs. If an action processes any items in a collection, such as a list or array, the number of items is also used in the metering calculation. <p><p>For example, suppose you have a **For each** loop with actions that process a list. The service multiplies the number of list items against the number of actions in the loop, and adds the action that starts the loop. So, the calculation for a 10-item list is (10 * 1) + 1, which results in 11 action executions. <p><p>Pricing is based on whether the operation types are built-in, Standard, or Enterprise. | Except for the included built-in operations, same as the Consumption model. 
| +| [Retry policies](logic-apps-exception-handling.md#retry-policies) | On supported operations, you can implement basic exception and error handling by setting up a [retry policy](logic-apps-exception-handling.md#retry-policies). | Except for the initial number of built-in operations, the original execution plus each retried execution are metered. For example, an action that executes with 5 retries is metered and billed as 6 executions. <p><p>Pricing is based on whether the operation types are built-in, Standard, or Enterprise. | Except for the built-in included operations, same as the Consumption model. | <a name="storage-operations"></a> The following table summarizes how the Consumption, Standard, and ISE models han Azure Logic Apps uses [Azure Storage](../storage/index.yml) for any required storage transactions, such as using queues for scheduling trigger operations or using tables and blobs for storing workflow states. Based on the operations in your workflow, storage costs vary because different triggers, actions, and payloads result in different storage operations and needs. The service also saves and stores inputs and outputs from your workflow's run history, based on the logic app resource's [run history retention limit](logic-apps-limits-and-config.md#run-duration-retention-limits). You can manage this retention limit at the logic app resource level, not the workflow level. -The following table summarizes how the Consumption, Standard, and ISE models handle metering and billing for storage operations: +The following table summarizes how the Consumption and Standard models handle metering and billing for storage operations: | Model | Description | Metering and billing | |-|-|-|-| Consumption (multi-tenant) | Storage resources and usage are attached to the logic app resource. | Metering and billing *apply only to data retention-related storage consumption* and follow the [data retention pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). | +| Consumption (multitenant) | Storage resources and usage are attached to the logic app resource. | Metering and billing *apply only to data retention-related storage consumption* and follow the [data retention pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). | | Standard (single-tenant) | You can use your own Azure [storage account](../azure-functions/storage-considerations.md#storage-account-requirements), which gives you more control and flexibility over your workflow's data. | Metering and billing follow the [Azure Storage pricing model](https://azure.microsoft.com/pricing/details/storage/). Storage costs appear separately on your Azure billing invoice. <p><p>**Tip**: To help you better understand the number of storage operations that a workflow might run and their cost, try using the [Logic Apps Storage calculator](https://logicapps.azure.com/calculator). Select either a sample workflow or use an existing workflow definition. The first calculation estimates the number of storage operations in your workflow. You can then use these numbers to estimate possible costs using the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). For more information, review [Estimate storage needs and costs for workflows in single-tenant Azure Logic Apps](estimate-storage-costs.md). |-| Integration service environment (ISE) | Storage resources and usage are attached to the logic app resource. | Not metered or billed. 
| For more information, review the following documentation: The [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) is a An [integration account](logic-apps-enterprise-integration-create-integration-account.md) is a separate Azure resource that you create as a container to define and store business-to-business (B2B) artifacts such as trading partners, agreements, schemas, maps, and so on. After you create this account and define these artifacts, link this account to your logic app so that you can use these artifacts and various B2B operations in workflows to explore, build, and test integration solutions that use [EDI](logic-apps-enterprise-integration-b2b.md) and [XML processing](logic-apps-enterprise-integration-xml.md) capabilities. -The following table summarizes how the Consumption, Standard, and ISE models handle metering and billing for integration accounts: +The following table summarizes how the Consumption and Standard models handle metering and billing for integration accounts: | Model | Metering and billing | |-|-|-| Consumption (multi-tenant) | Metering and billing use the [integration account pricing](https://azure.microsoft.com/pricing/details/logic-apps/), based on the account tier that you use. | +| Consumption (multitenant) | Metering and billing use the [integration account pricing](https://azure.microsoft.com/pricing/details/logic-apps/), based on the account tier that you use. | | Standard (single-tenant) | Metering and billing use the [integration account pricing](https://azure.microsoft.com/pricing/details/logic-apps/), based on the account tier that you use. |-| ISE | This model includes a single integration account, based on your ISE SKU. For an [extra cost](https://azure.microsoft.com/pricing/details/logic-apps/), you can create more integration accounts for your ISE to use up to the [total ISE limit](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits). | For more information, review the following documentation: |
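The pricing entry above notes that the original execution plus each retried execution is metered. It helps to see where a retry policy lives in the workflow definition: a minimal sketch with an illustrative URI follows. With `count` set to 5, a persistently failing call meters as up to six executions under the Consumption model.

```json
{
  "Call_service": {
    "type": "Http",
    "inputs": {
      "method": "GET",
      "uri": "https://example.com/api/status",
      "retryPolicy": {
        "type": "fixed",
        "count": 5,
        "interval": "PT30S"
      }
    }
  }
}
```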
logic-apps | Logic Apps Securing A Logic App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md | If your organization doesn't permit connecting to specific resources by using th ## Isolation guidance for logic apps -You can use Azure Logic Apps in [Azure Government](../azure-government/documentation-government-welcome.md) supporting all impact levels in the regions described by the [Azure Government Impact Level 5 Isolation Guidance](../azure-government/documentation-government-impact-level-5.md). To meet these requirements, Azure Logic Apps supports the capability for you to create and run workflows in an environment with dedicated resources so that you can reduce the performance impact by other Azure tenants on your logic apps and avoid sharing computing resources with other tenants. +* You can use Azure Logic Apps in [Azure Government](../azure-government/documentation-government-welcome.md) supporting all impact levels in the regions described by the [Azure Government Impact Level 5 Isolation Guidance](../azure-government/documentation-government-impact-level-5.md). To meet these requirements, Azure Logic Apps supports the capability for you to create and run workflows in an environment with dedicated resources so that you can reduce the performance impact by other Azure tenants on your logic apps and avoid sharing computing resources with other tenants. ++* Standard logic app workflows can privately and securely communicate with an Azure virtual network through private endpoints that you set up for inbound traffic and virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). * To run your own code or perform XML transformation, [create and call an Azure function](../logic-apps/logic-apps-azure-functions.md), rather than use the [inline code capability](../logic-apps/logic-apps-add-run-inline-code.md) or provide [assemblies to use as maps](../logic-apps/logic-apps-enterprise-integration-maps.md), respectively. Also, set up the hosting environment for your function app to comply with your isolation requirements. You can use Azure Logic Apps in [Azure Government](../azure-government/documenta * [Virtual machine isolation in Azure](/azure/virtual-machines/isolation) * [Deploy dedicated Azure services into virtual networks](../virtual-network/virtual-network-for-azure-services.md) -* Based on whether you have Consumption or Standard logic app workflows, you have these options: -- * Standard logic app workflows can privately and securely communicate with an Azure virtual network through private endpoints that you set up for inbound traffic and virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). -- * Consumption logic app workflows can run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) where they can use dedicated resources and access resources protected by an Azure virtual network. However, the ISE resource retires on August 31, 2024, due to its dependency on Azure Cloud Services (classic), which retires at the same time. 
-- > [!IMPORTANT]
- >
- > Some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md))
- > for providing access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database,
- > partner services, or customer services that are hosted on Azure.
- >
- > To create Consumption logic app workflows that need access to virtual networks with private endpoints,
- > you *must create and run your Consumption workflows in an ISE*. Or, you can create Standard workflows instead,
- > which don't need an ISE. Instead, your workflows can communicate privately and securely with virtual networks
- > by using private endpoints for inbound traffic and virtual network integration for outbound traffic. For more information, see
- > [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
- For more information about isolation, see the following documentation:

* [Isolation in the Azure Public Cloud](../security/fundamentals/isolation-choices.md)
* [Security for highly sensitive IaaS apps in Azure](/azure/architecture/reference-architectures/n-tier/high-security-iaas)

-## Next steps
+## Related content

* [Azure security baseline for Azure Logic Apps](security-baseline.md)
* [Automate deployment for Azure Logic Apps](logic-apps-azure-resource-manager-templates-overview.md) |
logic-apps | Manage Logic Apps With Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/manage-logic-apps-with-visual-studio.md | In Visual Studio, if your logic app exists as a JSON (.json) file within an [Azu To change your logic app's location type or location, you have to open your logic app's workflow definition (.json) file from Solution Explorer by using the workflow designer. You can't change these properties by using Cloud Explorer. -> [!IMPORTANT] -> Changing the location type from **Region** to -> [**Integration Service Environment**](connect-virtual-network-vnet-isolated-environment-overview.md) -> affects your logic app's [pricing model](logic-apps-pricing.md#ise-pricing) that's used for billing, -> [limits](logic-apps-limits-and-config.md#integration-account-limits), [integration account support](connect-virtual-network-vnet-isolated-environment-overview.md#ise-skus), and so on. -> Before you select a different location type, make sure that you understand the resulting impact on your logic app. - 1. In Visual Studio, open the Azure Resource Group project that contains your logic app. 1. In Solution Explorer, open the `<logic-app-name>.json` file's shortcut menu, and select **Open With Logic App Designer**. (Keyboard: Ctrl + L) To change your logic app's location type or location, you have to open your logi > [!TIP] > If you don't have this command in Visual Studio 2019, check that you have the latest updates to Visual Studio and the Azure Logic Apps Tools extension. -1. Make sure that the workflow designer has focus by selecting the designer's tab or surface so that the Properties window shows the **Choose Location Type** and **Location** properties for your logic app. The project's location type is set to either **Region** or **Integration Service Environment**. +1. Make sure that the workflow designer has focus by selecting the designer's tab or surface so that the Properties window shows the **Choose Location Type** and **Location** properties for your logic app. The project's location type is set to **Region**. ![Properties window - "Choose Location Type" & "Location" properties](./media/manage-logic-apps-with-visual-studio/open-logic-app-properties-location.png) > [!TIP] > If the Properties window isn't already open, from the **View** menu, select **Properties Window**. (Keyboard: Press F4) -1. To change the location type, open the **Choose Location Type** property list, and select the location type that you want. -- For example, if the location type is **Integration Service Environment**, you can select **Region**. -- !["Choose Location Type" property - change location type](./media/manage-logic-apps-with-visual-studio/change-location-type.png) - 1. To change the specific location, open the **Location** property list. Based on the location type, select the location that you want, for example: - * Select a different Azure region: - ![Open "Location" property list, select another Azure region](./media/manage-logic-apps-with-visual-studio/change-azure-resource-group-region.png) - * Select a different ISE: -- ![Open "Location" property list, select another ISE](./media/manage-logic-apps-with-visual-studio/change-integration-service-environment.png) - 1. When you're done, remember to save your Visual Studio solution. When you change the location type or location in Visual Studio and save your logic app as an Azure Resource Manager template, that template also includes parameter declarations for that location type and location. 
For more information about template parameters and logic apps, see [Overview: Automate logic app deployment](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#template-parameters). |
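When Visual Studio saves the logic app as a Resource Manager template, the location surfaces as a template parameter alongside the resource definition. A sketch of that parameter follows; the `logicAppLocation` name matches the pattern the Logic Apps tools typically generate, but treat the exact name and default value as assumptions rather than guaranteed output.

```json
{
  "parameters": {
    "logicAppLocation": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Azure region where the logic app resource is deployed."
      }
    }
  }
}
```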
logic-apps | Move Logic App Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/move-logic-app-resources.md | To migrate your logic app or related resources to another Azure resource group, * After you migrate logic apps between subscriptions, resource groups, or regions, you must recreate or reauthorize any connections that require Open Authorization (OAuth).

-* You can move an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) only to another resource group that exists in the same Azure region or Azure subscription. You can't move an ISE to a resource group that exists in a different Azure region or Azure subscription. Also, after such a move, you must update all references to the ISE in your logic app workflows, integration accounts, connections, and so on.
-
## Prerequisites

* The same Azure subscription that was used to create the logic app or integration account that you want to move To move a resource, such as a logic app or integration account, to another Azure ## Move resources between resource groups

-To move a resource, such as a logic app, integration account, or [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), to another Azure resource group, you can use the Azure portal, Azure PowerShell, Azure CLI, or REST API. These steps cover the Azure portal, which you can use when the resource's region stays the same. For other steps and general preparation, see [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).
+To move a resource, such as a logic app or integration account, to another Azure resource group, you can use the Azure portal, Azure PowerShell, Azure CLI, or REST API. These steps cover the Azure portal, which you can use when the resource's region stays the same. For other steps and general preparation, see [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).

Before actually moving resources between groups, you can test whether you can successfully move your resource to another group. For more information, see [Validate your move](../azure-resource-manager/management/move-resource-group-and-subscription.md#use-rest-api). |
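The validation mentioned above is a `POST` to the Azure Resource Manager `validateMoveResources` operation on the source resource group. A sketch of the request body, with placeholder subscription and group names; a `202 Accepted` response followed by a successful operation status means the move should succeed.

```json
{
  "resources": [
    "/subscriptions/{subscription-id}/resourceGroups/{source-group}/providers/Microsoft.Logic/workflows/{logic-app-name}"
  ],
  "targetResourceGroup": "/subscriptions/{subscription-id}/resourceGroups/{target-group}"
}
```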
logic-apps | Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/plan-manage-costs.md | Azure Logic Apps applies different pricing models, based on the resources that y * Logic app resources that you create and run in single-tenant Azure Logic Apps use a [hosting plan pricing model](../logic-apps/logic-apps-pricing.md#standard-pricing). -* Logic app resources that you create and run in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) use the [ISE pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). - Here are other resources that incur costs when you create them for use with logic apps: * An [integration account](../logic-apps/logic-apps-pricing.md#integration-accounts) is a separate resource that you create and link to logic apps for building B2B integrations. Integration accounts use a [fixed pricing model](../logic-apps/logic-apps-pricing.md#integration-accounts) where the rate is based on the integration account type or *tier* that you use. -* An [ISE](../logic-apps/logic-apps-pricing.md#ise-pricing) is a separate resource that you create as a deployment location for logic apps that need direct access to resources in a virtual network. ISEs use the [ISE pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing) where the rate is based on the ISE SKU that you create and other settings. However, data retention and storage consumption don't incur costs. --* A [custom connector](../logic-apps/logic-apps-pricing.md#consumption-pricing) is a separate resource that you create for a REST API that has no prebuilt connector for you to use in your logic apps. Custom connector executions use a [consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing) except when you use them in an ISE. +* A [custom connector](../logic-apps/logic-apps-pricing.md#consumption-pricing) is a separate resource that you create for a REST API that has no prebuilt connector for you to use in your logic apps. Custom connector executions use the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). <a name="storage-operations-costs"></a> If you have these resources after deleting a logic app, these resources continue * Integration accounts -* Integration service environments (ISEs) -- If you [delete an ISE](ise-manage-integration-service-environment.md#delete-ise), the associated Azure virtual network, subnets, and other related resources continue to exist. After you delete the ISE, you might have to wait up to a specific number of hours before you can try deleting the virtual network or subnets. - ### Using Monetary Credit with Azure Logic Apps You can pay for Azure Logic Apps charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third-party products and services, including those from the Azure Marketplace. |
logic-apps | Quickstart Create Logic Apps With Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-logic-apps-with-visual-studio.md | When you have your Azure Resource Group project, create your logic app with the | User account | Fabrikam <br> sophia-owen@fabrikam.com | The account that you used when you signed in to Visual Studio | | **Subscription** | Pay-As-You-Go <br> (sophia-owen@fabrikam.com) | The name for your Azure subscription and associated account | | **Resource Group** | MyLogicApp-RG <br> (West US) | The Azure resource group and location for storing and deploying your logic app's resources |- | **Location** | **Same as Resource Group** | The location type and specific location for deploying your logic app resource. The location type is either an Azure region or an existing [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). <p>For this quickstart, keep the location type set to **Region** and the location set to **Same as Resource Group**. <p>**Note**: After you create your resource group project, you can [change the location type and the location](manage-logic-apps-with-visual-studio.md#change-location), but different location type affects your logic app in various ways. | + | **Location** | **Same as Resource Group** | The location type and location to deploy your logic app resource. <br><br>For this quickstart, keep the location type set to **Region** and the location set to **Same as Resource Group**. <br><br>**Note**: After you create your resource group project, you can [change the location type and the location](manage-logic-apps-with-visual-studio.md#change-location), but a different location type affects your logic app in various ways. | 1. The workflow designer opens a page that shows an introduction video and commonly used triggers. Scroll down past the video and triggers to **Templates**, and select **Blank Logic App**. |
logic-apps | Single Tenant Overview Compare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md | Last updated 08/11/2024 # Differences between Standard single-tenant logic apps versus Consumption multitenant logic apps -Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. When you create a logic app resource, you select either the **Consumption** workflow type or **Standard** workflow type. A Consumption logic app can have only one workflow that runs in *multitenant* Azure Logic Apps or an *integration service environment*. A Standard logic app can have one or multiple workflows that run in *single-tenant* Azure Logic Apps or an App Service Environment v3 (ASE v3). +Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. When you create a logic app resource, you select either the **Consumption** or **Standard** hosting option. A Consumption logic app can have only one workflow that runs in *multitenant* Azure Logic Apps. A Standard logic app can have one or multiple workflows that run in *single-tenant* Azure Logic Apps or an App Service Environment v3 (ASE v3). Before you choose which logic app resource to create, review the following guide to learn how the logic app workflow types compare with each other. You can then make a better choice about which logic app workflow and environment best suits your scenario, solution requirements, and the destination where you want to deploy and run your workflows. If you're new to Azure Logic Apps, review [What is Azure Logic Apps?](logic-apps ## Logic app workflow types and environments -The following table summarizes the differences between a **Consumption** logic app workflow and **Standard** logic app workflow. You also learn how the single-tenant environment differs from the multitenant environment and an integration service environment (ISE) for deploying, hosting, and running your workflows. +The following table summarizes the differences between a **Consumption** logic app workflow and **Standard** logic app workflow. You also learn how the single-tenant environment differs from the multitenant environment for deploying, hosting, and running your workflows. [!INCLUDE [Logic app workflow and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)] When you use the new built-in connector operations, you create connections calle ### Direct access to resources in Azure virtual networks -Workflows that run in either single-tenant Azure Logic Apps or in an *integration service environment (ISE)* can directly access secured resources such as virtual machines (VMs), other services, and systems that exist in an [Azure virtual network](../virtual-network/virtual-networks-overview.md). +Workflows that run in single-tenant Azure Logic Apps can directly access secured resources such as virtual machines (VMs), other services, and systems that exist in an [Azure virtual network](../virtual-network/virtual-networks-overview.md). 
-Both single-tenant Azure Logic Apps and an ISE are dedicated instances of the Azure Logic Apps service, use dedicated resources, and run separately from multitenant Azure Logic Apps. Running workflows in a dedicated instance helps reduce the impact that other Azure tenants might have on app performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). +Single-tenant Azure Logic Apps is a dedicated instance of the Azure Logic Apps service, uses dedicated resources, and runs separately from multitenant Azure Logic Apps. Running workflows in a dedicated instance helps reduce the impact that other Azure tenants might have on app performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). -Single-tenant Azure Logic Apps and an ISE also provide the following benefits: +Single-tenant Azure Logic Apps also provides the following benefits: -* Your own static IP addresses, which are separate from the static IP addresses that are shared by the logic apps in the multitenant Azure Logic Apps. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE. +* Your own static IP addresses, which are separate from the static IP addresses that are shared by the logic apps in multitenant Azure Logic Apps. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems. * Increased limits on run duration, storage retention, throughput, HTTP request and response timeouts, message sizes, and custom connector requests. For more information, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md). To create a logic app resource based on the environment that you want, you have | Azure PowerShell | [Az.LogicApp module](/powershell/module/az.logicapp) | [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) | | Azure REST API | [Azure Logic Apps REST API](/rest/api/logic) | [Get started with Azure REST API reference](/rest/api/azure) | -**Integration service environment** --| Option | Resources and tools | More information | -|--||| -| Azure portal | **Consumption** logic app deployed to an existing ISE resource | Same as [Quickstart: Create an example Consumption logic app workflow in multitenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md), but select an ISE, not a multitenant region. | - Although your development experiences differ based on whether you create **Consumption** or **Standard** logic app resources, you can find and access all your deployed logic apps under your Azure subscription. For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Standard** logic app resources. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but **Consumption** logic apps appear in the **Azure** window under the **Azure Logic Apps (Consumption)** extension, while **Standard** logic apps appear under the **Resources** section. 
For the **Standard** logic app workflow, these capabilities have changed, or the * **Backup and restore for workflow run history**: **Standard** logic apps currently don't support backup and restore for workflow run history. -* **Deployment targets**: You can't deploy a **Standard** logic app resource to an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). - * **Terraform templates**: You can't use these templates with a **Standard** logic app resource for complete infrastructure deployment. For more information, see [What is Terraform on Azure?](/azure/developer/terraform/overview) * **Azure API Management**: You currently can't import a **Standard** logic app resource into Azure API Management. However, you can import a **Consumption** logic app resource. |
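To make the resource-creation options in the table above concrete, here's a minimal Azure CLI sketch for creating a **Standard** logic app. It assumes a recent Azure CLI with the `az logicapp` command group and uses placeholder resource names; the storage account name must be globally unique, and a default Workflow Standard hosting plan is created when you don't pass `--plan`.

```azurecli
# Create a resource group and the storage account that a Standard logic app requires.
az group create --name MyStandardRG --location eastus

az storage account create \
    --name mylogicappstore123 \
    --resource-group MyStandardRG \
    --location eastus \
    --sku Standard_LRS

# Create the single-tenant (Standard) logic app resource,
# which can host one or more workflows.
az logicapp create \
    --name my-standard-logic-app \
    --resource-group MyStandardRG \
    --storage-account mylogicappstore123
```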
logic-apps | View Workflow Status Run History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/view-workflow-status-run-history.md | To monitor and review the workflow run status for Standard workflows, see the fo For real-time event monitoring and richer debugging, you can set up diagnostics logging for your logic app workflow by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](monitor-workflows-collect-diagnostic-data.md). -> [!NOTE] -> -> If your workflow runs in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) -> that was created to use an [internal access endpoint](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access), -> you can view and access inputs and outputs from a workflow run history *only from inside your virtual network*. Make sure that you have network -> connectivity between the private endpoints and the computer from where you want to access run history. For example, your client computer can exist -> inside the ISE's virtual network or inside a virtual network that's connected to the ISE's virtual network, for example, through peering or a virtual -> private network. For more information, see [ISE endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access). - <a name="review-trigger-history"></a> ## Review trigger history |
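As a sketch of what that diagnostics setup can look like from the command line, the following Azure CLI commands send a workflow's runtime events to a Log Analytics workspace. The resource names are placeholders, and the `WorkflowRuntime` log category assumes a Consumption (`Microsoft.Logic/workflows`) resource.

```azurecli
# Resource IDs for the logic app and the Log Analytics workspace (placeholder names).
LOGIC_APP_ID=$(az resource show \
    --resource-group MyRG \
    --name MyLogicApp \
    --resource-type "Microsoft.Logic/workflows" \
    --query id --output tsv)

WORKSPACE_ID=$(az monitor log-analytics workspace show \
    --resource-group MyRG \
    --workspace-name MyWorkspace \
    --query id --output tsv)

# Route workflow runtime events and metrics to the workspace.
az monitor diagnostic-settings create \
    --name logicapp-diagnostics \
    --resource $LOGIC_APP_ID \
    --workspace $WORKSPACE_ID \
    --logs '[{"category": "WorkflowRuntime", "enabled": true}]' \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```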
openshift | Howto Deploy Java Jboss Enterprise Application Platform With Auto Redeploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-with-auto-redeploy.md | - Title: Auto-redeploy JBoss EAP with Source-to-Image -titleExtension: Azure Red Hat OpenShift -description: Shows you how to quickly set up JBoss EAP on Azure Red Hat OpenShift (ARO) using the Azure portal and deploy an app with the Source-to-Image (S2I) feature. --- Previously updated : 06/26/2024--# customer intent: As a developer, I want to learn how to auto-redeploy JBoss EAP on Azure Red Hat OpenShift using Source-to-Image (S2I) so that I can quickly deploy and update my application. ---# Quickstart: Auto-redeploy JBoss EAP on Azure Red Hat OpenShift with Source-to-Image (S2I) --This article shows you how to quickly set up JBoss Enterprise Application Platform (EAP) on Azure Red Hat OpenShift (ARO) and deploy an app with the Source-to-Image (S2I) feature. The Source-to-Image feature enables you to build container images from source code without having to write Dockerfiles. The article uses a sample application that you can fork from GitHub and deploy to Azure Red Hat OpenShift. The article also shows you how to set up a webhook in GitHub to trigger a new build in OpenShift every time you push a change to the repository. --This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to ARO. The offer automatically provisions resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). If you prefer manual step-by-step guidance for running JBoss EAP on ARO that doesn't use the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro). --## Prerequisites --- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]--- A Red Hat account with a complete profile. If you don't have one, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register).--- A local developer command line with a UNIX-like command environment - for example, Ubuntu, macOS, or Windows Subsystem for Linux - and Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).--- An Azure identity that you use to sign in that has either the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role and the [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) role or the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md)-----## Create a Microsoft Entra service principal --Use the following steps to create a service principal: --1. 
Create a service principal by using the following command: -- ```azurecli - az ad sp create-for-rbac --name "sp-aro-s2i-$(date +%s)" - ``` -- This command produces output similar to the following example: -- ```output - { - "appId": "<app-ID>", - "displayName": "<display-Name>", - "password": "<password>", - "tenant": "<tenant>" - } - ``` --1. Copy the value of the `appId` and `password` fields. You use these values later in the deployment process. --## Fork the repository on GitHub --Use the following steps to fork the sample repo: --1. Open the repository <https://github.com/redhat-mw-demos/eap-on-aro-helloworld> in your browser. -1. Fork the repository to your GitHub account. -1. Copy the URL of the forked repository. --## Deploy JBoss EAP on Azure Red Hat OpenShift --This section shows you how to deploy JBoss EAP on Azure Red Hat OpenShift. --Use the following steps to find the offer and fill out the **Basics** pane: --1. In the search bar at the top of the Azure portal, enter *JBoss EAP*. In the search results, in the **Marketplace** section, select **JBoss EAP on Azure Red Hat OpenShift**, as shown in the following screenshot: -- :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/marketplace-search-results.png" alt-text="Screenshot of the Azure portal that shows JBoss EAP on Azure Red Hat OpenShift in search results." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/marketplace-search-results.png"::: -- You can also go directly to the [JBoss EAP on Azure Red Hat OpenShift offer](https://aka.ms/eap-aro-portal) on the Azure portal. --1. On the offer page, select **Create**. --1. On the **Basics** pane, ensure that the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section. --1. You must deploy the offer in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, *eaparo033123rg*. --1. Under **Instance details**, select the region for the deployment. For a list of Azure regions where OpenShift operates, see [Regions for Red Hat OpenShift 4.x on Azure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=openshift&regions=all). --1. Select **Next: ARO**. --Use the following steps to fill out the **ARO** pane shown in the following screenshot: ---1. Under **Create a new cluster**, select **Yes**. --1. Under **Provide information to create a new cluster**, for **Red Hat pull secret**, use the Red Hat pull secret that you obtained in the [Get a Red Hat pull secret](#get-a-red-hat-pull-secret) section. Use the same value for **Confirm secret**. --1. For **Service principal client ID**, use the `appId` value that you obtained in the [Create a Microsoft Entra service principal](#create-a-microsoft-entra-service-principal) section. --1. For **Service principal client secret**, use the `password` value that you obtained in the [Create a Microsoft Entra service principal](#create-a-microsoft-entra-service-principal) section. Use the same value for **Confirm secret**. --1. Select **Next: EAP Application**. --The following steps show you how to fill out the **EAP Application** pane shown in the following screenshot, and then start the deployment. ---1. 
Select **Yes** for **Deploy an application to OpenShift using Source-to-Image (S2I)?**. -1. For **Deploy your own application or a sample application?**, select **Your own application**. -1. For **Application source code repository URL**, use the URL of the forked repository that you created in the [Fork the repository on GitHub](#fork-the-repository-on-github) section. -1. For **Red Hat Container Registry Service account username**, use the username of the Red Hat Container Registry service account that you created in the [Create a Red Hat Container Registry service account](#create-a-red-hat-container-registry-service-account) section. -1. For **Red Hat Container Registry Service account password**, use the password of the Red Hat Container Registry service account that you created in the [Create a Red Hat Container Registry service account](#create-a-red-hat-container-registry-service-account) section. -1. For **Confirm password**, use the same value as in the previous step. -1. Leave other fields with default values. -1. Select **Next: Review + create**. -1. Select **Review + create**. Ensure that the green **Validation Passed** message appears at the top. If the message doesn't appear, fix any validation problems, and then select **Review + create** again. -1. Select **Create**. -1. Track the progress of the deployment on the **Deployment is in progress** page. --Depending on network conditions and other activity in your selected region, the deployment might take up to 40 minutes to complete. --## Verify the functionality of the deployment --This section shows you how to verify that the deployment completed successfully. --If you navigated away from the **Deployment is in progress** page, use the following steps to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to step 5. --1. In the corner of any Azure portal page, select the hamburger menu and then select **Resource groups**. --1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group. --1. In the navigation pane, in the **Settings** section, select **Deployments**. You see an ordered list of the deployments to this resource group, with the most recent one first. --1. Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the preceding section. Select the oldest deployment, as shown in the following screenshot. -- :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployments.png" alt-text="Screenshot of the Azure portal that shows JBoss EAP on Azure Red Hat OpenShift deployments with the oldest deployment highlighted." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployments.png"::: --1. In the navigation pane, select **Outputs**. This list shows the output values from the deployment, which includes some useful information like **cmdToGetKubeadminCredentials** and **consoleUrl**. -- :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployment-outputs.png" alt-text="Screenshot of the Azure portal that shows JBoss EAP on Azure Red Hat OpenShift deployment outputs." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployment-outputs.png"::: --1. 
Open your local terminal, paste the value from the **cmdToGetKubeadminCredentials** field, and execute it. You see the admin account and credentials for signing in to the OpenShift cluster console portal. The following example shows an admin account: -- ```azurecli - az aro list-credentials -g eaparo033123rg -n aro-cluster - ``` -- This command produces output similar to the following example: -- ```output - { - "kubeadminPassword": "xxxxx-xxxxx-xxxxx-xxxxx", - "kubeadminUsername": "kubeadmin" - } - ``` --1. Paste the value from the **consoleUrl** field into an Internet-connected web browser, and then press <kbd>Enter</kbd>. -1. Fill in the admin user name and password, then select **Log in**. -1. In the admin console of Azure Red Hat OpenShift, select **Operators** > **Installed Operators**, where you can confirm that the **JBoss EAP** operator is successfully installed, as shown in the following screenshot: -- :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/red-hat-openshift-cluster-console-portal-operators.png" alt-text="Screenshot of the Red Hat OpenShift cluster console portal that shows the Installed operators page." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/red-hat-openshift-cluster-console-portal-operators.png"::: --1. Paste the value from the **appEndpoint** field into an Internet-connected web browser, and then press <kbd>Enter</kbd>. You see the JBoss EAP application running on Azure Red Hat OpenShift, as shown in the following screenshot: -- :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/jboss-eap-application.png" alt-text="Screenshot of the JBoss EAP application running on Azure Red Hat OpenShift." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/jboss-eap-application.png"::: ---## Set up webhooks with OpenShift --This section shows you how to set up and use GitHub webhooks with OpenShift. --### Get the GitHub webhook URL --Use the following steps to get the webhook URL: --1. Navigate to the **OpenShift Web Console** with the URL provided in the **consoleUrl** field. -1. Navigate to **Builds** > **BuildConfigs** > **eap-app-build-artifacts**. -1. Select **Copy URL with Secret**, as shown in the following screenshot: -- :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/github-webhook-url.png" alt-text="Screenshot of the Red Hat OpenShift cluster console portal BuildConfig details page with the Copy URL with Secret link highlighted." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/github-webhook-url.png"::: --### Configure GitHub webhooks --Use the following steps to configure webhooks: --1. Open the forked repository in your GitHub account. -1. Navigate to the **Settings** tab. -1. Navigate to the **Webhooks** tab. -1. Select **Add webhook**. -1. Paste the **URL with Secret** value into the **Payload URL** field. -1. Change the **Content type** value to **application/json**. -1. For **Which events would you like to trigger this webhook?**, select **Just the push event**. -1. Select **Add webhook**. -- :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/github-webhook-settings.png" alt-text="Screenshot of GitHub that shows the Settings tab and Webhooks pane." 
lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/github-webhook-settings.png"::: --From now on, every time you push a change to the repository, the webhook triggers a new build in OpenShift. --### Test the GitHub webhooks --Use the following steps to test the webhooks: --1. Select the **Code** tab in the forked repository. -1. Navigate to the *src/main/webapp/https://docsupdatetracker.net/index.html* file. -1. After you have the file on the screen, navigate to the **Edit** button. -1. Change the line 38 from `<h1 class="display-4">JBoss EAP on Azure Red Hat OpenShift</h1>` to `<h1 class="display-4">JBoss EAP on Azure Red Hat OpenShift - Updated - 01 </h1>`. -1. Select **Commit changes**. --After you commit the changes, the webhook triggers a new build in OpenShift. From the OpenShift Web Console, navigate to **Builds** > **Builds** to see a new build in **Running** status. --### Verify the update --Use the following steps to verify the update: --1. After the build completes, navigate to **Builds** > **Builds** to see two new builds in **Complete** status. -1. Open a new browser tab and navigate to the **appEndpoint** URL. -- You should see the updated message on the screen. -- :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/jboss-eap-application-with-updated-info.png" alt-text="Screenshot of the JBoss EAP sample application with updated information." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/jboss-eap-application-with-updated-info.png"::: -- |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | description: Learn about the latest new features and announcements in Microsoft S Previously updated : 09/04/2024 Last updated : 09/09/2024 # What's new in Microsoft Sentinel The listed features were released in the last three months. For information abou [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] +## September 2024 ++- [Microsoft Sentinel now generally available (GA) in Azure Israel Central](#microsoft-sentinel-now-generally-available-ga-in-azure-israel-central) ++### Microsoft Sentinel now generally available (GA) in Azure Israel Central ++Microsoft Sentinel is now available in the *Israel Central* Azure region, with the same feature set as all other Azure Commercial regions. ++For more information, see [Microsoft Sentinel feature support for Azure commercial/other clouds](feature-availability.md) and [Geographical availability and data residency in Microsoft Sentinel](geographical-availability-data-residency.md). + ## August 2024 - [Export and import automation rules (Preview)](#export-and-import-automation-rules-preview) |
storage-mover | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/release-notes.md | |