Updates from: 05/17/2021 03:03:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/threat-management.md
Last updated: 05/15/2021
Credential attacks lead to unauthorized access to resources. Passwords that are set by users are required to be reasonably complex. Azure AD B2C has mitigation techniques in place for credential attacks. Mitigation includes detection of brute-force credential attacks and dictionary credential attacks. By using various signals, Azure Active Directory B2C (Azure AD B2C) analyzes the integrity of requests. Azure AD B2C is designed to intelligently differentiate intended users from hackers and botnets.
Azure AD B2C uses a sophisticated strategy to lock accounts. The accounts are locked based on the IP of the request and the passwords entered. The duration of the lockout also increases based on the likelihood that it's an attack. After a password is tried 10 times unsuccessfully (the default attempt threshold), a one-minute lockout occurs. The next time a login is unsuccessful after the account is unlocked (that is, after the account has been automatically unlocked by the service once the lockout period expires), another one-minute lockout occurs and continues for each unsuccessful login. Entering the same or a similar password repeatedly doesn't count as multiple unsuccessful logins.
> [!NOTE]
> This feature is supported by [user flows, custom policies](user-flow-overview.md), and [ROPC](add-ropc-policy.md) flows. It's activated by default so you don't need to configure it in your user flows or custom policies.
## Unlock accounts

The first 10 lockout periods are one minute long. The next 10 lockout periods are slightly longer and increase in duration after every 10 lockout periods. The lockout counter resets to zero after a successful login when the account isn't locked. Lockout periods can last up to five hours. Users must wait for the lockout duration to expire. However, the user can unlock their account by using the self-service [password reset user flow](add-password-reset-policy.md).

## Manage password protection settings
To manage password protection settings, including the lockout threshold:
   <br />*Setting the lockout threshold to 5 in **Password protection** settings*.

1. Select **Save**.
## Testing the password protection settings
The smart lockout feature uses many factors to determine when an account should be locked, but the primary factor is the password pattern. The smart lockout feature considers slight variations of a password as a set, and they're counted as a single try. For example:
When testing the smart lockout feature, use a distinctive pattern for each password you enter. Consider using password generation web apps, such as [https://passwordsgenerator.net/](https://passwordsgenerator.net/).
When the smart lockout threshold is reached, you'll see the following message while the account is locked: **Your account is temporarily locked to prevent unauthorized use. Try again later**. The error messages can be [localized](localization-string-ids.md#sign-up-or-sign-in-error-messages).

## Viewing locked-out accounts
To obtain information about locked-out accounts, you can check the Active Directory [sign-in activity report](../active-directory/reports-monitoring/concept-sign-ins.md). Under **Status**, select **Failure**. Failed sign-in attempts with a **Sign-in error code** of `50053` indicate a locked account:

![Section of Azure AD sign-in report showing locked-out account](./media/threat-management/portal-01-locked-account.png)
To learn about viewing the sign-in activity report in Azure Active Directory, see [Sign-in activity report error codes](../active-directory/reports-monitoring/concept-sign-ins.md).
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-node-migration.md
When working with ADAL Node, you were likely using the **Azure AD v1.0 endpoint**. Apps migrating from ADAL to MSAL should also consider switching to **Azure AD v2.0 endpoint**.
1. Review the [differences between v1 and v2 endpoints](../azuread-dev/azure-ad-endpoint-comparison.md)
1. Update, if necessary, your existing app registrations accordingly.

> [!NOTE]
adal.logging.setLoggingOptions({
        console.log(message);
        if (error) {
            console.log(error);
        }
    },
    level: logging.LOGGING_LEVEL.VERBOSE, // provide the logging level
const cca = new msal.ConfidentialClientApplication(msalConfig);
An important difference between the v1.0 and v2.0 endpoints is how resources are accessed. In ADAL Node, you would first register a permission on the app registration portal, and then request an access token for a resource (such as Microsoft Graph) as shown below:

```javascript
authenticationContext.acquireTokenWithAuthorizationCode(
    req.query.code,
    redirectUri,
    resource, // e.g. 'https://graph.microsoft.com'
    clientId,
    clientSecret,
    function (err, response) {
        // do something with the authentication response
    }
);
```

MSAL Node supports both **v1.0** and **v2.0** endpoints. The v2.0 endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource:

```javascript
const tokenRequest = {
    code: req.query.code,
    scopes: ["https://graph.microsoft.com/User.Read"],
    redirectUri: REDIRECT_URI,
};
pca.acquireTokenByCode(tokenRequest).then((response) => {
    // do something with the authentication response
}).catch((error) => {
    console.log(error);
});
```

One advantage of the scope-centric model is the ability to use *dynamic scopes*. When building applications using v1.0, you needed to register the full set of permissions (called *static scopes*) required by the application for the user to consent to at the time of login. In v2.0, you can use the scope parameter to request the permissions at the time you want them (hence, *dynamic scopes*). This allows the user to provide **incremental consent** to scopes. So if at the beginning you just want the user to sign in to your application and you don't need any kind of access, you can do so. If later you need the ability to read the calendar of the user, you can then request the calendar scope in the acquireToken methods and get the user's consent. For more, see [Resources and scopes](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/resources-and-scopes.md).
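To illustrate incremental consent, here's a sketch of a hypothetical helper (not part of MSAL Node) that extends a minimal sign-in request with additional scopes only at the point they're needed:

```javascript
// Hypothetical helper illustrating incremental (dynamic) scope requests:
// start with a minimal sign-in request, then add scopes only when needed.
function buildTokenRequest(baseRequest, extraScopes = []) {
  // Deduplicate scopes so repeated requests stay idempotent
  const scopes = [...new Set([...(baseRequest.scopes || []), ...extraScopes])];
  return { ...baseRequest, scopes };
}

// Initial sign-in: only OpenID Connect scopes, no resource access yet
const signInRequest = {
  scopes: ["openid", "profile"],
  redirectUri: "http://localhost:3000/redirect",
};

// Later, when the app needs to read the user's calendar,
// request that scope dynamically and let the user consent then
const calendarRequest = buildTokenRequest(signInRequest, ["Calendars.Read"]);

console.log(calendarRequest.scopes); // ['openid', 'profile', 'Calendars.Read']
```

The resulting request object can then be passed to the acquireToken methods, prompting the user to consent to the new scope only at that point.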
However, some methods in ADAL Node are deprecated, while MSAL Node offers new methods:
| ADAL | MSAL | Notes |
|--|--|--|
| `acquireUserCode` | N/A | Merged with `acquireTokenByDeviceCode` (see above) |
| N/A | `acquireTokenOnBehalfOf` | A new method that abstracts [OBO flow](./v2-oauth2-on-behalf-of-flow.md) |
| `acquireTokenWithClientCertificate` | N/A | No longer needed as certificates are assigned during initialization now (see [configuration options](#configure-msal)) |
| N/A | `getAuthCodeUrl` | A new method that abstracts [authorize endpoint](./active-directory-v2-protocols.md#endpoints) URL construction |
## Use promises instead of callbacks
In ADAL Node, callbacks are used for any operation after the authentication succeeds.
var context = new AuthenticationContext(authorityUrl, validateAuthority);

context.acquireTokenWithClientCredentials(resource, clientId, clientSecret, function(err, response) {
    if (err) {
        console.log(err);
    } else {
        // do something with the authentication response
    }
});
```
const cachePlugin = {
};
```
If you are developing [public client applications](./msal-client-applications.md) like desktop apps, the [Microsoft Authentication Extensions for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/extensions/msal-node-extensions) offers secure mechanisms for client applications to perform cross-platform token cache serialization and persistence. Supported platforms are Windows, Mac and Linux.
> [!NOTE]
> [Microsoft Authentication Extensions for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/extensions/msal-node-extensions) is **not** recommended for web applications, as it may lead to scale and performance issues. Instead, web apps are recommended to persist the cache in session.
npm start
## Example: Securing web apps with ADAL Node vs. MSAL Node
The snippet below demonstrates a confidential client web app in the Express.js framework. It performs a sign-in when a user hits the authentication route `/auth`, acquires an access token for Microsoft Graph via the `/redirect` route and then displays the contents of that token.

<table>
<tr><td> Using ADAL Node </td><td> Using MSAL Node </td></tr>
<tr>
<td>
```javascript
// Import dependencies
adal.Logging.setLoggingOptions({
});

// Auth code request URL template
var templateAuthzUrl = 'https://login.microsoftonline.com/'
    + tenant + '/oauth2/authorize?response_type=code&client_id='
    + clientId + '&redirect_uri=' + redirectUri
    + '&state=<state>&resource=' + resource;
// Initialize express
var app = express();
app.locals.state = "";
app.get('/auth', function(req, res) {
    // Create a random string to use against XSRF
crypto.randomBytes(48, function(ex, buf) {
        app.locals.state = buf.toString('base64')
            .replace(/\//g, '_')
            .replace(/\+/g, '-');
// Construct auth code request URL
        var authorizationUrl = templateAuthzUrl
            .replace('<state>', app.locals.state);

        res.redirect(authorizationUrl);
    });
});
app.get('/redirect', function(req, res) {
    }

    // Initialize an AuthenticationContext object
    var authenticationContext =
        new adal.AuthenticationContext(authorityUrl);

    // Exchange auth code for tokens
    authenticationContext.acquireTokenWithAuthorizationCode(
    );
});
app.listen(3000, function() {
    console.log(`listening on port 3000!`);
});
```
</td>
<td>
```javascript
// Import dependencies
app.get('/auth', (req, res) => {
    };

    // Request auth code, then redirect
    cca.getAuthCodeUrl(authCodeUrlParameters)
        .then((response) => {
            res.redirect(response);
        }).catch((error) => res.send(error));
});

app.get('/redirect', (req, res) => {
    };

    // Exchange the auth code for tokens
    cca.acquireTokenByCode(tokenRequest)
        .then((response) => {
            res.send(response);
        }).catch((error) => res.status(500).send(error));
});
app.listen(3000, () =>
    console.log(`listening on port 3000!`));
```
</td>
</tr>
</table>
## Next steps

- [MSAL Node API reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/modules/_azure_msal_node.html)
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
These samples show how to write a single-page application secured with the Microsoft identity platform.
> | Language/<br/>Platform | Code sample | Description | Auth libraries | Auth flow |
> | - | -- | -- | - | -- |
> | Angular | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa) | &#8226; Signs in users with AAD <br/>&#8226; Calls Microsoft Graph | MSAL Angular | Auth code flow (with PKCE) |
> | Angular | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial) | &#8226; [Signs in users](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Signs in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Calls .NET Core web API](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Calls .NET Core web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Calls Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Calls .NET Core web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/2-call-api-pop/README.md)<br/>&#8226; [Uses App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Uses Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploys to Azure Storage & App Service](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/4-Deployment/README.md)| MSAL Angular | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Proof of Possession (PoP)|
> | Blazor WebAssembly | [GitHub repo](https://github.com/Azure-Samples/ms-identity-blazor-wasm) | &#8226; Signs in users<br/>&#8226; Calls Microsoft Graph | MSAL.js | Auth code flow (with PKCE) |
> | JavaScript | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-v2) | &#8226; Signs in users<br/>&#8226; Calls Microsoft Graph | MSAL.js | Auth code flow (with PKCE) |
> | JavaScript | [GitHub repo](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa) | &#8226; Signs in users (B2C)<br/>&#8226; Calls Node.js web API | MSAL.js | Auth code flow (with PKCE) |
> | JavaScript | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-tutorial) | &#8226; [Signs in users](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Signs in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Calls Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/1-call-api/README.md)<br/>&#8226; [Calls Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/2-call-api-b2c/README.md)<br/>&#8226; [Calls Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph/README.md)<br/>&#8226; [Calls Node.js web API via OBO & CA](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/2-call-api-api-c)| MSAL.js | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Conditional Access (CA) |
> | React | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-react-spa) | &#8226; Signs in users<br/>&#8226; Calls Microsoft Graph | MSAL React | Auth code flow (with PKCE) |
> | React | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial) | &#8226; [Signs in users](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Signs in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Calls Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Calls Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Calls Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Calls Node.js web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/2-call-api-pop/README.md)<br/>&#8226; [Uses App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Uses Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploys to Azure Storage & App Service](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/1-deploy-storage/README.md)<br/>&#8226; [Deploys to Azure Static Web Apps](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/2-deploy-static/README.md)| MSAL React | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Conditional Access (CA)<br/>&#8226; Proof of Possession (PoP) |
## Web applications
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/monitor-app-configuration-reference.md
Title: Monitoring Azure App Configuration data reference
description: Important reference material needed when you monitor App Configuration
Last updated: 05/05/2021
# Monitoring App Configuration data reference

This article is a reference for the monitoring data collected by App Configuration. See [Monitoring App Configuration](monitor-app-configuration.md) for a walkthrough on how to collect and analyze monitoring data for App Configuration.

## Metrics

Resource Provider and Type: [App Configuration Platform Metrics](/azure/azure-monitor/essentials/metrics-supported#microsoftappconfigurationconfigurationstores)

| Metric | Unit | Description |
|-|-|-|
| Http Incoming Request Count | Count | Total number of incoming HTTP requests |
| Http Incoming Request Duration | Milliseconds | Server-side duration of an HTTP request |
| Throttled Http Request Count | Count | Throttled requests are HTTP requests that return a 429 status code (too many requests) |

For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
## Metric dimensions

App Configuration has the following dimensions associated with its metrics.

| Metric name | Dimension description |
|-|-|
| Http Incoming Request Count | The count of total HTTP requests. The supported dimensions are the **HttpStatusCode** or **AuthenticationScheme** of each request. **AuthenticationScheme** can be filtered by AAD or HMAC authentication. |
| Http Incoming Request Duration | The server-side duration of each request. The supported dimensions are the **HttpStatusCode** or **AuthenticationScheme** of each request. **AuthenticationScheme** can be filtered by AAD or HMAC authentication. |
| Throttled Http Request Count | This metric does not have any dimensions. |

For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
## Resource logs

This section lists the resource log category types collected for App Configuration.

| Resource log type | Further information |
|-|-|
| HttpRequest | [App Configuration Resource Log Category Information](/azure/azure-monitor/platform/resource-logs-categories) |

For more information, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
## Azure Monitor Logs tables

This section refers to all of the Azure Monitor Logs Kusto tables relevant to App Configuration and available for query by Log Analytics.

| Table | Notes |
|-|-|
| [AACHttpRequest](/azure/azure-monitor/reference/tables/aachttprequest) | Entries for every HTTP request sent to a selected App Configuration resource. |
| [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) | Entries from the Azure Activity log that provide insight into any subscription-level or management group level events that have occurred in Azure. |

For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
### Diagnostics tables

App Configuration uses the [AACHttpRequest Table](/azure/azure-monitor/reference/tables/aachttprequest) to store resource log information.

**Http Requests**
| Property | Type | Description |
|-|-|-|
| Category | string | The log category of the event, always HttpRequest. |
| ClientIPAddress | string | IP address of the client that sent the request. |
| ClientRequestId | string | Request ID provided by the client. |
| CorrelationId | string | GUID for correlated logs. |
| DurationMs | int | The duration of the operation in milliseconds. |
| Method | string | HTTP request method (GET or POST). |
| RequestId | string | Unique request ID generated by the server. |
| RequestLength | int | Length in bytes of the HTTP request. |
| RequestURI | string | URI of the request; can include key and label name. |
| _ResourceId | string | A unique identifier for the resource that the record is associated with. |
| ResponseLength | int | Length in bytes of the HTTP response. |
| SourceSystem | string | |
| StatusCode | int | HTTP status code of the request. |
| TenantId | string | WorkspaceId of the request. |
| TimeGenerated | datetime | Timestamp (UTC) when the log was generated because a request was sent. |
| Type | string | The name of the table. |
| UserAgent | string | User agent provided by the client. |
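Once requests land in the `AACHttpRequest` table, throttling shows up as entries whose `StatusCode` is 429. As a rough illustration (using fabricated sample rows that follow the schema above), filtering for throttled requests looks like:

```javascript
// Fabricated sample rows shaped like AACHttpRequest entries, for illustration only.
const sampleRows = [
  { TimeGenerated: '2021-05-05T10:00:00Z', Method: 'GET', StatusCode: 200, DurationMs: 12 },
  { TimeGenerated: '2021-05-05T10:00:01Z', Method: 'GET', StatusCode: 429, DurationMs: 3 },
  { TimeGenerated: '2021-05-05T10:00:02Z', Method: 'POST', StatusCode: 200, DurationMs: 20 },
];

// Throttled requests are those that returned HTTP 429 (too many requests)
const throttled = sampleRows.filter((row) => row.StatusCode === 429);

console.log(throttled.length); // 1
```

In a Log Analytics workspace you would express the same filter as a Kusto query over the `AACHttpRequest` table instead of client-side JavaScript.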
## See also

* See [Monitoring Azure App Configuration](monitor-app-configuration.md) for a description of monitoring Azure App Configuration.
* See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/monitor-app-configuration.md
Title: Monitor Azure App Configuration
description: Start here to learn how to monitor App Configuration
Last updated: 05/05/2021

# Monitoring App Configuration

When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.

This article describes the monitoring data generated by App Configuration. App Configuration uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+## Monitoring overview page in Azure portal
+The **Overview** page in the Azure portal includes a brief view of the resource usage, such as the total number of requests, number of throttled requests, and request duration per configuration store. This information is useful, but only displays a small amount of the monitoring data available. Some of this monitoring data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
+
+> [!div class="mx-imgBorder"]
+> ![Monitoring on the Overview Page](./media/monitoring-overview-page.png)
+
+## Monitoring data 
+App Configuration collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-azure-resources). See [Monitoring App Configuration data reference](/monitor-app-configuration-reference.md) for detailed information on the metrics and logs created by App Configuration.
+
+## Collection and routing
+Platform metrics and the activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource logs are not collected and stored until you create a diagnostic setting and route them to one or more locations. For example, to view logs and metrics for a configuration store in near real time in Azure Monitor, collect the resource logs in a Log Analytics workspace. If you do not already have one, create a [Log Analytics workspace](/azure/azure-monitor/logs/quick-create-workspace) and follow these steps to create and enable a diagnostic setting.
+
+ #### [Portal](#tab/portal)
+
+1. Sign in to the Azure portal.
+
+1. Navigate to your App Configuration store.
+
+1. In the **Monitoring** section, select **Diagnostic settings**, then select **+Add diagnostic setting**.
+ > [!div class="mx-imgBorder"]
+ > ![Add a diagnostic setting](./media/diagnostic-settings-add.png)
+
+1. In the **Diagnostic setting** page, enter a name for your setting, then select **HttpRequest** and choose the destination to send your logs to. To send them to a Log Analytics workspace, choose **Send to Log Analytics workspace**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Details of the diagnostic settings](./media/monitoring-diagnostic-settings-details.png)
+
+1. Enter the name of your **Subscription** and **Log Analytics Workspace**.
+1. Select **Save** and verify that the Diagnostic Settings page now lists your new diagnostic setting.
+
+
+ ### [Azure CLI](#tab/cli)
+
+1. Open the Azure Cloud Shell, or if you've installed the Azure CLI locally, open a command console application such as Windows PowerShell.
+
+1. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account that you want to enable logs for.
+
+ ```azurecli
+ az account set --subscription <your-subscription-id>
+ ```
+
+1. Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings?view=azure-cli-latest#az_monitor_diagnostic_settings_create&preserve-view=true) command.
+
+ ```azurecli
+ az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <app-configuration-resource-id> --logs '[{"category": "<category-name>", "enabled": true, "retentionPolicy": {"days": <days>, "enabled": <retention-bool>}}]'
+ ```
+
+ ### [PowerShell](#tab/PowerShell)
+
+1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` cmdlet. Then, follow the on-screen directions.
+
+ ```PowerShell
+ Connect-AzAccount
+ ```
+
+1. Set your active subscription to the subscription of the App Configuration account that you want to enable logging for.
+
+ ```PowerShell
+ Set-AzContext -SubscriptionId <subscription-id>
+ ```
+
+1. To enable logs for a Log Analytics workspace, use the [Set-AzDiagnosticSetting](/previous-versions/azure/mt631625(v=azure.100)?redirectedfrom=MSDN) PowerShell cmdlet.
+
+ ```PowerShell
+ Set-AzDiagnosticSetting -ResourceId <app-configuration-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true
+ ```
+1. Verify that your diagnostic setting is correctly set and log categories are enabled.
+
+ ```PowerShell
 Get-AzDiagnosticSetting -ResourceId <app-configuration-resource-id>
+ ```
+
+For more information on creating a diagnostic setting using the Azure portal, CLI, or PowerShell, see [create a diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings).
+
+When you create a diagnostic setting, you specify which categories of logs to collect. For more information on the categories of logs for App Configuration, see [App Configuration monitoring data reference](/monitor-app-configuration-reference.md#resource-logs).
+
+## Analyzing metrics
+
+You can analyze metrics for App Configuration with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/platform/metrics-getting-started) for details on using this tool. For App Configuration, the following metrics are collected:
+
+* Http Incoming Request Count
+* Http Incoming Request Duration
+* Throttled Http Request Count (Http Status Code 429 Responses)
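+
+These metrics can also be enumerated from the command line. The following is a sketch using the Azure CLI `az monitor metrics list-definitions` command (substitute your own resource ID) to list the metric definitions exposed by a configuration store:
+
+```azurecli
+az monitor metrics list-definitions --resource <app-configuration-resource-id>
+```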
+
+In the portal, navigate to the **Metrics** section and select the **Metric Namespaces** and **Metrics** you want to analyze. This screenshot shows you the metrics view when selecting **Http Incoming Request Count** for your configuration store.
+
+> [!div class="mx-imgBorder"]
+> ![How to use App Config Metrics](./media/monitoring-analyze-metrics.png)
+
+For a list of the platform metrics collected for App Configuration, see [Monitoring App Configuration data reference metrics](/monitor-app-configuration-reference#metrics). For reference, you can also see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+
+## Analyzing logs
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/platform/diagnostic-logs-schema#top-level-resource-logs-schema).
+
+The [Activity log](/azure/azure-monitor/platform/activity-log) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+For a list of the types of resource logs collected for App Configuration, see [Monitoring App Configuration data reference](/monitor-app-configuration-reference#logs). For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring App Configuration data reference](/monitor-app-configuration-reference#azuremonitorlogstables).
+
+>[!IMPORTANT]
+> When you select **Logs** from the App Configuration menu, Log Analytics is opened with the query scope set to the current App Configuration resource. This means that log queries will only include data from that resource.
+If you want to run a query that includes data from other configuration stores or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/log-query/scope/) for details.
+
+In the portal, navigate to the **Logs** section, and then to the query editor. On the left under the **Tables** tab, select **AACHttpRequest** to see the logs of your configuration store. Enter a Kusto query into the editor and results will be displayed below.
+
+> [!div class="mx-imgBorder"]
+> ![Writing kusto queries in our logs](./media/monitoring-writing-queries.png)
+
+Following are sample queries that you can use to help you monitor your App Configuration resource.
+* List all Http Requests in the last three days
+ ```Kusto
+ AACHttpRequest
+ | where TimeGenerated > ago(3d)
+ ```
+
+* List all throttled requests (returned Http status code 429 for too many requests) in the last three days
+ ```Kusto
+ AACHttpRequest
+ | where TimeGenerated > ago(3d)
+ | where StatusCode == 429
+ ```
+
+* List the number of requests sent in the last three days by IP Address
+ ```Kusto
+ AACHttpRequest
+ | where TimeGenerated > ago(3d)
+ | summarize requestCount = count() by ClientIPAddress
+ | order by requestCount desc
+ ```
+
+* Create a pie chart of the types of status codes received in the last three days
+ ```Kusto
+ AACHttpRequest
+ | where TimeGenerated > ago(3d)
+ | summarize requestCount = count() by StatusCode
+ | order by requestCount desc
+ | render piechart
+ ```
+
+* List the number of requests sent by day for the last 14 days
+ ```Kusto
+ AACHttpRequest
+ | where TimeGenerated > ago(14d)
+ | extend Day = startofday(TimeGenerated)
+ | summarize requestcount=count() by Day
+ | order by Day desc
+ ```
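+
+* Chart the 95th-percentile request duration by day for the last 14 days (a sketch along the same lines as the queries above, using the `DurationMs` column from the data reference)
+ ```Kusto
+ AACHttpRequest
+ | where TimeGenerated > ago(14d)
+ | summarize p95DurationMs = percentile(DurationMs, 95) by bin(TimeGenerated, 1d)
+ | order by TimeGenerated desc
+ ```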
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/platform/alerts-metric-overview), [logs](/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](/azure/azure-monitor/platform/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+The following table lists common and recommended alert rules for App Configuration.
+
+| Alert type | Condition | Description  |
+|:|:|:|
+|Rate Limit on Http Requests | Status Code = 429  | The configuration store has exceeded the [hourly request quota](/faq#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration). Upgrade to a standard store or follow the [best practices](/howto-best-practices#reduce-requests-made-to-app-configuration) to optimize your usage. |
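+
+The recommended alert above can also be created from the command line. The following Azure CLI sketch assumes the throttled-request metric's internal name is `ThrottledHttpRequestCount`; verify the exact metric name and threshold against your store's metric definitions before relying on it:
+
+```azurecli
+az monitor metrics alert create \
+  --name throttled-requests-alert \
+  --resource-group <resource-group> \
+  --scopes <app-configuration-resource-id> \
+  --condition "total ThrottledHttpRequestCount > 10" \
+  --description "Alert when the configuration store is being throttled (HTTP 429)"
+```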
+## Next steps
+
+* See [Monitoring App Configuration data reference](/monitor-app-configuration-reference.md) for a reference of the metrics, logs, and other important values created by App Configuration.
+
+* See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource) for details on monitoring Azure resources.
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Connected Machine agent description: This article provides a detailed overview of the Azure Arc enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 05/10/2021 Last updated : 05/14/2021
Arc enabled servers support the installation of the Connected Machine agent on a
The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent: -- Windows Server 2008 R2, Windows Server 2012 R2 and higher (including Server Core)
+- Windows Server 2008 R2 SP1, Windows Server 2012 R2 and higher (including Server Core)
- Ubuntu 16.04, 18.04, and 20.04 LTS (x64) - CentOS Linux 7 and 8 (x64) - SUSE Linux Enterprise Server (SLES) 12 and 15 (x64)
The following versions of the Windows and Linux operating system are officially
> [!WARNING] > The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. See [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md) for a list of the reserved words.
+### Software requirements
+
+* .NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
+* Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
+ ### Required permissions * To onboard machines, you are a member of the **Azure Connected Machine Onboarding** or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
You can use the Azure portal to create a data collection rule and associate virt
> [!IMPORTANT] > There is currently a known issue where if the data collection rule creates a managed identity on a virtual machine that already has a user-assigned managed identity, the user-assigned identity is disabled.
+> [!NOTE]
+> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated with machines in other supported regions.
+ In the **Azure Monitor** menu in the Azure portal, select **Data Collection Rules** from the **Settings** section. Click **Add** to add a new Data Collection Rule and assignment. [![Data Collection Rules](media/data-collection-rule-azure-monitor-agent/data-collection-rules.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules.png#lightbox)
The following table shows examples for filtering events using a custom XPath.
Follow the steps below to create a data collection rule and associations using the REST API.
+> [!NOTE]
+> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated with machines in other supported regions.
+ 1. Manually create the DCR file using the JSON format shown in [Sample DCR](data-collection-rule-overview.md#sample-data-collection-rule). 2. Create the rule using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
Follow the steps below to create a data collection rule and association
## Create association using Resource Manager template
+> [!NOTE]
+> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated with machines in other supported regions.
+ You can create an association between an Azure virtual machine or Azure Arc enabled server using a Resource Manager template. See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates.
azure-monitor Java 2X Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-agent.md
# Monitor dependencies, caught exceptions, and method execution times in Java web apps > [!CAUTION]
-> The approach described in this document is no longer recommended.
+> This document applies to Application Insights Java 2.x which is no longer recommended.
>
-> The recommended approach to monitor Java applications is to use the auto-instrumentation without changing the code. Please follow the guidelines for [Application Insights Java 3.0 agent](./java-in-process-agent.md).
+> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
If you have [instrumented your Java web app with Application Insights SDK][java], you can use the Java Agent to get deeper insights, without any code changes:
azure-monitor Java 2X Collectd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-collectd.md
# collectd: Linux performance metrics in Application Insights [Deprecated]
-> [!IMPORTANT]
-> The **recommended approach** to monitor Java applications is to use the auto-instrumentation without changing the code. Please follow the guidelines for **[Application Insights Java 3.0 agent](./java-in-process-agent.md)**.
+> [!CAUTION]
+> This document applies to Application Insights Java 2.x which is no longer recommended.
+>
+> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
To explore Linux system performance metrics in [Application Insights](./app-insights-overview.md), install [collectd](https://collectd.org/), together with its Application Insights plug-in. This open-source solution gathers various system and network statistics.
azure-monitor Java 2X Filter Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-filter-telemetry.md
> [!CAUTION] > This document applies to Application Insights Java 2.x which is no longer recommended. >
-> Documentation for the latest version can be found at [Application Insights Java 3.0](./java-in-process-agent.md).
+> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
Filters provide a way to select the telemetry that your [Java web app sends to Application Insights](java-2x-get-started.md). There are some out-of-the-box filters that you can use, and you can also write your own custom filters.
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-get-started.md
> [!CAUTION] > This document applies to Application Insights Java 2.x which is no longer recommended. >
-> Documentation for the latest version can be found at [Application Insights Java 3.0](./java-in-process-agent.md).
+> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
In this quickstart, you use Application Insights SDK to instrument request, track dependencies, and collect performance counters, diagnose performance issues and exceptions, and write code to track what users do with your app.
Then refresh the project dependencies to get the binaries downloaded.
* `applicationinsights-core` gives you just the bare API, for example, if your application isn't servlet-based. * *How should I update the SDK to the latest version?*
- * As of November 2020, for monitoring Java applications we recommend auto-instrumentation using the Azure Monitor Application Insights Java 3.0 agent. For more information on how to get started, see [Application Insights Java 3.0 agent](./java-in-process-agent.md).
+ * As of November 2020, for monitoring Java applications we recommend using Application Insights Java 3.x. For more information on how to get started, see [Application Insights Java 3.x](./java-in-process-agent.md).
## Add an *ApplicationInsights.xml* file Add *ApplicationInsights.xml* to the resources folder in your project, or make sure it's added to your project's deployment class path. Copy the following XML into it.
Now publish your app to the server, let people use it, and watch the telemetry s
## Azure App Service, AKS, VMs config
-The best and easiest approach to monitor your applications running on any of Azure resource providers is to use Application Insights auto-instrumentation via [Java 3.0 agent](./java-in-process-agent.md).
+The best and easiest approach to monitor your applications running on any of Azure resource providers is to use
+[Application Insights Java 3.x](./java-in-process-agent.md).
## Exceptions and request failures
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-micrometer.md
Last updated 11/01/2018
> [!CAUTION] > This document applies to Application Insights Java 2.x which is no longer recommended. >
-> Documentation for the latest version can be found at [Application Insights Java 3.0](./java-in-process-agent.md).
+> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
Micrometer application monitoring measures metrics for JVM-based application code and lets you export the data to your favorite monitoring systems. This article will teach you how to use Micrometer with Application Insights for both Spring Boot and non-Spring Boot applications.
azure-monitor Java 2X Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-trace-logs.md
> [!CAUTION] > This document applies to Application Insights Java 2.x which is no longer recommended. >
-> Documentation for the latest version can be found at [Application Insights Java 3.0](./java-in-process-agent.md).
+> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
If you're using Logback or Log4J (v1.2 or v2.0) for tracing, you can have your trace logs sent automatically to Application Insights where you can explore and search on them.
azure-monitor Java 2X Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-troubleshoot.md
# Troubleshooting and Q and A for Application Insights for Java SDK
-> [!IMPORTANT]
-> The recommended approach to monitor Java applications is to use the auto-instrumentation without changing the code. Please follow the guidelines for [Application Insights Java 3.0 agent](./java-in-process-agent.md).
+> [!CAUTION]
+> This document applies to Application Insights Java 2.x which is no longer recommended.
+>
+> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
Questions or problems with [Azure Application Insights in Java][java]? Here are some tips.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
Java codeless application monitoring is all about simplicity - there are no code changes, the Java agent can be enabled through just a couple of configuration changes.
- The Java agent works in any environment, and allows you to monitor all of your Java applications. In other words, whether you are running your Java apps on VMs, on-premises, in AKS, on Windows, Linux - you name it, the Java 3.0 agent will monitor your app.
+The Java agent works in any environment, and allows you to monitor all of your Java applications. In other words, whether you are running your Java apps on VMs, on-premises, in AKS, on Windows, Linux - you name it,
+the Application Insights Java agent will monitor your app.
-Adding the Application Insights Java SDK to your application is no longer required, as the 3.0 agent auto-collects requests, dependencies and logs all on its own.
+Adding the Application Insights Java 2.x SDK to your application is no longer required,
+as the Application Insights Java 3.x agent auto-collects requests, dependencies and logs all on its own.
-You can still send custom telemetry from your application. The 3.0 agent will track and correlate it along with all of the auto-collected telemetry.
+You can still send custom telemetry from your application.
+The 3.x agent will track and correlate it along with all of the auto-collected telemetry.
-The 3.0 agent supports Java 8 and above.
+The 3.x agent supports Java 8 and above.
## Quickstart
to enable this preview feature and auto-collect the telemetry emitted by these A
## Send custom telemetry from your application
-Our goal in 3.0+ is to allow you to send your custom telemetry using standard APIs.
+Our goal in Application Insights Java 3.x is to allow you to send your custom telemetry using standard APIs.
We support Micrometer, popular logging frameworks, and the Application Insights Java 2.x SDK so far.
-Application Insights Java 3.0 automatically captures the telemetry sent through these APIs,
+Application Insights Java 3.x automatically captures the telemetry sent through these APIs,
and correlates it with auto-collected telemetry. ### Supported custom telemetry
-The table below represents currently supported custom telemetry types that you can enable to supplement the Java 3.0 agent. To summarize, custom metrics are supported through micrometer, custom exceptions and traces can be enabled through logging frameworks, and any type of the custom telemetry is supported through the [Application Insights Java 2.x SDK](#send-custom-telemetry-using-the-2x-sdk).
+The table below represents currently supported custom telemetry types that you can enable to supplement the Java 3.x agent. To summarize, custom metrics are supported through Micrometer, custom exceptions and traces can be enabled through logging frameworks, and any type of custom telemetry is supported through the [Application Insights Java 2.x SDK](#send-custom-telemetry-using-the-2x-sdk).
| | Micrometer | Log4j, logback, JUL | 2.x SDK | |||||
The table below represents currently supported custom telemetry types that you c
| **Requests** | | | Yes | | **Traces** | | Yes | Yes |
-We're not planning to release an SDK with Application Insights 3.0 at this time.
+We're not planning to release an SDK with Application Insights 3.x at this time.
-Application Insights Java 3.0 is already listening for telemetry that is sent to the Application Insights Java 2.x SDK. This functionality is an important part of the upgrade story for existing 2.x users, and it fills an important gap in our custom telemetry support until the OpenTelemetry API is GA.
+Application Insights Java 3.x is already listening for telemetry that is sent to the Application Insights Java 2.x SDK. This functionality is an important part of the upgrade story for existing 2.x users, and it fills an important gap in our custom telemetry support until the OpenTelemetry API is GA.
### Send custom metrics using Micrometer
If you want to attach custom dimensions to your logs, you can use
[Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html), [Log4j 2 MDC](https://logging.apache.org/log4j/2.x/manual/thread-context.html), or [Logback MDC](http://logback.qos.ch/manual/mdc.html),
-and Application Insights Java 3.0 will automatically capture those MDC properties as custom dimensions
+and Application Insights Java 3.x will automatically capture those MDC properties as custom dimensions
on your trace and exception telemetry. ### Send custom telemetry using the 2.x SDK
-Add `applicationinsights-core-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
+Add `applicationinsights-core-2.6.3.jar` to your application
+(all 2.x versions are supported by Application Insights Java 3.x, but it's worth using the latest if you have a choice):
```xml <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>2.6.2</version>
+ <version>2.6.3</version>
</dependency> ```
try {
> [!NOTE] > This feature is only in 3.0.2 and later
-Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
+Add `applicationinsights-web-2.6.3.jar` to your application
+(all 2.x versions are supported by Application Insights Java 3.x, but it's worth using the latest if you have a choice):
```xml <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-web</artifactId>
- <version>2.6.2</version>
+ <version>2.6.3</version>
</dependency> ```
requestTelemetry.getProperties().put("mydimension", "myvalue");
> [!NOTE] > This feature is only in 3.0.2 and later
-Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
+Add `applicationinsights-web-2.6.3.jar` to your application
+(all 2.x versions are supported by Application Insights Java 3.x, but it's worth using the latest if you have a choice):
```xml <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-web</artifactId>
- <version>2.6.2</version>
+ <version>2.6.3</version>
</dependency> ```
requestTelemetry.getContext().getUser().setId("myuser");
> [!NOTE] > This feature is only in 3.0.2 and later
-Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
+Add `applicationinsights-web-2.6.3.jar` to your application
+(all 2.x versions are supported by Application Insights Java 3.x, but it's worth using the latest if you have a choice):
```xml <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-web</artifactId>
- <version>2.6.2</version>
+ <version>2.6.3</version>
</dependency> ```
requestTelemetry.setName("myname");
> [!NOTE] > This feature is only in 3.0.3 and later
-Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
+Add `applicationinsights-web-2.6.3.jar` to your application
+(all 2.x versions are supported by Application Insights Java 3.x, but it's worth using the latest if you have a choice):
```xml <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-web</artifactId>
- <version>2.6.2</version>
+ <version>2.6.3</version>
</dependency> ```
azure-monitor Java Jmx Metrics Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-jmx-metrics-configuration.md
# Configuring JMX metrics
-Application insights Java 3.0 agent collects some of the JMX metrics by default, but in many cases this is not enough. This document describes the JMX configuration option in details.
+Application Insights Java 3.x collects some of the JMX metrics by default, but in many cases this is not enough. This document describes the JMX configuration options in detail.
## How do I collect additional JMX metrics?
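
For example, additional JMX metrics are declared in the agent's `applicationinsights.json`. The snippet below is a sketch: the `jmxMetrics` entry fields (`name`, `objectName`, `attribute`) follow the agent's documented configuration format, but the specific MBean shown is only an illustration to verify against the configuration reference:

```json
{
  "jmxMetrics": [
    {
      "name": "Thread Count",
      "objectName": "java.lang:type=Threading",
      "attribute": "ThreadCount"
    }
  ]
}
```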
azure-monitor Java On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-on-premises.md
Once enabled, the Java agent will automatically collect a multitude of requests,
Please follow [the detailed instructions](./java-in-process-agent.md) for all of the environments, including on-premises.
- ## Next steps
+## Next steps
-* [Get the instructions to download the Java agent](./java-in-process-agent.md)
-* [Configure your JVM args](https://github.com/microsoft/ApplicationInsights-Java/wiki/3.0-Preview:-Tips-for-updating-your-JVM-args)
-* [Customize the configuration](https://github.com/microsoft/ApplicationInsights-Java/wiki/3.0-Preview:-Configuration-Options)
+* [Application Insights Java 3.x](./java-in-process-agent.md)
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.0 expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.1.0.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.1.0.jar`.
You can specify your own configuration file path using either
You can also suppress these instrumentations by setting these environment variab
## Heartbeat
-By default, Application Insights Java 3.0 sends a heartbeat metric once every 15 minutes. If you are using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
+By default, Application Insights Java 3.x sends a heartbeat metric once every 15 minutes.
+If you are using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
```json
{
  "heartbeat": {
    "intervalSeconds": 60
  }
}
```
## HTTP Proxy
-If your application is behind a firewall and cannot connect directly to Application Insights (see [IP addresses used by Application Insights](./ip-addresses.md)), you can configure Application Insights Java 3.0 to use an HTTP proxy:
+If your application is behind a firewall and cannot connect directly to Application Insights
+(see [IP addresses used by Application Insights](./ip-addresses.md)),
+you can configure Application Insights Java 3.x to use an HTTP proxy:
```json
{
  "proxy": {
    "host": "myproxy",
    "port": 8080
  }
}
```
-Application Insights Java 3.0 also respects the global `https.proxyHost` and `https.proxyPort` system properties
+Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties
if those are set (and `http.nonProxyHosts` if needed).
## Metric interval
The setting applies to all of these metrics:
## Self-diagnostics
-"Self-diagnostics" refers to internal logging from Application Insights Java 3.0.
+"Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
This functionality can be helpful for spotting and diagnosing issues with Application Insights itself.
-By default, Application Insights Java 3.0 logs at level `INFO` to both the file `applicationinsights.log`
+By default, Application Insights Java 3.x logs at level `INFO` to both the file `applicationinsights.log`
and the console, corresponding to this configuration:
```json
{
  "selfDiagnostics": {
    "destination": "file+console",
    "level": "INFO",
    "file": {
      "path": "applicationinsights.log"
    }
  }
}
```
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors.md
> [!NOTE]
> The telemetry processors feature is in preview.
-The Java 3.0 agent for Application Insights can process telemetry data before the data is exported.
+Application Insights Java 3.x can process telemetry data before the data is exported.
Here are some use cases for telemetry processors:
* Mask sensitive data.
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file
-By default, the Java 3.0 agent for Application Insights produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.1.0.jar` file.
+By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.1.0.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing.
try re-downloading the agent jar file because it may have been corrupted during
## Upgrade from the Application Insights Java 2.x SDK
-If you're already using the Application Insights Java 2.x SDK in your application, you can keep using it. The Java 3.0 agent will detect it. For more information, see [Upgrade from the Java 2.x SDK](./java-standalone-upgrade-from-2x.md).
+If you're already using the Application Insights Java 2.x SDK in your application, you can keep using it.
+The Application Insights Java 3.x agent will detect it,
+and capture and correlate any custom telemetry you're sending via the 2.x SDK,
+while suppressing any auto-collection performed by the 2.x SDK to prevent duplicate telemetry.
+For more information, see [Upgrade from the Java 2.x SDK](./java-standalone-upgrade-from-2x.md).
## Upgrade from Application Insights Java 3.0 Preview
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
# Upgrading from Application Insights Java 2.x SDK
-If you're already using Application Insights Java 2.x SDK in your application, there is no need to remove it.
-The Java 3.0 agent will detect it, and capture and correlate any custom telemetry you're sending via the 2.x SDK,
+If you're already using Application Insights Java 2.x SDK in your application, you can keep using it.
+The Application Insights Java 3.x agent will detect it,
+and capture and correlate any custom telemetry you're sending via the 2.x SDK,
while suppressing any auto-collection performed by the 2.x SDK to prevent duplicate telemetry.
If you were using the Application Insights 2.x agent, you need to remove the `-javaagent:` JVM arg that was pointing to the 2.x agent.
The rest of this document describes limitations and changes that you may encounter
-when upgrading from 2.x to 3.0, as well as some workarounds that you may find helpful.
+when upgrading from 2.x to 3.x, as well as some workarounds that you may find helpful.
## TelemetryInitializers and TelemetryProcessors
-The 2.x SDK TelemetryInitializers and TelemetryProcessors will not be run when using the 3.0 agent.
-Many of the use cases that previously required these can be solved in 3.0
+The 2.x SDK TelemetryInitializers and TelemetryProcessors will not be run when using the 3.x agent.
+Many of the use cases that previously required these can be solved in Application Insights Java 3.x
by configuring [custom dimensions](./java-standalone-config.md#custom-dimensions) or configuring [telemetry processors](./java-standalone-telemetry-processors.md).
## Multiple applications in a single JVM
-Currently, 3.0 only supports a single
+Currently, Application Insights Java 3.x only supports a single
[connection string and role name](./java-standalone-config.md#connection-string-and-role-name) per running process. In particular, you can't have multiple Tomcat web apps in the same Tomcat deployment using different connection strings or different role names yet.
## Operation names
-In the 2.x SDK, in some cases, the operation names contained the full path, e.g.
+In the Application Insights Java 2.x SDK, in some cases, the operation names contained the full path, e.g.
:::image type="content" source="media/java-ipa/upgrade-from-2x/operation-names-with-full-path.png" alt-text="Screenshot showing operation names with full path":::
-Operation names in 3.0 have changed to generally provide a better aggregated view
+Operation names in Application Insights Java 3.x have changed to generally provide a better aggregated view
in the Application Insights Portal U/X, e.g.
:::image type="content" source="media/java-ipa/upgrade-from-2x/operation-names-parameterized.png" alt-text="Screenshot showing operation names parameterized":::
However, for some applications, you may still prefer the aggregated view in the U/X that was provided by the previous operation names, in which case you can use the
-[telemetry processors](./java-standalone-telemetry-processors.md) (preview) feature in 3.0
+[telemetry processors](./java-standalone-telemetry-processors.md) (preview) feature in 3.x
to replicate the previous behavior. The snippet below configures three telemetry processors that combine to do so.
The telemetry processors perform the following actions (in order):
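The snippet itself does not appear in this excerpt; as a rough illustration of the shape such processors take (the span name and extraction rule below are hypothetical, not the original three processors), each processor lives under `preview.processors` in `applicationinsights.json`:

```json
{
  "preview": {
    "processors": [
      {
        "type": "span",
        "include": {
          "matchType": "strict",
          "spanNames": ["GET /accounts/12345"]
        },
        "name": {
          "toAttributes": {
            "rules": ["^GET /accounts/(?<accountId>[0-9]+)$"]
          }
        }
      }
    ]
  }
}
```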
## Dependency names
-Dependency names in 3.0 have also changed, again to generally provide a better aggregated view
-in the Application Insights Portal U/X.
+Dependency names in Application Insights Java 3.x have also changed,
+again to generally provide a better aggregated view in the Application Insights Portal U/X.
Again, for some applications, you may still prefer the aggregated view in the U/X that was provided by the previous dependency names, in which case you can use similar
techniques as above to replicate the previous behavior.
## Operation name on dependencies
-Previously in the 2.x SDK, the operation name from the request telemetry was also set on the dependency telemetry.
-Application Insights Java 3.0 no longer populates operation name on dependency telemetry.
+Previously in the Application Insights Java 2.x SDK,
+the operation name from the request telemetry was also set on the dependency telemetry.
+Application Insights Java 3.x no longer populates operation name on dependency telemetry.
If you want to see the operation name for the request that is the parent of the dependency telemetry, you can write a Logs (Kusto) query to join from the dependency table to the request table, e.g.
dependencies
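The query is truncated above; a minimal sketch of such a join (assuming the standard Application Insights Logs schema, where `operation_Id` correlates the two tables) might look like:

```kusto
dependencies
| join kind=inner (requests | project operation_Id, requestOperationName = operation_Name) on operation_Id
| project timestamp, dependencyName = name, requestOperationName
```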
## 2.x SDK logging appenders
-The 3.0 agent [auto-collects logging](./java-standalone-config.md#auto-collected-logging)
+Application Insights Java 3.x [auto-collects logging](./java-standalone-config.md#auto-collected-logging)
without the need for configuring any logging appenders.
-If you are using 2.x SDK logging appenders, those can be removed, as they will be suppressed by the 3.0 agent anyways.
+If you are using 2.x SDK logging appenders, those can be removed,
+as they will be suppressed by the Application Insights Java 3.x agent anyway.
## 2.x SDK spring boot starter
-There is no 3.0 spring boot starter.
-The 3.0 agent setup and configuration follows the same [simple steps](./java-in-process-agent.md#quickstart)
+There is no Application Insights Java 3.x spring boot starter.
+3.x setup and configuration follows the same [simple steps](./java-in-process-agent.md#quickstart)
whether you are using spring boot or not.
-When upgrading from the 2.x SDK spring boot starter,
+When upgrading from the Application Insights Java 2.x SDK spring boot starter,
note that the cloud role name will no longer default to `spring.application.name`.
-See the [3.0 configuration docs](./java-standalone-config.md#cloud-role-name)
-for setting the cloud role name in 3.0 via json config or environment variable.
+See the [3.x configuration docs](./java-standalone-config.md#cloud-role-name)
+for setting the cloud role name in 3.x via json config or environment variable.
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Last updated 03/03/2021
# Configure PV monitoring with Container insights
-Starting with agent version *ciprod10052020*, Azure Monitor for containers integrated agent now supports monitoring PV (persistent volume) usage. With agent version *ciprod01112021*, the agent supports monitoring PV inventory, including information about the status, storage class, type, access modes, and other details.
+Starting with agent version *ciprod10052020*, Container insights integrated agent now supports monitoring PV (persistent volume) usage. With agent version *ciprod01112021*, the agent supports monitoring PV inventory, including information about the status, storage class, type, access modes, and other details.
## PV metrics
-Container insights automatically starts monitoring PV usage by collecting the following metrics at 60 -sec intervals and storing them in the **InsightMetrics** table.
+Container insights automatically starts monitoring PV usage by collecting the following metrics at 60-second intervals and storing them in the **InsightMetrics** table.
| Metric name | Metric Dimension (tags) | Metric Description |
|--|--|-|
-| `pvUsedBytes`| podUID, podName, pvcName, pvcNamespace, capacityBytes, clusterId, clusterName| Used space in bytes for a specific persistent volume with a claim used by a specific pod. `capacityBytes` is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
+| `pvUsedBytes`| `podUID`, `podName`, `pvcName`, `pvcNamespace`, `capacityBytes`, `clusterId`, `clusterName`| Used space in bytes for a specific persistent volume with a claim used by a specific pod. `capacityBytes` is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
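As a hypothetical example of querying this metric in Log Analytics (the workspace table is `InsightsMetrics`, and the dimensions listed above are packed as JSON into the `Tags` column; the exact tag keys are assumptions here):

```kusto
InsightsMetrics
| where Name == "pvUsedBytes"
| extend t = todynamic(Tags)
| project TimeGenerated, PodName = tostring(t.podName), PvcName = tostring(t.pvcName), UsedBytes = Val
```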
Learn more about configuring collected PV metrics [here](./container-insights-agent-config.md).
## PV inventory
-Azure Monitor for containers automatically starts monitoring PVs by collecting the following information at 60-sec intervals and storing them in the **KubePVInventory** table.
+Container insights automatically starts monitoring PVs by collecting the following information at 60-second intervals and storing them in the **KubePVInventory** table.
|Data |Data Source| Data Type| Fields|
|--|--|-|-|
-|Inventory of persistent volumes in a Kubernetes cluster |Kube API |`KubePVInventory` | PVName, PVCapacityBytes, PVCName, PVCNamespace, PVStatus, PVAccessModes, PVType, PVTypeInfo, PVStorageClassName, PVCreationTimestamp, TimeGenerated, ClusterId, ClusterName, _ResourceId |
+|Inventory of persistent volumes in a Kubernetes cluster |Kube API |`KubePVInventory` | `PVName`, `PVCapacityBytes`, `PVCName`, `PVCNamespace`, `PVStatus`, `PVAccessModes`, `PVType`, `PVTypeInfo`, `PVStorageClassName`, `PVCreationTimestamp`, `TimeGenerated`, `ClusterId`, `ClusterName`, `_ResourceId` |
## Monitor Persistent Volumes
-Azure Monitor for containers includes pre-configured charts for this usage metric and inventory information in workbook templates for every cluster. You can also enable a recommended alert for PV usage, and query these metrics in Log Analytics.
+Container insights includes pre-configured charts for this usage metric and inventory information in workbook templates for every cluster. You can also enable a recommended alert for PV usage, and query these metrics in Log Analytics.
### Workload Details Workbook
You can find usage charts for specific workloads in the Persistent Volume tab of
### Persistent Volume Details Workbook
-You can find an overview of persistent volume inventory in the **Persistent Volume Details** workbook directly from an AKS cluster by selecting Workbooks from the left-hand pane, from the **View Workbooks** drop-down list in the Insights pane, or from the **Reports (preview)** tab in the Insights pane.
+You can find an overview of persistent volume inventory in the **Persistent Volume Details** workbook directly from an AKS cluster by selecting Workbooks from the left-hand pane. You can also open this workbook from the **View Workbooks** drop-down list in the Insights pane or from the **Reports** tab in the Insights pane.
:::image type="content" source="./media/container-insights-persistent-volumes/pv-details-workbook-example.PNG" alt-text="Azure Monitor PV details workbook example":::
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/continuous-monitoring.md
Applications are only as reliable as their underlying infrastructure. Having mon
- You automatically get [platform metrics, activity logs and diagnostics logs](agents/data-sources.md) from most of your Azure resources with no configuration.
- Enable deeper monitoring for VMs with [VM insights](vm/vminsights-overview.md).
-- Enable deeper monitoring for AKS clusters with [Azure Monitor for containers](containers/container-insights-overview.md).
+- Enable deeper monitoring for AKS clusters with [Container insights](containers/container-insights-overview.md).
- Add [monitoring solutions](./monitor-reference.md) for different applications and services in your environment.
Applications are only as reliable as their underlying infrastructure. Having mon
## Combine resources in Azure Resource Groups
A typical application on Azure today includes multiple resources such as VMs and App Services or microservices hosted on Cloud Services, AKS clusters, or Service Fabric. These applications frequently utilize dependencies like Event Hubs, Storage, SQL, and Service Bus.
-- Combine resources in Azure Resource Groups to get full visibility across all your resources that make up your different applications. [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) provides a simple way to keep track of the health and performance of your entire full-stack application and enables drilling down into respective components for any investigations or debugging.
+- Combine resources in Azure Resource Groups to get full visibility across all your resources that make up your different applications. [Resource Group insights](./insights/resource-group-insights.md) provides a simple way to keep track of the health and performance of your entire full-stack application and enables drilling down into respective components for any investigations or debugging.
## Ensure quality through Continuous Deployment
Continuous Integration / Continuous Deployment allows you to automatically integrate and deploy code changes to your application based on the results of automated testing. It streamlines the deployment process and ensures the quality of any changes before they move into production.
azure-monitor Azure Key Vault Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/azure-key-vault-deprecated.md
# Azure Key Vault Analytics solution in Azure Monitor
> [!NOTE]
-> This solution is deprecated. [We now recommend using Azure Monitor for Key Vault](./key-vault-insights-overview.md).
+> This solution is deprecated. [We now recommend using Key Vault insights](./key-vault-insights-overview.md).
![Key Vault symbol](media/azure-key-vault/key-vault-analytics-symbol.png)
azure-monitor Cosmosdb Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/cosmosdb-insights-overview.md
Title: Monitor Azure Cosmos DB with Azure Monitor for Cosmos DB| Microsoft Docs
-description: This article describes the Azure Monitor for Cosmos DB feature that provides Cosmos DB owners with a quick understanding of performance and utilization issues with their CosmosDB accounts.
+ Title: Monitor Azure Cosmos DB with Azure Monitor Cosmos DB insights| Microsoft Docs
+description: This article describes the Cosmos DB insights feature of Azure Monitor that provides Cosmos DB owners with a quick understanding of performance and utilization issues with their CosmosDB accounts.
Last updated 05/11/2020
-# Explore Azure Monitor for Azure Cosmos DB
+# Explore Azure Monitor Cosmos DB insights
-Azure Monitor for Azure Cosmos DB provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. This article will help you understand the benefits of this new monitoring experience, and how you can modify and adapt the experience to fit the unique needs of your organization.
+Cosmos DB insights provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. This article will help you understand the benefits of this new monitoring experience, and how you can modify and adapt the experience to fit the unique needs of your organization.
## Introduction
To expand or collapse all drop-down views in the workbook, select the expand ico
![Expand workbook icon](./media/cosmosdb-insights-overview/expand.png)
-## Customize Azure Monitor for Azure Cosmos DB
+## Customize Cosmos DB insights
Since this experience is built on top of Azure Monitor workbook templates, you have the ability to **Customize** > **Edit** and **Save** a copy of your modified version into a custom workbook.
azure-monitor Key Vault Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/key-vault-insights-overview.md
Title: Monitor Key Vault with Azure Monitor for Key Vault | Microsoft Docs
-description: This article describes the Azure Monitor for Key Vaults.
+ Title: Monitor Key Vault with Key Vault insights | Microsoft Docs
+description: This article describes Key Vault insights.
Last updated 09/10/2020
-# Monitoring your key vault service with Azure Monitor for Key Vault
-Azure Monitor for Key Vault provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency.
-This article will help you understand how to onboard and customize the experience of Azure Monitor for Key Vault.
+# Monitoring your key vault service with Key Vault insights
+Key Vault insights provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency.
+This article will help you understand how to onboard and customize the experience of Key Vault insights.
-## Introduction to Azure Monitor for Key Vault
+## Introduction to Key Vault insights
Before jumping into the experience, you should understand how it presents and visualizes information.
- **At scale perspective** showing a snapshot view of performance based on the requests, breakdown of failures, and an overview of the operations and latency.
- **Drill down analysis** of a particular key vault to perform detailed analysis.
- **Customizable** where you can change which metrics you want to see, modify or set thresholds that align with your limits, and save your own workbook. Charts in the workbook can be pinned to Azure dashboards.
-Azure Monitor for Key Vault combines both logs and metrics to provide a global monitoring solution. All users can access the metrics-based monitoring data, however the inclusion of logs-based visualizations may require users to [enable logging of their Azure Key Vault](../../key-vault/general/logging.md).
+Key Vault insights combines both logs and metrics to provide a global monitoring solution. All users can access the metrics-based monitoring data, however the inclusion of logs-based visualizations may require users to [enable logging of their Azure Key Vault](../../key-vault/general/logging.md).
## View from Azure Monitor
To better understand what each of the status codes represent, we recommend readi
## View from a Key Vault resource
-To access Azure Monitor for Key Vault directly from a key Vault:
+To access Key Vault insights directly from a key vault:
1. In the Azure portal, select Key Vaults.
The multi-subscription and key vaults overview or failures workbooks support exp
![Screenshot of pin icon selected](./media/key-vaults-insights-overview/pin.png)
-## Customize Azure Monitor for Key Vault
+## Customize Key Vault insights
This section highlights common scenarios for editing the workbook to customize in support of your data analytics needs:
* Scope the workbook to always select a particular subscription or key vault(s)
You can configure the multi-subscription and key vault Overview or Failures work
For general troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](troubleshoot-workbooks.md).
-This section will help you with the diagnosis and troubleshooting of some of the common issues you may encounter when using Azure Monitor for Key Vault. Use the list below to locate the information relevant to your specific issue.
+This section will help you with the diagnosis and troubleshooting of some of the common issues you may encounter when using Key Vault insights. Use the list below to locate the information relevant to your specific issue.
### Resolving performance issues or failures
-To help troubleshoot any key vault related issues you identify with Azure Monitor for Key Vault, see the [Azure Key Vault documentation](../../key-vault/index.yml).
+To help troubleshoot any key vault related issues you identify with Key Vault insights, see the [Azure Key Vault documentation](../../key-vault/index.yml).
### Why can I only see 200 key vaults?
azure-monitor Resource Group Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/resource-group-insights.md
Title: Azure Monitor Resource Group Insights | Microsoft Docs
-description: Understand the health and performance of your distributed applications and services at the Resource Group level with Azure Monitor
+ Title: Azure Monitor Resource Group insights | Microsoft Docs
+description: Understand the health and performance of your distributed applications and services at the Resource Group level with Resource Group insights feature of Azure Monitor.
Last updated 09/19/2018
-# Monitor resource groups with Azure Monitor (preview)
+# Azure Monitor Resource Group insights (preview)
Modern applications are often complex and highly distributed with many discrete parts working together to deliver a service. Recognizing this complexity, Azure Monitor provides monitoring insights for resource groups. This makes it easy to triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group&mdash;and your application&mdash;as a whole.
In this case, if you select edit you will see that this set of visualizations is
### Enabling access to alerts
-To see alerts in Azure Monitor for Resource Groups, someone with an Owner or Contributor role for this subscription needs to open Azure Monitor for Resource Groups for any resource group in the subscription. This will enable anyone with read access to see alerts in Azure Monitor for Resource Groups for all of the resource groups in the subscription. If you have an Owner or Contributor role, refresh this page in a few minutes.
+To see alerts in Resource Group insights, someone with an Owner or Contributor role for this subscription needs to open Resource Group insights for any resource group in the subscription. This will enable anyone with read access to see alerts in Resource Group insights for all of the resource groups in the subscription. If you have an Owner or Contributor role, refresh this page in a few minutes.
-Azure Monitor for Resource Groups relies on the Azure Monitor Alerts Management system to retrieve alert status. Alerts Management isn't configured for every resource group and subscription by default, and it can only be enabled by someone with an Owner or Contributor role. It can be enabled either by:
-* Opening Azure Monitor for Resource Groups for any resource group in the subscription.
+Resource Group insights relies on the Azure Monitor Alerts Management system to retrieve alert status. Alerts Management isn't configured for every resource group and subscription by default, and it can only be enabled by someone with an Owner or Contributor role. It can be enabled either by:
+* Opening Resource Group insights for any resource group in the subscription.
* Or by going to the subscription, clicking **Resource Providers**, then clicking **Register for Alerts.Management**.
## Next steps
azure-monitor Storage Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/storage-insights-overview.md
Title: Monitor Azure Storage services with Azure Monitor for Storage | Microsoft Docs
-description: This article describes the Azure Monitor for Storage feature that provides storage admins with a quick understanding of performance and utilization issues with their Azure Storage accounts.
+ Title: Monitor Azure Storage services with Azure Monitor Storage insights | Microsoft Docs
+description: This article describes the Storage insights feature of Azure Monitor that provides storage admins with a quick understanding of performance and utilization issues with their Azure Storage accounts.
Last updated 05/11/2020
-# Monitoring your storage service with Azure Monitor for Storage
+# Monitoring your storage service with Azure Monitor Storage insights
-Azure Monitor for Storage provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. You can observe storage capacity, and performance in two ways, view directly from a storage account or view from Azure Monitor to see across groups of storage accounts.
+Storage insights provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services' performance, capacity, and availability. You can observe storage capacity and performance in two ways: view directly from a storage account, or view from Azure Monitor to see across groups of storage accounts.
-This article will help you understand the experience Azure Monitor for Storage delivers to derive actionable knowledge on the health and performance of Storage accounts at scale, with a capability to focus on hotspots and diagnose latency, throttling, and availability issues.
+This article will help you understand the experience Storage insights delivers to derive actionable knowledge on the health and performance of Storage accounts at scale, with a capability to focus on hotspots and diagnose latency, throttling, and availability issues.
-## Introduction to Azure Monitor for Storage
+## Introduction to Storage insights
-Before diving into the experience, you should understand how it presents and visualizes information. Whether you select the Storage feature directly from a storage account or from Azure Monitor, Azure Monitor for Storage presents a consistent experience.
+Before diving into the experience, you should understand how it presents and visualizes information. Whether you select the Storage feature directly from a storage account or from Azure Monitor, Storage insights presents a consistent experience.
Combined it delivers:
The multi-subscription and storage account **Overview** or **Capacity** workbook
![Export workbook grid results example](./media/storage-insights-overview/workbook-export-example.png)
-## Customize Azure Monitor for Storage
+## Customize Storage insights
This section highlights common scenarios for editing the workbook to customize in support of your data analytics needs:
In this example, we are working with the storage account capacity workbook and d
For general troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](troubleshoot-workbooks.md).
-This section will help you with the diagnosis and troubleshooting of some of the common issues you may encounter when using Azure Monitor for Storage. Use the list below to locate the information relevant to your specific issue.
+This section will help you with the diagnosis and troubleshooting of some of the common issues you may encounter when using Storage insights. Use the list below to locate the information relevant to your specific issue.
### Resolving performance, capacity, or availability issues
-To help troubleshoot any storage-related issues you identify with Azure Monitor for Storage, see the Azure Storage [troubleshooting guidance](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md#troubleshooting-guidance).
+To help troubleshoot any storage-related issues you identify with Storage insights, see the Azure Storage [troubleshooting guidance](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md#troubleshooting-guidance).
### Why can I only see 200 storage accounts?
The number of selected storage accounts has a limit of 200, regardless of the nu
Refer to the [Modify the availability threshold](#modify-the-availability-threshold) section for the detailed steps on how to change the coloring and thresholds for availability.
-### How to analyze and troubleshoot the data shown in Azure Monitor for Storage?
+### How to analyze and troubleshoot the data shown in Storage insights?
- Refer to the [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md) article for details on how to analyze and troubleshoot the Azure Storage data shown in Azure Monitor for Storage.
+ Refer to the [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md) article for details on how to analyze and troubleshoot the Azure Storage data shown in Storage insights.
### Why don't I see all the types of errors in metrics?
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/monitor-reference.md
Insights provide a customized monitoring experience for particular applications
|:|:| | [Application Insights](app/app-insights-overview.md) | Extensible Application Performance Management (APM) service to monitor your live web application on any platform. | | [Container insights](containers/container-insights-overview.md) | Monitors the performance of container workloads deployed to either Azure Container Instances or managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). |
-| [Azure Monitor for Cosmos DB](insights/cosmosdb-insights-overview.md) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
-| [Azure Monitor for Networks (preview)](insights/network-insights-overview.md) | Provides a comprehensive view of health and metrics for all your network resource. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resource that are hosting your website, by simply searching for your website name. |
-[Azure Monitor for Resource Groups (preview)](insights/resource-group-insights.md) | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
-| [Azure Monitor for Storage](insights/storage-insights-overview.md) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
+| [Cosmos DB insights](insights/cosmosdb-insights-overview.md) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
+| [Network insights (preview)](insights/network-insights-overview.md) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying the resources that host your website simply by searching for your website name. |
+| [Resource Group insights (preview)](insights/resource-group-insights.md) | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
+| [Storage insights](insights/storage-insights-overview.md) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services' performance, capacity, and availability. |
| [VM insights](vm/vminsights-overview.md) | Monitors your Azure virtual machines (VM) and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and monitors their processes and dependencies on other resources and external processes. |
-| [Azure Monitor for Key Vault (preview)](./insights/key-vault-insights-overview.md) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
-| [Azure Monitor for Azure Cache for Redis (preview)](insights/redis-cache-insights-overview.md) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
+| [Key Vault insights (preview)](./insights/key-vault-insights-overview.md) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
+| [Azure Cache for Redis insights (preview)](insights/redis-cache-insights-overview.md) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
### Core solutions
azure-monitor Workbooks Automate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/workbooks-automate.md
For a technical reason, this mechanism cannot be used to create workbook instanc
## Next steps
-Explore how workbooks are being used to power the new [Azure Monitor for Storage experience](../insights/storage-insights-overview.md).
+Explore how workbooks are being used to power the new [Storage insights experience](../insights/storage-insights-overview.md).
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
+
+ Title: Configure and manage DHCP in Azure VMware Solution
+description: Learn how to create and manage DHCP for your Azure VMware Solution private cloud.
++ Last updated : 05/14/2021++
+# Configure and manage DHCP in Azure VMware Solution
+
+Applications and workloads running in a private cloud environment require DHCP services for IP address assignments. This article shows you how to create and manage DHCP in Azure VMware Solution in two ways:
+
+- If you're using NSX-T to host your DHCP server, you'll need to [create a DHCP server](#create-a-dhcp-server) and [relay to that server](#create-dhcp-relay-service). When you create the DHCP server, you'll also add a network segment and specify the DHCP IP address range.
+
+- If you're using a third-party external DHCP server in your network, you'll need to [create a DHCP relay service](#create-dhcp-relay-service). When you create a relay to a DHCP server, whether you use NSX-T or a third party to host your DHCP server, you'll need to specify the DHCP IP address range.
+
+>[!IMPORTANT]
+>DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter. NSX, by default, blocks all DHCP requests from traversing the L2 stretch. For the solution, see the [Configure DHCP on L2 stretched VMware HCX networks](#configure-dhcp-on-l2-stretched-vmware-hcx-networks) procedure.
++
+## Create a DHCP server
+
+If you want to use NSX-T to host your DHCP server, you'll create a DHCP server. Then you'll add a network segment and specify the DHCP IP address range.
+
+1. In NSX-T Manager, select **Networking** > **DHCP**, and then select **Add Server**.
+
+1. Select **DHCP** for the **Server Type**, provide the server name and IP address, and then select **Save**.
+
+ :::image type="content" source="./media/manage-dhcp/dhcp-server-settings.png" alt-text="add DHCP server" border="true":::
+
+1. Select **Tier 1 Gateways**, select the vertical ellipsis on the Tier-1 gateway, and then select **Edit**.
+
+ :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="select the gateway to use" border="true":::
+
+1. Select **No IP Allocation Set** to add a subnet.
+
+ :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="add a subnet" border="true":::
+
+1. For **Type**, select **DHCP Local Server**.
+
+1. For the **DHCP Server**, select **Default DHCP**, and then select **Save**.
+
+1. Select **Save** again and then select **Close Editing**.
+
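+The server you just created in the NSX-T Manager UI corresponds to a DHCP server config object in NSX-T's declarative Policy API. Purely as an illustrative sketch (the field names and path here are assumptions to check against your NSX-T version's API reference, not a confirmed Azure VMware Solution procedure), the equivalent request body might look like:
+
+```json
+{
+  "resource_type": "DhcpServerConfig",
+  "display_name": "DHCP-01",
+  "server_addresses": [ "192.168.1.1/24" ],
+  "lease_time": 86400
+}
+```
+
+A body like this would be sent with a `PATCH` to a path such as `/policy/api/v1/infra/dhcp-server-configs/DHCP-01`. The NSX-T Manager UI steps above remain the documented path.
+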
+### Add a network segment
+++
+## Create DHCP relay service
+
+If you want to use a third-party external DHCP server, you'll need to create a DHCP relay service. You'll also specify the DHCP IP address range in NSX-T Manager.
+
+1. In NSX-T Manager, select **Networking** > **DHCP**, and then select **Add Server**.
+
+1. Select **DHCP Relay** for the **Server Type**, provide the server name and IP address, and then select **Save**.
+
+ :::image type="content" source="./media/manage-dhcp/create-dhcp-relay.png" alt-text="create dhcp relay service" border="true":::
+
+1. Select **Tier 1 Gateways**, select the vertical ellipsis on the Tier-1 gateway, and then select **Edit**.
+
+ :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway-relay.png" alt-text="edit tier 1 gateway" border="true":::
+
+1. Select **No IP Allocation Set** to define the IP address allocation.
+
+ :::image type="content" source="./media/manage-dhcp/edit-ip-address-allocation.png" alt-text="edit ip address allocation" border="true":::
+
+1. For **Type**, select **DHCP Server**.
+
+1. For the **DHCP Server**, select **DHCP Relay**, and then select **Save**.
+
+1. Select **Save** again and then select **Close Editing**.
++
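+For reference, the relay created above corresponds to a DHCP relay config object in NSX-T's Policy API. As an illustrative sketch only (the field names are assumptions, not verified against your NSX-T version), the relay pointing at a third-party DHCP server might be expressed as:
+
+```json
+{
+  "resource_type": "DhcpRelayConfig",
+  "display_name": "DHCP-Relay-01",
+  "server_addresses": [ "10.1.1.4" ]
+}
+```
+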
+## Specify the DHCP IP address range
+
+1. In NSX-T Manager, select **Networking** > **Segments**.
+
+1. Select the vertical ellipsis on the segment name and select **Edit**.
+
+1. Select **Set Subnets** to specify the DHCP IP address for the subnet.
+
+ :::image type="content" source="./media/manage-dhcp/network-segments.png" alt-text="network segments" border="true":::
+
+1. Modify the gateway IP address if needed, and enter the DHCP range IP.
+
+ :::image type="content" source="./media/manage-dhcp/edit-subnet.png" alt-text="edit subnets" border="true":::
+
+1. Select **Apply**, and then **Save**. The segment is assigned a DHCP server pool.
+
+ :::image type="content" source="./media/manage-dhcp/assigned-to-segment.png" alt-text="DHCP server pool assigned to segment" border="true":::
+
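+In JSON terms, and purely as an assumed sketch of the NSX-T Policy API segment shape (verify the exact field names against your NSX-T version), the gateway and DHCP range you set in the steps above live on the segment's subnet definition:
+
+```json
+{
+  "resource_type": "Segment",
+  "display_name": "segment-01",
+  "subnets": [
+    {
+      "gateway_address": "192.168.1.1/24",
+      "dhcp_ranges": [ "192.168.1.100-192.168.1.200" ]
+    }
+  ]
+}
+```
+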
+## Configure DHCP on L2 stretched VMware HCX networks
+If you want to send DHCP requests from your Azure VMware Solution VMs to a non-NSX-T DHCP server, you'll create a new segment security profile.
+
+>[!IMPORTANT]
+>VMs that run as DHCP servers on the same L2 segment are blocked from serving client requests. Because of this, it's important to follow the steps in this section.
+
+1. (Optional) If you need to locate the segment name of the L2 extension:
+
+ 1. Sign in to your on-premises vCenter, and under **Home**, select **HCX**.
+
+ 1. Select **Network Extension** under **Services**.
+
+ 1. Select the network extension you want to support DHCP requests from Azure VMware Solution to on-premises.
+
+ 1. Take note of the destination network name.
+
+ :::image type="content" source="media/manage-dhcp/hcx-find-destination-network.png" alt-text="Screenshot of a network extension in VMware vSphere Client" lightbox="media/manage-dhcp/hcx-find-destination-network.png":::
+
+1. In the Azure VMware Solution NSX-T Manager, select **Networking** > **Segments** > **Segment Profiles**.
+
+1. Select **Add Segment Profile** and then **Segment Security**.
+
+ :::image type="content" source="media/manage-dhcp/add-segment-profile.png" alt-text="Screenshot of how to add a segment profile in NSX-T" lightbox="media/manage-dhcp/add-segment-profile.png":::
+1. Provide a name and a tag, and then set the **BPDU Filter** toggle to ON and all the DHCP toggles to OFF.
+
+ :::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png" alt-text="Screenshot showing the BPDU Filter toggled on and the DHCP toggles off" lightbox="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png":::
+
+ :::image type="content" source="media/manage-dhcp/edit-segment-security.png" alt-text="Screenshot of the Segment Security field" lightbox="media/manage-dhcp/edit-segment-security.png":::
++
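+The profile configured above (BPDU Filter on, DHCP blocking off) corresponds to a segment security profile object in NSX-T's Policy API. As an assumed sketch only (the field names are illustrative and should be checked against your NSX-T version's API reference):
+
+```json
+{
+  "resource_type": "SegmentSecurityProfile",
+  "display_name": "dhcp-passthrough-profile",
+  "bpdu_filter_enable": true,
+  "dhcp_server_block_enabled": false,
+  "dhcp_client_block_enabled": false
+}
+```
+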
+## Next steps
+Learn more about [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
azure-vmware Deploy Vm Content Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-vm-content-library.md
In this article, we'll walk through the procedure for creating a content library
## Prerequisites
-An NSX-T segment (logical switch) and a managed DHCP service are required to complete this tutorial. For more information, see the [How to manage DHCP in Azure VMware Solution](configure-dhcp-l2-stretched-vmware-hcx-networks.md) article.
+An NSX-T segment (logical switch) and a managed DHCP service are required to complete this tutorial. For more information, see the [How to manage DHCP in Azure VMware Solution](configure-dhcp-azure-vmware-solution.md) article.
## Create a content library
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
CSPs can cancel, exchange, or refund reservations, with certain limitations, pur
Now that you've covered reserved instance of Azure VMware Solution, you may want to learn about: - [Creating an Azure VMware Solution assessment](../migrate/how-to-create-azure-vmware-solution-assessment.md).-- [Managing DHCP for Azure VMware Solution](configure-dhcp-l2-stretched-vmware-hcx-networks.md).
+- [Managing DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md).
- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-nsx-t-network-segment.md
In this tutorial, you created an NSX-T network segment to use for VMs in vCenter.
You can now: -- [Create and manage DHCP for Azure VMware Solution](configure-dhcp-l2-stretched-vmware-hcx-networks.md)
+- [Create and manage DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md)
- [Create a content Library to deploy VMs in Azure VMware Solution](deploy-vm-content-library.md) - [Peer on-premises environments to a private cloud](tutorial-expressroute-global-reach-private-cloud.md)
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
It takes a few moments to deploy your new Speech resource. Once the deployment i
1. After getting the Azure account and the Speech resource, you can log into [Audio Content Creation](https://aka.ms/audiocontentcreation) by clicking **Get started**. 2. The home page lists all the products under Speech Studio. Click **Audio Content Creation** to start.
-3. The **Welcome to Speech Studio** page will be shown to you to set up the speech service. Select the Azure subscription and the Speech resource you want to work on. Click **Use resource** to complete the settings. When you log into the Audio Content Creation tool for the Next time, we will link you directly to the audio work files under the current speech resource. You can check your Azure subscriptions details and status in [Azure portal](https://portal.azure.com/). If you do not have available speech resource and you are the owner or admin of an Azure subscription, you can also create a new Speech resource in Speech Studio by clicking **Create a new resource**. If you are a user role for a certain Azure subscription, you may not have the permission to create a new speech resource. Please contact your admin to get the speech resource access.
+3. The **Welcome to Speech Studio** page appears so that you can set up the Speech service. Select the Azure subscription and the Speech resource you want to work on, and then click **Use resource** to complete the setup. The next time you log into the Audio Content Creation tool, you're linked directly to the audio work files under the current Speech resource. You can check your Azure subscription details and status in the [Azure portal](https://portal.azure.com/). If you don't have an available Speech resource and you're the owner or admin of an Azure subscription, you can create a new Speech resource in Speech Studio by clicking **Create a new resource**. If you have a user role for a certain Azure subscription, you may not have permission to create a new Speech resource. Contact your admin to get access to a Speech resource.
4. You can modify your Speech resource at any time with the **Settings** option, located in the top nav.
-5. If you want to swith directory, please go the **Settings** or your profile to operate.
+5. If you want to switch directories, go to **Settings** or your profile.
## How to use the tool?
This diagram shows the steps it takes to fine-tune text-to-speech outputs. Use t
> [!NOTE] > Gated access is available for Custom Neural Voices, which allow you to create high-definition voices similar to natural-sounding speech. For additional details, see [Gating process](./text-to-speech.md).
-4. Select the content you want to preview and click the **play** icon (a triangle) to preview the default synthesis output. Please note that if you make any changes on teh text, you need to click the **Stop** icon and then click **play** icon again to re-generate the audio with changed scripts.
+4. Select the content you want to preview and click the **play** icon (a triangle) to preview the default synthesis output. If you make any changes to the text, click the **Stop** icon and then click the **play** icon again to regenerate the audio with the changed scripts.
5. Improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md). Here is a [video](https://youtu.be/ygApYuOOG6w) to show how to fine-tune speech output with Audio Content Creation. 6. Save and [export your tuned audio](#export-tuned-audio). When you save the tuning track in the system, you can continue to work and iterate on the output. When you're satisfied with the output, you can create an audio creation task with the export feature. You can observe the status of the export task and download the output for use with your apps and products.
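The adjustments listed above (break, pitch, rate, style, and so on) are expressed as Speech Synthesis Markup Language under the hood. Here's a minimal SSML sketch of that kind of tuning; the voice name and values are illustrative, so substitute ones available in your region:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <mstts:express-as style="cheerful">
      <prosody rate="+10%" pitch="+2st">
        Welcome! <break time="300ms"/> How can I help you today?
      </prosody>
    </mstts:express-as>
  </voice>
</speak>
```

See [Speech Synthesis Markup Language](speech-synthesis-markup.md) for the full set of supported elements.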
cognitive-services Windows Voice Assistants Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/windows-voice-assistants-best-practices.md
The Contoso assistant has a home on the taskbar: their swirling, circular icon.
![Screenshot of voice assistant on Windows listening in compact view](media/voice-assistants/windows_voice_assistant/compact_view_listening.png)
-**Quick answers** may be shown in the voice activation preview. A TryResizeView will allow assistants to request different sizes.
+**Quick answers** may appear in the voice activation preview. A TryResizeView will allow assistants to request different sizes.
![Screenshot of voice assistant on Windows replying in compact view](media/voice-assistants/windows_voice_assistant/compact_view_response.png)
cognitive-services Windows Voice Assistants Implementation Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/windows-voice-assistants-implementation-guide.md
private static async Task<ActivationSignalDetector> GetFirstEligibleDetectorAsyn
} ```
-After retrieving the ActivationSignalDetector object, call its `ActivationSignalDetector.CreateConfigurationAsync` method with the signal ID, model ID, and display name to register your keyword and retrieve your application's `ActivationSignalDetectionConfiguration`. The signal and model IDs should be guids decided on by the developer and stay consistent for the same keyword.
+After retrieving the ActivationSignalDetector object, call its `ActivationSignalDetector.CreateConfigurationAsync` method with the signal ID, model ID, and display name to register your keyword and retrieve your application's `ActivationSignalDetectionConfiguration`. The signal and model IDs should be GUIDs decided on by the developer and stay consistent for the same keyword.
### Verify that the voice activation setting is enabled
When an app shows a view above lock, it is considered to be in "Kiosk Mode". For
An activation above lock is similar to an activation below lock. If there are no active instances of the application, a new instance will be started in the background and `OnBackgroundActivated` in App.xaml.cs will be called. If there is an instance of the application, that instance will get a notification through the `ConversationalAgentSession.SignalDetected` event.
-If the application is not already showing above lock, it must call `ConversationalAgentSession.RequestForegroundActivationAsync`. This triggers the `OnLaunched` method in App.xaml.cs which should navigate to the view that will be shown above lock.
+If the application is not already appearing above lock, it must call `ConversationalAgentSession.RequestForegroundActivationAsync`. This triggers the `OnLaunched` method in App.xaml.cs, which should navigate to the view that will appear above lock.
### Detecting lock screen transitions
cognitive-services Windows Voice Assistants Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/windows-voice-assistants-overview.md
Note that this means an application cannot be activated by voice until it has be
Upon receiving the request from AAR, the Background Service launches the application. The application receives a signal through the OnBackgroundActivated life-cycle method in `App.xaml.cs` with a unique event argument. This argument tells the application that it was activated by AAR and that it should start keyword verification.
-If the application successfully verifies the keyword, it can make a request to be shown in the foreground. When this request succeeds, the application displays UI and continues its interaction with the user.
+If the application successfully verifies the keyword, it can request to appear in the foreground. When this request succeeds, the application displays its UI and continues its interaction with the user.
AAR still signals active applications when their keyword is spoken. Rather than signaling through the life-cycle method in `App.xaml.cs`, though, it signals through an event in the ConversationalAgent APIs.
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/apis-list.md
An *action* is an operation that follows the trigger and performs some kind of t
## Connector categories
-In Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A small number of triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [Logic Apps Preview](../logic-apps/single-tenant-overview-compare.md).
+In Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A small number of triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
[Built-in triggers and actions](built-in.md) run natively on the Logic Apps runtime, don't require creating connections, and perform these kinds of tasks:
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-mq.md
Previously updated : 04/26/2021 Last updated : 05/25/2021 tags: connectors
This connector includes a Microsoft MQ client that communicates with a remote MQ
## Available operations
-The IBM MQ connector provides actions but no triggers.
+* Multi-tenant Azure Logic Apps: When you create a **Logic App (Consumption)** resource, you can connect to an MQ server only by using the *managed* MQ connector. This connector provides only actions, no triggers.
-* Multi-tenant Azure Logic Apps: When you create a consumption-based logic app workflow, you can connect to an MQ server by using the *managed* MQ connector.
-
-* Single-tenant Azure Logic Apps (preview): When you create a preview logic app workflow, you can connect to an MQ server by using either the managed MQ connector or the *built-in* MQ operations (preview).
+* Single-tenant Azure Logic Apps: When you create a single-tenant logic app workflow, you can connect to an MQ server by using either the managed MQ connector, which includes *only* actions, or the *built-in* MQ operations, which include triggers *and* actions.
For more information about the difference between a managed connector and built-in operations, review [key terms in Logic Apps](../logic-apps/logic-apps-overview.md#logic-app-concepts).
The following list describes only some of the managed operations available for M
For all the managed connector operations and other technical information, such as properties, limits, and so on, review the [MQ connector's reference page](/connectors/mq/).
-#### [Built-in (preview)](#tab/built-in)
+#### [Built-in](#tab/built-in)
The following list describes only some of the built-in operations available for MQ:
-* Receive a single message or an array of messages from the MQ server. For multiple messages, you can specify the maximum number of messages to return per batch and the maximum batch size in KB.
+* When a message is available in a queue, take some action.
+* When one or more messages are received from a queue (auto-complete), take some action.
+* When one or more messages are received from a queue (peek-lock), take some action.
+* Receive a single message or an array of messages from a queue. For multiple messages, you can specify the maximum number of messages to return per batch and the maximum batch size in KB.
* Send a single message or an array of messages to the MQ server. These built-in MQ operations also have the following capabilities plus the benefits from all the other capabilities for logic apps in the [single-tenant Logic Apps service](../logic-apps/single-tenant-overview-compare.md):
When you add an MQ action for the first time, you're prompted to create a connec
1. When you're done, select **Create**.
-#### [Built-in (preview)](#tab/built-in)
+#### [Built-in](#tab/built-in)
1. Provide the connection information for your MQ server.
data-factory Wrangling Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-tutorial.md
Previously updated : 01/19/2021 Last updated : 05/14/2021 # Prepare data with data wrangling
The other method is in the activities pane of the pipeline canvas. Open the **Po
## Author a Power Query data wrangling activity
-Add a **Source dataset** for your Power Query mash-up. You can either choose an existing dataset or create a new one. You can also select a sink dataset. You can choose one or more source datasets, but only one sink is allowed at this time. Choosing a sink dataset is optional, but at least one source dataset is required.
+Add a **Source dataset** for your Power Query mash-up. You can either choose an existing dataset or create a new one. After you have saved your mash-up, you can then add the Power Query data wrangling activity to your pipeline and select a sink dataset to tell ADF where to land your data. While you can choose one or more source datasets, only one sink is allowed at this time. Choosing a sink dataset is optional, but at least one source dataset is required.
![Wrangling](media/wrangling-data-flow/tutorial4.png)
Author your wrangling Power Query using code-free data preparation. For the list
To execute a pipeline debug run of a Power Query activity, click **Debug** in the pipeline canvas. Once you publish your pipeline, **Trigger now** executes an on-demand run of the last published pipeline. Power Query pipelines can be scheduled with all existing Azure Data Factory triggers.
-![Screenshot that shows how to add a Power Query data wrangling activity.](media/wrangling-data-flow/tutorial3.png)
+![Screenshot that shows how to add a Power Query data wrangling activity.](media/data-flow/pq-activity-001.png)
Go to the **Monitor** tab to visualize the output of a triggered Power Query activity run.
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-managed-identity.md
In IoT Hub, managed identities can be used for egress connectivity from IoT Hub
:::image type="content" source="./media/iot-hub-managed-identity/system-assigned.png" alt-text="Screenshot showing where to turn on system-assigned managed identity for an IoT hub":::
-### Enable managed identity at hub creation time using ARM template
+### Enable system-assigned managed identity at hub creation time using ARM template
To enable the system-assigned managed identity in your IoT hub at resource provisioning time, use the Azure Resource Manager (ARM) template below.
To enable the system-assigned managed identity in your IoT hub at resource provi
After substituting the values for your resource `name`, `location`, `SKU.name` and `SKU.tier`, you can use Azure CLI to deploy the resource in an existing resource group using: ```azurecli-interactive
-az deployment group create --name <deployment-name> --resource-group <resource-group-name> --template-file <template-file.json>
+az deployment group create --name <deployment-name> --resource-group <resource-group-name> --template-file <template-file.json> --parameters iotHubName=<valid-iothub-name> skuName=<sku-name> skuTier=<sku-tier> location=<any-of-supported-regions>
```
-After the resource is created, you can retrieve the managed service identity assigned to your hub using Azure CLI:
+After the resource is created, you can retrieve the system-assigned managed identity assigned to your hub using Azure CLI:
```azurecli-interactive az resource show --resource-type Microsoft.Devices/IotHubs --name <iot-hub-resource-name> --resource-group <resource-group-name>
In this section, you learn how to add and remove a user-assigned managed identit
:::image type="content" source="./media/iot-hub-managed-identity/user-assigned.png" alt-text="Screenshot showing how to add user-assigned managed identity for an IoT hub"::: +
+### Enable user-assigned managed identity at hub creation time using ARM template
+The following example template creates a hub with a user-assigned managed identity. The template creates one user-assigned identity named *[iothub-name-provided]-identity* and assigns it to the IoT hub it creates. You can change the template to add multiple user-assigned identities as needed.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "iotHubName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of iothub resource"
+ }
+ },
+ "skuName": {
+ "type": "string",
+ "defaultValue": "S1",
+ "metadata": {
+ "description": "SKU name of iothub resource, by default is Standard S1"
+ }
+ },
+ "skuTier": {
+ "type": "string",
+ "defaultValue": "Standard",
+ "metadata": {
+ "description": "SKU tier of iothub resource, by default is Standard"
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location of iothub resource. Provide any supported IoT Hub region"
+ }
+ }
+},
+ "variables": {
+ "identityName": "[concat(parameters('iotHubName'), '-identity')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2020-10-01",
+ "name": "createIotHub",
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
+ "name": "[variables('identityName')]",
+ "apiVersion": "2018-11-30",
+ "location": "[resourceGroup().location]"
+ },
+ {
+ "type": "Microsoft.Devices/IotHubs",
+ "apiVersion": "2021-03-31",
+ "name": "[parameters('iotHubName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('identityName'))]": {}
+ }
+ },
+ "sku": {
+ "name": "[parameters('skuName')]",
+ "tier": "[parameters('skuTier')]",
+ "capacity": 1
+ },
+ "dependsOn": [
+ "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('identityName'))]"
+ ]
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+Deploy the template with the Azure CLI:
+
+```azurecli-interactive
+az deployment group create --name <deployment-name> --resource-group <resource-group-name> --template-file <template-file.json> --parameters iotHubName=<valid-iothub-name> skuName=<sku-name> skuTier=<sku-tier> location=<any-of-supported-regions>
+```
+
+After the resource is created, you can retrieve the user-assigned managed identity assigned to your hub by using the Azure CLI:
+
+```azurecli-interactive
+az resource show --resource-type Microsoft.Devices/IotHubs --name <iot-hub-resource-name> --resource-group <resource-group-name>
+```
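The `az resource show` command returns the hub's full resource JSON, including an `identity` object that lists the assigned user-assigned identities keyed by their resource IDs. The following sketch shows one way to pull those IDs out of the response with Python's standard library; the sample response shape and all IDs in it are placeholders, not values from a real hub:

```python
import json

# Hypothetical excerpt of the JSON that `az resource show` returns for the hub;
# the real response contains many more properties.
sample_response = json.loads("""
{
  "name": "my-iot-hub",
  "identity": {
    "type": "UserAssigned",
    "userAssignedIdentities": {
      "/subscriptions/0000/resourceGroups/rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-iot-hub-identity": {
        "clientId": "11111111-1111-1111-1111-111111111111",
        "principalId": "22222222-2222-2222-2222-222222222222"
      }
    }
  }
}
""")

# The identities are keyed by their full Azure resource IDs.
identity_ids = list(sample_response["identity"]["userAssignedIdentities"])

# The last path segment of the resource ID is the identity's name.
print(identity_ids[0].rsplit("/", 1)[-1])  # → my-iot-hub-identity
```

The same extraction can be done directly in the CLI with a JMESPath `--query` argument, but parsing the JSON yourself is useful when the output feeds a script.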
## Egress connectivity from IoT Hub to other Azure resources

In IoT Hub, managed identities can be used for egress connectivity from IoT Hub to other Azure services for [message routing](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [bulk device import/export](iot-hub-bulk-identity-mgmt.md). You can choose which managed identity to use for each IoT Hub egress connectivity to customer-owned endpoints including storage accounts, event hubs, and service bus endpoints.
key-vault Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/alert.md
This document will cover the following topics:
+ How to configure metrics and create a dashboard
+ How to create alerts at specified thresholds
-Azure Monitor for Key Vault combines both logs and metrics to provide a global monitoring solution. [Learn more about Azure Monitor for Key Vault here](../../azure-monitor/insights/key-vault-insights-overview.md#introduction-to-azure-monitor-for-key-vault)
+Azure Monitor for Key Vault combines both logs and metrics to provide a global monitoring solution. [Learn more about Azure Monitor for Key Vault here](../../azure-monitor/insights/key-vault-insights-overview.md#introduction-to-key-vault-insights)
## Basic Key Vault metrics to monitor
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
Title: Create workflows using single-tenant Azure Logic Apps (portal)
+ Title: Create workflows in single-tenant Azure Logic Apps using the Azure portal
description: Create automated workflows that integrate apps, data, services, and systems using single-tenant Azure Logic Apps and the Azure portal.
ms.suite: integration
- Previously updated : 05/10/2021
+ Last updated : 05/25/2021
-# Create an integration workflow using single-tenant Azure Logic Apps and the Azure portal (preview)
+# Create an integration workflow using single-tenant Azure Logic Apps and the Azure portal
-> [!IMPORTANT]
-> This capability is in preview and is subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article shows how to create an example automated integration workflow that runs in the *single-tenant Azure Logic Apps environment* by using the **Logic App (Standard)** resource type. If you're new to the new single-tenant model and logic app resource type, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
-This article shows how to create an example automated integration workflow that runs in the *single-tenant Logic Apps environment* by using the new **Logic App (Preview)** resource type. While this example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. If you're new to single-tenant Logic Apps and the **Logic App (Preview)** resource type, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
-
-The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
+While this example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
> [!TIP]
> If you don't have an Office 365 account, you can use any other available action that can send
> messages from your email account, for example, Outlook.com.
>
> To create this example workflow using Visual Studio Code instead, follow the steps in
-> [Create integration workflows using single tenant Azure Logic Apps and Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). Both options provide the capability
-> to develop, run, and deploy logic app workflows in the same kinds of environments. However, with
-> Visual Studio Code, you can *locally* develop, test, and run workflows in your development environment.
+> [Create integration workflows using single tenant Azure Logic Apps and Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
+> Both options provide the capability to develop, run, and deploy logic app workflows in the same kinds of environments.
+> However, with Visual Studio Code, you can *locally* develop, test, and run workflows in your development environment.
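When the Request trigger is saved, it generates a callback URL whose query string carries the shared access signature (SAS) parameters that authorize callers. As a sketch of what that URL looks like, the example below takes a *hypothetical* callback URL apart with Python's standard library; the host, workflow name, and `sig` value are placeholders, not real credentials:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical callback URL in the general shape that the Request trigger
# generates; the sig value is a placeholder, not a real signature.
callback_url = (
    "https://fabrikam-workflows.azurewebsites.net:443/api/my-workflow/"
    "triggers/manual/invoke?api-version=2020-05-01-preview"
    "&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=PLACEHOLDER"
)

parts = urlsplit(callback_url)
params = parse_qs(parts.query)  # parse_qs also URL-decodes the values

print(params["sp"][0])   # the permissions granted to callers
print(params["sig"][0])  # the shared access signature that authorizes the call
```

Because anyone with the full URL (including `sig`) can fire the trigger, treat the callback URL like a secret.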
-![Screenshot that shows the Azure portal with the workflow designer for the "Logic App (Preview)" resource.](./media/create-single-tenant-workflows-azure-portal/azure-portal-logic-apps-overview.png)
+![Screenshot that shows the Azure portal with the workflow designer for the "Logic App (Standard)" resource.](./media/create-single-tenant-workflows-azure-portal/azure-portal-logic-apps-overview.png)
As you progress, you'll complete these high-level tasks:
* Enable or open Application Insights after deployment.
* Enable run history for stateless workflows.
-For more information, review the following documentation:
-
-* [What is Azure Logic Apps?](logic-apps-overview.md)
-* [What is the single-tenant Logic Apps environment?](single-tenant-overview-compare.md)
-
## Prerequisites

* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
For more information, review the following documentation:
* An [Azure Storage account](../storage/common/storage-account-overview.md). If you don't have one, you can either create a storage account in advance or during logic app creation.

> [!NOTE]
- > The **Logic App (Preview)** resource type is powered by Azure Functions and has [storage requirements similar to function apps](../azure-functions/storage-considerations.md).
+ > The **Logic App (Standard)** resource type is powered by Azure Functions and has [storage requirements similar to function apps](../azure-functions/storage-considerations.md).
> [Stateful workflows](single-tenant-overview-compare.md#stateful-stateless) perform storage transactions, such as
> using queues for scheduling and storing workflow states in tables and blobs. These transactions incur
> [storage charges](https://azure.microsoft.com/pricing/details/storage/). For more information about
> how stateful workflows store data in external storage, review [Stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless).
-* To deploy to a Docker container, you need an existing Docker container image.
-
- For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md).
- * To create the same example workflow in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in. If you choose a [different email connector](/connectors/connector-reference/connector-reference-logicapps-connectors), such as Outlook.com, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
For more information, review the following documentation:
1. In the Azure portal search box, enter `logic apps`, and select **Logic apps**.
- ![Screenshot that shows the Azure portal search box with the "logic app preview" search term and the "Logic App (Preview)" resource selected.](./media/create-single-tenant-workflows-azure-portal/find-logic-app-resource-template.png)
+ ![Screenshot that shows the Azure portal search box with the "logic apps" search term and the "Logic App (Standard)" resource selected.](./media/create-single-tenant-workflows-azure-portal/find-logic-app-resource-template.png)
-1. On the **Logic apps** page, select **Add** > **Preview**.
+1. On the **Logic apps** page, select **Add** > **Standard**.
- This step creates a logic app resource that runs in the single-tenant Logic Apps environment and uses the [preview (single-tenant) pricing model](logic-apps-pricing.md#preview-pricing).
+ This step creates a logic app resource that runs in the single-tenant Azure Logic Apps environment and uses the [single-tenant pricing model](logic-apps-pricing.md#standard-pricing).
1. On the **Create Logic App** page, on the **Basics** tab, provide the following information about your logic app resource:
For more information, review the following documentation:
|-|-|-|-|
| **Subscription** | Yes | <*Azure-subscription-name*> | The Azure subscription to use for your logic app. |
| **Resource Group** | Yes | <*Azure-resource-group-name*> | The Azure resource group where you create your logic app and related resources. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a resource group named `Fabrikam-Workflows-RG`. |
- | **Logic App name** | Yes | <*logic-app-name*> | The name to use for your logic app. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a logic app named `Fabrikam-Workflows`. <p><p>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Preview)** resource is powered by Azure Functions, which uses the same app naming convention. |
- | **Publish** | Yes | <*deployment-environment*> | The deployment destination for your logic app. <p><p>- **Workflow**: Deploy to single-tenant Azure Logic Apps in the portal. <p><p>- **Docker Container**: Deploy to a container. If you don't have a container, first create your Docker container image. That way, after you select **Docker Container**, you can [specify the container that you want to use when creating your logic app](#set-docker-container). For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md). <p><p>This example continues with the **Workflow** option. |
+ | **Logic App name** | Yes | <*logic-app-name*> | The name to use for your logic app. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a logic app named `Fabrikam-Workflows`. <p><p>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Standard)** resource is powered by Azure Functions, which uses the same app naming convention. |
+ | **Publish** | Yes | <*deployment-environment*> | The deployment destination for your logic app. <p><p>- **Workflow**: Deploy to single-tenant Azure Logic Apps in the portal. <p><p>**Note**: Azure creates an empty logic app resource where you have to add your first workflow. |
| **Region** | Yes | <*Azure-region*> | The Azure region to use when creating your resource group and resources. <p><p>This example uses **West US**. |
|||||
For more information, review the following documentation:
| Property | Required | Value | Description |
|-|-|-|-|
- | **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <p><p>This example creates a storage account named `fabrikamstorageacct`. |
- | **Plan type** | Yes | <*Azure-hosting-plan*> | The [hosting plan](../app-service/overview-hosting-plans.md) to use for deploying your logic app, which is either [**Functions Premium**](../azure-functions/functions-premium-plan.md) or [**App service plan**](../azure-functions/dedicated-plan.md). Your choice affects the capabilities and pricing tiers that are later available to you. <p><p>This example uses the **App service plan**. <p><p>**Note**: Similar to Azure Functions, the **Logic App (Preview)** resource type requires a hosting plan and pricing tier. Consumption plans aren't supported or available for this resource type. For more information, review the following documentation: <p><p>- [Azure Functions scale and hosting](../azure-functions/functions-scale.md) <br>- [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/) <p><p>For example, the Functions Premium plan provides access to networking capabilities, such as connect and integrate privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps. For more information, review the following documentation: <p><p>- [Azure Functions networking options](../azure-functions/functions-networking-options.md) <br>- [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047) |
- | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan or provide the name for a new plan. <p><p>This example uses the name `Fabrikam-Service-Plan`. |
- | **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for hosting your logic app. Your choices are affected by the plan type that you previously chose. To change the default tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <p><p>This example uses the free **F1 pricing tier** for **Dev / Test** workloads. For more information, review [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/). |
+ | **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. <p><p>This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <p><p>This example creates a storage account named `fabrikamstorageacct`. |
+ | **Plan type** | Yes | <*hosting-plan*> | The hosting plan to use for deploying your logic app. <p><p>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
+ | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan name or provide a name for a new plan. <p><p>This example uses the name `Fabrikam-Service-Plan`. |
+ | **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for your logic app. Your selection affects the pricing, compute, memory, and storage that your logic app and workflows use. <p><p>To change the default pricing tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <p><p>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
|||||

1. Next, if your creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app.
For more information, review the following documentation:
> For example, if your selected region reaches a quota for resources that you're trying to create,
> you might have to try a different region.
- After Azure finishes deployment, your logic app is automatically live and running but doesn't do anything yet because the resource is empty, and no workflows exist yet.
+ After Azure finishes deployment, your logic app is automatically live and running but doesn't do anything yet because the resource is empty, and you haven't added any workflows yet.
-1. On the deployment completion page, select **Go to resource** so that you can add a blank workflow. If you selected **Docker Container** for deploying your logic app, continue with the [steps to provide information about that Docker container](#set-docker-container).
+1. On the deployment completion page, select **Go to resource** so that you can add a blank workflow.
![Screenshot that shows the Azure portal and the finished deployment.](./media/create-single-tenant-workflows-azure-portal/logic-app-completed-deployment.png)
-<a name="set-docker-container"></a>
-
-## Specify Docker container for deployment
-
-Before you start these steps, you need a Docker container image. For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md). You can then provide information about your Docker container after you create your logic app.
-
-1. In the Azure portal, go to your logic app resource.
-
-1. On the logic app menu, under **Settings**, select **Deployment Center**.
-
-1. On the **Deployment Center** pane, follow the instructions for providing and managing the details for your Docker container.
- <a name="add-workflow"></a> ## Add a blank workflow
+After you create your empty logic app resource, you have to add your first workflow.
+
1. After Azure opens the resource, on your logic app's menu, select **Workflows**. On the **Workflows** toolbar, select **Add**.

   ![Screenshot that shows the logic app resource menu with "Workflows" selected, and then on the toolbar, "Add" is selected.](./media/create-single-tenant-workflows-azure-portal/logic-app-add-blank-workflow.png)
Before you can add a trigger to a blank workflow, make sure that the workflow de
<a name="firewall-setup"></a>
-## Find domain names for firewall access
+## Find domain names for firewall access
Before you deploy your logic app and run your workflow in the Azure portal, if your environment has strict network requirements or firewalls that limit traffic, you have to set up network or firewall permissions for any trigger or action connections in the workflows that exist in your logic app.
-To find the fully qualified domain names (FQDNs) for these connections, follow these steps:
+To find the inbound and outbound IP addresses used by your logic app and workflows, follow these steps:
+
+1. On your logic app menu, under **Settings**, select **Networking (preview)**.
+
+1. On the networking page, find and review the **Inbound Traffic** and **Outbound Traffic** sections.
+
+To find the fully qualified domain names (FQDNs) for connections, follow these steps:
1. On your logic app menu, under **Workflows**, select **Connections**. On the **API Connections** tab, select the connection's resource name, for example:
After Application Insights opens, you can review various metrics for your logic
To debug a stateless workflow more easily, you can enable the run history for that workflow, and then disable the run history when you're done. Follow these steps for the Azure portal, or if you're working in Visual Studio Code, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless).
-1. In the [Azure portal](https://portal.azure.com), find and open your **Logic App (Preview)** resource.
+1. In the [Azure portal](https://portal.azure.com), find and open your **Logic App (Standard)** resource.
1. On the logic app's menu, under **Settings**, select **Configuration**.
To debug a stateless workflow more easily, you can enable the run history for th
For example:
- ![Screenshot that shows the Azure portal and Logic App (Preview) resource with the "Configuration" > "New application setting" < "Add/Edit application setting" pane open and the "Workflows.{yourWorkflowName}.OperationOptions" option set to "WithStatelessRunHistory".](./media/create-single-tenant-workflows-azure-portal/stateless-operation-options-run-history.png)
+ ![Screenshot that shows the Azure portal and Logic App (Standard) resource with the "Configuration" > "New application setting" < "Add/Edit application setting" pane open and the "Workflows.{yourWorkflowName}.OperationOptions" option set to "WithStatelessRunHistory".](./media/create-single-tenant-workflows-azure-portal/stateless-operation-options-run-history.png)
1. To finish this task, select **OK**. On the **Configuration** pane toolbar, select **Save**.
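The setting name follows a fixed pattern: `Workflows.`, then the workflow's name, then `.OperationOptions`. As a small sketch of composing that key in a deployment script (the workflow name here is hypothetical):

```python
# Compose the application setting that enables run history for a stateless
# workflow. The workflow name is a hypothetical example.
workflow_name = "Fabrikam-Stateless-Workflow"
setting_key = f"Workflows.{workflow_name}.OperationOptions"
setting_value = "WithStatelessRunHistory"

# Rendered in the KEY=VALUE form that CLI appsettings commands accept.
print(f"{setting_key}={setting_value}")
# → Workflows.Fabrikam-Stateless-Workflow.OperationOptions=WithStatelessRunHistory
```

To disable run history again when you're done debugging, remove the setting or clear its value.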
To delete an item in your workflow from the designer, follow any of these steps:
## Restart, stop, or start logic apps
-You can stop or start a [single logic app](#restart-stop-start-single-logic-app) or [multiple logic apps at the same time](#stop-start-multiple-logic-apps). You can also restart a single logic app without first stopping. Your single-tenant logic app can include multiple workflows, so you can either stop the entire logic app or [disable only workflows](#disable-enable-workflows).
+You can stop or start a [single logic app](#restart-stop-start-single-logic-app) or [multiple logic apps at the same time](#stop-start-multiple-logic-apps). You can also restart a single logic app without first stopping. Your single-tenant based logic app can include multiple workflows, so you can either stop the entire logic app or [disable only workflows](#disable-enable-workflows).
> [!NOTE] > The stop logic app and disable workflow operations have different effects. For more information, review
You can stop or start a [single logic app](#restart-stop-start-single-logic-app)
Stopping a logic app affects workflow instances in the following ways:
-* The Logic Apps service cancels all in-progress and pending runs immediately.
+* Azure Logic Apps cancels all in-progress and pending runs immediately.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* Triggers won't fire the next time that their conditions are met. However, trigger states remember the points where the logic app was stopped. So, if you restart the logic app, the triggers fire for all unprocessed items since the last run.
You can stop or start multiple logic apps at the same time, but you can't restar
To stop the trigger from firing the next time when the trigger condition is met, disable your workflow. You can disable or enable a single workflow, but you can't disable or enable multiple workflows at the same time. Disabling a workflow affects workflow instances in the following ways:
-* The Logic Apps services continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
+* Azure Logic Apps continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* The trigger won't fire the next time that its conditions are met. However, the trigger state remembers the point at which the workflow was disabled. So, if you re-enable the workflow, the trigger fires for all the unprocessed items since the last run.
To stop the trigger from firing the next time when the trigger condition is met,
## Delete logic apps or workflows
-You can [delete a single or multiple logic apps at the same time](#delete-logic-apps). Your single-tenant logic app can include multiple workflows, so you can either delete the entire logic app or [delete only workflows](#delete-workflows).
+You can [delete a single logic app or multiple logic apps at the same time](#delete-logic-apps). Your single-tenant based logic app can include multiple workflows, so you can either delete the entire logic app or [delete only workflows](#delete-workflows).
<a name="delete-logic-apps"></a>
Deleting a logic app cancels in-progress and pending runs immediately, but doesn
Deleting a workflow affects workflow instances in the following ways:
-* The Logic Apps service cancels in-progress and pending runs immediately, but runs cleanup tasks on the storage used by the workflow.
+* Azure Logic Apps cancels in-progress and pending runs immediately, but runs cleanup tasks on the storage used by the workflow.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
Deleting a workflow affects workflow instances in the following ways:
### New triggers and actions are missing from the designer picker for previously created workflows
-Azure Logic Apps Preview supports built-in actions for Azure Function Operations, Liquid Operations, and XML Operations, such as **XML Validation** and **Transform XML**. However, for previously created logic apps, these actions might not appear in the designer for you to select if your logic app uses an outdated version of the extension bundle, `Microsoft.Azure.Functions.ExtensionBundle.Workflows`.
+Single-tenant Azure Logic Apps supports built-in actions for Azure Function Operations, Liquid Operations, and XML Operations, such as **XML Validation** and **Transform XML**. However, for previously created logic apps, these actions might not appear in the designer for you to select if your logic app uses an outdated version of the extension bundle, `Microsoft.Azure.Functions.ExtensionBundle.Workflows`.
To fix this problem, follow these steps to delete the outdated version so that the extension bundle can automatically update to the latest version. > [!NOTE]
-> This specific solution applies only to **Logic App (Preview)** resources that you create using
+> This specific solution applies only to **Logic App (Standard)** resources that you create using
> the Azure portal, not the logic apps that you create and deploy using Visual Studio Code and the
-> Azure Logic Apps (Preview) extension. See [Supported triggers and actions are missing from the designer in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#missing-triggers-actions).
+> Azure Logic Apps (Standard) extension. See [Supported triggers and actions are missing from the designer in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#missing-triggers-actions).
1. In the Azure portal, stop your logic app.
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Title: Create Logic Apps Preview workflows in Visual Studio Code
-description: Build and run workflows for automation and integration scenarios in Visual Studio Code with the Azure Logic Apps (Preview) extension.
+ Title: Create workflows in single-tenant Azure Logic Apps using Visual Studio Code
+description: Create automated workflows that integrate apps, data, services, and systems using single-tenant Azure Logic Apps and Visual Studio Code.
ms.suite: integration
- Previously updated : 04/23/2021
+ Last updated : 05/25/2021
-# Create stateful and stateless workflows in Visual Studio Code with the Azure Logic Apps (Preview) extension
+# Create an integration workflow using single-tenant Azure Logic Apps and Visual Studio Code
-> [!IMPORTANT]
-> This capability is in public preview, is provided without a service level agreement, and is not recommended for production workloads.
-> Certain features might not be supported or might have constrained capabilities. For more information, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article shows how to create an example automated integration workflow by using the **Logic App (Standard)** resource type, Visual Studio Code, and the **Azure Logic Apps (Standard)** extension. When you create this logic app workflow in Visual Studio Code, you can run and test the workflow in your *local* development environment.
-With [Azure Logic Apps Preview](single-tenant-overview-compare.md), you can build automation and integration solutions across apps, data, cloud services, and systems by creating and running logic apps that include [*stateful* and *stateless* workflows](single-tenant-overview-compare.md#stateful-stateless) in Visual Studio Code by using the Azure Logic Apps (Preview) extension. By using this new logic app type, you can build multiple workflows that are powered by the redesigned Azure Logic Apps Preview runtime, which provides portability, better performance, and flexibility for deploying and running in various hosting environments, not only Azure, but also Docker containers. To learn more about the new logic app type, see [Overview for Azure Logic Apps Preview](single-tenant-overview-compare.md).
+When you're ready, you can deploy to the *single-tenant Azure Logic Apps environment* or anywhere that Azure Functions can run, due to the redesigned Azure Logic Apps runtime. Compared to the multi-tenant **Azure Logic Apps (Consumption)** extension, which works for the multi-tenant Azure Logic Apps environment, the single-tenant **Azure Logic Apps (Standard)** extension provides the capability for you to create logic apps with the following attributes:
-![Screenshot that shows Visual Studio Code, logic app project, and workflow.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-logic-apps-overview.png)
+* The **Logic App (Standard)** resource type can host multiple [stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless) that run locally in your development environment, in the single-tenant Azure Logic Apps environment, or anywhere that Azure Functions can run, such as containers. This attribute provides flexibility and portability for your workflows.
-In Visual Studio Code, you can start by creating a project where you can *locally* build and run your logic app's workflows in your development environment by using the Azure Logic Apps (Preview) extension. While you can also start by [creating a new **Logic App (Preview)** resource in the Azure portal](create-single-tenant-workflows-azure-portal.md), both approaches provide the capability for you to deploy and run your logic app in the same kinds of hosting environments.
+* In a **Logic App (Standard)** resource, workflows in the same logic app and tenant run in the same process as the redesigned Azure Logic Apps runtime, so they share the same resources and provide better performance.
-Meanwhile, you can still create the original logic app type. Although the development experiences in Visual Studio Code differ between the original and new logic app types, your Azure subscription can include both types. You can view and access all the deployed logic apps in your Azure subscription, but the apps are organized into their own categories and sections.
+* You can deploy a **Logic App (Standard)** resource directly to Azure or anywhere that Azure Functions can run, including containers.
-This article shows how to create your logic app and a workflow in Visual Studio Code by using the Azure Logic Apps (Preview) extension and performing these high-level tasks:
+For more information about the **Logic App (Standard)** resource type and single-tenant model, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
-* Create a project for your logic app and workflow.
+While the example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on-premises, and hybrid environments. The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
-* Add a trigger and an action.
+> [!TIP]
+> If you don't have an Office 365 account, you can use any other available action that can send
+> messages from your email account, for example, Outlook.com.
+>
+> To create this example workflow using the Azure portal instead, follow the steps in
+> [Create integration workflows using single tenant Azure Logic Apps and the Azure portal](create-single-tenant-workflows-azure-portal.md).
+> Both options provide the capability to develop, run, and deploy logic app workflows in the same kinds of environments.
+> However, with Visual Studio Code, you can *locally* develop, test, and run workflows in your development environment.
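The two-step example workflow can be sketched as a **workflow.json** definition. The following is a hedged sketch only: the action name, connection reference name, operation path, and email values are illustrative placeholders, and the designer generates the real file for you:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {}
      }
    },
    "actions": {
      "Send_an_email_(V2)": {
        "type": "ApiConnection",
        "inputs": {
          "host": {
            "connection": {
              "referenceName": "office365"
            }
          },
          "method": "post",
          "path": "/v2/Mail",
          "body": {
            "To": "sophia@fabrikam.com",
            "Subject": "A request arrived",
            "Body": "<p>@{triggerOutputs()?['headers']}</p>"
          }
        },
        "runAfter": {}
      }
    },
    "outputs": {}
  },
  "kind": "Stateful"
}
```

The empty `runAfter` object marks the action as the first step after the trigger, and `"kind": "Stateful"` records the workflow type that you pick when you create the workflow.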
-* Run, test, debug, and review run history locally.
+![Screenshot that shows Visual Studio Code, logic app project, and workflow.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-logic-apps-overview.png)
-* Find domain name details for firewall access.
+As you progress, you'll complete these high-level tasks:
+* Create a project for your logic app and a blank [*stateful* workflow](single-tenant-overview-compare.md#stateful-stateless).
+* Add a trigger and an action.
+* Run, test, debug, and review run history locally.
+* Find domain name details for firewall access.
* Deploy to Azure, which includes optionally enabling Application Insights.
* Manage your deployed logic app in Visual Studio Code and the Azure portal.
* Enable run history for stateless workflows.
* Enable or open the Application Insights after deployment.
-* Deploy to a Docker container that you can run anywhere.
-
-> [!NOTE]
-> For information about current known issues, review the [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md).
## Prerequisites

### Access and connectivity
-* Access to the internet so that you can download the requirements, connect from Visual Studio Code to your Azure account, and publish from Visual Studio Code to Azure, a Docker container, or other environment.
+* Access to the internet so that you can download the requirements, connect from Visual Studio Code to your Azure account, and publish from Visual Studio Code to Azure.
* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* To build the same example logic app in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
+* To create the same example workflow in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
- If you choose to use a different [email connector that's supported by Azure Logic Apps](/connectors/), such as Outlook.com or [Gmail](../connectors/connectors-google-data-security-privacy-policy.md), you can still follow the example, and the general overall steps are the same, but your user interface and options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
+ If you choose a [different email connector](/connectors/connector-reference/connector-reference-logicapps-connectors), such as Outlook.com, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
<a name="storage-requirements"></a>

### Storage requirements
-#### Windows
-
-To locally build and run your logic app project in Visual Studio Code when using Windows, follow these steps to set up the Azure Storage Emulator:
-
-1. Download and install [Azure Storage Emulator 5.10](https://go.microsoft.com/fwlink/p/?linkid=717179).
-
-1. If you don't have one already, you need to have a local SQL DB installation, such as the free [SQL Server 2019 Express Edition](https://go.microsoft.com/fwlink/p/?linkid=866658), so that the emulator can run.
-
- For more information, see [Use the Azure Storage emulator for development and testing](../storage/common/storage-use-emulator.md).
-
-1. Before you can run your project, make sure that you start the emulator.
-
- ![Screenshot that shows the Azure Storage Emulator running.](./media/create-single-tenant-workflows-visual-studio-code/start-storage-emulator.png)
-
-#### macOS and Linux
-
-To locally build and run your logic app project in Visual Studio Code when using macOS or Linux, follow these steps to create and set up an Azure Storage account.
-
-> [!NOTE]
-> Currently, the designer in Visual Studio Code doesn't work on Linux OS, but you can still run build, run, and deploy
-> logic apps that use the Logic Apps Preview runtime to Linux-based virtual machines. For now, you can build your logic
-> apps in Visual Studio Code on Windows or macOS and then deploy to a Linux-based virtual machine.
-
-1. Sign in to the [Azure portal](https://portal.azure.com), and [create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal), which is a [prerequisite for Azure Functions](../azure-functions/storage-considerations.md).
+For local development in Visual Studio Code, you need to set up a local data store that your logic app project and workflows use when running in your local development environment. You can use and run the Azurite storage emulator as your local data store.
-1. On the storage account menu, under **Settings**, select **Access keys**.
+1. Download and install [Azurite 3.12.0 or later](https://www.npmjs.com/package/azurite).
+1. Before you run your logic app, make sure to start the emulator.
-1. On the **Access keys** pane, find and copy the storage account's connection string, which looks similar to this example:
+For more information, review the [Azurite documentation](https://github.com/Azure/Azurite#azurite-v3).
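When Azurite runs locally, the project's **local.settings.json** file can point the workflow's storage setting at the emulator. The following is a hedged sketch with other settings omitted: `UseDevelopmentStorage=true` is the standard shortcut connection string for the local storage emulator, and `node` is the worker runtime value that the extension typically sets for logic app projects:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}
```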
- `DefaultEndpointsProtocol=https;AccountName=fabrikamstorageacct;AccountKey=<access-key>;EndpointSuffix=core.windows.net`
-
- ![Screenshot that shows the Azure portal with storage account access keys and connection string copied.](./media/create-single-tenant-workflows-visual-studio-code/find-storage-account-connection-string.png)
-
- For more information, review [Manage storage account keys](../storage/common/storage-account-keys-manage.md?tabs=azure-portal#view-account-access-keys).
-
-1. Save the connection string somewhere safe. After you create your logic app project in Visual Studio Code, you have to add the string to the **local.settings.json** file in your project's root level folder.
-
- > [!IMPORTANT]
- > If you plan to deploy to a Docker container, you also need to use this connection string with the Docker file that you use for deployment.
- > For production scenarios, make sure that you protect and secure such secrets and sensitive information, for example, by using a key vault.
-
### Tools
-* [Visual Studio Code 1.30.1 (January 2019) or higher](https://code.visualstudio.com/), which is free. Also, download and install these tools for Visual Studio Code, if you don't have them already:
+* [Visual Studio Code](https://code.visualstudio.com/), which is free. Also, download and install these tools for Visual Studio Code, if you don't have them already:
  * [Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account), which provides a single common Azure sign-in and subscription filtering experience for all other Azure extensions in Visual Studio Code.

  * [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app.
- * [Azure Functions Core Tools 3.0.3245 or later](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.3245) by using the Microsoft Installer (MSI) version, which is `func-cli-3.0.3245-x*.msi`.
+ * [Azure Functions Core Tools 3.0.3447 or later](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.3447) by using the Microsoft Installer (MSI) version, which is `func-cli-3.0.3447-x*.msi`.
- These tools include a version of the same runtime that powers the Azure Functions runtime, which the Preview extension uses in Visual Studio Code.
+ These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
> [!IMPORTANT]
> If you have an installation that's earlier than these versions, uninstall that version first,
> or make sure that the PATH environment variable points at the version that you download and install.
- * [Azure Logic Apps (Preview) extension for Visual Studio Code](https://go.microsoft.com/fwlink/p/?linkid=2143167). This extension provides the capability for you to create logic apps where you can build stateful and stateless workflows that locally run in Visual Studio Code and then deploy those logic apps directly to Azure or to Docker containers.
-
- Currently, you can have both the original Azure Logic Apps extension and the Public Preview extension installed in Visual Studio Code. Although the development experiences differ in some ways between the extensions, your Azure subscription can include both logic app types that you create with the extensions. Visual Studio Code shows all the deployed logic apps in your Azure subscription, but organizes them into different sections by extension names, **Logic Apps** and **Azure Logic Apps (Preview)**.
+ * [Azure Logic Apps (Standard) extension for Visual Studio Code](https://go.microsoft.com/fwlink/p/?linkid=2143167).
> [!IMPORTANT]
- > If you created logic app projects with the earlier private preview extension, these projects won't work with the Public
- > Preview extension. However, you can migrate these projects after you uninstall the private preview extension, delete the
- > associated files, and install the public preview extension. You then create a new project in Visual Studio Code, and copy
- > your previously created logic app's **workflow.definition** file into your new project. For more information, see
- > [Migrate from the private preview extension](#migrate-private-preview).
- >
- > If you created logic app projects with the earlier public preview extension, you can continue using those projects
- > without any migration steps.
+ > Projects created with earlier preview extensions no longer work. To continue,
+ > uninstall any earlier versions, and recreate your logic app projects.
- **To install the **Azure Logic Apps (Preview)** extension, follow these steps:**
+ To install the **Azure Logic Apps (Standard)** extension, follow these steps:
1. In Visual Studio Code, on the left toolbar, select **Extensions**.
- 1. In the extensions search box, enter `azure logic apps preview`. From the results list, select **Azure Logic Apps (Preview)** **>** **Install**.
-
- After the installation completes, the Preview extension appears in the **Extensions: Installed** list.
+ 1. In the extensions search box, enter `azure logic apps standard`. From the results list, select **Azure Logic Apps (Standard)** **>** **Install**.
- ![Screenshot that shows Visual Studio Code's installed extensions list with the "Azure Logic Apps (Preview)" extension underlined.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-extension-installed.png)
+ After the installation completes, the extension appears in the **Extensions: Installed** list.
> [!TIP]
> If the extension doesn't appear in the installed list, try restarting Visual Studio Code.
+ Currently, you can have both Consumption (multi-tenant) and Standard (single-tenant) extensions installed at the same time. The development experiences differ from each other in some ways, but your Azure subscription can include both Standard and Consumption logic app types. Visual Studio Code shows all the deployed logic apps in your Azure subscription, but organizes your apps under each extension, **Azure Logic Apps (Consumption)** and **Azure Logic Apps (Standard)**.
+ * To use the [Inline Code Operations action](../logic-apps/logic-apps-add-run-inline-code.md) that runs JavaScript, install [Node.js versions 10.x.x, 11.x.x, or 12.x.x](https://nodejs.org/en/download/releases/).
- > [!TIP]
+ > [!TIP]
> For Windows, download the MSI version. If you use the ZIP version instead, you have to
> manually make Node.js available by using a PATH environment variable for your operating system.

* To locally run webhook-based triggers and actions, such as the [built-in HTTP Webhook trigger](../connectors/connectors-native-webhook.md), in Visual Studio Code, you need to [set up forwarding for the callback URL](#webhook-setup).
-* To test the example logic app that you create in this article, you need a tool that can send calls to the Request trigger, which is the first step in example logic app. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/).
-
-* If you create your logic app and deploy with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you deploy your logic app from Visual Studio Code or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you deploy your logic app, or after deployment.
-
-<a name="migrate-private-preview"></a>
-
-## Migrate from private preview extension
-
-Any logic app projects that you created with the **Azure Logic Apps (Private Preview)** extension won't work with the Public Preview extension. However, you can migrate these projects to new projects by following these steps:
+* To test the example workflow in this article, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/).
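If you prefer the command line over a tool such as Postman, you can send the test call with `curl` instead. The following sketch uses placeholder values: the callback URL shape and the payload fields are assumptions, and the real URL, which includes a signature, comes from the local run output:

```shell
# Hypothetical callback URL from the local debugging session (placeholder signature).
CALLBACK_URL="http://localhost:7071/api/Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01-preview&sig=<signature>"

# Illustrative JSON payload for the Request trigger; the email action can select these outputs.
PAYLOAD='{"address":{"streetNumber":"00000","streetName":"AnyStreet","town":"AnyTown","postalCode":"11111"}}'

# With a local debugging session running, send the test request:
# curl -X POST -H "Content-Type: application/json" -d "$PAYLOAD" "$CALLBACK_URL"
echo "$PAYLOAD"
```

The actual `curl` call is commented out because it requires the workflow to be running locally; copy the real callback URL from Visual Studio Code before you run it.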
-1. Uninstall the private preview extension.
-
-1. Delete any associated extension bundle and NuGet package folders in these locations:
-
- * The **Microsoft.Azure.Functions.ExtensionBundle.Workflows** folder, which contains previous extension bundles and is located along either path here:
-
- * `C:\Users\{userName}\AppData\Local\Temp\Functions\ExtensionBundles`
-
- * `C:\Users\{userName}\.azure-functions-core-tools\Functions\ExtensionBundles`
-
- * The **microsoft.azure.workflows.webjobs.extension** folder, which is the [NuGet](/nuget/what-is-nuget) cache for the private preview extension and is located along this path:
-
- `C:\Users\{userName}\.nuget\packages`
-
-1. Install the **Azure Logic Apps (Preview)** extension.
-
-1. Create a new project in Visual Studio Code.
-
-1. Copy your previously created logic app's **workflow.definition** file to your new project.
+* If you create your logic app resources with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
<a name="set-up"></a>
1. To make sure that all the extensions are correctly installed, reload or restart Visual Studio Code.
-1. Confirm that Visual Studio Code automatically finds and installs extension updates so that your Preview extension gets the latest updates. Otherwise, you have to manually uninstall the outdated version and install the latest version.
+1. Confirm that Visual Studio Code automatically finds and installs extension updates so that all your extensions get the latest updates. Otherwise, you have to manually uninstall the outdated version and install the latest version.
1. On the **File** menu, go to **Preferences** **>** **Settings**.
1. Confirm that **Auto Check Updates** and **Auto Update** are selected.
-Also, by default, the following settings are enabled and set for the Logic Apps preview extension:
+By default, the following settings are enabled and set for the Azure Logic Apps (Standard) extension:
-* **Azure Logic Apps V2: Project Runtime**, which is set to version **~3**
+* **Azure Logic Apps Standard: Project Runtime**, which is set to version **~3**
> [!NOTE] > This version is required to use the [Inline Code Operations actions](../logic-apps/logic-apps-add-run-inline-code.md).
-* **Azure Logic Apps V2: Experimental View Manager**, which enables the latest designer in Visual Studio Code. If you experience problems on the designer, such as dragging and dropping items, turn off this setting.
+* **Azure Logic Apps Standard: Experimental View Manager**, which enables the latest designer in Visual Studio Code. If you experience problems on the designer, such as dragging and dropping items, turn off this setting.
To find and confirm these settings, follow these steps:

1. On the **File** menu, go to **Preferences** **>** **Settings**.
-1. On the **User** tab, go to **>** **Extensions** **>** **Azure Logic Apps (Preview)**.
+1. On the **User** tab, go to **Extensions** **>** **Azure Logic Apps (Standard)**.
- For example, you can find the **Azure Logic Apps V2: Project Runtime** setting here or use the search box to find other settings:
+ For example, you can find the **Azure Logic Apps Standard: Project Runtime** setting here or use the search box to find other settings:
- ![Screenshot that shows Visual Studio Code settings for "Azure Logic Apps (Preview)" extension.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-preview-settings.png)
+ ![Screenshot that shows Visual Studio Code settings for "Azure Logic Apps (Standard)" extension.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-settings.png)
<a name="connect-azure-account"></a>
![Screenshot that shows Visual Studio Code Activity Bar and selected Azure icon.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-azure-icon.png)
-1. In the Azure pane, under **Azure: Logic Apps (Preview)**, select **Sign in to Azure**. When the Visual Studio Code authentication page appears, sign in with your Azure account.
+1. In the Azure pane, under **Azure: Logic Apps (Standard)**, select **Sign in to Azure**. When the Visual Studio Code authentication page appears, sign in with your Azure account.
![Screenshot that shows Azure pane and selected link for Azure sign in.](./media/create-single-tenant-workflows-visual-studio-code/sign-in-azure-subscription.png)
- After you sign in, the Azure pane shows the subscriptions in your Azure account. If you also have the publicly released extension, you can find any logic apps that you created with that extension in the **Logic Apps** section, not the **Logic Apps (Preview)** section.
+ After you sign in, the Azure pane shows the subscriptions in your Azure account. If you also have the publicly released extension, you can find any logic apps that you created with that extension in the **Logic Apps** section, not the **Logic Apps (Standard)** section.
If the expected subscriptions don't appear, or you want the pane to show only specific subscriptions, follow these steps:
Before you can create your logic app, create a local project so that you can man
1. On your computer, create an *empty* local folder to use for the project that you'll later create in Visual Studio Code.
-1. In Visual Studio Code, close any and all open folders.
+1. In Visual Studio Code, close all open folders.
-1. In the Azure pane, next to **Azure: Logic Apps (Preview)**, select **Create New Project** (icon that shows a folder and lightning bolt).
+1. In the Azure pane, next to **Azure: Logic Apps (Standard)**, select **Create New Project** (icon that shows a folder and lightning bolt).
![Screenshot that shows Azure pane toolbar with "Create New Project" selected.](./media/create-single-tenant-workflows-visual-studio-code/create-new-project-folder.png)
## Enable built-in connector authoring
-You can create your own built-in connectors for any service you need by using the [preview release's extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similar to built-in connectors such as Azure Service Bus and SQL Server, these connectors provide higher throughput, low latency, local connectivity, and run natively in the same process as the preview runtime.
+You can create your own built-in connectors for any service you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similar to built-in connectors such as Azure Service Bus and SQL Server, these connectors provide higher throughput, low latency, local connectivity, and run natively in the same process as the single-tenant Azure Logic Apps runtime.
The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, you need to first convert your project from extension bundle-based (Node.js) to NuGet package-based (.NET).
After the designer appears, the **Choose an operation** prompt appears on the designer and is selected by default, which shows the **Add an action** pane.
- ![Screenshot that shows the workflow designer.](./media/create-single-tenant-workflows-visual-studio-code/workflow-app-designer.png)
+ ![Screenshot that shows the workflow designer.](./media/create-single-tenant-workflows-visual-studio-code/workflow-designer.png)
1. Next, [add a trigger and actions](#add-trigger-actions) to your workflow.
The workflow in this example uses this trigger and these actions:
### Add the Office 365 Outlook action
-1. On the designer, under the trigger that you added, select **New step**.
+1. On the designer, under the trigger that you added, select the plus sign (**+**) > **Add an action**.
The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.
> If too much time passes before you complete the prompts, the authentication process times out and fails.
> In this case, return to the designer and retry signing in to create the connection.
-1. When the Azure Logic Apps (Preview) extension prompts you for consent to access your email account, select **Open**. Follow the subsequent prompt to allow access.
+1. When the Azure Logic Apps (Standard) extension prompts you for consent to access your email account, select **Open**. Follow the subsequent prompt to allow access.
- ![Screenshot that shows the Preview extension prompt to permit access.](./media/create-single-tenant-workflows-visual-studio-code/allow-preview-extension-open-uri.png)
+ ![Screenshot that shows the extension prompt to permit access.](./media/create-single-tenant-workflows-visual-studio-code/allow-extension-open-uri.png)
> [!TIP]
> To prevent future prompts, select **Don't ask again for this extension**.
> [!NOTE]
> If you want to make any changes in the details pane on the **Settings**, **Static Result**, or **Run After** tab,
> make sure that you select **Done** to commit those changes before you switch tabs or change focus to the designer.
- > Otherwise, Visual Studio Code won't keep your changes. For more information, review the
- > [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md).
+ > Otherwise, Visual Studio Code won't keep your changes.
1. On the designer, select **Save**.
When you use a webhook-based trigger or action, such as **HTTP Webhook**, with a logic app running in Azure, the Logic Apps runtime subscribes to the service endpoint by generating and registering a callback URL with that endpoint. The trigger or action then waits for the service endpoint to call the URL. However, when you're working in Visual Studio Code, the generated callback URL starts with `http://localhost:7071/...`. This URL is for your localhost server, which is private so the service endpoint can't call this URL.
-To locally run webhook-based triggers and actions in Visual Studio Code, you need to set up a public URL that exposes your localhost server and securely forwards calls from the service endpoint to the webhook callback URL. You can use a forwarding service and tool such as [**ngrok**](https://ngrok.com/), which opens an HTTP tunnel to your localhost port, or you can use your own tool.
+To locally run webhook-based triggers and actions in Visual Studio Code, you need to set up a public URL that exposes your localhost server and securely forwards calls from the service endpoint to the webhook callback URL. You can use a forwarding service and tool such as [**ngrok**](https://ngrok.com/), which opens an HTTP tunnel to your localhost port, or you can use your own equivalent tool.
#### Set up call forwarding using **ngrok**
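A minimal sketch of the forwarding setup follows. The port assumes the local Functions host default (7071), and the exact **ngrok** flags are an assumption; check your tool's own documentation for the current syntax:

```shell
# Hypothetical ngrok invocation: open an HTTP tunnel from a public URL to the
# local Functions host port (7071 by default). Run this in a separate terminal.
NGROK_COMMAND="ngrok http -host-header=localhost 7071"
echo "Run in a separate terminal: $NGROK_COMMAND"
```

After the tunnel starts, use the public forwarding URL that **ngrok** reports in place of the `http://localhost:7071/...` callback URL.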
To test your logic app, follow these steps to start a debugging session, and fin
Here are the possible statuses that each step in the workflow can have:
- | Action status | Icon | Description |
- |||-|
- | **Aborted** | ![Icon for "Aborted" action status][aborted-icon] | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
- | **Cancelled** | ![Icon for "Cancelled" action status][cancelled-icon] | The action was running but received a request to cancel. |
- | **Failed** | ![Icon for "Failed" action status][failed-icon] | The action failed. |
- | **Running** | ![Icon for "Running" action status][running-icon] | The action is currently running. |
- | **Skipped** | ![Icon for "Skipped" action status][skipped-icon] | The action was skipped because the immediately preceding action failed. An action has a `runAfter` condition that requires that the preceding action finishes successfully before the current action can run. |
- | **Succeeded** | ![Icon for "Succeeded" action status][succeeded-icon] | The action succeeded. |
- | **Succeeded with retries** | ![Icon for "Succeeded with retries" action status][succeeded-with-retries-icon] | The action succeeded but only after one or more retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
- | **Timed out** | ![Icon for "Timed out" action status][timed-out-icon] | The action stopped due to the timeout limit specified by that action's settings. |
- | **Waiting** | ![Icon for "Waiting" action status][waiting-icon] | Applies to a webhook action that's waiting for an inbound request from a caller. |
- ||||
+ | Action status | Description |
+ ||-|
+ | **Aborted** | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | The action was running but received a request to cancel. |
+ | **Failed** | The action failed. |
+ | **Running** | The action is currently running. |
+ | **Skipped** | The action was skipped because the immediately preceding action failed. An action has a `runAfter` condition that requires that the preceding action finishes successfully before the current action can run. |
+ | **Succeeded** | The action succeeded. |
+ | **Succeeded with retries** | The action succeeded but only after one or more retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
+ | **Timed out** | The action stopped due to the timeout limit specified by that action's settings. |
+ | **Waiting** | Applies to a webhook action that's waiting for an inbound request from a caller. |
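The **Skipped** status follows directly from each action's `runAfter` condition. As a hypothetical fragment from a workflow definition, with invented action names:

```json
"Send_an_email_(V2)": {
  "type": "ApiConnection",
  "runAfter": {
    "Parse_JSON": [ "Succeeded" ]
  }
}
```

In this sketch, if the `Parse_JSON` action fails, the `Send_an_email_(V2)` action never runs and shows the **Skipped** status.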
[aborted-icon]: ./media/create-single-tenant-workflows-visual-studio-code/aborted.png [cancelled-icon]: ./media/create-single-tenant-workflows-visual-studio-code/cancelled.png
To return a response to the caller that sent a request to your logic app, you can use the built-in [Response action](../connectors/connectors-native-reqres.md) for a workflow that starts with the Request trigger.
-1. On the workflow designer, under the **Send an email** action, select **New step**.
+1. On the workflow designer, under the **Send an email** action, select the plus sign (**+**) > **Add an action**.
- The **Choose an operation** prompt appears on the designer, and the **Add an action pane** reopens so that you can select the next action.
+ The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.
1. On the **Add an action** pane, under the **Choose an action** search box, make sure that **Built-in** is selected. In the search box, enter `response`, and select the **Response** action.
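In the underlying workflow definition, the Response action that these steps add looks roughly like the following sketch. The status code and body expression are illustrative, and the `Send_an_email` action name is an assumption based on the earlier step.

```json
"Response": {
  "type": "Response",
  "kind": "Http",
  "inputs": {
    "statusCode": 200,
    "body": "@body('Send_an_email')"
  },
  "runAfter": {
    "Send_an_email": [ "Succeeded" ]
  }
}
```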
To find the fully qualified domain names (FQDNs) for these connections, follow t
## Deploy to Azure
-From Visual Studio Code, you can directly publish your project to Azure, which deploys your logic app using the new **Logic App (Preview)** resource type. Similar to the function app resource in Azure Functions, deployment for this new resource type requires that you select a [hosting plan and pricing tier](../app-service/overview-hosting-plans.md), which you can set up during deployment. For more information about hosting plans and pricing, review these topics:
-
-* [Scale up an in Azure App Service](../app-service/manage-scale-up.md)
-* [Azure Functions scale and hosting](../azure-functions/functions-scale.md)
+From Visual Studio Code, you can directly publish your project to Azure, which deploys your logic app using the **Logic App (Standard)** resource type. You can publish your logic app as a new resource, which automatically creates any necessary resources, such as an [Azure Storage account, similar to function app requirements](../azure-functions/storage-considerations.md). Or, you can publish your logic app to a previously deployed **Logic App (Standard)** resource, which overwrites that logic app.
-You can publish your logic app as a new resource, which automatically creates any necessary resources, such as an [Azure Storage account, similar to function app requirements](../azure-functions/storage-considerations.md). Or, you can publish your logic app to a previously deployed **Logic App (Preview)** resource, which overwrites that logic app.
+Deployment for the **Logic App (Standard)** resource type requires a hosting plan and pricing tier, which you select during deployment. For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing).
-### Publish to a new Logic App (Preview) resource
+### Publish to a new Logic App (Standard) resource
1. On the Visual Studio Code Activity Bar, select the Azure icon.
-1. On the **Azure: Logic Apps (Preview)** pane toolbar, select **Deploy to Logic App**.
+1. On the **Azure: Logic Apps (Standard)** pane toolbar, select **Deploy to Logic App**.
- ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and pane's toolbar with "Deploy to Logic App" selected.](./media/create-single-tenant-workflows-visual-studio-code/deploy-to-logic-app.png)
+ ![Screenshot that shows the "Azure: Logic Apps (Standard)" pane and pane's toolbar with "Deploy to Logic App" selected.](./media/create-single-tenant-workflows-visual-studio-code/deploy-to-logic-app.png)
1. If prompted, select the Azure subscription to use for your logic app deployment.

1. From the list that Visual Studio Code opens, select from these options:
- * **Create new Logic App (Preview) in Azure** (quick)
- * **Create new Logic App (Preview) in Azure Advanced**
- * A previously deployed **Logic App (Preview)** resource, if any exist
+ * **Create new Logic App (Standard) in Azure** (quick)
+ * **Create new Logic App (Standard) in Azure Advanced**
+ * A previously deployed **Logic App (Standard)** resource, if any exist
- This example continues with **Create new Logic App (Preview) in Azure Advanced**.
+ This example continues with **Create new Logic App (Standard) in Azure Advanced**.
- ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane with a list with "Create new Logic App (Preview) in Azure" selected.](./media/create-single-tenant-workflows-visual-studio-code/select-create-logic-app-options.png)
+ ![Screenshot that shows the "Azure: Logic Apps (Standard)" pane with a list with "Create new Logic App (Standard) in Azure" selected.](./media/create-single-tenant-workflows-visual-studio-code/select-create-logic-app-options.png)
-1. To create your new **Logic App (Preview)** resource, follow these steps:
+1. To create your new **Logic App (Standard)** resource, follow these steps:
- 1. Provide a globally unique name for your new logic app, which is the name to use for the **Logic App (Preview)** resource. This example uses `Fabrikam-Workflows-App`.
+ 1. Provide a globally unique name for your new logic app, which is the name to use for the **Logic App (Standard)** resource. This example uses `Fabrikam-Workflows-App`.
- ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to provide a name for the new logic app to create.](./media/create-single-tenant-workflows-visual-studio-code/enter-logic-app-name.png)
+ ![Screenshot that shows the "Azure: Logic Apps (Standard)" pane and a prompt to provide a name for the new logic app to create.](./media/create-single-tenant-workflows-visual-studio-code/enter-logic-app-name.png)
- 1. Select a [hosting plan](../app-service/overview-hosting-plans.md) for your new logic app, either [**App Service Plan** (Dedicated)](../azure-functions/dedicated-plan.md) or [**Premium**](../azure-functions/functions-premium-plan.md).
+ 1. Select a hosting plan for your new logic app. Either create a new plan, or select an existing plan. This example selects **Create new App Service Plan**.
- > [!IMPORTANT]
- > Consumption plans aren't supported nor available for this resource type. Your selected plan affects the
- > capabilities and pricing tiers that are later available to you. For more information, review these topics:
- >
- > * [Azure Functions scale and hosting](../azure-functions/functions-scale.md)
- > * [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/)
- >
- > For example, the Premium plan provides access to networking capabilities, such as connect and integrate
- > privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps.
- > For more information, review these topics:
- >
- > * [Azure Functions networking options](../azure-functions/functions-networking-options.md)
- > * [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
-
- This example uses the **App Service Plan**.
-
- ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to select "App Service Plan" or "Premium".](./media/create-single-tenant-workflows-visual-studio-code/select-hosting-plan.png)
-
- 1. Create a new App Service plan or select an existing plan. This example selects **Create new App Service Plan**.
+ ![Screenshot that shows the "Azure: Logic Apps (Standard)" pane and a prompt to "Create new App Service Plan" or select an existing App Service plan.](./media/create-single-tenant-workflows-visual-studio-code/create-app-service-plan.png)
- ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to "Create new App Service Plan" or select an existing App Service plan.](./media/create-single-tenant-workflows-visual-studio-code/create-app-service-plan.png)
+ 1. Provide a name for your hosting plan, and then select a pricing tier for your selected plan.
- 1. Provide a name for your App Service plan, and then select a [pricing tier](../app-service/overview-hosting-plans.md) for the plan. This example selects the **F1 Free** plan.
-
- ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to select a pricing tier.](./media/create-single-tenant-workflows-visual-studio-code/select-pricing-tier.png)
+ For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing).
1. For optimal performance, find and select the same resource group as your project for the deployment.
You can publish your logic app as a new resource, which automatically creates an
1. For stateful workflows, select **Create new storage account** or an existing storage account.
- ![Screenshot that shows the "Azure: Logic Apps (Preview)" pane and a prompt to create or select a storage account.](./media/create-single-tenant-workflows-visual-studio-code/create-storage-account.png)
+ ![Screenshot that shows the "Azure: Logic Apps (Standard)" pane and a prompt to create or select a storage account.](./media/create-single-tenant-workflows-visual-studio-code/create-storage-account.png)
1. If your logic app's creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you deploy your logic app from Visual Studio Code or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you deploy your logic app, or after deployment.
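For local development, the Application Insights connection is typically supplied through an app setting. The following **local.settings.json** sketch is a minimal example, assuming an existing Application Insights instance; the instrumentation key is a placeholder.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "<instrumentation-key>"
  }
}
```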
You can have multiple workflows in your logic app project. To add a blank workfl
1. On the Visual Studio Code Activity Bar, select the Azure icon.
-1. In the Azure pane, next to **Azure: Logic Apps (Preview)**, select **Create Workflow** (icon for Azure Logic Apps).
+1. In the Azure pane, next to **Azure: Logic Apps (Standard)**, select **Create Workflow** (icon for Azure Logic Apps).
1. Select the workflow type that you want to add: **Stateful** or **Stateless**.
When you're done, a new workflow folder appears in your project along with a **w
## Manage deployed logic apps in Visual Studio Code
-In Visual Studio Code, you can view all the deployed logic apps in your Azure subscription, whether they are the original **Logic Apps** or the **Logic App (Preview)** resource type, and select tasks that help you manage those logic apps. However, to access both resource types, you need both the **Azure Logic Apps** and the **Azure Logic Apps (Preview)** extensions for Visual Studio Code.
+In Visual Studio Code, you can view all the deployed logic apps in your Azure subscription, whether they are the original **Logic Apps** or the **Logic App (Standard)** resource type, and select tasks that help you manage those logic apps. However, to access both resource types, you need both the **Azure Logic Apps** and the **Azure Logic Apps (Standard)** extensions for Visual Studio Code.
-1. On the left toolbar, select the Azure icon. In the **Azure: Logic Apps (Preview)** pane, expand your subscription, which shows all the deployed logic apps for that subscription.
+1. On the left toolbar, select the Azure icon. In the **Azure: Logic Apps (Standard)** pane, expand your subscription, which shows all the deployed logic apps for that subscription.
1. Open the logic app that you want to manage. From the logic app's shortcut menu, select the task that you want to perform.
In Visual Studio Code, you can view all the deployed logic apps in your Azure su
> For more information, review [Considerations for stopping logic apps](#considerations-stop-logic-apps) and
> [Considerations for deleting logic apps](#considerations-delete-logic-apps).
- ![Screenshot that shows Visual Studio Code with the opened "Azure Logic Apps (Preview)" extension pane and the deployed workflow.](./media/create-single-tenant-workflows-visual-studio-code/find-deployed-workflow-visual-studio-code.png)
+ ![Screenshot that shows Visual Studio Code with the opened "Azure Logic Apps (Standard)" extension pane and the deployed workflow.](./media/create-single-tenant-workflows-visual-studio-code/find-deployed-workflow-visual-studio-code.png)
1. To view all the workflows in the logic app, expand your logic app, and then expand the **Workflows** node.
Stopping a logic app affects workflow instances in the following ways:
To stop a trigger from firing on unprocessed items since the last run, clear the trigger state before you restart the logic app:

1. In Visual Studio Code, on the left toolbar, select the Azure icon.
- 1. In the **Azure: Logic Apps (Preview)** pane, expand your subscription, which shows all the deployed logic apps for that subscription.
+ 1. In the **Azure: Logic Apps (Standard)** pane, expand your subscription, which shows all the deployed logic apps for that subscription.
1. Expand your logic app, and then expand the **Workflows** node.

1. Open a workflow, and edit any part of that workflow's trigger.

1. Save your changes. This step resets the trigger's current state.
Deleting a logic app affects workflow instances in the following ways:
## Manage deployed logic apps in the portal
-After you deploy a logic app to the Azure portal from Visual Studio Code, you can view all the deployed logic apps that are in your Azure subscription, whether they are the original **Logic Apps** resource type or the **Logic App (Preview)** resource type. Currently, each resource type is organized and managed as separate categories in Azure. To find logic apps that have the **Logic App (Preview)** resource type, follow these steps:
+After you deploy a logic app to the Azure portal from Visual Studio Code, you can view all the deployed logic apps that are in your Azure subscription, whether they are the original **Logic Apps** resource type or the **Logic App (Standard)** resource type. Currently, each resource type is organized and managed as separate categories in Azure. To find logic apps that have the **Logic App (Standard)** resource type, follow these steps:
-1. In the Azure portal search box, enter `logic app preview`. When the results list appears, under **Services**, select **Logic App (Preview)**.
+1. In the Azure portal search box, enter `logic apps`. When the results list appears, under **Services**, select **Logic apps**.
- ![Screenshot that shows the Azure portal search box with the "logic app preview" search text.](./media/create-single-tenant-workflows-visual-studio-code/portal-find-logic-app-preview-resource.png)
+ ![Screenshot that shows the Azure portal search box with the "logic apps" search text.](./media/create-single-tenant-workflows-visual-studio-code/portal-find-logic-app-resource.png)
-1. On the **Logic App (Preview)** pane, find and select the logic app that you deployed from Visual Studio Code.
+1. On the **Logic App (Standard)** pane, find and select the logic app that you deployed from Visual Studio Code.
- ![Screenshot that shows the Azure portal and the Logic App (Preview) resources deployed in Azure.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-preview-resources-pane.png)
+ ![Screenshot that shows the Azure portal and the Logic App (Standard) resources deployed in Azure.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-resources-pane.png)
The Azure portal opens the individual resource page for the selected logic app.
After you deploy a logic app to the Azure portal from Visual Studio Code, you ca
The **Workflows** pane shows all the workflows in the current logic app. This example shows the workflow that you created in Visual Studio Code.
- ![Screenshot that shows a "Logic App (Preview)" resource page with the "Workflows" pane open and the deployed workflow](./media/create-single-tenant-workflows-visual-studio-code/deployed-logic-app-workflows-pane.png)
+ ![Screenshot that shows a "Logic App (Standard)" resource page with the "Workflows" pane open and the deployed workflow](./media/create-single-tenant-workflows-visual-studio-code/deployed-logic-app-workflows-pane.png)
1. To view a workflow, on the **Workflows** pane, select that workflow.
After you deploy a logic app to the Azure portal from Visual Studio Code, you ca
## Add another workflow in the portal
-Through the Azure portal, you can add blank workflows to a **Logic App (Preview)** resource that you deployed from Visual Studio Code and build those workflows in the Azure portal.
+Through the Azure portal, you can add blank workflows to a **Logic App (Standard)** resource that you deployed from Visual Studio Code and build those workflows in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com), find and select your deployed **Logic App (Preview)** resource.
+1. In the [Azure portal](https://portal.azure.com), find and select your deployed **Logic App (Standard)** resource.
1. On the logic app menu, select **Workflows**. On the **Workflows** pane, select **Add**.
To debug a stateless workflow more easily, you can enable the run history for th
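In a local project, the run history for a stateless workflow is enabled through an app setting that uses the documented `Workflows.<workflow-name>.OperationOptions` key. A sketch, assuming a hypothetical stateless workflow named `Fabrikam-Stateless-Workflow`:

```json
{
  "IsEncrypted": false,
  "Values": {
    "Workflows.Fabrikam-Stateless-Workflow.OperationOptions": "WithStatelessRunHistory"
  }
}
```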
## Enable monitoring view in the Azure portal
-After you deploy a **Logic App (Preview)** resource from Visual Studio Code to Azure, you can review any available run history and details for a workflow in that resource by using the Azure portal and the **Monitor** experience for that workflow. However, you first have to enable the **Monitor** view capability on that logic app resource.
+After you deploy a **Logic App (Standard)** resource from Visual Studio Code to Azure, you can review any available run history and details for a workflow in that resource by using the Azure portal and the **Monitor** experience for that workflow. However, you first have to enable the **Monitor** view capability on that logic app resource.
-1. In the [Azure portal](https://portal.azure.com), find and select the deployed **Logic App (Preview)** resource.
+1. In the [Azure portal](https://portal.azure.com), find and select the deployed **Logic App (Standard)** resource.
1. On that resource's menu, under **API**, select **CORS**.
After you deploy a **Logic App (Preview)** resource from Visual Studio Code to A
1. When you're done, on the **CORS** toolbar, select **Save**.
- ![Screenshot that shows the Azure portal with a deployed Logic App (Preview) resource. On the resource menu, "CORS" is selected with a new entry for "Allowed Origins" set to the wildcard "*" character.](./media/create-single-tenant-workflows-visual-studio-code/enable-run-history-deployed-logic-app.png)
+ ![Screenshot that shows the Azure portal with a deployed Logic App (Standard) resource. On the resource menu, "CORS" is selected with a new entry for "Allowed Origins" set to the wildcard "*" character.](./media/create-single-tenant-workflows-visual-studio-code/enable-run-history-deployed-logic-app.png)
<a name="enable-open-application-insights"></a>
After Application Insights opens, you can review various metrics for your logic
* [Azure Logic Apps Running Anywhere - Monitor with Application Insights - part 1](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-monitor-with-application/ba-p/1877849)
* [Azure Logic Apps Running Anywhere - Monitor with Application Insights - part 2](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-monitor-with-application/ba-p/2003332)
-<a name="deploy-docker"></a>
-
-## Deploy to Docker
-
-You can deploy your logic app to a [Docker container](/visualstudio/docker/tutorials/docker-tutorial#what-is-a-container) as the hosting environment by using the [.NET CLI](/dotnet/core/tools/). With these commands, you can build and publish your logic app's project. You can then build and run your Docker container as the destination for deploying your logic app.
-
-If you're not familiar with Docker, review these topics:
-
-* [What is Docker?](/dotnet/architecture/microservices/container-docker-introduction/docker-defined)
-* [Introduction to Containers and Docker](/dotnet/architecture/microservices/container-docker-introduction/)
-* [Introduction to .NET and Docker](/dotnet/core/docker/introduction)
-* [Docker containers, images, and registries](/dotnet/architecture/microservices/container-docker-introduction/docker-containers-images-registries)
-* [Tutorial: Get started with Docker (Visual Studio Code)](/visualstudio/docker/tutorials/docker-tutorial)
-
-### Requirements
-
-* The Azure Storage account that your logic app uses for deployment
-
-* A Docker file for the workflow that you use when building your Docker container
-
- For example, this sample Docker file deploys a logic app and specifies the connection string that contains the access key for the Azure Storage account that was used for publishing the logic app to the Azure portal. To find this string, see [Get storage account connection string](#find-storage-account-connection-string). For more information, review [Best practices for writing Docker files](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).
-
- > [!IMPORTANT]
- > For production scenarios, make sure that you protect and secure such secrets and sensitive information, for example, by using a key vault.
- > For Docker files specifically, review [Build images with BuildKit](https://docs.docker.com/develop/develop-images/build_enhancements/)
- > and [Manage sensitive data with Docker Secrets](https://docs.docker.com/engine/swarm/secrets/).
-
- ```text
- FROM mcr.microsoft.com/azure-functions/node:3.0
-
- ENV AzureWebJobsStorage <storage-account-connection-string>
- ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
- AzureFunctionsJobHost__Logging__Console__IsEnabled=true \
- FUNCTIONS_V2_COMPATIBILITY_MODE=true
-
- COPY . /home/site/wwwroot
-
- RUN cd /home/site/wwwroot
- ```
-
-<a name="find-storage-account-connection-string"></a>
-
-### Get storage account connection string
-
-Before you can build and run your Docker container image, you need to get the connection string that contains the access key to your storage account. Earlier, you created this storage account either as to use the extension on macOS or Linux, or when you deployed your logic app to the Azure portal.
-
-To find and copy this connection string, follow these steps:
-
-1. In the Azure portal, on the storage account menu, under **Settings**, select **Access keys**.
-
-1. On the **Access keys** pane, find and copy the storage account's connection string, which looks similar to this example:
-
- `DefaultEndpointsProtocol=https;AccountName=fabrikamstorageacct;AccountKey=<access-key>;EndpointSuffix=core.windows.net`
-
- ![Screenshot that shows the Azure portal with storage account access keys and connection string copied.](./media/create-single-tenant-workflows-visual-studio-code/find-storage-account-connection-string.png)
-
- For more information, review [Manage storage account keys](../storage/common/storage-account-keys-manage.md?tabs=azure-portal#view-account-access-keys).
-
-1. Save the connection string somewhere safe so that you can add this string to the Docker file that you use for deployment.
-
-<a name="find-storage-account-master-key"></a>
-
-### Find master key for storage account
-
-When your workflow contains a Request trigger, you need to [get the trigger's callback URL](#get-callback-url-request-trigger) after you build and run your Docker container image. For this task, you also need to specify the master key value for the storage account that you use for deployment.
-
-1. To find this master key, in your project, open the **azure-webjobs-secrets/{deployment-name}/host.json** file.
-
-1. Find the `AzureWebJobsStorage` property, and copy the key value from this section:
-
- ```json
- {
- <...>
- "masterKey": {
- "name": "master",
- "value": "<master-key>",
- "encrypted": false
- },
- <...>
- }
- ```
-
-1. Save this key value somewhere safe for you to use later.
-
-<a name="build-run-docker-container-image"></a>
-
-### Build and run your Docker container image
-
-1. Build your Docker container image by using your Docker file and running this command:
-
- `docker build --tag local/workflowcontainer .`
-
- For more information, see [docker build](https://docs.docker.com/engine/reference/commandline/build/).
-
-1. Run the container locally by using this command:
-
- `docker run -e WEBSITE_HOSTNAME=localhost -p 8080:80 local/workflowcontainer`
-
- For more information, see [docker run](https://docs.docker.com/engine/reference/commandline/run/).
-
-<a name="get-callback-url-request-trigger"></a>
-
-### Get callback URL for Request trigger
-
-For a workflow that uses the Request trigger, get the trigger's callback URL by sending this request:
-
-`POST /runtime/webhooks/workflow/api/management/workflows/{workflow-name}/triggers/{trigger-name}/listCallbackUrl?api-version=2020-05-01-preview&code={master-key}`
-
-The `{trigger-name}` value is the name for the Request trigger that appears in the workflow's JSON definition. The `{master-key}` value is defined in the Azure Storage account that you set for the `AzureWebJobsStorage` property within the file, **azure-webjobs-secrets/{deployment-name}/host.json**. For more information, see [Find storage account master key](#find-storage-account-master-key).
<a name="delete-from-designer"></a>
When you try to open the designer, you get this error, **"Workflow design time c
1. In Visual Studio Code, open the Output window. From the **View** menu, select **Output**.
- 1. From the list in the Output window's title bar, select **Azure Logic Apps (Preview)** so that you can review output from the extension, for example:
+ 1. From the list in the Output window's title bar, select **Azure Logic Apps (Standard)** so that you can review output from the extension, for example:
- ![Screenshot that shows the Output window with "Azure Logic Apps" selected.](./media/create-single-tenant-workflows-visual-studio-code/check-outout-window-azure-logic-apps.png)
+ ![Screenshot that shows the Output window with "Azure Logic Apps" selected.](./media/create-single-tenant-workflows-visual-studio-code/check-output-window-azure-logic-apps.png)
1. Review the output and check whether this error message appears:
When you try to open the designer, you get this error, **"Workflow design time c
### New triggers and actions are missing from the designer picker for previously created workflows
-Azure Logic Apps Preview supports built-in actions for Azure Function Operations, Liquid Operations, and XML Operations, such as **XML Validation** and **Transform XML**. However, for previously created logic apps, these actions might not appear in the designer picker for you to select if Visual Studio Code uses an outdated version of the extension bundle, `Microsoft.Azure.Functions.ExtensionBundle.Workflows`.
+Single-tenant Azure Logic Apps supports built-in actions for Azure Function Operations, Liquid Operations, and XML Operations, such as **XML Validation** and **Transform XML**. However, for previously created logic apps, these actions might not appear in the designer picker for you to select if Visual Studio Code uses an outdated version of the extension bundle, `Microsoft.Azure.Functions.ExtensionBundle.Workflows`.
Also, the **Azure Function Operations** connector and actions don't appear in the designer picker unless you enabled or selected **Use connectors from Azure** when you created your logic app. If you didn't enable the Azure-deployed connectors at app creation time, you can enable them from your project in Visual Studio Code. Open the **workflow.json** shortcut menu, and select **Use Connectors from Azure**.
To fix the outdated bundle, follow these steps to delete the outdated bundle, wh
> [!NOTE]
> This solution applies only to logic apps that you create and deploy using Visual Studio Code with
-> the Azure Logic Apps (Preview) extension, not the logic apps that you created using the Azure portal.
+> the Azure Logic Apps (Standard) extension, not the logic apps that you created using the Azure portal.
> See [Supported triggers and actions are missing from the designer in the Azure portal](create-single-tenant-workflows-azure-portal.md#missing-triggers-actions).

1. Save any work that you don't want to lose, and close Visual Studio Code.
To fix the outdated bundle, follow these steps to delete the outdated bundle, wh
`...\Users\{your-username}\.nuget\packages\microsoft.azure.workflows.webjobs.extension`
-1. Delete the version folder for the earlier package, for example, if you have a folder for version 1.0.0.8-preview, delete that folder.
+1. Delete the version folder for the earlier package.
1. Reopen Visual Studio Code, your project, and the **workflow.json** file in the designer.
When you try to start a debugging session, you get the error, **"Error exists af
## Next steps
-We'd like to hear from you about your experiences with the Azure Logic Apps (Preview) extension!
+We'd like to hear from you about your experiences with the Azure Logic Apps (Standard) extension!
* For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).

* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
logic-apps Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md
- Title: DevOps deployment for single-tenant Azure Logic Apps (preview)
-description: Learn about DevOps deployment for single-tenant Azure Logic Apps (preview).
+ Title: DevOps deployment for single-tenant Azure Logic Apps
+description: Learn about DevOps deployment for single-tenant Azure Logic Apps.
ms.suite: integration
Previously updated : 05/10/2021 Last updated : 05/25/2021
# As a developer, I want to learn about DevOps deployment support for single-tenant Azure Logic Apps.
-# DevOps deployment for single-tenant Azure Logic Apps (preview)
+# DevOps deployment for single-tenant Azure Logic Apps
With the trend towards distributed and native cloud apps, organizations are dealing with more distributed components across more environments. To maintain control and consistency, you can automate your environments and deploy more components faster and more confidently by using DevOps tools and processes.
This article provides an introduction and overview about the current continuous
## Single-tenant versus multi-tenant
-In the original multi-tenant Azure Logic Apps, resource deployment is based on Azure Resource Manager (ARM) templates, which combine and handle resource provisioning for both logic apps and infrastructure. In single-tenant Azure Logic Apps, deployment becomes easier because you can use separate provisioning between apps and infrastructure.
+In the original *multi-tenant* Azure Logic Apps, resource deployment is based on Azure Resource Manager (ARM) templates, which combine and handle resource provisioning for both logic apps and infrastructure. In single-tenant Azure Logic Apps, deployment becomes easier because you can use separate provisioning between apps and infrastructure.
-When you create logic apps using the **Logic App (Preview)** resource type, your workflows are powered by the redesigned Azure Logic Apps (Preview) runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) extensibility and is [hosted as an extension on the Azure Functions runtime](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564). This design provides portability, flexibility, and more performance for your logic apps plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem.
+When you create logic apps using the **Logic App (Standard)** resource type, your workflows are powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is [hosted as an extension on the Azure Functions runtime](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564). This design provides portability, flexibility, and improved performance for your logic apps, plus other capabilities and benefits inherited from the Azure Functions platform and the Azure App Service ecosystem.
For example, you can package the redesigned runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your apps, copy the artifacts to the host environment and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. For example, if your scenario requires containers, you can containerize your logic apps and integrate them into your existing pipelines.
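As one hedged example of such a pipeline, an Azure Pipelines definition might zip the project and deploy it with the Azure Functions deployment task, since the Logic App (Standard) resource runs on the Functions runtime. The service connection and app name below are placeholders, and the task choice is one option among several, not the only supported approach.

```yaml
trigger:
  - main

steps:
  # Package the logic app project into a deployable zip artifact.
  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
      includeRootFolder: false
      archiveType: 'zip'
      archiveFile: '$(Build.ArtifactStagingDirectory)/logicapp.zip'

  # Deploy the zip to the deployed Logic App (Standard) resource.
  - task: AzureFunctionApp@1
    inputs:
      azureSubscription: '<service-connection>'   # placeholder
      appType: 'functionApp'
      appName: '<logic-app-name>'                 # placeholder
      package: '$(Build.ArtifactStagingDirectory)/logicapp.zip'
```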
By using standard build and deploy options, you can focus on app development separately from infrastructure deployment.
## DevOps deployment capabilities
-single-tenant Azure Logic Apps inherits many capabilities and benefits from the Azure Functions platform and Azure App Service ecosystem. These updates include a whole new deployment model and more ways to use DevOps for your logic app workflows.
+Single-tenant Azure Logic Apps inherits many capabilities and benefits from the Azure Functions platform and Azure App Service ecosystem. These updates include a whole new deployment model and more ways to use DevOps for your logic app workflows.
<a name="local-development-testing"></a>

### Local development and testing
-When you use Visual Studio Code with the Azure Logic Apps (Preview) extension, you can locally develop, build, and run **Logic App (Preview)** workflows in your development environment without having to deploy to Azure. You can also run your workflows anywhere that Azure Functions can run. For example, if your scenario requires containers, you can containerize your logic apps and deploy as Docker containers.
+When you use Visual Studio Code with the Azure Logic Apps (Standard) extension, you can locally develop, build, and run single-tenant based logic app workflows in your development environment without having to deploy to Azure. You can also run your workflows anywhere that Azure Functions can run. For example, if your scenario requires containers, you can containerize your logic apps and deploy as containers.
This capability is a major improvement and provides a substantial benefit compared to the multi-tenant model, which requires you to develop against an existing and running resource in Azure.
The single-tenant model gives you the capability to separate the concerns between your logic app and the underlying infrastructure.
### Resource structure
-Single-tenant Azure Logic Apps introduces a new resource structure where your logic app can host multiple workflows. This structure differs from the multi-tenant version where you have a 1:1 mapping between logic app resource and workflow. With this 1-to-many relationship, workflows in the same logic app can share and reuse other resources. Plus, these workflows also benefit from improved performance due to shared tenancy and proximity to each other.
+Single-tenant Azure Logic Apps introduces a new resource structure where your logic app can host multiple workflows. This structure differs from the multi-tenant model where you have a 1:1 mapping between logic app resource and workflow. With this 1-to-many relationship, workflows in the same logic app can share and reuse other resources. Plus, these workflows also benefit from improved performance due to shared tenancy and proximity to each other.
This resource structure looks and works similarly to Azure Functions where a function app can host many functions. If you're working in a logic app project within Visual Studio Code, your project folder and file structure looks like the following example:
MyLogicAppProjectName
| connections.json
| host.json
| local.settings.json
-| Dockerfile
```

At your project's root level, you can find the following files and folders, along with other items, depending on whether your project is extension bundle-based (Node.js), which is the default, or NuGet package-based (.NET).
| .funcignore | File | Review [Work with Azure Functions Core Tools](../azure-functions/functions-run-local.md) |
| connections.json | File | Contains the metadata, endpoints, and keys for any managed connections and Azure functions that your workflows use. <p><p>**Important**: To use different connections and functions for each environment, make sure that you parameterize this **connections.json** file and update the endpoints. |
| host.json | File | Contains runtime-specific configuration settings and values, for example, the default limits for the single-tenant Azure Logic Apps platform, logic apps, workflows, triggers, and actions. |
-| local.settings.json | File | Contains the local environment variables that provides the `appSettings` values to use for your logic app when running locally. |
-| Dockerfile | Folder | Contains one or more Dockerfiles to use for deploying the logic app as a container. |
+| local.settings.json | File | Contains the local environment variables that provide the `appSettings` values to use for your logic app when running locally. |
||||

For example, to create custom built-in operations, you must have a NuGet-based project, not an extension bundle-based project. A NuGet-based project includes a .bin folder that contains packages and other library files that your app needs, while a bundle-based project doesn't include this folder and files. For more information about converting your project to use NuGet, review [Enable built-in connector authoring](create-stateful-stateless-workflows-visual-studio-code.md#enable-built-in-connector-authoring).
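As an illustration of how **local.settings.json** supplies `appSettings` values during local runs, a minimal sketch might look like the following. This is only a sketch: `serviceBus_connectionString` is a made-up setting name, and your project may need other keys.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "serviceBus_connectionString": "<your-connection-string>"
  }
}
```

When you deploy, these values move into your app's application settings, which is why parameterizing connection endpoints per environment matters.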
For more information and best practices about how to best organize workflows in
### Container deployment

Single-tenant Azure Logic Apps supports deployment to containers, which means that you can containerize your logic app workflows and run them anywhere that containers can run. After you containerize your app, deployment works mostly the same as any other container you deploy and manage.
-
-For examples that include Azure DevOps, review [CI/CD for Containers](https://azure.microsoft.com/solutions/architecture/cicd-for-containers/). For more information about containerizing logic apps and deploying to Docker, review [Deploy your logic app to a Docker container from Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#deploy-to-docker).
+
+For examples that include Azure DevOps, review [CI/CD for Containers](https://azure.microsoft.com/solutions/architecture/cicd-for-containers/).
<a name="app-settings-parameters"></a>
In single-tenant Azure Logic Apps, you can call and reference your environment v
## Managed connectors and built-in operations
-The Azure Logic Apps ecosystem provides [hundreds of Microsoft-managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) and built-in operations as part of a constantly growing collection that you can use in the single-tenant Azure Logic Apps service. The way that Microsoft maintains these connectors and built-in operations stays mostly the same in single-tenant Azure Logic Apps.
+The Azure Logic Apps ecosystem provides [hundreds of Microsoft-managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) and built-in operations as part of a constantly growing collection that you can use in single-tenant Azure Logic Apps. The way that Microsoft maintains these connectors and built-in operations stays mostly the same in single-tenant Azure Logic Apps.
The most significant improvement is that the single-tenant service makes more popular managed connectors also available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and others. Meanwhile, the managed connector versions are still available and continue to work.
In Visual Studio Code, when you use the designer to develop or make changes to y
### Service provider connections
-When you use a built-in operation for a service such as Azure Service Bus or Azure Event Hubs in the single-tenant Azure Logic Apps service, you create a service provider connection that runs in the same process as your workflow. This connection infrastructure is hosted and managed as part of your logic app, and your app settings store the connection strings for any service provider-based built-in operation that your workflows use.
+When you use a built-in operation for a service such as Azure Service Bus or Azure Event Hubs in single-tenant Azure Logic Apps, you create a service provider connection that runs in the same process as your workflow. This connection infrastructure is hosted and managed as part of your logic app, and your app settings store the connection strings for any service provider-based built-in operation that your workflows use.
In your logic app project, each workflow has a workflow.json file that contains the workflow's underlying JSON definition. This workflow definition then references the necessary connection strings in your project's connections.json file.
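To illustrate the relationship described above, a service provider connection entry in **connections.json** might reference an app setting roughly like the following sketch. The connection name, display name, and app-setting name are placeholders, not required values.

```json
{
  "serviceProviderConnections": {
    "serviceBus": {
      "displayName": "My Service Bus connection",
      "serviceProvider": {
        "id": "/serviceProviders/serviceBus"
      },
      "parameterValues": {
        "connectionString": "@appsetting('serviceBus_connectionString')"
      }
    }
  }
}
```

A workflow's workflow.json definition then refers to this connection by its key (`serviceBus` here), while the actual connection string stays in your app settings.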
To call functions created and hosted in Azure Functions, you use the built-in Az
## Authentication
-In the single-tenant Azure Logic Apps service, the hosting model for logic app workflows is a single tenant where your workloads benefit from more isolation than in the multi-tenant version. Plus, the service runtime is portable, which means you can run your workflows anywhere that Azure Functions can run. Still, this design requires a way for logic apps to authenticate their identity so they can access the managed connector ecosystem in Azure. Your apps also need the correct permissions to run operations when using managed connections.
+In single-tenant Azure Logic Apps, the hosting model for logic app workflows is a single tenant where your workloads benefit from more isolation than in the multi-tenant model. Plus, the single-tenant Azure Logic Apps runtime is portable, which means you can run your workflows anywhere that Azure Functions can run. Still, this design requires a way for logic apps to authenticate their identity so they can access the managed connector ecosystem in Azure. Your apps also need the correct permissions to run operations when using managed connections.
By default, each single-tenant based logic app has an automatically enabled system-assigned managed identity. This identity differs from the authentication credentials or connection string used for creating a connection. At runtime, your logic app uses this identity to authenticate its connections through Azure access policies. If you disable this identity, connections won't work at runtime.
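As a sketch of how a managed API connection can declare managed identity authentication in **connections.json** (the resource IDs below are placeholders you'd replace with your own subscription, resource group, and region values):

```json
{
  "managedApiConnections": {
    "office365": {
      "api": {
        "id": "/subscriptions/<subscription-ID>/providers/Microsoft.Web/locations/<region>/managedApis/office365"
      },
      "connection": {
        "id": "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Web/connections/office365"
      },
      "authentication": {
        "type": "ManagedServiceIdentity"
      }
    }
  }
}
```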
For logic apps that run in your local development environment using Visual Studi
## Next steps
-* [Set up DevOps deployment for single-tenant Azure Logic Apps (Preview)](set-up-devops-deployment-single-tenant-azure-logic-apps.md)
+* [Set up DevOps deployment for single-tenant Azure Logic Apps](set-up-devops-deployment-single-tenant-azure-logic-apps.md)
-We'd like to hear about your experiences with the preview logic app resource type and preview single-tenant model!
+We'd like to hear about your experiences with the new logic app resource type and single-tenant model!
- For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues). - For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/logicappsdevops).
logic-apps Estimate Storage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/estimate-storage-costs.md
Title: Estimate storage costs for single-tenant Azure Logic Apps
-description: Estimate storage costs for your workflows using the Azure Logic Apps Storage Calculator and Cost Analysis API.
+description: Estimate storage costs for your workflows using the Logic Apps Storage Calculator.
ms.suite: integration
Last updated 05/13/2021
# Estimate storage costs for workflows in single-tenant Azure Logic Apps
-Azure Logic Apps uses [Azure Storage](/azure/storage/) for any storage operations. In traditional *multi-tenant* Azure Logic Apps, any storage usage and costs are attached to the logic app. Now, in *single-tenant* Azure Logic Apps (preview), you can use your own storage account. These storage costs are listed separately in your Azure billing invoice. This capability gives you more flexibility and control over your logic app data.
+Azure Logic Apps uses [Azure Storage](/azure/storage/) for any storage operations. In traditional *multi-tenant* Azure Logic Apps, any storage usage and costs are attached to the logic app. Now, in *single-tenant* Azure Logic Apps, you can use your own storage account. These storage costs are listed separately in your Azure billing invoice. This capability gives you more flexibility and control over your logic app data.
> [!NOTE]
-> This article applies to workflows in the single-tenant Azure Logic Apps environment. These workflows exist in the same logic app and in a single tenant that share the same storage. For more information, see [Single-tenant (preview) versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+> This article applies to workflows in the single-tenant Azure Logic Apps environment. These workflows exist in the same logic app and in a single tenant that share the same storage. For more information, see [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
-Storage costs change based on your workflows' content. Different triggers, actions, and payloads result in different storage operations and needs. This article describes how to estimate your storage costs when you're using your own Azure Storage account with **single-tenant** logic apps. First, you can [estimate the number of storage operations you'll perform](#estimate-storage-needs) using the Logic Apps storage calculator. Then, you can [estimate your possible storage costs](#estimate-storage-costs) using these numbers in the Azure pricing calculator.
+Storage costs change based on your workflows' content. Different triggers, actions, and payloads result in different storage operations and needs. This article describes how to estimate your storage costs when you're using your own Azure Storage account with single-tenant based logic apps. First, you can [estimate the number of storage operations you'll perform](#estimate-storage-needs) using the Logic Apps storage calculator. Then, you can [estimate your possible storage costs](#estimate-storage-costs) using these numbers in the Azure pricing calculator.
## Prerequisites

* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
-* A single-tenant Logic Apps workflow. You can create a workflow [using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [using Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md). If you don't have a workflow yet, you can use the sample small, medium, and large workflows in the storage calculator.
+
+* A single-tenant based logic app workflow. You can create a workflow [using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [using Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md). If you don't have a workflow yet, you can use the sample small, medium, and large workflows in the storage calculator.
+ * An Azure storage account that you want to use with your workflow. If you don't have a storage account, [create a storage account](../storage/common/storage-account-create.md)
-## Estimate storage needs
+## Get your workflow's JSON code
-Before you can estimate your storage needs, get your workflow's JSON code.
+If you have a workflow to estimate, get the JSON code for your workflow:
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Go to the **Logic apps** service, and select your workflow.
1. In your logic app's menu, under **Development tools**, select **Logic app code view**.
1. Copy the workflow's JSON code.
-Next, use the Logic Apps storage calculator:
+## Estimate storage needs
1. Go to the [Logic Apps storage calculator](https://logicapps.azure.com/calculator).
- :::image type="content" source="./media/estimate-storage-costs/storage-calculator.png" alt-text="Screenshot of Logic Apps storage calculator, showing input interface." lightbox="./media/estimate-storage-costs/storage-calculator.png":::
-1. Enter, upload, or select a single-tenant logic app workflow's JSON code.
- * If you copied code in the previous section, paste it into the **Paste or upload workflow.json** box.
- * If you have your **workflow.json** file saved locally, choose **Browse Files** under **Select workflow**. Choose your file, then select **Open**.
- * If you don't have a workflow yet, choose one of the sample workflows under **Select workflow**.
+
+ :::image type="content" source="./media/estimate-storage-costs/storage-calculator.png" alt-text="Screenshot of Logic Apps storage calculator, showing input interface." lightbox="./media/estimate-storage-costs/storage-calculator.png":::
+
+1. Enter, upload, or select the JSON code for a single-tenant based logic app workflow.
+
+ * If you copied code in the previous section, paste it into the **Paste or upload workflow.json** box.
+ * If you have your **workflow.json** file saved locally, choose **Browse Files** under **Select workflow**. Choose your file, then select **Open**.
+ * If you don't have a workflow yet, choose one of the sample workflows under **Select workflow**.
+ 1. Review the options under **Advanced Options**. These settings depend on your workflow type and may include:
- * An option to enter the number of times your loops run.
- * An option to select all actions with payloads over 32 KB.
+
+ * An option to enter the number of times your loops run.
+ * An option to select all actions with payloads over 32 KB.
1. For **Monthly runs**, enter the number of times that you run your workflow each month.
1. Select **Calculate** and wait for the calculation to run.
1. Under **Storage Operation Breakdown and Calculation Steps**, review the **Operation Counts** estimates.
-
+ You can see estimated operation counts by run and by month in the two tables. The following operations are shown:

* **Blob (read)**, for Azure Blob Storage read operations.
* **Queue**, for Azure Queues Queue Class 2 operations.
* **Tables**, for Azure Table Storage operations.
- Each operation has a minimum, maximum and "best guess" count number. Choose the most relevant number to use for [estimating your storage operation costs](#estimate-storage-costs) based on your individual scenario. Typically, it's recommended to use the "best guess" count for accuracy. However, you might also use the maximum count to make sure your cost estimate has a buffer.
+ Each operation has a minimum, maximum, and "best guess" count number. Choose the most relevant number to use for [estimating your storage operation costs](#estimate-storage-costs) based on your individual scenario. Typically, it's recommended to use the "best guess" count for accuracy. However, you might also use the maximum count to make sure your cost estimate has a buffer.
:::image type="content" source="./media/estimate-storage-costs/storage-calculator-results.png" alt-text="Screenshot of Logic Apps storage calculator, showing output with estimated operations." lightbox="./media/estimate-storage-costs/storage-calculator-results.png":::

## Estimate storage costs
-After you've [calculated your Logic Apps storage needs](#estimate-storage-needs), you can estimate your possible monthly storage costs. You can estimate prices for the following storage operation types:
+After you've [calculated your logic app workflow's storage needs](#estimate-storage-needs), you can estimate your possible monthly storage costs. You can estimate prices for the following storage operation types:
* [Blob storage read and write operations](#estimate-blob-storage-operations-costs)
* [Queue storage operations](#estimate-queue-operations-costs)
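To sketch the arithmetic, a rough monthly estimate multiplies each operation count by a per-10,000-operation price. The counts and prices below are made-up placeholders, not real Azure rates — take your counts from the storage calculator and current prices from the Azure pricing calculator.

```javascript
// Rough monthly storage cost estimate for a single-tenant logic app.
// All numbers are placeholders for illustration only.
const monthlyOperations = {
  blobRead: 500000,   // estimated Blob read operations per month
  blobWrite: 200000,  // estimated Blob write operations per month
  queue: 1000000,     // estimated Queue operations per month
  table: 800000       // estimated Table operations per month
};

const pricePer10kOps = { // placeholder USD rates per 10,000 operations
  blobRead: 0.0004,
  blobWrite: 0.005,
  queue: 0.0004,
  table: 0.00036
};

let totalCost = 0;
for (const [operation, count] of Object.entries(monthlyOperations)) {
  // price is quoted per 10,000 operations, so scale the count down first
  totalCost += (count / 10000) * pricePer10kOps[operation];
}
console.log(totalCost.toFixed(2));
```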
logic-apps Logic Apps Add Run Inline Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-add-run-inline-code.md
ms.suite: integration
Previously updated : 12/07/2020 Last updated : 05/25/2021
# Add and run code snippets by using inline code in Azure Logic Apps
-When you want to run a piece of code inside your logic app, you can add the built-in Inline Code action as a step in your logic app's workflow. This action works best when you want to run code that fits this scenario:
+When you want to run a piece of code inside your logic app workflow, you can add the built-in Inline Code action as a step in your workflow. This action works best when you want to run code that fits this scenario:
-* Runs in JavaScript. More languages coming soon.
+* Runs in JavaScript. More languages are in development.
* Finishes running in five seconds or fewer.
* Doesn't require working with the [**Variables** actions](../logic-apps/logic-apps-create-variables-store-values.md), which are not yet supported.
-* Uses Node.js version 8.11.1. For more information, see
-[Standard built-in objects](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects).
+* Uses Node.js version 8.11.1 for [multi-tenant based logic apps](logic-apps-overview.md) or [Node.js versions 10.x.x, 11.x.x, or 12.x.x](https://nodejs.org/en/download/releases/) for [single-tenant based logic apps](single-tenant-overview-compare.md).
+
+ For more information, see [Standard built-in objects](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects).
> [!NOTE]
> The `require()` function isn't supported by the Inline Code action for running JavaScript.
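As a sketch of the kind of snippet that fits these constraints, the following extracts email addresses from the triggering email's text. The `workflowContext` stub below is a hypothetical stand-in for the object that the Inline Code action provides at runtime, added only so the snippet runs standalone in Node.js; the email text and the regular expression are illustrative.

```javascript
// Hypothetical stand-in for the workflowContext object that the
// Inline Code action exposes at runtime, so this snippet can run
// standalone in Node.js for testing.
const workflowContext = {
  trigger: {
    outputs: {
      body: {
        Body: "Contact alice@example.com or bob@contoso.com for details"
      }
    }
  }
};

// Snippet body: pull email addresses out of the triggering email's text.
const text = workflowContext.trigger.outputs.body.Body;
const matches = text.match(/[\w.+-]+@[\w-]+\.[\w.-]+/g) || [];
console.log(matches);
```

Inside the designer, only the snippet body (without the stub) goes into the action, and the last expression's value becomes the action's output.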
In this article, the example logic app triggers when a new email arrives in a wo
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
-
-* The logic app where you want to add your code snippet, including a trigger. If you don't have a logic app, see [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-
- The example in this topic uses the Office 365 Outlook trigger that's named **When a new email arrives**.
-
-* An [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) that's linked to your logic app.
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
- * Make sure that you use an integration account that's appropriate for your use case or scenario.
+* The logic app workflow where you want to add your code snippet, including a trigger. The example in this topic uses the Office 365 Outlook trigger that's named **When a new email arrives**.
- For example, [Free-tier](../logic-apps/logic-apps-pricing.md#integration-accounts) integration accounts are meant only for exploratory scenarios and workloads, not production scenarios, are limited in usage and throughput, and aren't supported by a service-level agreement (SLA). Other tiers incur costs, but include SLA support, offer more throughput, and have higher limits. Learn more about integration account [tiers](../logic-apps/logic-apps-pricing.md#integration-accounts), [pricing](https://azure.microsoft.com/pricing/details/logic-apps/), and [limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits).
+ If you don't have a logic app, review the following documentation:
- * If you don't want to use an integration account, you can try using [Azure Logic Apps Preview](single-tenant-overview-compare.md), and create a logic app from the **Logic App (Preview)** resource type.
+ * Multi-tenant: [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+ * Single-tenant: [Create single-tenant based logic app workflows](create-single-tenant-workflows-azure-portal.md)
- In Azure Logic Apps Preview, **Inline Code** is now named **Inline Code Operations** along with these other differences:
+* Based on whether your logic app is multi-tenant or single-tenant, review the following information.
- * **Execute JavaScript Code** is now named **Run in-line JavaScript**.
+ * Multi-tenant: Requires Node.js version 8.11.1. You also need an empty [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) that's linked to your logic app. Make sure that you use an integration account that's appropriate for your use case or scenario.
- * If you use macOS or Linux, the Inline Code Operations actions are currently unavailable when you use the Azure Logic Apps (Preview) extension in Visual Studio Code.
+ For example, [Free-tier](../logic-apps/logic-apps-pricing.md#integration-accounts) integration accounts are meant only for exploratory scenarios and workloads, not production scenarios, are limited in usage and throughput, and aren't supported by a service-level agreement (SLA).
- * Inline Code Operations actions have [updated limits](logic-apps-limits-and-config.md#inline-code-action-limits).
+ Other integration account tiers incur costs, but include SLA support, offer more throughput, and have higher limits. Learn more about integration account [tiers](../logic-apps/logic-apps-pricing.md#integration-accounts), [pricing](https://azure.microsoft.com/pricing/details/logic-apps/), and [limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits).
- You can start from either option here:
-
- * Create the logic app from the **Logic App (Preview)** resource type [by using the Azure portal](create-single-tenant-workflows-azure-portal.md).
-
- * Create a project for the logic app [by using Visual Studio Code and the Azure Logic Apps (Preview) extension](create-single-tenant-workflows-visual-studio-code.md)
+ * Single-tenant: Requires [Node.js versions 10.x.x, 11.x.x, or 12.x.x](https://nodejs.org/en/download/releases/). You don't need an integration account. However, the Inline Code action is renamed **Inline Code Operations** and has [updated limits](logic-apps-limits-and-config.md).
## Add inline code
-1. If you haven't already, in the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
-
-1. In the designer, choose where to add the Inline Code action in your logic app's workflow.
+1. If you haven't already, in the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
- * To add the action at the end of your workflow, select **New step**.
+1. In your workflow, choose where to add the Inline Code action, either as a new step at the end of your workflow or between steps.
- * To add the action between steps, move your mouse pointer over the arrow that connects those steps. Select the plus sign (**+**) that appears, and select **Add an action**.
+ To add the action between steps, move your mouse pointer over the arrow that connects those steps. Select the plus sign (**+**) that appears, and select **Add an action**.
- This example adds the Inline Code action under the Office 365 Outlook trigger.
+ This example adds the action under the Office 365 Outlook trigger.
![Add the new step under the trigger](./media/logic-apps-add-run-inline-code/add-new-step.png)
-1. Under **Choose an action**, in the search box, enter `inline code`. From the actions list, select the action named **Execute JavaScript Code**.
+1. In the action search box, enter `inline code`. From the actions list, select the action named **Execute JavaScript Code**.
![Select the "Execute JavaScript Code" action](./media/logic-apps-add-run-inline-code/select-inline-code-action.png)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
Title: Limits and configuration
+ Title: Limits and configuration reference guide
description: Reference guide to limits and configuration information for Azure Logic Apps ms.suite: integration-- Previously updated : 05/05/2021++ Last updated : 05/25/2021
-# Limits and configuration information for Azure Logic Apps
+# Limits and configuration reference for Azure Logic Apps
> For Power Automate, see [Limits and configuration in Power Automate](/flow/limits-and-config).
This article describes the limits and configuration information for Azure Logic
> If your scenarios require different limits, [contact the Logic Apps team](mailto://logicappspm@microsoft.com)
> to discuss your requirements.
-The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the new **Logic App (Preview)** resource type. You'll also learn how the *single-tenant* (preview) environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
+The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the **Logic App (Standard)** resource type. You'll also learn how the *single-tenant* environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
[!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
The following tables list the values for a single workflow definition:
The following table lists the values for a single workflow run:
-| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
-||--|-||-|
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
| Run history retention in storage | 90 days | 90 days | 366 days | The amount of time to keep workflow run history in storage after a run starts. If the run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage. <p>Whether the run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>For more information, review [Change duration and run history retention in storage](#change-retention). <p><p>**Tip**: For scenarios that require different limits, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) to discuss your requirements. |
| Run duration | 90 days | - Stateful workflow: 90 days <p><p>- Stateless workflow: 5 min | 366 days | The amount of time that a workflow can continue running before forcing a timeout. <p>The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <p>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <p><p>For more information, review [Change run duration and history retention in storage](#change-duration). <p><p>**Tip**: For scenarios that require different limits, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) to discuss your requirements. |
| Recurrence interval | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days ||
In the designer, the same setting controls the maximum number of days that a wor
* For the multi-tenant service, the 90-day default limit is the same as the maximum limit. You can only decrease this value.
-* For the single-tenant service (preview), you can decrease or increase the 90-day default limit. For more information, see [Create workflows for single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
+* For the single-tenant service, you can decrease or increase the 90-day default limit. For more information, see [Create workflows for single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
* For an integration service environment, you can decrease or increase the 90-day default limit.
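The retention rule described above — a run is kept while its age, measured from the run's start time, is within the *currently* configured limit — can be sketched with a hypothetical helper (illustrative only, not the service's actual code):

```python
from datetime import datetime, timedelta

def is_run_retained(run_start: datetime, retention_days: int, now: datetime) -> bool:
    """A run stays in storage while its age, measured from the run's
    start time, is within the currently configured retention limit."""
    return now - run_start <= timedelta(days=retention_days)

# The current limit always wins: lowering retention from 90 to 30 days
# immediately drops runs older than 30 days, whatever limit applied before.
start = datetime(2021, 1, 1)
now = datetime(2021, 3, 1)              # run is 59 days old
print(is_run_retained(start, 90, now))  # → True under a 90-day limit
print(is_run_retained(start, 30, now))  # → False under a 30-day limit
```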
The following table lists the values for a single workflow run:
The following table lists the values for a **For each** loop:
-| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
-||--|-||-|
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
| Array items | 100,000 items | - Stateful workflow: 100,000 items <p><p>- Stateless workflow: 100 items | 100,000 items | The number of array items that a **For each** loop can process. <p><p>To filter larger arrays, you can use the [query action](logic-apps-perform-data-operations.md#filter-array-action). | | Concurrent iterations | Concurrency off: 20 <p><p>Concurrency on: <p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <p><p>Concurrency on: <p><p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <p><p>Concurrency on: <p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | The number of **For each** loop iterations that can run at the same time, or in parallel. <p><p>To change this value in the multi-tenant service, see [Change **For each** concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run **For each** loops sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-for-each). | ||||||
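Conceptually, the concurrent-iterations setting caps how many **For each** iterations are in flight at once. A minimal sketch using a semaphore (illustrative only — not the Logic Apps runtime, and `run_for_each` is a hypothetical helper):

```python
import threading

def run_for_each(items, action, max_concurrency=20):
    """Run `action` over `items`, allowing at most `max_concurrency`
    iterations in flight at once (mirrors the For each concurrency limit)."""
    gate = threading.Semaphore(max_concurrency)
    results = [None] * len(items)

    def worker(i, item):
        with gate:                     # blocks when the limit is reached
            results[i] = action(item)

    threads = [threading.Thread(target=worker, args=(i, x))
               for i, x in enumerate(items)]
    for t in threads: t.start()
    for t in threads: t.join()
    return results

print(run_for_each([1, 2, 3, 4], lambda x: x * 2, max_concurrency=2))
# → [2, 4, 6, 8]
```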
The following table lists the values for a **For each** loop:
The following table lists the values for an **Until** loop:
-| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
-||--|-||-|
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
| Iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | Stateful workflow: <p><p>- Default: 60 <br>- Min: 1 <br>- Max: 5,000 <p><p>Stateless workflow: <p><p>- Default: 60 <br>- Min: 1 <br>- Max: 100 | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | The number of cycles that an **Until** loop can have during a workflow run. <p><p>To change this value, in the **Until** loop shape, select **Change limits**, and specify the value for the **Count** property. | | Timeout | Default: PT1H (1 hour) | Stateful workflow: PT1H (1 hour) <p><p>Stateless workflow: PT5M (5 min) | Default: PT1H (1 hour) | The amount of time that the **Until** loop can run before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <p><p>To change this value, in the **Until** loop shape, select **Change limits**, and specify the value for the **Timeout** property. | ||||||
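The **Until** loop's two exit conditions — the cycle count and the ISO 8601 timeout evaluated per cycle — can be sketched as follows (a hypothetical simulation, not the runtime; only a small ISO 8601 subset is parsed):

```python
from datetime import datetime, timedelta

def parse_simple_duration(value: str) -> timedelta:
    """Parse the small ISO 8601 subset used here, e.g. 'PT1H' or 'PT5M'."""
    unit = {"H": "hours", "M": "minutes", "S": "seconds"}[value[-1]]
    return timedelta(**{unit: int(value[2:-1])})

def run_until(condition, body, count=60, timeout="PT1H"):
    """Repeat `body` until `condition()` is true, the cycle count is
    exhausted, or the timeout elapses. As described above, the timeout
    is checked per cycle: a running cycle is never cut short, but the
    next one doesn't start once the deadline has passed."""
    deadline = datetime.utcnow() + parse_simple_duration(timeout)
    for _ in range(count):
        if condition() or datetime.utcnow() >= deadline:
            break
        body()

state = {"n": 0}
run_until(lambda: state["n"] >= 5, lambda: state.update(n=state["n"] + 1))
print(state["n"])  # → 5
```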
The following table lists the values for an **Until** loop:
### Concurrency and debatching
-| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
-||--|-||-|
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
| Trigger - concurrent runs | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <p><p>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <p><p>To change this value in the multi-tenant service, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). | | Maximum waiting runs | Concurrency off: <p><p>- Min: 1 run <p>- Max: 50 runs <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 100 runs | Concurrency off: <p><p>- Min: 1 run <p>- Max: 50 runs <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 100 runs | Concurrency off: <p><p>- Min: 1 run <p>- Max: 50 runs <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. <p><p>To change this value in the multi-tenant service, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). 
| | **SplitOn** items | Concurrency off: 100,000 items <p><p>Concurrency on: 100 items | Concurrency off: 100,000 items <p><p>Concurrency on: 100 items | Concurrency off: 100,000 items <p><p>Concurrency on: 100 items | For triggers that return an array, you can specify an expression that uses a **SplitOn** property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a **For each** loop. This expression references the array to use for creating and running a workflow instance for each array item. <p><p>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items. |
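The **SplitOn** behavior above — one workflow instance per array item, with the item cap dropping to 100 when concurrency is on — can be sketched like this (a hypothetical model, not the trigger runtime):

```python
def debatch(trigger_output, split_on_path, concurrency_on=False):
    """Split a trigger's array payload into one workflow run per item,
    applying the SplitOn item cap (100,000 off / 100 with concurrency on)."""
    items = trigger_output
    for key in split_on_path.split("."):   # e.g. "body.value"
        items = items[key]
    limit = 100 if concurrency_on else 100_000
    if len(items) > limit:
        raise ValueError(f"SplitOn limit exceeded: {len(items)} > {limit}")
    return [{"triggerBody": item} for item in items]

payload = {"body": {"value": [{"id": 1}, {"id": 2}]}}
runs = debatch(payload, "body.value")
print(len(runs))  # → 2
```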
The following table lists the values for an **Until** loop:
The following table lists the values for a single workflow definition:
-### Multi-tenant & single-tenant (preview)
+### Multi-tenant & single-tenant
| Name | Limit | Notes | | - | -- | -- |
Azure Logic Apps supports write operations, including inserts and updates, throu
The following table lists the values for a single workflow definition:
-| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
-||--|-||-|
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
| Variables per workflow | 250 variables | 250 variables | 250 variables || | Variable - Maximum content size | 104,857,600 characters | Stateful workflow: 104,857,600 characters <p><p>Stateless workflow: 1,024 characters | 104,857,600 characters || | Variable (Array type) - Maximum number of array items | 100,000 items | 100,000 items | Premium SKU: 100,000 items <p><p>Developer SKU: 5,000 items ||
The following tables list the values for a single inbound or outbound call:
By default, the HTTP action and APIConnection actions follow the [standard asynchronous operation pattern](/architecture/patterns/async-request-reply), while the Response action follows the *synchronous operation pattern*. Some managed connector operations make asynchronous calls or listen for webhook requests, so the timeout for these operations might be longer than the following limits. For more information, review [each connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors) and also the [Workflow triggers and actions](../logic-apps/logic-apps-workflow-actions-triggers.md#http-action) documentation. > [!NOTE]
-> For the preview logic app type in the single-tenant model, stateless workflows can only run *synchronously*.
+> For the **Logic App (Standard)** resource type in the single-tenant model, stateless workflows can only run *synchronously*.
-| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
-||--|-||-|
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
| Outbound request | 120 sec <br>(2 min) | 230 sec <br>(3.9 min) | 240 sec <br>(4 min) | Examples of outbound requests include calls made by the HTTP trigger or action. <p><p>**Tip**: For longer running operations, use an [asynchronous polling pattern](../logic-apps/logic-apps-create-api-app.md#async-pattern) or an ["Until" loop](../logic-apps/logic-apps-workflow-actions-triggers.md#until-action). To work around timeout limits when you call another workflow that has a [callable endpoint](logic-apps-http-endpoint.md), you can use the built-in Azure Logic Apps action instead, which you can find in the designer's operation picker under **Built-in**. | | Inbound request | 120 sec <br>(2 min) | 230 sec <br>(3.9 min) | 240 sec <br>(4 min) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. <p><p>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another nested workflow. For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). | ||||||
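The asynchronous polling pattern recommended above for long-running operations — return quickly, then poll a status location instead of holding one HTTP call open past the request timeout — can be sketched as follows; the stub status sequence stands in for a real long-running endpoint:

```python
import time

def poll_until_complete(start_operation, get_status, interval=0, max_polls=50):
    """Async request-reply sketch: kick off a long-running operation,
    then poll its status until it reports success, instead of holding
    a single request open past the outbound timeout limit."""
    location = start_operation()           # e.g. a 202 response's Location
    for _ in range(max_polls):
        status = get_status(location)
        if status == "Succeeded":
            return status
        time.sleep(interval)
    raise TimeoutError("operation did not finish in time")

# Stub endpoint: reports 'Running' twice, then 'Succeeded'.
responses = iter(["Running", "Running", "Succeeded"])
result = poll_until_complete(lambda: "/operations/123",
                             lambda loc: next(responses))
print(result)  # → Succeeded
```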
By default, the HTTP action and APIConnection actions follow the [standard async
### Messages
-| Name | Chunking enabled | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+| Name | Chunking enabled | Multi-tenant | Single-tenant | Integration service environment | Notes |
|||--|-||-| | Content download - Maximum number of requests | Yes | 1,000 requests | 1,000 requests | 1,000 requests || | Message size | No | 100 MB | 100 MB | 200 MB | To work around this limit, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). However, some connectors and APIs don't support chunking or even the default limit. <p><p>- Connectors such as AS2, X12, and EDIFACT have their own [B2B message limits](#b2b-protocol-limits). <p>- ISE connectors use the ISE limit, not the non-ISE connector limits. |
The following table lists the values for a single workflow definition:
The following table lists the values for a single workflow definition:
-| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
-||--|-||-|
-| Maximum number of code characters | 1,024 characters | 100,000 characters | 1,024 characters | To use the higher limit, create a **Logic App (Preview)** resource, which runs in single-tenant (preview) Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-single-tenant-workflows-visual-studio-code.md). |
-| Maximum duration for running code | 5 sec | 15 sec | 1,024 characters | To use the higher limit, create a **Logic App (Preview)** resource, which runs in single-tenant (preview) Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-single-tenant-workflows-visual-studio-code.md). |
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
+| Maximum number of code characters | 1,024 characters | 100,000 characters | 1,024 characters | To use the higher limit, create a **Logic App (Standard)** resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). |
+| Maximum duration for running code | 5 sec | 15 sec | 5 sec | To use the higher limit, create a **Logic App (Standard)** resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). |
|||||| <a name="custom-connector-limits"></a> ## Custom connector limits
-For multi-tenant and integration service environment only, you can create and use [custom managed connectors](/connectors/custom-connectors), which are wrappers around an existing REST API or SOAP API. For single-tenant (preview) only, you can create and use [custom built-in connectors](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
+For multi-tenant and integration service environment only, you can create and use [custom managed connectors](/connectors/custom-connectors), which are wrappers around an existing REST API or SOAP API. For single-tenant only, you can create and use [custom built-in connectors](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
The following table lists the values for custom connectors:
-| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
-||--|-||-|
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
| Custom connectors | 1,000 per Azure subscription | Unlimited | 1,000 per Azure subscription || | Requests per minute for a custom connector | 500 requests per minute per connection | Based on your implementation | 2,000 requests per minute per *custom connector* || | Connection timeout | 2 min | Idle connection: <br>4 min <p><p>Active connection: <br>10 min | 2 min ||
The following table lists the values for custom connectors:
For more information, review the following documentation: * [Custom managed connectors overview](/connectors/custom-connectors)
-* [Enable built-in connector authoring - Visual Studio Code with Azure Logic Apps (Preview)](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring)
+* [Enable built-in connector authoring - Visual Studio Code with Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring)
<a name="managed-identity"></a>
For more information, review the following documentation:
||| > [!NOTE]
-> By default, a Logic App (Preview) resource has its system-assigned managed identity automatically enabled to
+> By default, a Logic App (Standard) resource has its system-assigned managed identity automatically enabled to
> authenticate connections at runtime. This identity differs from the authentication credentials or connection string that you use when you create a connection. If you disable this identity, connections won't work at runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**.
The following table lists the message size limits that apply to B2B protocols:
## Firewall configuration: IP addresses and service tags
-When your workflow needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](#inbound) and [outbound](#outbound) IP addresses used by the Logic Apps service or runtime in the Azure region where your logic app resource exists. *All* logic apps in the same region use the same IP address ranges.
+If your environment has strict network requirements or firewalls that limit traffic to specific IP addresses, your environment or firewall needs to allow access for *both* the [inbound](#inbound) and [outbound](#outbound) IP addresses used by the Azure Logic Apps service or runtime in the Azure region where your logic app resource exists. *All* logic apps in the same region use the same IP address ranges.
+
+For example, suppose your logic apps are deployed in the West US region. To support calls that your logic apps send or receive through built-in triggers and actions, such as the [HTTP trigger or action](../connectors/connectors-native-http.md), your firewall needs to allow access for *all* the Azure Logic Apps service inbound IP addresses *and* outbound IP addresses that exist in the West US region.
-For example, to support calls that logic apps in the West US region send or receive through built-in triggers and actions, such as the [HTTP trigger or action](../connectors/connectors-native-http.md), your firewall needs to allow access for *all* the Logic Apps service inbound IP addresses *and* outbound IP addresses that exist in the West US region.
+If your workflow also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](#outbound) in your logic app's Azure region.
-If your workflow also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](#outbound) in your logic app's Azure region. Plus, if you use custom connectors that access on-premises resources through the [on-premises data gateway resource in Azure](logic-apps-gateway-connection.md), you need to set up the gateway installation to allow access for the corresponding *managed connectors [outbound IP addresses](#outbound)*.
+If you use custom connectors that access on-premises resources through the [on-premises data gateway resource in Azure](logic-apps-gateway-connection.md), you need to set up the gateway installation to allow access for the corresponding *managed connectors [outbound IP addresses](#outbound)*.
For more information about setting up communication settings on the gateway, see these topics:
Before you set up your firewall with IP addresses, review these considerations:
* For [Azure China 21Vianet](/azure/chin), such as Azure Storage, SQL Server, Office 365 Outlook, and so on.
-* If your logic apps run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), make sure that you [open these ports too](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#network-ports-for-ise).
+* If your logic app workflows run in single-tenant Azure Logic Apps, you need to find the fully qualified domain names (FQDNs) for your connections. For more information, review the corresponding sections in these topics:
+
+ * [Firewall permissions for single tenant logic apps - Azure portal](create-single-tenant-workflows-azure-portal.md#firewall-setup)
+ * [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup)
+
+* If your logic app workflows run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), make sure that you [open these ports too](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#network-ports-for-ise).
* To help you simplify any security rules that you want to create, you can optionally use [service tags](../virtual-network/service-tags-overview.md) rather than specify IP address prefixes for each region. These tags work across the regions where the Logic Apps service is available:
This section lists the inbound IP addresses for the Azure Logic Apps service onl
<a name="multi-tenant-inbound"></a>
-#### Multi-tenant Azure - Inbound IP addresses
+#### Multi-tenant & single-tenant - Inbound IP addresses
-| Multi-tenant region | IP |
-||-|
+| Region | IP |
+|--|-|
| Australia East | 13.75.153.66, 104.210.89.222, 104.210.89.244, 52.187.231.161 | | Australia Southeast | 13.73.115.153, 40.115.78.70, 40.115.78.237, 52.189.216.28 | | Brazil South | 191.235.86.199, 191.235.95.229, 191.235.94.220, 191.234.166.198 |
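A firewall rule built from the table above must cover every listed address for the region. A minimal sketch checking one inbound caller against a region's allow-list, using the Australia East addresses from the table (the helper itself is hypothetical):

```python
import ipaddress

# Inbound addresses for one region, taken from the table above
# (Australia East); a real firewall rule would cover every listed address.
AUSTRALIA_EAST_INBOUND = {
    "13.75.153.66", "104.210.89.222", "104.210.89.244", "52.187.231.161",
}

def is_allowed(caller_ip: str, allow_list: set) -> bool:
    """Check an inbound caller against the region's allow-list."""
    ip = ipaddress.ip_address(caller_ip)   # validates the address format
    return str(ip) in allow_list

print(is_allowed("13.75.153.66", AUSTRALIA_EAST_INBOUND))   # → True
print(is_allowed("10.0.0.1", AUSTRALIA_EAST_INBOUND))       # → False
```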
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview.md
The following list describes just a few example tasks, business processes, and w
> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Introducing-Azure-Logic-Apps/player]
-Based on the logic app resource type that you choose and create, your logic apps run in either a multi-tenant, single-tenant, or dedicated integration service environment. For example, when you containerize single-tenant logic apps, you can deploy your apps as containers and run them anywhere that Azure Functions can run. For more information, review [Resource type and host environment differences for logic apps](#resource-environment-differences).
+Based on the logic app resource type that you choose and create, your logic apps run in either a multi-tenant, single-tenant, or dedicated integration service environment. For example, when you containerize single-tenant based logic apps, you can deploy your apps as containers and run them anywhere that Azure Functions can run. For more information, review [Resource type and host environment differences for logic apps](#resource-environment-differences).
To securely access and run operations in real time on various data sources, you can choose [*managed connectors*](#logic-app-concepts) from a [400+ and growing Azure connectors ecosystem](/connectors/connector-reference/connector-reference-logicapps-connectors) to use in your workflows, for example:
For more information about the ways workflows can access and work with apps, dat
## Key terms
-* *Logic app*: The Azure resource to create when you want to develop a workflow. Based on your scenario's needs and solution's requirements, you can create logic apps that run in the multi-tenant, single-tenant (preview), or integration service environment (ISE). For more information, review [Resource type and host environment differences for logic apps](#resource-environment-differences).
+* *Logic app*: The Azure resource to create when you want to develop a workflow. Based on your scenario's needs and solution's requirements, you can create logic apps that run in multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, or an integration service environment (ISE). For more information, review [Resource type and host environment differences for logic apps](#resource-environment-differences).
* *Workflow*: A series of steps that defines a task or process, starting with a single trigger and followed by one or multiple actions.
You can visually create workflows using the Logic Apps designer in the Azure por
To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
-The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the new **Logic App (Preview)** resource type. You'll also learn how the *single-tenant* (preview) environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
+The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the **Logic App (Standard)** resource type. You'll also learn how the *single-tenant* model compares to the *multi-tenant* and *integration service environment (ISE)* models for deploying, hosting, and running your logic app workflows.
[!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
The following sections provide more information about the capabilities and benef
Save time and simplify complex processes by using the visual design tools in Logic Apps. Create your workflows from start to finish by using the Logic Apps Designer in the Azure portal, Visual Studio Code, or Visual Studio. Just start your workflow with a trigger, and add any number of actions from the [connectors gallery](/connectors/connector-reference/connector-reference-logicapps-connectors).
-If you're creating a multi-tenant logic app, get started faster when you [create a workflow from the templates gallery](../logic-apps/logic-apps-create-logic-apps-from-templates.md). These templates are available for common workflow patterns, which range from simple connectivity for Software-as-a-Service (SaaS) apps to advanced B2B solutions plus "just for fun" templates.
+If you're creating a multi-tenant based logic app, get started faster when you [create a workflow from the templates gallery](../logic-apps/logic-apps-create-logic-apps-from-templates.md). These templates are available for common workflow patterns, which range from simple connectivity for Software-as-a-Service (SaaS) apps to advanced B2B solutions plus "just for fun" templates.
#### Connect different systems across various environments
If no suitable connector is available to run the code you want, you can create a
#### Access resources inside Azure virtual networks
-Logic app workflows can access secured resources, such as virtual machines (VMs) and other systems or services, that are inside an [Azure virtual network](../virtual-network/virtual-networks-overview.md) when you create an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is a dedicated instance of the Logic Apps service that uses dedicated resources and runs separately from the global multi-tenant Logic Apps service.
+Logic app workflows can access secured resources, such as virtual machines (VMs) and other systems or services, that are inside an [Azure virtual network](../virtual-network/virtual-networks-overview.md) when you create an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is a dedicated instance of the Azure Logic Apps service that uses dedicated resources and runs separately from the global multi-tenant Azure Logic Apps service.
Running logic apps in your own dedicated instance helps reduce the impact that other Azure tenants might have on app performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). An ISE also provides these benefits:
When you create an ISE, Azure *injects* or deploys that ISE into your Azure virt
#### Pricing options
-Each logic app type, which differs by capabilities and where they run (multi-tenant, single-tenant, integration service environment), has a different [pricing model](../logic-apps/logic-apps-pricing.md). For example, multi-tenant logic apps use consumption pricing, while logic apps in an integration service environment use fixed pricing. Learn more about [pricing and metering](../logic-apps/logic-apps-pricing.md) for Logic Apps.
+Each logic app type, which differs by capabilities and where they run (multi-tenant, single-tenant, integration service environment), has a different [pricing model](../logic-apps/logic-apps-pricing.md). For example, multi-tenant based logic apps use consumption pricing, while logic apps in an integration service environment use fixed pricing. Learn more about [pricing and metering](../logic-apps/logic-apps-pricing.md) for Logic Apps.
## How does Logic Apps differ from Functions, WebJobs, and Power Automate?
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-pricing.md
Title: Billing & pricing models
description: Overview about how pricing and billing models work in Azure Logic Apps ms.suite: integration-+ Previously updated : 03/24/2021 Last updated : 05/25/2021 # Pricing and billing models for Azure Logic Apps
For example, a request that a polling trigger makes is still metered as an execu
|-|-| | [Built-in](../connectors/built-in.md) triggers and actions | Run natively in the Logic Apps service and are metered using the [**Actions** price](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>For example, the HTTP trigger and Request trigger are built-in triggers, while the HTTP action and Response action are built-in actions. Data operations, batch operations, variable operations, and [workflow control actions](../connectors/built-in.md), such as loops, conditions, switch, parallel branches, and so on, are also built-in actions. | | [Standard connector](../connectors/managed.md) triggers and actions <p><p>[Custom connector](../connectors/apis-list.md#custom-apis-and-connectors) triggers and actions | Metered using the [Standard connector price](https://azure.microsoft.com/pricing/details/logic-apps/). |
-| [Enterprise connector](../connectors/managed.md) triggers and actions | Metered using the [Enterprise connector price](https://azure.microsoft.com/pricing/details/logic-apps/). However, during public preview, Enterprise connectors are metered using the [*Standard* connector price](https://azure.microsoft.com/pricing/details/logic-apps/). |
+| [Enterprise connector](../connectors/managed.md) triggers and actions | Metered using the [Enterprise connector price](https://azure.microsoft.com/pricing/details/logic-apps/). However, during connector preview, Enterprise connectors are metered using the [*Standard* connector price](https://azure.microsoft.com/pricing/details/logic-apps/). |
| Actions inside [loops](logic-apps-control-flow-loops.md) | Each action that runs in a loop is metered for each loop cycle that runs. <p><p>For example, suppose that you have a "for each" loop that includes actions that process a list. The Logic Apps service meters each action that runs in that loop by multiplying the number of list items with the number of actions in the loop, and adds the action that starts the loop. So, the calculation for a 10-item list is (10 * 1) + 1, which results in 11 action executions. | | Retry attempts | To handle the most basic exceptions and errors, you can set up a [retry policy](logic-apps-exception-handling.md#retry-policies) on triggers and actions where supported. These retries along with the original request are charged at rates based on whether the trigger or action has built-in, Standard, or Enterprise type. For example, an action that executes with 2 retries is charged for 3 action executions. | | [Data retention and storage consumption](#data-retention) | Metered using the data retention price, which you can find on the [Logic Apps pricing page](https://azure.microsoft.com/pricing/details/logic-apps/), under the **Pricing details** table. |
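The loop and retry metering arithmetic described above can be written out directly (hypothetical helper names; the formulas come from the examples in the table):

```python
def loop_action_executions(list_items: int, actions_in_loop: int) -> int:
    """Executions metered for a 'for each' loop: every action runs once
    per list item, plus the action that starts the loop itself."""
    return list_items * actions_in_loop + 1

def executions_with_retries(retries: int) -> int:
    """The original request plus each retry is billed as an execution."""
    return 1 + retries

print(loop_action_executions(10, 1))   # → 11, matching the example above
print(executions_with_retries(2))      # → 3
```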
To help you estimate more accurate consumption costs, review these tips:
For example, suppose you set up trigger that checks an endpoint every day. When the trigger checks the endpoint and finds 15 events that meet the criteria, the trigger fires and runs the corresponding workflow 15 times. The Logic Apps service meters all the actions that those 15 workflows perform, including the trigger requests.
-<a name="preview-pricing"></a>
+<a name="standard-pricing"></a>
-## Preview pricing (single-tenant)
+## Standard pricing (single-tenant)
-When you create the **Logic App (Preview)** resource in the Azure portal or deploy from Visual Studio Code, you must choose a hosting plan, either [App Service or Functions Premium](../azure-functions/functions-scale.md) for your logic app. If you select the App Service plan, you must also choose a [pricing tier](../app-service/overview-hosting-plans.md). These choices determine the pricing that applies when running your workflows in single-tenant Logic Apps.
+When you create the **Logic App (Standard)** resource in the Azure portal or deploy from Visual Studio Code, you must choose a hosting plan and pricing tier for your logic app. These choices determine the pricing that applies when running your workflows in single-tenant Azure Logic Apps.
-> [!NOTE]
-> During preview, running preview logic app resources and workflows in App Service doesn't incur *extra* charges on top of your selected hosting plan.
+<a name="hosting-plans"></a>
-Azure Logic Apps uses [Azure Storage](/storage) for any storage operations. With multi-tenant Logic Apps, any storage usage and costs are attached to the logic app. With single-tenant Logic Apps, you can use your own Azure [storage account](../azure-functions/storage-considerations.md#storage-account-requirements). This capability gives you more control and flexibility with your Logic Apps data.
+### Hosting plans and pricing tiers
+
+For single-tenant based logic apps, use the **Workflow Standard** hosting plan. The following table shows the available pricing tiers that you can select:
+
+| Pricing tier | Cores | Memory | Storage |
+|---|---|---|---|
+| **WS1** | 1 | 3.5 GB | 250 GB |
+| **WS2** | 2 | 7 GB | 250 GB |
+| **WS3** | 2 | 14 GB | 250 GB |
+|||||
+
+### Storage transactions
+
+Azure Logic Apps uses [Azure Storage](/storage) for any storage operations. With single-tenant Azure Logic Apps, any storage usage and costs are attached to the logic app, and you can use your own Azure [storage account](../azure-functions/storage-considerations.md#storage-account-requirements). This capability gives you more control and flexibility with your Logic Apps data.
When *stateful* workflows run their operations, the Azure Logic Apps runtime makes storage transactions. For example, queues are used for scheduling, while tables and blobs are used for storing workflow states. Storage costs change based on your workflow's content. Different triggers, actions, and payloads result in different storage operations and needs. Storage transactions follow the [Azure Storage pricing model](https://azure.microsoft.com/pricing/details/storage/). Storage costs are separately listed in your Azure billing invoice.
-### Estimate storage needs and costs
+### Tips for estimating storage needs and costs
To help you get some idea about the number of storage operations that a workflow might run and their cost, try using the [Logic Apps Storage calculator](https://logicapps.azure.com/calculator). You can either select a sample workflow or use an existing workflow definition. The first calculation estimates the number of operations. You can then use these numbers to estimate costs using the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
-This article describes how to estimate your storage costs when you're using your own Azure Storage account with single-tenant logic apps. First, you can estimate the number of storage operations you'll perform using the Logic Apps storage calculator. Then, you can estimate your possible storage costs using these numbers in the
+This article describes how to estimate your storage costs when you're using your own Azure Storage account with single-tenant based logic apps. First, you can estimate the number of storage operations you'll perform using the Logic Apps storage calculator. Then, you can estimate your possible storage costs using these numbers in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
-For more information about the pricing models that apply to preview logic apps, review the following documentation:
+For more information about the pricing models that apply to single-tenant based logic apps, review the following documentation:
* [Azure Functions scale and hosting](../azure-functions/functions-scale.md)
* [Scale up an app in Azure App Service](../app-service/manage-scale-up.md)
A fixed pricing model applies to logic apps that run in the dedicated [*integrat
| Items | Description |
|-------|-------------|
| [Built-in](../connectors/built-in.md) triggers and actions | Display the **Core** label and run in the same ISE as your logic apps. |
-| [Standard connectors](../connectors/managed.md) <p><p>[Enterprise connectors](../connectors/managed.md#enterprise-connectors) | - Managed connectors that display the **ISE** label are specially designed to work without the on-premises data gateway and run in the same ISE as your logic apps. ISE pricing includes as many Enterprise connections as you want. <p><p>- Connectors that don't display the ISE label run in the multi-tenant Logic Apps service. However, ISE pricing includes these executions for logic apps that run in an ISE. |
+| [Standard connectors](../connectors/managed.md) <p><p>[Enterprise connectors](../connectors/managed.md#enterprise-connectors) | - Managed connectors that display the **ISE** label are specially designed to work without the on-premises data gateway and run in the same ISE as your logic apps. ISE pricing includes as many Enterprise connections as you want. <p><p>- Connectors that don't display the ISE label run in the single-tenant Azure Logic Apps service. However, ISE pricing includes these executions for logic apps that run in an ISE. |
| Actions inside [loops](logic-apps-control-flow-loops.md) | ISE pricing includes each action that runs in a loop for each loop cycle that runs. <p><p>For example, suppose that you have a "for each" loop that includes actions that process a list. To get the total number of action executions, multiply the number of list items with the number of actions in the loop, and add the action that starts the loop. So, the calculation for a 10-item list is (10 * 1) + 1, which results in 11 action executions. |
| Retry attempts | To handle the most basic exceptions and errors, you can set up a [retry policy](logic-apps-exception-handling.md#retry-policies) on triggers and actions where supported. ISE pricing includes retries along with the original request. |
| [Data retention and storage consumption](#data-retention) | Logic apps in an ISE don't incur retention and storage costs. |
logic-apps Parameterize Workflow App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/parameterize-workflow-app.md
+
+ Title: Create parameters for workflows in single-tenant Azure Logic Apps
+description: Define parameters for values that differ in workflows across deployment environments in single-tenant Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 05/25/2021++
+# Create parameters for values that change in workflows across environments for single-tenant Azure Logic Apps
+
+In Azure Logic Apps, you can use parameters to abstract values that might change between environments. By defining parameters to use in your workflows, you can first focus on designing your workflows, and then insert your environment-specific variables later.
+
+In *multi-tenant* Azure Logic Apps, you can create and reference parameters in the workflow designer, and then set the variables in your Azure Resource Manager (ARM) template and parameters files. Parameters are defined and set at deployment. So, even if you need to only change one variable, you have to redeploy your logic app's ARM template.
+
+In *single-tenant* Azure Logic Apps, you can work with environment variables both at runtime and deployment time by using parameters and app settings. This article shows how to edit, call, and reference environment variables with the new single-tenant parameters experience.
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* A [logic app workflow hosted in single-tenant Azure Logic Apps](single-tenant-overview-compare.md). If you don't have one, [create your logic app (Standard) in the Azure portal](create-single-tenant-workflows-azure-portal.md) or [in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
+
+## Parameters versus app settings
+
+Before you decide where to store your environment variables, review the following information.
+
+If you already use Azure Functions or Azure Web Apps, you might be familiar with app settings. In Azure Logic Apps, app settings integrate with Azure Key Vault. You can [directly reference secure strings](../app-service/app-service-key-vault-references.md), such as connection strings and keys. Similar to ARM templates, where you can define environment variables at deployment time, you can define app settings within your [logic app workflow definition](/azure/templates/microsoft.logic/workflows). You can then capture dynamically generated infrastructure values, such as connection endpoints, storage strings, and more. However, app settings have size limitations and can't be referenced from certain areas in Azure Logic Apps.
+
+If you're familiar with workflows in multi-tenant Azure Logic Apps, you might also be familiar with parameters. You can use parameters in a wider range of use cases than app settings, such as supporting complex objects and values of large sizes. In Visual Studio Code, you can also reference parameters in your logic app project's **workflow.json** and **connection.json** files. If you're developing your workflows locally, these references also work in the workflow designer. If you want to use both options in your solution, you can also reference app settings using parameters.
+
+We recommend using parameters as the default mechanism for parameterization. Then, when you need to store secure keys or strings, you can reference app settings from your parameters.
+
+## What is parameterization?
+
+If you use Visual Studio Code, in your logic app project, you can define parameters in the **parameters.json** file. You can reference any parameter in the **parameters.json** file from any workflow or connection object in your logic app. Parameterizing your workflow inputs in single-tenant Azure Logic Apps works similarly to multi-tenant Azure Logic Apps.
+
+To reference parameters in your trigger or action inputs, use the expression `@parameters('<parameter-name>')`.
+
+> [!IMPORTANT]
+> Make sure that you also include any parameters that you reference in your **parameters.json** file.
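+
+As an illustration, an action in your workflow definition might reference a parameter this way. The action shape below is only a sketch, and the parameter name `responseString` is a hypothetical example:
+
+```json
+"Response": {
+    "type": "Response",
+    "kind": "Http",
+    "inputs": {
+        "statusCode": 200,
+        "body": "@parameters('responseString')"
+    }
+}
+```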
+
+In *single-tenant* Azure Logic Apps, you can parameterize different parts of your **connections.json** file. You can then check your **connections.json** file into source control, and then manage any connections through your **parameters.json** file. To parameterize your **connections.json** file, replace the values for literals, such as `ConnectionRuntimeUrl`, with a single `parameters()` expression, for example, `@parameters('api-runtimeUrl')`.
+
+You can also parameterize complex objects, such as the `authentication` JSON object. For example, replace the `authentication` object value with a string that holds a single parameters expression, such as `@parameters('api-auth')`.
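+
+For example, a parameterized connection entry in your **connections.json** file might look like the following sketch. The connection name `office365` and the exact property set are illustrative; the `api-runtimeUrl` and `api-auth` parameter names match the expressions described above:
+
+```json
+"managedApiConnections": {
+    "office365": {
+        "connectionRuntimeUrl": "@parameters('api-runtimeUrl')",
+        "authentication": "@parameters('api-auth')"
+    }
+}
+```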
+
+> [!NOTE]
+> The only valid expression types in the **connections.json** file are `@parameters` and `@appsetting`.
+
+## Define parameters
+
+In single-tenant based workflows, you need to put all parameter values in a root-level JSON file named **parameters.json**. This file contains a JSON object of key-value pairs, where each key is a parameter name and each value is that parameter's structure. Each structure must include both a `type` and a `value` declaration.
+
+> [!NOTE]
+> The only valid expression type in the **parameters.json** file is `@appsetting`.
+
+The following example shows a basic parameters file:
+
+```json
+{
+    "responseString": {
+        "type": "string",
+        "value": "hello"
+    },
+    "functionAuth": {
+        "type": "object",
+        "value": {
+            "type": "QueryString",
+            "name": "Code",
+            "value": "@appsetting('<AzureFunctionsOperation-FunctionAppKey>')"
+        }
+    }
+}
+```
+
+Typically, you need to manage multiple versions of parameter files. You might have targeted values for different deployment environments, such as development, testing, and production. Managing these parameter files often works like managing ARM template parameter files. When you deploy to a specific environment, you promote the corresponding parameter file, generally through a pipeline for DevOps.
+
+To replace parameter files dynamically using the Azure CLI, run the following command:
+
+```azurecli
+az functionapp deploy --resource-group MyResourceGroup --name MyLogicApp --src-path C:\parameters.json --type static --target-path parameters.json
+```
+
+> [!NOTE]
+> Currently, the capability to dynamically replace parameter files is not yet available in the Azure portal or the workflow designer.
+
+## Define app settings
+
+In single-tenant Azure Logic Apps, app settings contain global configuration options for *all the workflows* in the same logic app. When you run workflows locally, these settings are accessible as local environment variables in the **local.settings.json** file. You can then reference these app settings in your parameters.
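+
+For example, when running locally, a minimal **local.settings.json** sketch might look like the following. The custom setting name is illustrative; `AzureWebJobsStorage` is the standard storage connection setting:
+
+```json
+{
+    "IsEncrypted": false,
+    "Values": {
+        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+        "CUSTOM_LOGIC_APP_SETTING": "12345"
+    }
+}
+```
+
+A parameter in your **parameters.json** file could then pick up this value with the expression `@appsetting('CUSTOM_LOGIC_APP_SETTING')`.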
+
+To add, update, or delete app settings, review the following sections:
+
+* [ARM templates](#app-settings-in-an-arm-template-or-bicep-template)
+* [Azure portal](#app-settings-in-the-azure-portal)
+* [Azure CLI](#app-settings-using-azure-cli)
+
+### App settings in an ARM template or Bicep template
+
+To review and define your app settings in an ARM template or Bicep template, find your logic app's resource definition, and update the `appSettings` JSON object. For the full resource definition, see the [ARM template reference](/azure/templates/microsoft.web/sites).
+
+This example shows an `appSettings` collection for either an ARM template or a Bicep template:
+
+```json
+"appSettings": [
+ {
+ "name": "string",
+ "value": "string"
+ },
+ <...>
+],
+```
+
+### App settings in the Azure portal
+
+To review the app settings for your logic app in the Azure portal, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com/) search box, find and open your single-tenant based logic app.
+1. On your logic app menu, under **Settings**, select **Configuration**.
+1. On the **Configuration** page, on the **Application settings** tab, review the app settings for your logic app.
+1. To view all values, select **Show Values**. Or, to view a single value, select that value.
+
+To add a new setting, follow these steps:
+
+1. On the **Application settings** tab, under **Application settings**, select **New application setting**.
+1. For **Name**, enter the *key* or name for your new setting.
+1. For **Value**, enter the value for your new setting.
+1. When you're ready to create your new *key-value* pair, select **OK**.
++
+### App settings using Azure CLI
+
+To review your current app settings using the Azure CLI, run the command `az logicapp config appsettings list`. Make sure that your command includes the `--name` (`-n`) and `--resource-group` (`-g`) parameters, for example:
+
+```azurecli
+az logicapp config appsettings list --name MyLogicApp --resource-group MyResourceGroup
+```
+
+To add or update an app setting using the Azure CLI, run the command `az logicapp config appsettings set`. Make sure that your command includes the `--name` (`-n`) and `--resource-group` (`-g`) parameters. For example, the following command creates a setting named `CUSTOM_LOGIC_APP_SETTING` with a value of `12345`:
+
+```azurecli
+az logicapp config appsettings set --name MyLogicApp --resource-group MyResourceGroup --settings CUSTOM_LOGIC_APP_SETTING=12345
+```
+
+For more information about setting up your logic apps for DevOps deployments, see:
+* [DevOps deployment overview for single-tenant based logic apps](devops-deployment-single-tenant-azure-logic-apps.md)
+* [Set up DevOps deployment for single-tenant based logic apps](set-up-devops-deployment-single-tenant-azure-logic-apps.md)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Single-tenant vs. multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
logic-apps Quickstart Create Logic Apps Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/quickstart-create-logic-apps-visual-studio-code.md
Deleting a logic app affects workflow instances in the following ways:
## Next steps

> [!div class="nextstepaction"]
-> [Create stateful and stateless logic apps in Visual Studio Code (Preview)](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
+> [Create single-tenant based logic app workflows in Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
Last updated 05/13/2021
# As a developer, I want to connect to my single-tenant workflows from virtual networks using private endpoints.
-# Secure traffic between virtual networks and single-tenant workflows in Azure Logic Apps using private endpoints (preview)
+# Secure traffic between virtual networks and single-tenant workflows in Azure Logic Apps using private endpoints
To securely and privately communicate between your logic app workflow and a virtual network, you can set up *private endpoints* for inbound traffic and use virtual network integration for outbound traffic.
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
Title: Set up DevOps for single-tenant Azure Logic Apps (preview)
-description: How to set up DevOps deployment for workflows in single-tenant Azure Logic Apps (preview).
+ Title: Set up DevOps for single-tenant Azure Logic Apps
+description: How to set up DevOps deployment for workflows in single-tenant Azure Logic Apps.
ms.suite: integration
Last updated 05/10/2021
# As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
-# Set up DevOps deployment for single-tenant Azure Logic Apps (preview)
+# Set up DevOps deployment for single-tenant Azure Logic Apps
-This article shows how to deploy a single-tenant based logic app project from Visual Studio Code to your infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps for single-tenant, review [DevOps deployment for single-tenant Azure Logic Apps (preview)](devops-deployment-single-tenant-azure-logic-apps.md).
+This article shows how to deploy a single-tenant based logic app project from Visual Studio Code to your infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps for single-tenant, review [DevOps deployment overview for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md).
## Prerequisites

- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).

-- A single-tenant based logic app project created with [Visual Studio Code and the Azure Logic Apps (Preview) extension](create-stateful-stateless-workflows-visual-studio-code.md#prerequisites).
+- A single-tenant based logic app project created with [Visual Studio Code and the Azure Logic Apps (Standard) extension](create-stateful-stateless-workflows-visual-studio-code.md#prerequisites).
If you don't already have a logic app project or infrastructure set up, you can use the included sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use. For more information about these sample projects and the resources included to run the example logic app, review [Deploy your infrastructure](#deploy-infrastructure).

-- If you want to deploy to Azure, you need an existing **Logic App (Preview)** resource created in Azure. To quickly create an empty logic app resource, review [Create single-tenant based logic app workflows - Portal](create-stateful-stateless-workflows-azure-portal.md).
+- If you want to deploy to Azure, you need an existing **Logic App (Standard)** resource created in Azure. To quickly create an empty logic app resource, review [Create single-tenant based logic app workflows - Portal](create-stateful-stateless-workflows-azure-portal.md).
<a name="deploy-infrastructure"></a>
Both samples include the following resources that a logic app uses to run.
| Resource name | Required | Description |
|---------------|----------|-------------|
-| Logic App (Preview) | Yes | This Azure resource contains the workflows that run in single-tenant Azure Logic Apps. |
+| Logic App (Standard) | Yes | This Azure resource contains the workflows that run in single-tenant Azure Logic Apps. |
| Premium or App Service hosting plan | Yes | This Azure resource specifies the hosting resources to use for running your logic app, such as compute, processing, storage, networking, and so on. |
| Azure storage account | Yes, for stateful workflows | This Azure resource stores the metadata, state, inputs, outputs, run history, and other information about your workflows. |
| Application Insights | Optional | This Azure resource provides monitoring capabilities for your workflows. |
If you use other deployment tools, you can deploy your logic app by using the Az
### Release to containers
-If you containerize your logic app, deployment works mostly the same as any other container you deploy and manage. For more information about containerizing logic apps and deploying to Docker, review [Deploy your logic app to a Docker container from Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#deploy-to-docker).
+If you containerize your logic app, deployment works mostly the same as any other container you deploy and manage.
For examples that show how to implement an end-to-end container build and deployment pipeline, review [CI/CD for Containers](https://azure.microsoft.com/solutions/architecture/cicd-for-containers/).

## Next steps
-* [DevOps deployment for single-tenant Azure Logic Apps (preview)](devops-deployment-single-tenant-azure-logic-apps.md)
+* [DevOps deployment for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md)
-We'd like to hear about your experiences with the preview logic app resource type and preview single-tenant model!
+We'd like to hear about your experiences with single-tenant Azure Logic Apps!
- For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
- For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/logicappsdevops).
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/single-tenant-overview-compare.md
Title: Overview - single-tenant (preview) Azure Logic Apps
-description: Learn the differences between single-tenant (preview), multi-tenant, and integration service environment (ISE) for Azure Logic Apps.
+ Title: Overview - Single-tenant Azure Logic Apps
+description: Learn the differences between single-tenant, multi-tenant, and integration service environment (ISE) for Azure Logic Apps.
ms.suite: integration
Last updated 05/05/2021
-# Single-tenant (preview) versus multi-tenant and integration service environment for Azure Logic Apps
+# Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps
-> [!IMPORTANT]
-> Currently in preview, the single-tenant Logic Apps environment and **Logic App (Preview)** resource type are subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. To create a logic app, you use either the **Logic App (Consumption)** resource type or the **Logic App (Standard)** resource type. The Consumption resource type runs in the *multi-tenant* Azure Logic Apps environment or an *integration service environment*, while the Standard resource type runs in the *single-tenant* Azure Logic Apps environment.
-Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. To create a logic app, you use either the original **Logic App (Consumption)** resource type or the new **Logic App (Preview)** resource type.
-
-Before you choose which resource type to use, review this article to learn how the new preview resource type compares to the original. You can then decide which type is best to use, based on your scenario's needs, solution requirements, and the environment where you want to deploy, host, and run your workflows.
+Before you choose which resource type to use, review this article to learn how the resource types and service environments compare to each other. You can then decide which type is best to use, based on your scenario's needs, solution requirements, and the environment where you want to deploy, host, and run your workflows.
If you're new to Azure Logic Apps, review the following documentation:
To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
-The following table briefly summarizes differences between the new **Logic App (Preview)** resource type and the original **Logic App (Consumption)** resource type. You'll also learn how the *single-tenant* (preview) environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
+The following table briefly summarizes differences between the **Logic App (Standard)** resource type and the **Logic App (Consumption)** resource type. You'll also learn how the *single-tenant* environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
[!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
-<a name="preview-resource-type-introduction"></a>
+<a name="resource-type-introduction"></a>
-## Logic App (Preview) resource
+## Logic App (Standard) resource
-The **Logic App (Preview)** resource type is powered by the redesigned Azure Logic Apps (Preview) runtime, which uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic apps plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem.
+The **Logic App (Standard)** resource type is powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic app workflows plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem.
-For example, you can run **Logic App (Preview)** workflows anywhere that you can run Azure function apps and their functions. The preview resource type introduces a resource structure that can have multiple workflows, similar to how an Azure function app can include multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Logic App (Consumption)** resource where you have a 1-to-1 mapping between a logic app resource and a workflow.
+For example, you can run single-tenant based logic apps and their workflows anywhere that Azure function apps and their functions can run. The Standard resource type introduces a resource structure that can host multiple workflows, similar to how an Azure function app can host multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Logic App (Consumption)** resource where you have a 1-to-1 mapping between a logic app resource and a workflow.
-To learn more about portability, flexibility, and performance improvements, continue with the following sections. Or, for more information about the redesigned runtime and Azure Functions extensibility, review the following documentation:
+To learn more about portability, flexibility, and performance improvements, continue with the following sections. Or, for more information about the single-tenant Azure Logic Apps runtime and Azure Functions extensibility, review the following documentation:
* [Azure Logic Apps Running Anywhere - Runtime Deep Dive](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564) * [Introduction to Azure Functions](../azure-functions/functions-overview.md)
### Portability and flexibility
-When you create logic apps using the **Logic App (Preview)** resource type, you can run your workflows anywhere you can run Azure function apps and their functions, not just in the single-tenant service environment.
+When you create logic apps using the **Logic App (Standard)** resource type, you can run your workflows anywhere that you can run Azure function apps and their functions, not just in the single-tenant service environment.
-For example, when you use Visual Studio Code with the Azure Logic Apps (Preview) extension, you can *locally* develop, build, and run your workflows in your development environment without having to deploy to Azure. If your scenario requires containers, you can containerize your logic apps and deploy as Docker containers.
+For example, when you use Visual Studio Code with the **Azure Logic Apps (Standard)** extension, you can *locally* develop, build, and run your workflows in your development environment without having to deploy to Azure. If your scenario requires containers, you can containerize and deploy logic apps as containers.
-These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. Also, the multi-tenant model for automating **Logic App (Consumption)** resource deployment is completely based on Azure Resource Manager (ARM) templates, which combine and handle resource provisioning for both apps and infrastructure.
+These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. Also, the multi-tenant model for automating **Logic App (Consumption)** resource deployment is completely based on Azure Resource Manager (ARM) templates, which combine and handle resource provisioning for both apps and infrastructure.
-With the **Logic App (Preview)** resource type, deployment becomes easier because you can separate app deployment from infrastructure deployment. You can package the redesigned runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your infrastructure, you can still use ARM templates to separately provision those resources along with other processes and pipelines that you use for those purposes.
+With the **Logic App (Standard)** resource type, deployment becomes easier because you can separate app deployment from infrastructure deployment. You can package the single-tenant Azure Logic Apps runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your infrastructure, you can still use ARM templates to separately provision those resources along with other processes and pipelines that you use for those purposes.
To deploy your app, copy the artifacts to the host environment and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. That way, you can deploy using your own chosen tools, no matter the technology stack that you use for development.
By using standard build and deploy options, you can focus on app development sep
### Performance
-Using the **Logic App (Preview)** resource type, you can create and run multiple workflows in the same single logic app and tenant. With this 1-to-many mapping, these workflows share resources, such as compute, processing, storage, and network, providing better performance due to their proximity.
+Using the **Logic App (Standard)** resource type, you can create and run multiple workflows in the same single logic app and tenant. With this 1-to-many mapping, these workflows share resources, such as compute, processing, storage, and network, providing better performance due to their proximity.
-The preview logic app resource type and redesigned Azure Logic Apps (Preview) runtime provide another significant improvement by making the more popular managed connectors available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and others. Meanwhile, the managed connector versions are still available and continue to work.
+The **Logic App (Standard)** resource type and single-tenant Azure Logic Apps runtime provide another significant improvement by making the more popular managed connectors available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and others. Meanwhile, the managed connector versions are still available and continue to work.
-When you use the new built-in operations, you create connections called *built-in connections* or *service provider connections*. Their managed connection counterparts are called *API connections*, which are created and run separately as Azure resources that you also have to then deploy by using ARM templates. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the redesigned Logic Apps runtime. As a result, built-in operations and their connections provide better performance due to proximity with your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
+When you use the new built-in operations, you create connections called *built-in connections* or *service provider connections*. Their managed connection counterparts are called *API connections*, which are created and run separately as Azure resources that you also have to then deploy by using ARM templates. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the single-tenant Azure Logic Apps runtime. As a result, built-in operations and their connections provide better performance due to proximity with your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
## Create, build, and deploy options
To create a logic app based on the environment that you want, you have multiple
| Option | Resources and tools | More information |
|--------|---------------------|------------------|
-| Azure portal | **Logic App (Preview)** resource type | [Create integration workflows for single-tenant Logic Apps - Azure portal](create-single-tenant-workflows-azure-portal.md) |
-| Visual Studio Code | [**Azure Logic Apps (Preview)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurelogicapps) | [Create integration workflows for single-tenant Logic Apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md) |
+| Azure portal | **Logic App (Standard)** resource type | [Create integration workflows for single-tenant Logic Apps - Azure portal](create-single-tenant-workflows-azure-portal.md) |
+| Visual Studio Code | [**Azure Logic Apps (Standard)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurelogicapps) | [Create integration workflows for single-tenant Logic Apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md) |
| Azure CLI | Logic Apps Azure CLI extension | Not yet available |
To create a logic app based on the environment that you want, you have multiple
| Azure portal | **Logic App (Consumption)** resource type with an existing ISE resource | Same as [Quickstart: Create integration workflows in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md), but select an ISE, not a multi-tenant region. |
-Although your development experiences differ based on whether you create **Consumption** or **Preview** logic app resources, you can find and access all your deployed logic apps under your Azure subscription.
+Although your development experiences differ based on whether you create **Consumption** or **Standard** logic app resources, you can find and access all your deployed logic apps under your Azure subscription.
-For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Preview** logic app resource types. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but they are grouped by the extension that you used, namely **Azure: Logic Apps (Consumption)** and **Azure: Logic Apps (Preview)**.
+For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Standard** logic app resource types. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but they are grouped by the extension that you used, namely **Azure: Logic Apps (Consumption)** and **Azure: Logic Apps (Standard)**.
<a name="stateful-stateless"></a>

## Stateful and stateless workflows
-With the preview logic app type, you can create these workflow types within the same logic app:
+With the **Logic App (Standard)** resource type, you can create these workflow types within the same logic app:
* *Stateful*
With the preview logic app type, you can create these workflow types within the
Create stateless workflows when you don't need to save, review, or reference data from previous events in external storage for later review. These workflows save the inputs and outputs for each action and their states *only in memory*, rather than transferring this data to external storage. As a result, stateless workflows have shorter runs that are typically no longer than 5 minutes, faster performance with quicker response times, higher throughput, and reduced running costs because the run details and history aren't kept in external storage. However, if outages happen, interrupted runs aren't automatically restored, so the caller needs to manually resubmit interrupted runs. These workflows can only run synchronously.
- For easier debugging, you can enable run history for a stateless workflow, which has some impact on performance, and then disable the run history when you're done. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless) or [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless).
+ For easier debugging, you can enable run history for a stateless workflow, which has some impact on performance, and then disable the run history when you're done. For more information, see [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless) or [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-azure-portal.md#enable-run-history-stateless).
> [!NOTE]
> Stateless workflows currently support only *actions* for [managed connectors](../connectors/managed.md),
> which are deployed in Azure, and not triggers. To start your workflow, select the
> [built-in Request, Event Hubs, or Service Bus trigger](../connectors/built-in.md).
- > These triggers run natively in the Azure Logic Apps Preview runtime. For more information about limited,
+ > These triggers run natively in the Azure Logic Apps runtime. For more information about limited,
> unavailable, or unsupported triggers, actions, and connectors, see
> [Changed, limited, unavailable, or unsupported capabilities](#limited-unavailable-unsupported).
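For reference, a workflow's type is declared by the `kind` property in its **workflow.json** file. The following minimal sketch shows the shape of a stateless workflow definition that starts with the built-in Request trigger; the trigger and action names here are illustrative, not taken from this article:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http"
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "inputs": {
          "statusCode": 200,
          "body": "Hello from a stateless workflow"
        },
        "runAfter": {}
      }
    },
    "outputs": {}
  },
  "kind": "Stateless"
}
```

Changing `"kind"` to `"Stateful"` is what switches the same definition to persist run history and state in external storage.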
With the preview logic app type, you can create these workflow types within the
### Nested behavior differences between stateful and stateless workflows
-You can [make a workflow callable](../logic-apps/logic-apps-http-endpoint.md) from other workflows that exist in the same **Logic App (Preview)** resource by using the [Request trigger](../connectors/connectors-native-reqres.md), [HTTP Webhook trigger](../connectors/connectors-native-webhook.md), or managed connector triggers that have the [ApiConnectionWebhook type](../logic-apps/logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive HTTPS requests.
+You can [make a workflow callable](../logic-apps/logic-apps-http-endpoint.md) from other workflows that exist in the same **Logic App (Standard)** resource by using the [Request trigger](../connectors/connectors-native-reqres.md), [HTTP Webhook trigger](../connectors/connectors-native-webhook.md), or managed connector triggers that have the [ApiConnectionWebhook type](../logic-apps/logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive HTTPS requests.
Here are the behavior patterns that nested workflows can follow after a parent workflow calls a child workflow:
This table specifies the child workflow's behavior based on whether the parent a
<a name="other-capabilities"></a>
-## Other preview capabilities
+## Other single-tenant model capabilities
-The **Logic App (Preview)** resource and single-tenant model include many current and new capabilities, for example:
+The single-tenant model and **Logic App (Standard)** resource type include many current and new capabilities, for example:
-* Create logic apps and their workflows from [400+ connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
+* Create logic apps and their workflows from [400+ managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
- * More managed connectors are now available as built-in operations and run similarly to other built-in operations, such as Azure Functions. Built-in operations run natively on the redesigned Azure Logic Apps Preview runtime. For example, new built-in operations include Azure Service Bus, Azure Event Hubs, SQL Server, and MQ.
+ * More managed connectors are now available as built-in operations and run similarly to other built-in operations, such as Azure Functions. Built-in operations run natively on the single-tenant Azure Logic Apps runtime. For example, new built-in operations include Azure Service Bus, Azure Event Hubs, SQL Server, and MQ.
> [!NOTE]
> For the built-in SQL Server version, only the **Execute Query** action can directly connect to Azure
> virtual networks without using the [on-premises data gateway](logic-apps-gateway-connection.md).
- * You can create your own built-in connectors for any service that you need by using the [preview's extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similarly to built-in operations such as Azure Service Bus and SQL Server but unlike [custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors), which aren't currently supported during preview, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the redesigned runtime.
+ * You can create your own built-in connectors for any service that you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similarly to built-in operations such as Azure Service Bus and SQL Server but unlike [custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors), which aren't currently supported, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime.
  The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, [switch your project from extension bundle-based (Node.js) to NuGet package-based (.NET)](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring). For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).

* You can use the B2B actions for Liquid Operations and XML Operations without an integration account. To use these actions, you need to have Liquid maps, XML maps, or XML schemas that you can upload through the respective actions in the Azure portal or add to your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
- * Logic app (preview) resources can run anywhere because the Azure Logic Apps service generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. The Logic Apps service saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
+ * **Logic app (Standard)** resources can run anywhere because the Azure Logic Apps service generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. The Logic Apps service saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
> [!NOTE]
- > By default, a **Logic App (Preview)** resource has the [system-assigned managed identity](../logic-apps/create-managed-service-identity.md)
- > automatically enabled to authenticate connections at runtime. This identity differs from the authentication
+ > By default, a **Logic App (Standard)** resource has the [system-assigned managed identity](../logic-apps/create-managed-service-identity.md)
+ > automatically enabled to authenticate connections at run time. This identity differs from the authentication
> credentials or connection string that you use when you create a connection. If you disable this identity,
- > connections won't work at runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**.
+ > connections won't work at run time. To view this setting, on your logic app's menu, under **Settings**, select **Identity**.
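In a local Visual Studio Code project, such generated values surface as application settings in the project's **local.settings.json** file. The following is a hedged sketch only; the connection key name (`serviceBus_connectionString`) and the placeholder value are hypothetical, while `AzureWebJobsStorage` and `FUNCTIONS_WORKER_RUNTIME` are standard Azure Functions settings:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "serviceBus_connectionString": "<SAS-connection-string-generated-by-the-service>"
  }
}
```

When you deploy to Azure, the same keys become app settings on the logic app resource, which is why you can reference them from Azure Key Vault instead of storing raw values.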
* Stateless workflows run only in memory so that they finish more quickly, respond faster, have higher throughput, and cost less to run because the run histories and data between actions don't persist in external storage. Optionally, you can enable run history for easier debugging. For more information, see [Stateful versus stateless workflows](#stateful-stateless).

* You can locally run, test, and debug your logic apps and their workflows in the Visual Studio Code development environment.
- Before you run and test your logic app, you can make debugging easier by adding and using breakpoints inside the **workflow.json** file for a workflow. However, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#manage-breakpoints).
+ Before you run and test your logic app, you can make debugging easier by adding and using breakpoints inside the **workflow.json** file for a workflow. However, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#manage-breakpoints).
-* Directly publish or deploy logic apps and their workflows from Visual Studio Code to various hosting environments such as Azure and [Docker containers](/dotnet/core/docker/introduction).
+* Directly publish or deploy logic apps and their workflows from Visual Studio Code to various hosting environments such as Azure and containers.
* Enable diagnostics logging and tracing capabilities for your logic app by using [Application Insights](../azure-monitor/app/app-insights-overview.md) when supported by your Azure subscription and logic app settings.

* Access networking capabilities, such as connect and integrate privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps using the [Azure Functions Premium plan](../azure-functions/functions-premium-plan.md). For more information, review the following documentation:

  * [Azure Functions networking options](../azure-functions/functions-networking-options.md)
- * [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
-
-* Regenerate access keys for managed connections used by individual workflows in a **Logic App (Preview)** resource. For this task, [follow the same steps for the **Logic Apps (Consumption)** resource but at the individual workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level.
+ * [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
-For more information, see [Changed, limited, unavailable, and unsupported capabilities](#limited-unavailable-unsupported) and the [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md).
+* Regenerate access keys for managed connections used by individual workflows in a **Logic App (Standard)** resource. For this task, [follow the same steps for the **Logic Apps (Consumption)** resource but at the individual workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level.
<a name="limited-unavailable-unsupported"></a>

## Changed, limited, unavailable, or unsupported capabilities
-For the **Logic App (Preview)** resource, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
-
-* **OS support**: Currently, the designer in Visual Studio Code doesn't work on Linux OS, but you can still deploy logic apps that use the Logic Apps Preview runtime to Linux-based virtual machines. For now, you can build your logic apps in Visual Studio Code on Windows or macOS and then deploy to a Linux-based virtual machine.
+For the **Logic App (Standard)** resource, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
-* **Triggers and actions**: Built-in triggers and actions run natively in the Logic Apps Preview runtime, while managed connectors are deployed in Azure. Some built-in triggers are unavailable, such as Sliding Window and Batch. To start a stateful or stateless workflow, use the [built-in Recurrence, Request, HTTP, HTTP Webhook, Event Hubs, or Service Bus trigger](../connectors/apis-list.md). In the designer, built-in triggers and actions appear under the **Built-in** tab.
+* **Triggers and actions**: Built-in triggers and actions run natively in the single-tenant Azure Logic Apps runtime, while managed connectors are hosted and run in Azure. Some built-in triggers are unavailable, such as Sliding Window and Batch. To start a stateful or stateless workflow, use the [built-in Recurrence, Request, HTTP, HTTP Webhook, Event Hubs, or Service Bus trigger](../connectors/apis-list.md). In the designer, built-in triggers and actions appear under the **Built-in** tab.
For *stateful* workflows, [managed connector triggers and actions](../connectors/managed.md) appear under the **Azure** tab, except for the unavailable operations listed below. For *stateless* workflows, the **Azure** tab doesn't appear when you want to select a trigger. You can select only [managed connector *actions*, not triggers](../connectors/managed.md). Although you can enable Azure-hosted managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.

> [!NOTE]
> To run locally in Visual Studio Code, webhook-based triggers and actions require additional setup. For more information, see
- > [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#webhook-setup).
+ > [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#webhook-setup).
* These triggers and actions have either changed or are currently limited, unsupported, or unavailable:
For the **Logic App (Preview)** resource, these capabilities have changed, or th
> [!NOTE]
> In the single-tenant model, the function action supports only query string authentication.
- > Azure Logic Apps Preview gets the default key from the function when making the connection,
+ > Azure Logic Apps gets the default key from the function when making the connection,
> stores that key in your app's settings, and uses the key for authentication when calling the function.
>
> As in the multi-tenant model, if you renew this key, for example, through the Azure Functions experience
> in the portal, the function action no longer works due to the invalid key. To fix this problem, you need
> to recreate the connection to the function that you want to call or update your app's settings with the new key.
- * The built-in action, [Inline Code - Execute JavaScript Code](logic-apps-add-run-inline-code.md) is now **Inline Code Operations - Run in-line JavaScript**.
-
- * Inline Code Operations actions no longer require an integration account.
-
- * For macOS and Linux, **Inline Code Operations** is now supported when you use the Azure Logic Apps (Preview) extension in Visual Studio Code.
+ * The built-in [Inline Code action](logic-apps-add-run-inline-code.md) is renamed **Inline Code Operations**, no longer requires an integration account, and has [updated limits](logic-apps-limits-and-config.md).
- * You no longer have to restart your logic app if you make changes in an **Inline Code Operations** action.
-
- * **Inline Code Operations** actions have [updated limits](logic-apps-limits-and-config.md).
+ * The built-in action, [Azure Logic Apps - Choose a Logic App workflow](logic-apps-http-endpoint.md) is now **Workflow Operations - Invoke a workflow in this workflow app**.
* Some [built-in B2B triggers and actions for integration accounts](../connectors/managed.md#integration-account-connectors) are unavailable, for example, the **Flat File** encoding and decoding actions.
- * The built-in action, [Azure Logic Apps - Choose a Logic App workflow](logic-apps-http-endpoint.md) is now **Workflow Operations - Invoke a workflow in this workflow app**.
- * [Custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors) aren't currently supported.
-* **Hosting plan availability**: Whether you create the single-tenant **Logic App (Preview)** resource type in the Azure portal or deploy from Visual Studio Code, you can only use the Premium or App Service hosting plan in Azure. The preview resource type doesn't support Consumption hosting plans. You can deploy from Visual Studio Code to a Docker container, but not to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md).
+* **Hosting plan availability**: Whether you create the single-tenant **Logic App (Standard)** resource type in the Azure portal or deploy from Visual Studio Code, you can only use the Premium or App Service hosting plan in Azure. The Standard resource type doesn't support Consumption hosting plans. You can deploy from Visual Studio Code to a container, but not to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md).
-* **Breakpoint debugging in Visual Studio Code**: Although you can add and use breakpoints inside the **workflow.json** file for a workflow, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#manage-breakpoints).
+* **Breakpoint debugging in Visual Studio Code**: Although you can add and use breakpoints inside the **workflow.json** file for a workflow, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#manage-breakpoints).
* **Zoom control**: The zoom control is currently unavailable on the designer.
-* **Trigger history and run history**: For the **Logic App (Preview)** resource type, trigger history and run history in the Azure portal appears at the workflow level, not the logic app level. To find this historical data, follow these steps:
+* **Trigger history and run history**: For the **Logic App (Standard)** resource type, trigger history and run history in the Azure portal appears at the workflow level, not the logic app level. To find this historical data, follow these steps:
+
+ * To view the run history, open the workflow in your logic app. On the workflow menu, under **Developer**, select **Monitor**.
- * To view the run history, open the workflow in your logic app. On the workflow menu, under **Developer**, select **Monitor**.
- * To review the trigger history, open the workflow in your logic app. On the workflow menu, under **Developer**, select **Trigger Histories**.
+ * To review the trigger history, open the workflow in your logic app. On the workflow menu, under **Developer**, select **Trigger Histories**.
<a name="firewall-permissions"></a>
If your environment has strict network requirements or firewalls that limit traf
## Next steps

* [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-azure-portal.md)
-* [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md)
-* [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md)
+* [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md)
-We'd also like to hear about your experiences with the preview logic app resource type and preview single-tenant model!
+We'd also like to hear about your experiences with single-tenant Azure Logic Apps!
* For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
You can also create an instance
* With [Azure Machine Learning SDK](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/concept-compute-instance.md)
* From the [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md#computeinstance)
-The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with Azure Machine Learning training compute cluster quota. Stopping the compute instance does not release quota to ensure you will be able to restart the compute instance.
+The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with the Azure Machine Learning training compute cluster quota. Stopping the compute instance does not release quota, to ensure that you can restart the compute instance. Do not stop the compute instance from the OS terminal by running `sudo shutdown`.
A compute instance comes with a P10 OS disk. The temp disk type depends on the VM size chosen. Currently, it is not possible to change the OS disk type.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
The data scientist can start, stop, and restart the compute instance. They can u
## <a name="setup-script"></a> Customize the compute instance with a script (preview)
-> [!TIP]
-> This preview is currently available for workspaces in West Central US and East US regions.
-
Use a setup script for an automated way to customize and configure the compute instance at provisioning time. As an administrator, you can write a customization script to provision all compute instances in the workspace according to your requirements. Some examples of what you can do in a setup script:
Once you store the script, specify it during creation of your compute instance:
:::image type="content" source="media/how-to-create-manage-compute-instance/setup-script.png" alt-text="Provision a compute instance with a setup script in the studio.":::
+If workspace storage is attached to a virtual network, you might not be able to access the setup script file unless you access the studio from within the virtual network.
+
### Use script in a Resource Manager template

In a Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance), add `setupScripts` to invoke the setup script when the compute instance is provisioned. For example:
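The following is a hedged sketch of such a template fragment. The property names follow the quickstart template linked in this section, but the exact schema can differ by API version, and the parameter names (`workspaceName`, `computeName`, `vmSize`, `creationScript`) are illustrative:

```json
{
  "type": "Microsoft.MachineLearningServices/workspaces/computes",
  "apiVersion": "2021-03-01-preview",
  "name": "[concat(parameters('workspaceName'), '/', parameters('computeName'))]",
  "properties": {
    "computeType": "ComputeInstance",
    "properties": {
      "vmSize": "[parameters('vmSize')]",
      "setupScripts": {
        "scripts": {
          "creationScript": {
            "scriptSource": "inline",
            "scriptData": "[base64(parameters('creationScript'))]"
          }
        }
      }
    }
  }
}
```

Note that `scriptData` carries the script content as a Base64-encoded string in this sketch; consult the linked quickstart template for the authoritative shape.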
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
To use either a [managed Azure Machine Learning __compute target__](concept-comp
> * When a compute instance is deployed in a private link workspace, it can only be accessed from within the virtual network. If you are using a custom DNS or hosts file, add an entry for `<instance-name>.<region>.instances.azureml.ms` with the private IP address of the workspace private endpoint. For more information, see the [custom DNS](./how-to-custom-dns.md) article.
> * The subnet used to deploy the compute cluster/instance should not be delegated to any other service, like ACI.
> * Virtual network service endpoint policies do not work for compute cluster/instance system storage accounts.
+> * If storage and compute instance are in different regions, you might see intermittent timeouts.
> [!TIP]
purview Catalog Private Link Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-faqs.md
+
+ Title: Azure Purview private endpoints frequently asked questions (FAQ)
+description: This article answers frequently asked questions about Azure Purview Private Endpoints
+++++ Last updated : 05/11/2021
+# Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access.
++
+# Frequently asked questions (FAQ) about Azure Purview Private Endpoints
+
+This article answers common questions that customers and field teams often ask about Azure Purview network configurations that use [Azure Private Link](../private-link/private-link-overview.md). It is intended to clarify questions about Azure Purview firewall settings, private endpoints, DNS configuration, and related settings.
+
+To set up Azure Purview by using Private Link, see [Use private endpoints for your Purview account](./catalog-private-link.md).
+
+## Common questions
+
+### What is the purpose of deploying the Azure Purview 'Account' Private Endpoint?
+
+The Azure Purview _Account_ Private Endpoint adds a layer of security by ensuring that only client calls originating from within the VNet are allowed to access the account. This private endpoint is also a prerequisite for the Portal Private Endpoint.
+
+### What is the purpose of deploying the Azure Purview 'Portal' Private Endpoint?
+The Azure Purview _Portal_ Private Endpoint provides private connectivity to Azure Purview Studio.
+
+### What is the purpose of deploying Azure Purview 'Ingestion' Private Endpoints?
+
+Azure Purview can scan data sources in Azure or in your on-premises environment by using ingestion private endpoints. Three additional Private Endpoint resources are deployed and linked to Azure Purview managed resources when ingestion private endpoints are created:
+- _Blob_ linked to the Azure Purview managed storage account
+- _Queue_ linked to the Azure Purview managed storage account
+- _Namespace_ linked to the Azure Purview managed Event Hub namespace
+
+### Can I scan data through Public Endpoint if Private Endpoint is enabled on my Azure Purview account?
+
+Yes. Data sources that are not connected through a private endpoint can be scanned by using a public endpoint while Azure Purview is configured to use a private endpoint.
+
+### Can I scan data through Service Endpoint if Private Endpoint is enabled?
+
+Yes. Data sources that are not connected through a private endpoint can be scanned by using a service endpoint while Azure Purview is configured to use a private endpoint.
+Make sure you enable **Allow trusted Microsoft services to access the resources** in the service endpoint configuration of the data source resource in Azure. For example, if you are going to scan an Azure Blob Storage account whose **Firewalls and virtual networks** setting is set to _selected networks_, make sure _Allow trusted Microsoft services to access this storage account_ is checked as an exception.
+
+### Can I access Azure Purview Studio from public network if Public network access is set to Deny in Azure Purview Account Networking?
+
+No. Connecting to Azure Purview from a public endpoint when _public network access_ is set to _Deny_ results in an error message like the following:
+
+_Not authorized to access this Purview account_
+_This Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Purview account's private endpoint._
+
+In this case, to launch Azure Purview Studio, you need to use a machine that is either deployed in the same VNet as the Azure Purview portal private endpoint, or a VM that is connected to your corporate network where hybrid connectivity is allowed.
+
+### Is it possible to restrict access to Azure Purview managed Storage Account and Event Hub namespace (for Private Endpoint Ingestion only), but keep Portal access enabled for end-users across the Web?
+
+No. When you set _Public network access_ to _Deny_, access to Azure Purview managed Storage Account and Event Hub namespace is automatically set for Private Endpoint Ingestion only.
+When you set _Public network access_ to _Allow_, access to Azure Purview managed Storage Account and Event Hub namespace is automatically set for _All Networks_.
+You cannot manually modify the private endpoint ingestion setting for the managed Storage Account or Event Hub namespace.
+
+### If Public network access is set to Allow, does it mean the managed Storage Account and Event Hub can be publicly accessible?
+
+No. As protected resources, access to the Azure Purview managed Storage Account and Event Hub namespace is restricted to Azure Purview only. These resources are deployed with a deny assignment to all principals, which prevents any applications, users, or groups from gaining access to them.
+
+To read more about Azure deny assignments, see [Understand Azure deny assignments](../role-based-access-control/deny-assignments.md).
+
+### What are the supported authentication types when using a Private Endpoint?
+
+Azure Key Vault or Service Principal.
+
+### Which private DNS zones are required for Azure Purview Private Endpoints?
+
+**For Azure Purview resource:**
+- `privatelink.purview.azure.com`
+
+**For Azure Purview managed resources:**
+- `privatelink.blob.core.windows.net`
+- `privatelink.queue.core.windows.net`
+- `privatelink.servicebus.windows.net`
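For example, one of these zones could be declared in a Resource Manager template roughly as follows (a minimal sketch; the `apiVersion` shown is an assumption — check the current `Microsoft.Network/privateDnsZones` reference). Note that private DNS zones always use the `global` location:

```json
{
  "type": "Microsoft.Network/privateDnsZones",
  "apiVersion": "2020-06-01",
  "name": "privatelink.purview.azure.com",
  "location": "global"
}
```

The managed-resource zones (`privatelink.blob.core.windows.net`, and so on) would be declared the same way, one resource per zone.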
+
+### Do I have to use a dedicated Virtual Network and dedicated subnet when deploying Azure Purview Private Endpoints?
+
+No. However, _PrivateEndpointNetworkPolicies_ must be disabled in the destination subnet before you deploy the private endpoints.
+If you plan to scan data sources across premises, consider deploying Azure Purview into a virtual network that has network connectivity to the data source VNets through VNet peering, and access to your on-premises network.
+
+Read more about [Disable network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md).
+
+### Can I deploy Azure Purview Private Endpoints and use existing Private DNS Zones in my subscription to register the A records?
+
+Yes. Your private endpoint DNS zones can be centralized in a hub or data management subscription for all internal DNS zones required for Azure Purview and all data source records. This is the recommended method, because it allows Azure Purview to resolve data sources by using their private endpoint internal IP addresses.
+
+Additionally, you must set up a [virtual network link](../dns/private-dns-virtual-network-links.md) from the existing private DNS zone to the VNet.
+
+### Can I use Azure integration runtime to scan data sources through Private Endpoint?
+
+No. You have to deploy and register a self-hosted integration runtime to scan data by using private connectivity. Azure Key Vault or Service Principal must be used as the authentication method for data sources.
+
+### What are the outbound ports and firewall requirements for virtual machines with self-hosted integration runtime for Azure Purview when using private endpoint?
+
+The VMs on which the self-hosted integration runtime is deployed must have outbound access to Azure endpoints and the Azure Purview private IP address over port 443.
+
+### Do I need to enable outbound internet access from the virtual machine running self-hosted integration runtime if Private Endpoint is enabled?
+
+No. However, the virtual machine running the self-hosted integration runtime must be able to connect to your Azure Purview account through its internal IP address over port 443.
+Use common name resolution and connectivity test tools such as nslookup.exe and Test-NetConnection for troubleshooting.
+
+### Why do I receive the following error message when I try to launch Azure Purview Studio from my machine?
+
+_This Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Purview account's private endpoint._
+
+It is likely that your Azure Purview account is deployed by using Azure Private Link and public access is disabled on the account. You therefore have to browse Azure Purview Studio from a virtual machine that has internal network connectivity to Azure Purview.
+
+If you are connecting from a VM behind a hybrid network or using a jump machine connected to your VNet, use common name resolution and connectivity troubleshooting tools such as nslookup.exe and Test-NetConnection.
+
+1. Validate if you can resolve the following addresses through your Azure Purview account's private IP addresses:
+
+ - `Web.Purview.Azure.com`
+ - `<YourPurviewAccountName>.Purview.Azure.com`
+
+2. Verify network connectivity to your Azure Purview Account using the following PowerShell command:
+
+ ```powershell
+ Test-NetConnection -ComputerName <YourPurviewAccountName>.Purview.Azure.com -Port 443
+ ```
+
+3. Verify your cross-premises DNS configuration if you use your own DNS resolution infrastructure.
+
+ For more information about DNS settings for Private Endpoints, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
+
+## Next steps
+
+To set up Azure Purview by using Private Link, see [Use private endpoints for your Purview account](./catalog-private-link.md).
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link.md
Previously updated : 03/02/2021 Last updated : 05/10/2021 # Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access.
Last updated 03/02/2021
You can use private endpoints for your Purview accounts to allow clients and users on a virtual network (VNet) to securely access the catalog over a Private Link. The private endpoint uses an IP address from the VNet address space for your Purview account. Network traffic between the clients on the VNet and the Purview account traverses over the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
+Review [Azure Purview Private Link Frequently asked questions (FAQ)](./catalog-private-link-faqs.md).
+ ## Create Purview account with a Private Endpoint

1. Navigate to the [Azure portal](https://portal.azure.com) and then to your Purview account.
-1. Fill basic information, and set connectivity method to Private endpoint in **Networking** tab. Set up your ingestion private endpoints by providing details of **Subscription, Vnet and Subnet** that you want to pair with your private endpoint.
+1. Fill basic information, and set connectivity method to Private endpoint in **Networking** tab. Set up your ingestion private endpoints by providing details of **Subscription, VNet and Subnet** that you want to pair with your private endpoint.
- > [!NOTE]
- > Create an ingestion private endpoint only if you intend to enable network isolation for end-to-end scan scenarios, for both your Azure and on-premises sources. We currently do not support ingestion private endpoints working with your AWS sources.
+ > [!NOTE]
+ > Create an ingestion private endpoint only if you intend to enable network isolation for end-to-end scan scenarios, for both your Azure and on-premises sources. We currently do not support ingestion private endpoints working with your AWS sources.
- :::image type="content" source="media/catalog-private-link/create-pe-azure-portal.png" alt-text="Create a Private Endpoint in the Azure portal":::
+ :::image type="content" source="media/catalog-private-link/create-pe-azure-portal.png" alt-text="Create a Private Endpoint in the Azure portal":::
1. You can also optionally choose to set up a **Private DNS zone** for each ingestion private endpoint.
-1. Click Add to add a private endpoint for your Purview account.
+1. Select **Add** to add a private endpoint for your Purview account.
1. In the Create private endpoint page, set Purview sub-resource to **account**, choose your virtual network and subnet, and select the Private DNS Zone where the DNS will be registered (you can also utilize your own DNS servers or create DNS records using host files on your virtual machines).
You can use private endpoints for your Purview accounts to allow clients and use
1. Navigate to the Purview account you just created, select the Private endpoint connections under the Settings section.
-1. Click +Private endpoint to create a new private endpoint.
+1. Select **+Private endpoint** to create a new private endpoint.
:::image type="content" source="media/catalog-private-link/pe-portal.png" alt-text="Create portal private endpoint":::
You can use private endpoints for your Purview accounts to allow clients and use
1. Select the virtual network and Private DNS Zone in the Configuration tab. Navigate to the summary page, and click **Create** to create the portal private endpoint.
+## Private DNS zone requirements for Private Endpoints
+When you create a private endpoint, the DNS CNAME resource record for Purview is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints.
+
+When you resolve the Purview endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Azure Purview. When resolved from the VNet hosting the private endpoint, the Purview endpoint URL resolves to the private endpoint's IP address.
+
+As an example, if the Purview account name is 'PurviewA', the DNS resource records, when resolved from outside the VNet hosting the private endpoint, will be:
+
+| Name | Type | Value |
+| ---- | ---- | ----- |
+| `PurviewA.purview.azure.com` | CNAME | `PurviewA.privatelink.purview.azure.com` |
+| `PurviewA.privatelink.purview.azure.com` | CNAME | \<Purview public endpoint\> |
+| \<Purview public endpoint\> | A | \<Purview public IP address\> |
+| `Web.purview.azure.com` | CNAME | \<Purview public endpoint\> |
+
+The DNS resource records for PurviewA, when resolved in the VNet hosting the private endpoint, will be:
+
+| Name | Type | Value |
+| ---- | ---- | ----- |
+| `PurviewA.purview.azure.com` | CNAME | `PurviewA.privatelink.purview.azure.com` |
+| `PurviewA.privatelink.purview.azure.com` | A | \<private endpoint IP address\> |
+| `Web.purview.azure.com` | CNAME | \<private endpoint IP address\> |
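To make the two resolution paths concrete, here is a toy lookup in Python that follows the CNAME chains from the tables above (the IP addresses and the public endpoint host name are hypothetical placeholders; this is an illustration only, not a real DNS client):

```python
# Toy DNS tables mirroring the two cases above; all values are hypothetical.
OUTSIDE_VNET = {
    "purviewa.purview.azure.com": ("CNAME", "purviewa.privatelink.purview.azure.com"),
    "purviewa.privatelink.purview.azure.com": ("CNAME", "purview-public.example.net"),
    "purview-public.example.net": ("A", "20.0.0.10"),  # hypothetical public IP
}
INSIDE_VNET = {
    "purviewa.purview.azure.com": ("CNAME", "purviewa.privatelink.purview.azure.com"),
    "purviewa.privatelink.purview.azure.com": ("A", "10.1.0.5"),  # hypothetical private endpoint IP
}

def resolve(zone: dict, name: str) -> str:
    """Follow CNAME records in `zone` until an A record yields an IP address."""
    record_type, value = zone[name.lower()]
    while record_type == "CNAME":
        record_type, value = zone[value]
    return value

print(resolve(OUTSIDE_VNET, "PurviewA.purview.azure.com"))  # -> 20.0.0.10
print(resolve(INSIDE_VNET, "PurviewA.purview.azure.com"))   # -> 10.1.0.5
```

The same client name resolves to a public IP from outside the VNet and to the private endpoint IP from inside it, which is exactly the behavior the private DNS zone provides.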
+
+_Example for Azure Purview DNS name resolution from outside the VNet or when Azure Private Endpoint is not configured:_
+
+ :::image type="content" source="media/catalog-private-link/purview-name-resolution-external.png" alt-text="Purview Name Resolution from outside CorpNet":::
+
+_Example for Azure Purview DNS name resolution from inside VNet:_
+
+ :::image type="content" source="media/catalog-private-link/purview-name-resolution-private-link.png" alt-text="Purview Name Resolution from inside CorpNet":::
+
+If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the Purview endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for 'PurviewA.privatelink.purview.azure.com' with the private endpoint IP address.
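As an alternative to DNS delegation — for example, on a single test VM, as mentioned earlier for host files — the records can be pinned with hosts-file entries like the following (the IP address is a hypothetical placeholder; substitute your private endpoint's actual IP):

```
# hosts file entries (hypothetical private endpoint IP)
10.1.0.5  PurviewA.privatelink.purview.azure.com
10.1.0.5  PurviewA.purview.azure.com
10.1.0.5  web.purview.azure.com
```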
+
+For more information on configuring your own DNS server to support private endpoints, refer to the following articles:
+- [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
+- [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
+ ## Enabling access to Azure Active Directory

> [!NOTE]
The instructions below are for accessing Purview securely from an Azure VM. Simi
:::image type="content" source="media/catalog-private-link/aadcdn-rule.png" alt-text="AAD CDN rule":::
-6. Once the new rule is created, navigate back to the VM and try logging in using your AAD credentials again. If the login succeeds, then Purview portal is ready to use. But in some cases, AAD will redirect to other domains to login based on customer's account type. For e.g. for a live.com account, AAD will redirect to live.com to login, then those requests would be blocked again. For Microsoft employee accounts, AAD will access msft.sts.microsoft.com for login information. Check the networking requests in browser networking tab to see which domain's requests are getting blocked, redo the previous step to get its IP and add outbound port rules in network security group to allow requests for that IP (if possible, add the url and IP to VM's host file to fix the DNS resolution). If you know the exact login domain's IP ranges, you can also directly add them into networking rules.
+6. Once the new rule is created, navigate back to the VM and try logging in using your Azure Active Directory credentials again. If the login succeeds, the Purview portal is ready to use. But in some cases, AAD will redirect to other domains to log in based on the customer's account type. For example, for a live.com account, AAD will redirect to live.com to log in, and those requests would be blocked again. For Microsoft employee accounts, AAD accesses msft.sts.microsoft.com for login information. Check the networking requests in the browser's networking tab to see which domain's requests are getting blocked, redo the previous step to get its IP, and add outbound port rules in the network security group to allow requests for that IP (if possible, add the URL and IP to the VM's hosts file to fix the DNS resolution). If you know the exact login domain's IP ranges, you can also directly add them into the networking rules.
-7. Now login to AAD should be successful. Purview Portal will load successfully but listing all Purview accounts won't work since it can only access a specific Purview account. Enter *web.purview.azure.com/resource/{PurviewAccountName}* to directly visit the Purview account that you successfully set up a private endpoint for.
+7. Now, sign-in to Azure Active Directory should succeed. The Purview portal will load successfully, but listing all Purview accounts won't work because it can only access a specific Purview account. Enter `web.purview.azure.com/resource/{PurviewAccountName}` to directly visit the Purview account that you successfully set up a private endpoint for.
## Ingestion private endpoints and scanning sources in private networks, VNets, and behind private endpoints
-If you want to ensure network isolation for your metadata flowing from the source which is being scanned to the Purview DataMap, then you must follow these steps:
+If you want to ensure network isolation for your metadata flowing from the source which is being scanned to the Purview DataMap, follow these steps:
+ 1. Enable an **ingestion private endpoint** by following steps in [this](#creating-an-ingestion-private-endpoint) section
+ 1. Scan the source using a **self-hosted IR**.
- 1. All on-premises source types like SQL server, Oracle, SAP and others are currently supported only via self-hosted IR based scans. The self-hosted IR must run within your private network and then be peered with your Vnet in Azure. Your Azure vnet must then be enabled on your ingestion private endpoint by following steps [below](#creating-an-ingestion-private-endpoint)
- 1. For all **Azure** source types like Azure blob storage, Azure SQL Database and others, you must explicitly choose running the scan using self-hosted IR to ensure network isolation. Follow steps [here](manage-integration-runtimes.md) to set up a self-hosted IR. Then set up your scan on the Azure source by choosing that self-hosted IR in the **connect via integration runtime** dropdown to ensure network isolation.
+ 1. All on-premises source types like SQL Server, Oracle, SAP, and others are currently supported only via self-hosted IR based scans. The self-hosted IR must run within your private network and then be peered with your VNet in Azure. Your Azure VNet must then be enabled on your ingestion private endpoint by following the steps [below](#creating-an-ingestion-private-endpoint).
+
+ 2. For all **Azure** source types like Azure blob storage, Azure SQL Database and others, you must explicitly choose running the scan using self-hosted IR to ensure network isolation. Follow the steps in [Create and manage a self-hosted integration runtime](manage-integration-runtimes.md) to set up a self-hosted IR. Then set up your scan on the Azure source by choosing that self-hosted IR in the **connect via integration runtime** dropdown to ensure network isolation.
 :::image type="content" source="media/catalog-private-link/shir-for-azure.png" alt-text="Running Azure scan using self-hosted IR":::

> [!NOTE]
+> When you use Private Endpoint for ingestion, you can use Azure Integration Runtime for scanning only for the following data sources:
+> - Azure Blob Storage
+> - Azure Data Lake Gen 2
+>
+> For other data sources, a self-hosted integration runtime is required.
> We currently do not support the MSI credential method when you scan your Azure sources using a self-hosted IR. You must use one of the other supported credential methods for that Azure source.

## Enable private endpoint on existing Purview accounts
-There are 2 ways you can add Purview private endpoints after creating your Purview account:
+There are two ways you can add Purview private endpoints after creating your Purview account:
- Using the Azure portal (Purview account)
- Using the Private link center
There are 2 ways you can add Purview private endpoints after creating your Purvi
:::image type="content" source="media/catalog-private-link/pe-portal.png" alt-text="Create account private endpoint":::
-1. Click +Private endpoint to create a new private endpoint.
+1. Select **+Private endpoint** to create a new private endpoint.
1. Fill in basic information.
-1. In Resource tab, select Resource type to be **Microsoft.Purview/accounts**.
+1. In Resource tab, select the **Resource type** to be **Microsoft.Purview/accounts**.
-1. Select the Resource to be the Purview account and select target sub-resource to be **account**.
+1. Select the **Resource** to be the Purview account and select target sub-resource to be **account**.
1. Select the **virtual network** and **Private DNS Zone** in the Configuration tab. Navigate to the summary page, and click **Create** to create the portal private endpoint.
There are 2 ways you can add Purview private endpoints after creating your Purvi
#### Creating an ingestion private endpoint

1. Navigate to the Purview account from the Azure portal, select the Private endpoint connections under the **networking** section of **Settings**.
-1. Navigate to the **Ingestion private endpoint connections** tab and Click **+New** to create a new ingestion private endpoint.
-1. Fill in basic information and Vnet details.
+3. Navigate to the **Ingestion private endpoint connections** tab and select **+New** to create a new ingestion private endpoint.
+
+4. Fill in basic information and VNet details.
- :::image type="content" source="media/catalog-private-link/ingestion-pe-fill-details.png" alt-text="Fill private endpoint details":::
+ :::image type="content" source="media/catalog-private-link/ingestion-pe-fill-details.png" alt-text="Fill private endpoint details":::
-1. Click **Create** to finish set up.
+5. Select **Create** to finish setup.
> [!NOTE]
> Ingestion private endpoints can be created only via the Purview Azure portal experience described above. They cannot be created from the Private Link center.
There are 2 ways you can add Purview private endpoints after creating your Purvi
2. In the search bar at the top of the page, search for 'private link' and navigate to the private link blade by clicking on the first option.
-3. Click on '+ Add' and fill in the basic details.
+3. Select **+ Add** and fill in the basic details.
:::image type="content" source="media/catalog-private-link/private-link-center.png" alt-text="Create PE from private link center":::
There are 2 ways you can add Purview private endpoints after creating your Purvi
To cut off access to the Purview account completely from the public internet, follow the steps below. This setting will apply to both private endpoint and ingestion private endpoint connections.

1. Navigate to the Purview account from the Azure portal, select the Private endpoint connections under the **networking** section of **Settings**.
-1. Navigate to the firewall tab and ensure that the toggle is set to **Deny**.
- :::image type="content" source="media/catalog-private-link/private-endpoint-firewall.png" alt-text="Private endpoint firewall settings":::
+3. Navigate to the firewall tab and ensure that the toggle is set to **Deny**.
+
+ :::image type="content" source="media/catalog-private-link/private-endpoint-firewall.png" alt-text="Private endpoint firewall settings":::
## Next steps
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-synapse-workspace.md
Azure Synapse Workspace scans support capturing metadata and schema for dedicate
- You need to be an Azure Purview Data Source Admin
- Setting up authentication as described in the sections below
-### Setting up authentication for enumerating dedicated SQL database resources under a Synapse Workspace
+## Steps to register and scan a Synapse workspace
+
+> [!NOTE]
+> These steps **must** be followed in the exact order specified, and the exact permissions specified in each step must be applied where applicable, to successfully scan your workspace.
+
+### **STEP 1**: Register your source (Only a contributor on the Synapse workspace who is also a data source admin in Purview can carry out this step)
+
+To register a new Azure Synapse Source in your data catalog, do the following:
+
+1. Navigate to your Purview account
+1. Select **Sources** on the left navigation
+1. Select **Register**
+1. On **Register sources**, select **Azure Synapse Analytics (multiple)**
+1. Select **Continue**
+
+ :::image type="content" source="media/register-scan-synapse-workspace/register-synapse-source.png" alt-text="Set up Azure Synapse source":::
+
+On the **Register sources (Azure Synapse Analytics)** screen, do the following:
+
+1. Enter a **Name** that the data source will be listed with in the Catalog.
+1. Optionally choose a **subscription** to filter down to.
+1. **Select a Synapse workspace name** from the dropdown. The SQL endpoints get automatically filled based on your workspace selection.
+1. Select a **collection** or create a new one (Optional)
+1. **Finish** to register the data source
+
+ :::image type="content" source="media/register-scan-synapse-workspace/register-synapse-source-details.png" alt-text="Fill details for Azure Synapse source":::
++
+### **STEP 2**: Applying permissions to enumerate the contents of the workspace
+
+#### Setting up authentication for enumerating dedicated SQL database resources under a Synapse Workspace
1. Navigate to the **Resource group** or **Subscription** that the Synapse workspace is in, in the Azure portal.
1. Select **Access Control (IAM)** from the left navigation menu
-1. You must be owner or user access administrator to add a role on the Resource group or Subscription. Select *+Add* button.
+1. You must be an owner or user access administrator to add a role on the **Resource group** or **Subscription**. Select the *+Add* button.
1. Set the **Reader** role and enter your Azure Purview account name (which represents its MSI) in the **Select** input box. Select *Save* to finish the role assignment.
-1. Follow steps 2 to 4 above to also add **Storage blob data reader** Role for the Azure Purview MSI on the resource group or subscription that the Synapse workspace is in.
-### Setting up authentication for enumerating serverless SQL database resources under a Synapse Workspace
+#### Setting up authentication for enumerating serverless SQL database resources under a Synapse Workspace
> [!NOTE]
> You must be a **Synapse administrator** on the workspace to run these commands. Learn more about Synapse permissions [here](../synapse-analytics/security/how-to-set-up-access-control.md).
Azure Synapse Workspace scans support capturing metadata and schema for dedicate
ALTER SERVER ROLE sysadmin ADD MEMBER [PurviewAccountName]; ```
-### Setting up authentication to scan resources under a Synapse workspace
+1. Navigate to the **Resource group** or **Subscription** that the Synapse workspace is in, in the Azure portal.
+1. Select **Access Control (IAM)** from the left navigation menu
+1. You must be an **owner** or **user access administrator** to add a role on the Resource group or Subscription. Select the *+Add* button.
+1. Set the **Storage blob data reader** role and enter your Azure Purview account name (which represents its MSI) in the **Select** input box. Select *Save* to finish the role assignment.
+
+### **STEP 3**: Applying permissions to scan the contents of the workspace
-There are three ways to set up authentication for an Azure Synapse source:
+There are two ways to set up authentication for an Azure Synapse source:
- Managed Identity - Service Principal
-
+
+> [!NOTE]
+> You must set up authentication on each dedicated SQL database within your Synapse workspace that you intend to register and scan. The permissions mentioned below for the serverless SQL database apply to all serverless SQL databases within your workspace; that is, you have to run them only once.
+ #### Using Managed identity for Dedicated SQL databases

1. Navigate to your **Synapse workspace**
-1. Navigate to the **Data** section and to one of your serverless SQL databases
+1. Navigate to the **Data** section and to one of your dedicated SQL databases
1. Click on the ellipses icon and start a New SQL script
1. Add the Azure Purview account MSI (represented by the account name) as **db_datareader** on the dedicated SQL database by running the command below in your SQL script:
There are three ways to set up authentication for an Azure Synapse source:
> You must first set up a new **credential** of type Service Principal by following instructions [here](manage-credentials.md).

1. Navigate to your **Synapse workspace**
-1. Navigate to the **Data** section and to one of your serverless SQL databases
+1. Navigate to the **Data** section and to one of your dedicated SQL databases
1. Click on the ellipses icon and start a New SQL script
1. Add the **Service Principal ID** as **db_datareader** on the dedicated SQL database by running the command below in your SQL script:
There are three ways to set up authentication for an Azure Synapse source:
ALTER SERVER ROLE sysadmin ADD MEMBER [ServicePrincipalID]; ```
-> [!NOTE]
-> You must set up authentication on each Dedicated SQL database within your Synapse workspace, that you intend to register and scan. The permissions mentioned above for Serverless SQL database apply to all of them within your workspace i.e. you will have to run it only once.
-
-## Register an Azure Synapse Source
-
-To register a new Azure Synapse Source in your data catalog, do the following:
-
-1. Navigate to your Purview account
-1. Select **Sources** on the left navigation
-1. Select **Register**
-1. On **Register sources**, select **Azure Synapse Analytics (multiple)**
-1. Select **Continue**
-
- :::image type="content" source="media/register-scan-synapse-workspace/register-synapse-source.png" alt-text="Set up Azure Synapse source":::
-
-On the **Register sources (Azure Synapse Analytics)** screen, do the following:
-
-1. Enter a **Name** that the data source will be listed with in the Catalog.
-1. Optionally choose a **subscription** to filter down to.
-1. **Select a Synapse workspace name** from the dropdown. The SQL endpoints get automatically filled based on your workspace selection.
-1. Select a **collection** or create a new one (Optional)
-1. **Finish** to register the data source
-
- :::image type="content" source="media/register-scan-synapse-workspace/register-synapse-source-details.png" alt-text="Fill details for Azure Synapse source":::
-
-## Creating and running a scan
+### **STEP 4**: Setting up a scan on the workspace
To create and run a new scan, do the following:
1. Review your scan and select **Save** to complete setup.
-## Viewing your scans and scan runs
+#### Viewing your scans and scan runs
1. View source details by clicking on **view details** on the tile under the sources section.
To create and run a new scan, do the following:
1. View a summary of recent failed scan runs at the bottom of the source details page. You can also click through to view more granular details pertaining to these runs.
-## Manage your scans - edit, delete, or cancel
+#### Manage your scans - edit, delete, or cancel
To manage or delete a scan, do the following:

- Navigate to the management center. Select **Data sources** under the **Sources and scanning** section, then select the desired data source.
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-overview.md
Previously updated : 04/19/2021 Last updated : 05/14/2021 # Storage account overview
-An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.
+An Azure storage account contains all of your Azure Storage data objects: blobs, file shares, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTP or HTTPS. Data in your storage account is durable and highly available, secure, and massively scalable.
To learn how to create an Azure storage account, see [Create a storage account](storage-account-create.md). ## Types of storage accounts
-Azure Storage offers several types of storage accounts. Each type supports different features and has its own pricing model. Consider these differences before you create a storage account to determine the type of account that is best for your applications.
+Azure Storage offers several types of storage accounts. Each type supports different features and has its own pricing model. Consider these differences before you create a storage account to determine the type of account that's best for your applications.
-The following table describes the types of storage accounts recommended by Microsoft for most scenarios:
+The following table describes the types of storage accounts recommended by Microsoft for most scenarios. All of these use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) deployment model.
-| Type of storage account | Supported services | Redundancy options | Deployment model | Usage |
-|--|--|--|--|--|
-| Standard general-purpose v2 | Blob, File, Queue, Table, and Data Lake Storage<sup>1</sup> | LRS/GRS/RA-GRS<br /><br />ZRS/GZRS/RA-GZRS<sup>2</sup> | Resource Manager<sup>3</sup> | Basic storage account type for blobs, files, queues, and tables. Recommended for most scenarios using Azure Storage. |
-| Premium block blobs<sup>4</sup> | Block blobs only | LRS<br /><br />ZRS<sup>2</sup> | Resource Manager<sup>3</sup> | Storage accounts with premium performance characteristics for block blobs and append blobs. Recommended for scenarios with high transactions rates, or scenarios that use smaller objects or require consistently low storage latency.<br />[Learn more...](../blobs/storage-blob-performance-tiers.md) |
-| Premium file shares<sup>4</sup> | File shares only | LRS<br /><br />ZRS<sup>2</sup> | Resource Manager<sup>3</sup> | Files-only storage accounts with premium performance characteristics. Recommended for enterprise or high performance scale applications.<br />[Learn more...](../files/storage-files-planning.md#management-concepts) |
-| Premium page blobs<sup>4</sup> | Page blobs only | LRS | Resource Manager<sup>3</sup> | Premium storage account type for page blobs only.<br />[Learn more...](../blobs/storage-blob-pageblob-overview.md) |
-
-<sup>1</sup> Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. For more information, see [Introduction to Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md).
+| Type of storage account | Supported storage services | Redundancy options | Usage |
+|--|--|--|--|
+| Standard general-purpose v2 | Blob (including Data Lake Storage<sup>1</sup>), Queue, and Table storage, Azure Files | LRS/GRS/RA-GRS<br /><br />ZRS/GZRS/RA-GZRS<sup>2</sup> | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. Note that if you want support for NFS file shares in Azure Files, use the premium file shares account type. |
+| Premium block blobs<sup>3</sup> | Blob storage (including Data Lake Storage<sup>1</sup>) | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates, or scenarios that use smaller objects or require consistently low storage latency. [Learn more about example workloads.](../blobs/storage-blob-performance-tiers.md#premium-performance) |
+| Premium file shares<sup>3</sup> | Azure Files | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both SMB and NFS file shares. |
+| Premium page blobs<sup>3</sup> | Page blobs only | LRS | Premium storage account type for page blobs only. [Learn more about page blobs and sample use cases.](../blobs/storage-blob-pageblob-overview.md) |
-<sup>2</sup> Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS/RA-GZRS) are available only for standard general-purpose v2, premium block blob, and premium file share accounts in certain regions. For more information about Azure Storage redundancy options, see [Azure Storage redundancy](storage-redundancy.md).
+<sup>1</sup> Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. For more information, see [Introduction to Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) and [Create a storage account to use with Data Lake Storage Gen2](../blobs/create-data-lake-storage-account.md).
-<sup>3</sup> Azure Resource Manager is the recommended deployment model for Azure resources, including storage accounts. For more information, see [Resource Manager overview](../../azure-resource-manager/management/overview.md).
+<sup>2</sup> Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS/RA-GZRS) are available only for standard general-purpose v2, premium block blob, and premium file share accounts in certain regions. For more information, see [Azure Storage redundancy](storage-redundancy.md).
-<sup>4</sup> Storage accounts in a premium performance tier use solid state disks (SSDs) for low latency and high throughput.
+<sup>3</sup> Storage accounts in a premium performance tier use solid-state drives (SSDs) for low latency and high throughput.
Legacy storage accounts are also supported. For more information, see [Legacy storage account types](#legacy-storage-account-types).
Construct the URL for accessing an object in a storage account by appending the
You can also configure your storage account to use a custom domain for blobs. For more information, see [Configure a custom domain name for your Azure Storage account](../blobs/storage-custom-domain-name.md).
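As an illustration of the endpoint format (the account, container, and blob names below are hypothetical), the URL for a blob is assembled like this:

```shell
# Build the service endpoint URL for a blob, using hypothetical names.
account="mystorageaccount"      # storage account name (globally unique)
container="mycontainer"
blob="myblob.txt"

url="https://${account}.blob.core.windows.net/${container}/${blob}"
echo "$url"
```

The same pattern applies to the other services, with `file`, `queue`, or `table` in place of `blob` in the host name.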
-## Migrating a storage account
+## Migrate a storage account
-The following table summarizes and points to guidance on moving, upgrading, or migrating a storage account:
+The following table summarizes and points to guidance on how to move, upgrade, or migrate a storage account:
| Migration scenario | Details | |--|--|
The following table summarizes and points to guidance on moving, upgrading, or m
| Move a storage account to a different resource group | Azure Resource Manager provides options for moving a resource to a different resource group. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). | | Move a storage account to a different region | To move a storage account, create a copy of your storage account in another region. Then, move your data to that account by using AzCopy, or another tool of your choice. For more information, see [Move an Azure Storage account to another region](storage-account-move.md). | | Upgrade to a general-purpose v2 storage account | You can upgrade a general-purpose v1 storage account or Blob storage account to a general-purpose v2 account. Note that this action cannot be undone. For more information, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md). |
-| Migrate a classic storage account to Azure Resource Manager | The Azure Resource Manager deployment model is superior to the classic deployment model in terms of functionality, scalability, and security. For more information about migrating a classic storage account to Azure Resource Manager, see [Migration of storage accounts](../../virtual-machines/migration-classic-resource-manager-overview.md#migration-of-storage-accounts) in **Platform-supported migration of IaaS resources from classic to Azure Resource Manager**. |
+| Migrate a classic storage account to Azure Resource Manager | The Azure Resource Manager deployment model is superior to the classic deployment model in terms of functionality, scalability, and security. For more information about migrating a classic storage account to Azure Resource Manager, see the "Migration of storage accounts" section of [Platform-supported migration of IaaS resources from classic to Azure Resource Manager](../../virtual-machines/migration-classic-resource-manager-overview.md#migration-of-storage-accounts). |
-## Transferring data into a storage account
+## Transfer data into a storage account
Microsoft provides services and utilities for importing your data from on-premises storage devices or third-party cloud storage providers. Which solution you use depends on the quantity of data you're transferring. For more information, see [Azure Storage migration overview](storage-migration-overview.md).
The [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/sto
The following table describes the legacy storage account types. These account types are not recommended by Microsoft, but may be used in certain scenarios:
-| Type of legacy storage account | Supported services | Redundancy options | Deployment model | Usage |
+| Type of legacy storage account | Supported storage services | Redundancy options | Deployment model | Usage |
|--|--|--|--|--|
-| Standard general-purpose v1 | Blob, File, Queue, Table, and Data Lake Storage | LRS/GRS/RA-GRS | Resource Manager, Classic | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using for these scenarios:<br /><ul><li>Your applications require the Azure classic deployment model.</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, general-purpose v1 may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than 2014-02-14 or a client library with a version lower than 4.x, and you can't upgrade your application.</li></ul> |
-| Standard Blob storage | Blob (block blobs and append blobs only) | LRS/GRS/RA-GRS | Resource Manager | Microsoft recommends using standard general-purpose v2 accounts instead when possible. |
+| Standard general-purpose v1 | Blob, Queue, and Table storage, Azure Files | LRS/GRS/RA-GRS | Resource Manager, Classic | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md).</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, general-purpose v1 may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than 2014-02-14 or a client library with a version lower than 4.x, and you can't upgrade your application.</li></ul> |
+| Standard Blob storage | Blob storage (block blobs and append blobs only) | LRS/GRS/RA-GRS | Resource Manager | Microsoft recommends using standard general-purpose v2 accounts instead when possible. |
## Next steps
virtual-machines Oracle Database Backup Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup.md
To prepare the environment, complete these steps:
### Prepare the database
-This step assumes that you have an Oracle instance (*test*) that is running on a VM named *vmoracle19c*.
+This step assumes that you have an Oracle instance named `test` that is running on a VM named `vmoracle19c`.
1. Switch user to the *oracle* user:
This step assumes that you have an Oracle instance (*test*) that is running on a
sudo su - oracle ```
-2. Before you connect, you need to set the environment variable ORACLE_SID:
+1. Before you connect, you need to set the environment variable ORACLE_SID:
```bash export ORACLE_SID=test;
This step assumes that you have an Oracle instance (*test*) that is running on a
echo "export ORACLE_SID=test" >> ~oracle/.bashrc ```
-3. Start the Oracle listener if it's not already running:
+1. Start the Oracle listener if it's not already running:
```output $ lsnrctl start
This step assumes that you have an Oracle instance (*test*) that is running on a
The command completed successfully ```
-4. Create the Fast Recovery Area (FRA) location:
+1. Create the database Fast Recovery Area (FRA) location. The FRA is a centralized storage location for backup and recovery files:
```bash mkdir /u02/fast_recovery_area ```
-5. Connect to the database:
+1. Create an Azure Files fileshare for the Oracle archived redo log files
+
+   The Oracle archived redo log files play a crucial role in database recovery, as they store the committed transactions needed to roll forward from a database snapshot taken in the past. When in archivelog mode, the database archives the contents of the online redo log files each time they fill up and a log switch occurs. Together with a backup, they are required to achieve point-in-time recovery when the database has been lost.
+
+ Oracle provides the capability to archive redo logfiles to different locations, with industry best practice recommending that at least one of those destinations be on remote storage, so it is separate from the host storage and protected with independent snapshots. Azure Files is a great fit for those requirements.
+
+   An Azure Files fileshare is storage that can be attached to a Linux or Windows VM as a regular filesystem component, using the SMB or NFS (Preview) protocols. To set up an Azure Files fileshare on Linux using the SMB 3.0 protocol (recommended) for use as archive log storage, follow the [Use Azure Files with Linux how-to guide](../../../storage/files/storage-how-to-use-files-linux.md).
+
+ Once the Azure Files share is configured and mounted on the Linux VM, for example under a mount point directory named `/backup`, it can be added as an additional archive log file destination in the database as follows:
+
+   First, check the name of the Oracle SID:
+ ```bash
+ echo $ORACLE_SID
+ test
+ ```
+
+   Make a sub-directory named after your database SID. In this example the mount point is `/backup` and the SID returned by the previous command is `test`, so we will create a sub-directory `/backup/test` and change its ownership to the oracle user. Substitute **/backup/SID** with your mount point name and database SID. If you have multiple databases on the VM, make a sub-directory for each one and change the ownership:
+
+ ```bash
+ sudo mkdir /backup/test
+ sudo chown oracle:oinstall /backup/test
+ ```
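For VMs that host several databases, the per-SID directory creation can be sketched as a loop. `BACKUP_MOUNT` and the SID list here are assumptions for illustration; on the VM the mount point would be `/backup` and the `chown` line would be uncommented:

```shell
# Sketch: create one archive-log sub-directory per database SID under the
# Azure Files mount point. On the VM this would be /backup; here it defaults
# to a temporary directory so the sketch is safe to run anywhere.
BACKUP_MOUNT="${BACKUP_MOUNT:-$(mktemp -d)}"
SIDS="test"                      # space-separated SID list, e.g. "test prod"

for sid in $SIDS; do
    mkdir -p "${BACKUP_MOUNT}/${sid}"
    # On the real VM, also hand ownership to the oracle user:
    # sudo chown oracle:oinstall "${BACKUP_MOUNT}/${sid}"
done
ls "${BACKUP_MOUNT}"
```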
+
+1. Connect to the database:
```bash
- SQL> sqlplus / as sysdba
+ sqlplus / as sysdba
```
+   Note that if you have multiple databases installed on the VM, you will need to run steps 6-15 on each database.
-6. Start the database if it's not already running:
-
+1. Start the database if it's not already running.
+
```bash SQL> startup ```
+
+1. Set the first archive log destination of the database to the fileshare directory you created in step 5:
-7. Set database environment variables for fast recovery area:
+ ```bash
+ sqlplus / as sysdba
+ SQL> alter system set log_archive_dest_1='LOCATION=/backup/test';
+ SQL>
+ ```
+1. Set database environment variables for fast recovery area:
```bash SQL> alter system set db_recovery_file_dest_size=4096M scope=both; SQL> alter system set db_recovery_file_dest='/u02/fast_recovery_area' scope=both; ```
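To confirm that the fast recovery area settings took effect, you can query the standard `v$recovery_file_dest` dynamic view (shown as a sketch):

```sql
-- Shows the FRA location, its size limit, and current space usage.
SELECT name, space_limit, space_used
FROM   v$recovery_file_dest;
```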
-
-8. Make sure the database is in archive log mode to enable online backups.
- Check the log archive status first:
- ```bash
- SQL> SELECT log_mode FROM v$database;
+1. Define the recovery point objective (RPO) for the database.
- LOG_MODE
-
- NOARCHIVELOG
- ```
+ To achieve a consistent RPO, the frequency at which the online redo log files will be archived must be considered. Archive log generation frequency is controlled by:
+   - The size of the online redo logfiles. As an online logfile becomes full, it is switched and archived. The larger the online logfile, the longer it takes to fill up, which decreases the frequency of archive generation.
+ - The setting of the ARCHIVE_LAG_TARGET parameter controls the maximum number of seconds permitted before the current online logfile must be switched and archived.
- If it's in NOARCHIVELOG mode, run the following commands:
   To minimize the frequency of switching and archiving, along with the accompanying checkpoint operation, Oracle online redo logfiles are generally sized quite large (1024M, 4096M, 8192M, and so on). In a busy database environment, logs are still likely to switch and archive every few seconds or minutes, but in a less active database they might go hours or days before the most recent transactions are archived, which would dramatically decrease archival frequency. Setting ARCHIVE_LAG_TARGET is therefore recommended to ensure a consistent RPO is achieved. A setting of 5 minutes (300 seconds) is a prudent value for ARCHIVE_LAG_TARGET, ensuring that any database recovery operation can recover to within 5 minutes or less of the time of failure.
- ```bash
- SQL> SHUTDOWN IMMEDIATE;
- SQL> STARTUP MOUNT;
- SQL> ALTER DATABASE ARCHIVELOG;
- SQL> ALTER DATABASE OPEN;
- SQL> ALTER SYSTEM SWITCH LOGFILE;
+ To set ARCHIVE_LAG_TARGET:
+
+ ```bash
+ sqlplus / as sysdba
+ SQL> alter system set archive_lag_target=300 scope=both;
```
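To verify the lag target and the archive destinations currently in effect, a quick check from SQL*Plus can be used (a sketch against standard dynamic views):

```sql
-- Confirm the new lag target.
SHOW PARAMETER archive_lag_target

-- List the archive destinations that are currently valid.
SELECT dest_name, status, destination
FROM   v$archive_dest
WHERE  status = 'VALID';
```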
-9. Create a table to test the backup and restore operations:
+   To better understand how to deploy highly available Oracle databases in Azure with zero RPO, see [Reference Architectures for Oracle Database](./oracle-reference-architecture.md).
- ```bash
- SQL> create user scott identified by tiger quota 100M on users;
- SQL> grant create session, create table to scott;
- connect scott/tiger
- SQL> create table scott_table(col1 number, col2 varchar2(50));
- SQL> insert into scott_table VALUES(1,'Line 1');
- SQL> commit;
- SQL> quit
- ```
+1. Make sure the database is in archive log mode to enable online backups.
+
+ Check the log archive status first:
+
+ ```bash
+ SQL> SELECT log_mode FROM v$database;
-10. Configure RMAN to back up to the Fast Recovery Area located on the VM disk:
+ LOG_MODE
+
+ NOARCHIVELOG
+ ```
+
+ If it's in NOARCHIVELOG mode, run the following commands:
+
+ ```bash
+ SQL> SHUTDOWN IMMEDIATE;
+ SQL> STARTUP MOUNT;
+ SQL> ALTER DATABASE ARCHIVELOG;
+ SQL> ALTER DATABASE OPEN;
+ SQL> ALTER SYSTEM SWITCH LOGFILE;
+ ```
+
+1. Create a table to test the backup and restore operations:
+
+ ```bash
+ SQL> create user scott identified by tiger quota 100M on users;
+ SQL> grant create session, create table to scott;
+ SQL> connect scott/tiger
+ SQL> create table scott_table(col1 number, col2 varchar2(50));
+ SQL> insert into scott_table VALUES(1,'Line 1');
+ SQL> commit;
+ SQL> quit
+ ```
+
+1. Configure RMAN to back up the databases to the Fast Recovery Area located on the VM disk. Note that the snapshot controlfile configuration does not accept `%d` database name substitution, so you should explicitly include the database SID in the file name so that multiple databases can back up to the same location. Substitute `<ORACLE_SID>` with your database name:
```bash $ rman target /
- RMAN> configure snapshot controlfile name to '/u02/fast_recovery_area/snapcf_ev.f';
+ RMAN> configure snapshot controlfile name to '/u02/fast_recovery_area/snapcf_<ORACLE_SID>.f';
RMAN> configure channel 1 device type disk format '/u02/fast_recovery_area/%d/Full_%d_%U_%T_%s'; RMAN> configure channel 2 device type disk format '/u02/fast_recovery_area/%d/Full_%d_%U_%T_%s'; ```
-11. Confirm the configuration change details:
+1. Confirm the configuration change details:
```bash RMAN> show all; ```
-12. Now run the backup. The following command will take a full database backup, including archive logfiles, as a backupset in compressed format:
+1. Now run the backup. The following command will take a full database backup, including archive logfiles, as a backupset in compressed format:
```bash RMAN> backup as compressed backupset database plus archivelog;
The Azure Backup service provides simple, secure, and cost-effective solutions t
Azure Backup service provides a [framework](../../../backup/backup-azure-linux-app-consistent.md) to achieve application consistency during backups of Windows and Linux VMs for various applications like Oracle, MySQL, Mongo DB and PostGreSQL. This involves invoking a pre-script (to quiesce the applications) before taking a snapshot of disks and calling post-script (commands to unfreeze the applications) after the snapshot is completed, to return the applications to the normal mode. While sample pre-scripts and post-scripts are provided on GitHub, the creation and maintenance of these scripts is your responsibility.
-Now Azure Backup is providing an enhanced pre-scripts and post-script framework (**which is currently in preview**), where the Azure Backup service will provide packaged pre-scripts and post-scripts for selected applications. Azure Backup users just need to name the application and then Azure VM backup will automatically invoke the relevant pre-post scripts. The packaged pre-scripts and post-scripts will be maintained by the Azure Backup team and so users can be assured of the support, ownership, and validity of these scripts. Currently, the supported applications for the enhanced framework are *Oracle* and *MySQL*.
+Azure Backup now provides an enhanced pre-script and post-script framework (**currently in preview**), in which the Azure Backup service supplies packaged pre-scripts and post-scripts for selected applications. Azure Backup users just need to name the application, and Azure VM backup will automatically invoke the relevant pre- and post-scripts. The packaged pre-scripts and post-scripts are maintained by the Azure Backup team, so users can be assured of the support, ownership, and validity of these scripts. Currently, the supported applications for the enhanced framework are *Oracle* and *MySQL*.
In this section, you will use Azure Backup enhanced framework to take application-consistent snapshots of your running VM and Oracle database. The database will be placed into backup mode allowing a transactionally consistent online backup to occur while Azure Backup takes a snapshot of the VM disks. The snapshot will be a full copy of the storage and not an incremental or Copy on Write snapshot, so it is an effective medium to restore your database from. The advantage of using Azure Backup application-consistent snapshots is that they are extremely fast to take no matter how large your database is, and a snapshot can be used for restore operations as soon as it is taken, without having to wait for it to be transferred to the Recovery Services vault.
To use Azure Backup to back up the database, complete these steps:
### Prepare the environment for an application-consistent backup
-1. Switch to the *root* user:
+> [!IMPORTANT]
+> The Oracle database employs job role separation to provide separation of duties using least privilege. This is achieved by associating separate operating system groups with separate database administrative roles. Operating system users can then have different database privileges granted to them depending on their membership of operating system groups.
+>
+> The `SYSBACKUP` database role (generic name OSBACKUPDBA) is used to provide limited privileges to perform backup operations in the database, and is required by Azure Backup.
+>
+> During Oracle installation, the recommended operating system group name to associate with the SYSBACKUP role is `backupdba`, but any name can be used, so you first need to determine the name of the operating system group representing the Oracle SYSBACKUP role.
+1. Switch to the *oracle* user:
```bash
- sudo su -
+ sudo su - oracle
```
-1. Create new backup user:
+1. Set the oracle environment:
+ ```bash
+ export ORACLE_SID=test
+ export ORAENV_ASK=NO
+ . oraenv
+ ```
+1. Determine the name of the operating system group representing the Oracle SYSBACKUP role:
```bash
- useradd -G backupdba azbackup
+ grep "define SS_BKP" $ORACLE_HOME/rdbms/lib/config.c
+ ```
+ The output will look similar to this:
+ ```output
+ #define SS_BKP_GRP "backupdba"
```
-
-2. Set up the backup user environment:
+ In the output, the value enclosed within double-quotes, in this example `backupdba`, is the name of the Linux operating system group to which the Oracle SYSBACKUP role is externally authenticated. Note down this value.
+
+1. Verify if the operating system group exists by running the following command. Please substitute \<group name\> with the value returned by the previous command (without the quotes):
```bash
- echo "export ORACLE_SID=test" >> ~azbackup/.bashrc
- echo export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1 >> ~azbackup/.bashrc
- echo export PATH='$ORACLE_HOME'/bin:'$PATH' >> ~azbackup/.bashrc
+ grep <group name> /etc/group
```
-
-3. Set up external authentication for the new backup user. The backup user needs to be able to access the database using external authentication, so as not to be challenged by a password.
+   The output will look similar to this; in our example, `backupdba` is used:
+ ```output
+ backupdba:x:54324:oracle
+ ```
+
+ > [!IMPORTANT]
+   > If the output does not match the Oracle operating system group value retrieved in step 3, you will need to create the operating system group representing the Oracle SYSBACKUP role. Substitute `<group name>` with the group name retrieved in step 3:
+ > ```bash
+ > sudo groupadd <group name>
+ > ```
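Steps 3 and 4 can be combined into a small script. This is a sketch: `CONFIG_C` defaults to a stand-in file containing the expected `#define` so it can run outside a real Oracle home; on the VM you would point it at `$ORACLE_HOME/rdbms/lib/config.c` and uncomment the `groupadd` line:

```shell
# Sketch: extract the OS group backing the Oracle SYSBACKUP role and make
# sure it exists. CONFIG_C is a stand-in file here, not a real Oracle home.
CONFIG_C="${CONFIG_C:-$(mktemp)}"
grep -q 'define SS_BKP' "$CONFIG_C" || \
    echo '#define SS_BKP_GRP "backupdba"' > "$CONFIG_C"   # stand-in content

# Pull the quoted group name out of the #define line.
bkp_group=$(sed -n 's/.*define SS_BKP_GRP *"\([^"]*\)".*/\1/p' "$CONFIG_C")
echo "SYSBACKUP group: $bkp_group"

# On the real VM, create the group only if it is missing:
# getent group "$bkp_group" >/dev/null || sudo groupadd "$bkp_group"
```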
- First, switch back to the *oracle* user:
+1. Create a new backup user `azbackup` belonging to the operating system group you verified or created in the previous steps. Substitute \<group name\> with the name of the verified group:
```bash
- su - oracle
+ sudo useradd -G <group name> azbackup
```
+1. Set up external authentication for the new backup user.
+
+   The backup user `azbackup` needs to be able to access the database using external authentication, so as not to be challenged by a password. To do this, you must create a database user that authenticates externally through `azbackup`. The database uses a prefix for the user name, which you need to find.
   On each database installed on the VM, perform the following steps:
+
Log in to the database using sqlplus and check the default settings for external authentication: ```bash
To use Azure Backup to back up the database, complete these steps:
SQL> show parameter remote_os_authent ```
- The output should look like this example:
+   The output should look like this example, which shows `ops$` as the database user name prefix:
```output NAME TYPE VALUE
To use Azure Backup to back up the database, complete these steps:
remote_os_authent boolean FALSE ```
- Now, create a database user *azbackup* authenticated externally and grant sysbackup privilege:
+   Create a database user ***ops$azbackup*** that authenticates externally through the `azbackup` user, and grant it SYSBACKUP privileges:
```bash SQL> CREATE USER ops$azbackup IDENTIFIED EXTERNALLY;
To use Azure Backup to back up the database, complete these steps:
``` > [!IMPORTANT]
- > If you receive error `ORA-46953: The password file is not in the 12.2 format.` when you run the `GRANT` statement, follow these steps to migrate the orapwd file to 12.2 format:
+   > If you receive error `ORA-46953: The password file is not in the 12.2 format.` when you run the `GRANT` statement, follow these steps to migrate the orapwd file to 12.2 format. Note that you will need to perform this for every Oracle database on the VM:
> > 1. Exit sqlplus. > 1. Move the password file with the old format to a new name. > 1. Migrate the password file. > 1. Remove the old file.
- > 1. Run the following command:
+ > 1. Run the following commands:
> > ```bash > mv $ORACLE_HOME/dbs/orapwtest $ORACLE_HOME/dbs/orapwtest.tmp
To use Azure Backup to back up the database, complete these steps:
> 1. Rerun the `GRANT` operation in sqlplus. >
-4. Create a stored procedure to log backup messages to the database alert log:
+1. Create a stored procedure to log backup messages to the database alert log:
```bash sqlplus / as sysdba
To use Azure Backup to back up the database, complete these steps:
SQL> SHOW ERRORS SQL> QUIT ```
-
-### Set up application-consistent backups
-1. Switch to the *root* user:
+### Set up application-consistent backups
+1. Switch to the root user first:
```bash sudo su - ```
-2. Check for "etc/azure" folder. If that is not present, create the application-consistent backup working directory:
+1. Check for the */etc/azure* folder. If it is not present, create the application-consistent backup working directory:
```bash if [ ! -d "/etc/azure" ]; then
- sudo mkdir /etc/azure
+ mkdir /etc/azure
fi ```
-3. Check for "workload.conf" within the folder. If that is not present, create a file in the */etc/azure* directory called *workload.conf* with the following contents, which must begin with `[workload]`. If the file is already present, then just edit the fields such that it matches the following content. Otherwise, the following command will create the file and populate the contents:
+1. Check for *workload.conf* within the folder. If it is not present, create a file in the */etc/azure* directory called *workload.conf* with the following contents, which must begin with `[workload]`. If the file is already present, edit the fields so that they match the following content. Otherwise, the following command will create the file and populate the contents:
```bash echo "[workload] workload_name = oracle
- command_path = /u01/app/oracle/product/19.0.0/dbhome_1/bin/
+ configuration_path = /etc/oratab
timeout = 90 linux_user = azbackup" > /etc/azure/workload.conf ```
+ > [!IMPORTANT]
+ > The format used by workload.conf is as follows:
+ > * The parameter **workload_name** is used by Azure Backup to determine the database workload type. In this case, setting it to oracle allows Azure Backup to run the correct pre- and post-consistency commands for Oracle databases.
+ > * The parameter **timeout** indicates the maximum time in seconds that each database will have to complete storage snapshots.
+ > * The parameter **linux_user** indicates the Linux user account that will be used by Azure Backup to run database quiesce operations. You created this user, `azbackup`, previously.
+ > * The parameter **configuration_path** indicates the absolute path name of a text file on the VM where each line lists a database instance running on the VM. This will typically be the `/etc/oratab` file, which is generated by Oracle during database installation, but it can be any file with any name you choose. It must, however, follow these format rules:
+ > * A text file with each field delimited with the colon character `:`
+ > * The first field in each line is the name for an ORACLE_SID
+ > * The second field in each line is the absolute path name for the ORACLE_HOME for that ORACLE_SID
+ > * All text following these first two fields will be ignored
+ > * If the line starts with a pound/hash character `#` then the entire line will be ignored as a comment
+ > * If the first field has the value `+ASM` denoting an Automatic Storage Management instance, it is ignored.
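As a check on the format rules above, here is a minimal sketch (the `list_db_instances` helper name and the sample lines are illustrative, not part of the Azure Backup extension) that prints the ORACLE_SID and ORACLE_HOME pairs a conforming file would yield:

```shell
# Minimal sketch, not part of the Azure Backup extension: print the
# ORACLE_SID / ORACLE_HOME pairs that an oratab-style file on standard
# input yields under the format rules above.
list_db_instances() {
    # Drop comment lines, keep colon-delimited entries whose first field
    # is not +ASM, and print only the first two fields; anything after
    # the second field is ignored.
    grep -v '^#' | awk -F: 'NF >= 2 && $1 != "+ASM" { print $1, $2 }'
}

# Example with a small oratab-style fragment:
printf '%s\n' \
    '# comment lines are skipped' \
    'TEST:/u01/app/oracle/product/19.0.0/dbhome_1:N' \
    '+ASM:/u01/app/grid:N' |
list_db_instances
# prints: TEST /u01/app/oracle/product/19.0.0/dbhome_1
```

You could run this against `/etc/oratab` (for example, `list_db_instances < /etc/oratab`) as a sanity check; the extension itself performs the actual parsing.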
+
### Trigger an application-consistent backup of the VM

# [Portal](#tab/azure-portal)

1. In the Azure portal, go to your resource group **rg-oracle** and click on your Virtual Machine **vmoracle19c**.
-2. On the **Backup** blade, create a new **Recovery Services Vault** in the resource group **rg-oracle** with the name **myVault**.
+1. On the **Backup** blade, create a new **Recovery Services Vault** in the resource group **rg-oracle** with the name **myVault**.
For **Choose backup policy**, use **(new) DailyPolicy**. If you want to change the backup frequency or retention range, select **Create a new policy** instead.

![Recovery Services vaults add page](./media/oracle-backup-recovery/recovery-service-01.png)
-3. To continue, click **Enable Backup**.
+1. To continue, click **Enable Backup**.
> [!IMPORTANT]
> After you click **Enable backup**, the backup process doesn't start until the scheduled time expires. To set up an immediate backup, complete the next step.
-4. From the resource group page, click on your newly created Recovery Services Vault **myVault**. Hint: You may need to refresh the page to see it.
+1. From the resource group page, click on your newly created Recovery Services Vault **myVault**. Hint: You may need to refresh the page to see it.
-5. On the **myVault - Backup items** blade, under **BACKUP ITEM COUNT**, select the backup item count.
+1. On the **myVault - Backup items** blade, under **BACKUP ITEM COUNT**, select the backup item count.
![Recovery Services vaults myVault detail page](./media/oracle-backup-recovery/recovery-service-02.png)
-6. On the **Backup Items (Azure Virtual Machine)** blade, on the right side of the page, click the ellipsis (**...**) button, and then click **Backup now**.
+1. On the **Backup Items (Azure Virtual Machine)** blade, on the right side of the page, click the ellipsis (**...**) button, and then click **Backup now**.
![Recovery Services vaults Backup now command](./media/oracle-backup-recovery/recovery-service-03.png)
-7. Accept the default Retain Backup Till value and click the **OK** button. Wait for the backup process to finish.
+1. Accept the default Retain Backup Till value and click the **OK** button. Wait for the backup process to finish.
To view the status of the backup job, click **Backup Jobs**.
Note that while it only takes seconds to execute the snapshot, it can take some time to transfer it to the vault, and the backup job is not completed until the transfer is finished.
-8. For an application-consistent backup, address any errors in the log file. The log file is located at /var/log/azure/Microsoft.Azure.RecoveryServices.VMSnapshotLinux/extension.log.
+1. For an application-consistent backup, address any errors in the log file. The log file is located at /var/log/azure/Microsoft.Azure.RecoveryServices.VMSnapshotLinux/extension.log.
# [Azure CLI](#tab/azure-cli)
az backup vault create --location eastus --name myVault --resource-group rg-oracle
```
-2. Enable backup protection for the VM:
+1. Enable backup protection for the VM:
```azurecli
az backup protection enable-for-vm \
--policy-name DefaultPolicy
```
-3. Trigger a backup to run now rather than waiting for the backup to trigger at the default schedule (5 AM UTC):
+1. Trigger a backup to run now rather than waiting for the backup to trigger at the default schedule (5 AM UTC):
```azurecli
az backup protection backup-now \
To recover the database, complete these steps:
Later in this article, you'll learn how to test the recovery process. Before you can test the recovery process, you have to remove the database files.
-1. Shut down the Oracle instance:
+1. Switch back to the oracle user:
+ ```bash
+ su - oracle
+ ```
+
+1. Shut down the Oracle instance:
```bash
sqlplus / as sysdba
ORACLE instance shut down.
```
-2. Remove the datafiles and backups:
+1. Remove the database datafiles and controlfiles to simulate a failure:
```bash
- sudo su - oracle
cd /u02/oradata/TEST
- rm -f *.dbf
- cd /u02/fast_recovery_area
- rm -f *
+ rm -f *.dbf *.ctl
```

### Generate a restore script from the Recovery Services vault
![Recovery Services vaults myVault backup items](./media/oracle-backup-recovery/recovery-service-06.png)
-2. On the **Overview** blade, select **Backup items** and the select **Azure Virtual Machine**, which should have anon-zero Backup Item Count listed.
+1. On the **Overview** blade, select **Backup items** and then select **Azure Virtual Machine**, which should have a non-zero Backup Item Count listed.
![Recovery Services vaults Azure Virtual Machine backup item count](./media/oracle-backup-recovery/recovery-service-07.png)
-3. On the Backups Items (Azure Virtual Machines) page, your VM **vmoracle19c** is listed. Click the ellipsis on the right to bring up the menu and select **File Recovery**.
+1. On the Backups Items (Azure Virtual Machines) page, your VM **vmoracle19c** is listed. Click the ellipsis on the right to bring up the menu and select **File Recovery**.
![Screenshot of the Recovery Services vaults file recovery page](./media/oracle-backup-recovery/recovery-service-08.png)
-4. On the **File Recovery (Preview)** pane, click **Download Script**. Then, save the download (.py) file to a folder on the client computer. A password is generated to the run the script. Copy the password to a file for use later.
+1. On the **File Recovery (Preview)** pane, click **Download Script**. Then, save the downloaded (.py) file to a folder on the client computer. A password is generated to run the script. Copy the password to a file for use later.
![Download script file saves options](./media/oracle-backup-recovery/recovery-service-09.png)
-5. Copy the .py file to the VM.
+1. Copy the .py file to the VM.
The following example shows how to use a secure copy (scp) command to move the file to the VM. You can also copy the contents to the clipboard, and then paste the contents into a new file that is set up on the VM.
# [Azure CLI](#tab/azure-cli)
-To list recovery points for your VM, use az backup recovery point list. In this example, we select the most recent recovery point for the VM named myVM that's protected in myRecoveryServicesVault:
+To list recovery points for your VM, use `az backup recoverypoint list`. In this example, we select the most recent recovery point for the VM named vmoracle19c that's protected in the Recovery Services vault called myVault:
```azurecli
az backup recoverypoint list \
--output tsv
```
-To obtain the script that connects, or mounts, the recovery point to your VM, use az backup restore files mount-rp. The following example obtains the script for the VM named myVM that's protected in myRecoveryServicesVault.
+To obtain the script that connects, or mounts, the recovery point to your VM, use `az backup restore files mount-rp`. The following example obtains the script for the VM named vmoracle19c that's protected in the Recovery Services vault called myVault.
Replace myRecoveryPointName with the name of the recovery point that you obtained in the preceding command:
$ scp vmoracle19c_xxxxxx_xxxxxx_xxxxxx.py azureuser@<publicIpAddress>:/tmp
### Mount the restore point
+1. Switch to the root user:
+ ```bash
+ sudo su -
+ ```
1. Create a restore mount point and copy the script to it. In the following example, create a */restore* directory for the snapshot to mount to, move the file to the directory, and change the file so that it's owned by the root user and made executable.
    ```bash
- ssh azureuser@<publicIpAddress>
- sudo su -
    mkdir /restore
    chmod 777 /restore
    cd /restore
Please enter 'q/Q' to exit...
```
-2. Access to the mounted volumes is confirmed.
+1. Confirm access to the mounted volumes.
To exit, enter **q**, and then search for the mounted volumes. To create a list of the added volumes, at a command prompt, enter **df -h**.
### Perform recovery
-1. Copy the missing backup files back to the fast recovery area:
+1. Restore the missing database files back to their location:
```bash
- cd /restore/vmoracle19c-2020XXXXXXXXXX/Volume1/fast_recovery_area/TEST
- cp * /u02/fast_recovery_area/TEST
- cd /u02/fast_recovery_area/TEST
+ cd /restore/vmoracle19c-2020XXXXXXXXXX/Volume1/oradata/TEST
+ cp * /u02/oradata/TEST
+ cd /u02/oradata/TEST
chown -R oracle:oinstall * ```
+1. Switch back to the oracle user:
+ ```bash
+ sudo su - oracle
+ ```
+1. Start the database instance and mount the controlfile for reading:
+ ```bash
+ sqlplus / as sysdba
+ SQL> startup mount
+ SQL> quit
+ ```
-2. The following commands use RMAN to restore the missing datafiles and recover the database:
+1. Connect to the database with sysbackup:
+ ```bash
+ sqlplus / as sysbackup
+ ```
+1. Initiate automatic database recovery:
- ```bash
- sudo su - oracle
- rman target /
- RMAN> startup mount;
- RMAN> restore database;
- RMAN> recover database;
- RMAN> alter database open;
- ```
-
-3. Check the database content has been fully recovered:
+ ```bash
+ SQL> recover automatic database until cancel using backup controlfile;
+ ```
+ > [!IMPORTANT]
+ > It is important to specify the USING BACKUP CONTROLFILE syntax to inform the RECOVER AUTOMATIC DATABASE command that recovery should not stop at the Oracle system change number (SCN) recorded in the restored database control file. The restored control file was a snapshot, along with the rest of the database, and the SCN stored within it is from the point in time of the snapshot. There may be transactions recorded after this point, and we want to recover to the point in time of the last transaction committed to the database.
+
+ When recovery completes successfully, you will see the message `Media recovery complete`. However, when the BACKUP CONTROLFILE clause is used, the recover command ignores online log files, and it is possible that changes in the current online redo log are required to complete point-in-time recovery. In this situation, you may see messages similar to these:
+
+ ```output
+ SQL> recover automatic database until cancel using backup controlfile;
+ ORA-00279: change 2172930 generated at 04/08/2021 12:27:06 needed for thread 1
+ ORA-00289: suggestion :
+ /u02/fast_recovery_area/TEST/archivelog/2021_04_08/o1_mf_1_13_%u_.arc
+ ORA-00280: change 2172930 for thread 1 is in sequence #13
+ ORA-00278: log file
+ '/u02/fast_recovery_area/TEST/archivelog/2021_04_08/o1_mf_1_13_%u_.arc' no
+ longer needed for this recovery
+ ORA-00308: cannot open archived log
+ '/u02/fast_recovery_area/TEST/archivelog/2021_04_08/o1_mf_1_13_%u_.arc'
+ ORA-27037: unable to obtain file status
+ Linux-x86_64 Error: 2: No such file or directory
+ Additional information: 7
+
+ Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+ ```
+
+ > [!IMPORTANT]
+ > If the current online redo log has been lost or corrupted and cannot be used, you may cancel recovery at this point.
+
+ To correct this, identify the current online log that has not been archived and supply its fully qualified filename at the prompt.
+
+ Open a new ssh connection:
+ ```bash
+ ssh azureuser@<IP Address>
+ ```
+ Switch to the oracle user and set the Oracle SID:
+ ```bash
+ sudo su - oracle
+ export ORACLE_SID=test
+ ```
+
+ Connect to the database and run the following query to find the online logfile:
+ ```bash
+ sqlplus / as sysdba
+ SQL> column member format a45
+ SQL> set linesize 500
+ SQL> select l.SEQUENCE#, to_char(l.FIRST_CHANGE#,'999999999999999') as CHK_CHANGE, l.group#, l.archived, l.status, f.member
+ from v$log l, v$logfile f
+ where l.group# = f.group#;
+ ```
+
+ The output will look similar to this:
+ ```output
+ SEQUENCE# CHK_CHANGE GROUP# ARC STATUS MEMBER
+ - - - -
+ 13 2172929 1 NO CURRENT /u02/oradata/TEST/redo01.log
+ 12 2151934 3 YES INACTIVE /u02/oradata/TEST/redo03.log
+ 11 2071784 2 YES INACTIVE /u02/oradata/TEST/redo02.log
+ ```
+ Copy the logfile path and file name for the CURRENT online log; in this example it is `/u02/oradata/TEST/redo01.log`. Switch back to the ssh session running the recover command, input the logfile information, and press return:
+
+ ```bash
+ Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+ /u02/oradata/TEST/redo01.log
+ ```
+
+ You should see the logfile is applied and recovery completes. Enter CANCEL to exit the recover command:
+ ```output
+ Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+ /u02/oradata/TEST/redo01.log
+ Log applied.
+ Media recovery complete.
+ ```
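Finding the CURRENT online log can also be scripted. This is a minimal sketch under the assumption that the query output has the column order shown earlier (SEQUENCE#, CHK_CHANGE, GROUP#, ARC, STATUS, MEMBER); the `current_redo_member` helper name is illustrative:

```shell
# Minimal sketch: from v$log/v$logfile query output with the column order
# SEQUENCE# CHK_CHANGE GROUP# ARC STATUS MEMBER (as shown earlier),
# print the member path of the CURRENT online redo log.
current_redo_member() {
    awk '$5 == "CURRENT" { print $6 }'
}

# Example on output shaped like the earlier sample:
printf '%s\n' \
    '13 2172929 1 NO CURRENT /u02/oradata/TEST/redo01.log' \
    '12 2151934 3 YES INACTIVE /u02/oradata/TEST/redo03.log' |
current_redo_member
# prints: /u02/oradata/TEST/redo01.log
```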
+
+1. Open the database:
+ ```bash
+ SQL> alter database open resetlogs;
+ ```
+ > [!IMPORTANT]
+ > The RESETLOGS option is required when the RECOVER command uses the USING BACKUP CONTROLFILE option. RESETLOGS creates a new incarnation of the database by resetting the redo history back to the beginning, because there is no way to determine how much of the previous database incarnation was skipped in the recovery.
+
+1. Check the database content has been fully recovered:
```bash
SQL> SELECT * FROM scott.scott_table;
```
-4. Unmount the restore point.
+1. Unmount the restore point.
```bash
- umount /restore/vmoracle19c-20210107110037/Volume*
+ sudo umount /restore/vmoracle19c-20210107110037/Volume*
```
In the Azure portal, on the **File Recovery (Preview)** blade, click **Unmount Disks**.
To restore the entire VM, complete these steps:
az vm deallocate --resource-group rg-oracle --name vmoracle19c
```
-2. Delete the VM. Enter 'y' when prompted:
+1. Delete the VM. Enter 'y' when prompted:
```azurecli
az vm delete --resource-group rg-oracle --name vmoracle19c
1. Click on Review + Create and then click Create.
-2. In the Azure portal, search for the *myVault* Recovery Services vaults item and click on it.
+1. In the Azure portal, search for the *myVault* Recovery Services vaults item and click on it.
![Recovery Services vaults myVault backup items](./media/oracle-backup-recovery/recovery-service-06.png)
-3. On the **Overview** blade, select **Backup items** and the select **Azure Virtual Machine**, which should have anon-zero Backup Item Count listed.
+1. On the **Overview** blade, select **Backup items** and then select **Azure Virtual Machine**, which should have a non-zero Backup Item Count listed.
![Recovery Services vaults Azure Virtual Machine backup item count](./media/oracle-backup-recovery/recovery-service-07.png)
-4. On the Backups Items (Azure Virtual Machines), page your VM **vmoracle19c** is listed. Click on the VM name.
+1. On the **Backup Items (Azure Virtual Machines)** page, your VM **vmoracle19c** is listed. Click on the VM name.
![Recovery VM page](./media/oracle-backup-recovery/recover-vm-02.png)
-5. On the **vmoracle19c** blade, choose a restore point that has a consistency type of **Application Consistent** and click the ellipsis (**...**) on the right to bring up the menu. From the menu click **Restore VM**.
+1. On the **vmoracle19c** blade, choose a restore point that has a consistency type of **Application Consistent** and click the ellipsis (**...**) on the right to bring up the menu. From the menu click **Restore VM**.
![Restore VM command](./media/oracle-backup-recovery/recover-vm-03.png)
-6. On the **Restore Virtual Machine** blade, choose **Create New** and **Create New Virtual Machine**. Enter the virtual machine name **vmoracle19c** and choose the VNet **vmoracle19cVNET**, the subnet will be automatically populated for you based on your VNet selection. The restore VM process requires an Azure storage account in the same resource group and region. You can choose the storage account **orarestore** you setup earlier.
+1. On the **Restore Virtual Machine** blade, choose **Create New** and **Create New Virtual Machine**. Enter the virtual machine name **vmoracle19c** and choose the VNet **vmoracle19cVNET**; the subnet will be automatically populated for you based on your VNet selection. The restore VM process requires an Azure storage account in the same resource group and region. You can choose the storage account **orarestore** you set up earlier.
![Restore configuration values](./media/oracle-backup-recovery/recover-vm-04.png)
-7. To restore the VM, click the **Restore** button.
+1. To restore the VM, click the **Restore** button.
-8. To view the status of the restore process, click **Jobs**, and then click **Backup Jobs**.
+1. To view the status of the restore process, click **Jobs**, and then click **Backup Jobs**.
![Backup jobs status command](./media/oracle-backup-recovery/recover-vm-05.png)
To set up your storage account and file share, run the following commands in Azu
az storage account create -n orarestore -g rg-oracle -l eastus --sku Standard_LRS
```
-2. Retrieve the list of recovery points available.
+1. Retrieve the list of recovery points available.
```azurecli
az backup recoverypoint list \
--output tsv
```
-3. Restore the recovery point to the storage account. Substitute `<myRecoveryPointName>` with a recovery point from the list generated in the previous step:
+1. Restore the recovery point to the storage account. Substitute `<myRecoveryPointName>` with a recovery point from the list generated in the previous step:
```azurecli
az backup restore restore-disks \
--target-resource-group rg-oracle
```
-4. Retrieve the restore job details. The following command gets more details for the triggered restored job, including its name, which is needed to retrieve the template URI.
+1. Retrieve the restore job details. The following command gets more details for the triggered restore job, including its name, which is needed to retrieve the template URI.
```azurecli
az backup job list \
502bc7ae-d429-4f0f-b78e-51d41b7582fc ConfigureBackup Completed vmoracle19c 2021-01-07T09:43:55.298755+00:00 0:00:30.839674
```
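With `--output tsv`, the restore job name can also be captured in a script for the next step. This is a minimal sketch, assuming the column order shown in the sample output above (job name first, operation second); the `restore_job_name` helper and the first sample job line are illustrative:

```shell
# Minimal sketch: from `az backup job list --output tsv`-style lines with
# the job name in the first column and the operation in the second (as in
# the sample output above), print the name of the first Restore job.
restore_job_name() {
    awk '$2 == "Restore" { print $1; exit }'
}

# Example with illustrative job lines (the Restore job GUID is made up):
printf '%s\n' \
    'c21e1d5c-0000-0000-0000-000000000000 Restore Completed vmoracle19c 2021-01-07T10:21:30+00:00 0:05:00' \
    '502bc7ae-d429-4f0f-b78e-51d41b7582fc ConfigureBackup Completed vmoracle19c 2021-01-07T09:43:55+00:00 0:00:30' |
restore_job_name
```

This prints only the name of the `Restore` job line, which can then be substituted for `<RestoreJobName>` in the next step.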
-5. Retrieve the details of the URI to use for recreating the VM. Substitute the restore job name from the previous step for `<RestoreJobName>`.
+1. Retrieve the details of the URI to use for recreating the VM. Substitute the restore job name from the previous step for `<RestoreJobName>`.
```azurecli az backup job show \
After the VM is restored, you should reassign the original IP address to the new
![List of public IP addresses](./media/oracle-backup-recovery/create-ip-01.png)
-2. Stop the VM
+1. Stop the VM
![Create IP address](./media/oracle-backup-recovery/create-ip-02.png)
-3. Go to **Networking**
+1. Go to **Networking**
![Associate IP address](./media/oracle-backup-recovery/create-ip-03.png)
-4. Click on **Attach network interface**, choose the original NIC **vmoracle19cVMNic, which the original public IP address is still associated to, and click **OK**
+1. Click on **Attach network interface**, choose the original NIC **vmoracle19cVMNic**, to which the original public IP address is still associated, and click **OK**
![Select resource type and NIC values](./media/oracle-backup-recovery/create-ip-04.png)
-5. Now you must detach the NIC that was created with the VM restore operation as it is configured as the primary interface. Click on **Detach network interface** and choose the new NIC similar to **vmoracle19c-nic-XXXXXXXXXXXX**, then click **OK**
+1. Now you must detach the NIC that was created with the VM restore operation as it is configured as the primary interface. Click on **Detach network interface** and choose the new NIC similar to **vmoracle19c-nic-XXXXXXXXXXXX**, then click **OK**
![Screenshot that shows where to select Detach network interface.](./media/oracle-backup-recovery/create-ip-05.png)
![IP address value](./media/oracle-backup-recovery/create-ip-06.png)
-6. Go back to the **Overview** and click **Start**
+1. Go back to the **Overview** and click **Start**
# [Azure CLI](#tab/azure-cli)
az vm deallocate --resource-group rg-oracle --name vmoracle19c
```
-2. List the current, restore generated VM NIC
+1. List the current, restore generated VM NIC
```azurecli
az vm nic list --resource-group rg-oracle --vm-name vmoracle19c
}
```
-3. Attach original NIC, which should have a name of `<VMName>VMNic`, in this case `vmoracle19cVMNic`. The original Public IP address is still attached to this NIC and will be restored to the VM when the NIC is reattached.
+1. Attach the original NIC, which should have a name of `<VMName>VMNic`, in this case `vmoracle19cVMNic`. The original public IP address is still attached to this NIC and will be restored to the VM when the NIC is reattached.
```azurecli
az vm nic add --nics vmoracle19cVMNic --resource-group rg-oracle --vm-name vmoracle19c
```
-4. Detach the restore generated NIC
+1. Detach the restore generated NIC
```azurecli
az vm nic remove --nics vmoracle19cRestoredNICc2e8a8a4fc3f47259719d5523cd32dcf --resource-group rg-oracle --vm-name vmoracle19c
```
-5. Start the VM:
+1. Start the VM:
```azurecli
az vm start --resource-group rg-oracle --name vmoracle19c
### Connect to the VM
-To connect to the VM, use the following script:
+To connect to the VM:
```azurecli
-ssh <publicIpAddress>
+ssh azureuser@<publicIpAddress>
```

### Start the database to mount stage and perform recovery
The backup and recovery of the Oracle Database 19c database on an Azure Linux VM is now finished.
+More information about Oracle commands and concepts can be found in the Oracle documentation, including:
+
+ * [Performing Oracle user-managed backups of the entire database](https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/user-managed-database-backups.html#GUID-65C5E03A-E906-47EB-92AF-6DC273DBD0A8)
+ * [Performing complete user-managed database recovery](https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/user-managed-flashback-dbpitr.html#GUID-66D07694-533F-4E3A-BA83-DD461B68DB56)
+ * [Oracle STARTUP command](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqpug/STARTUP.html#GUID-275013B7-CAE2-4619-9A0F-40DB71B61FE8)
+ * [Oracle RECOVER command](https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/user-managed-flashback-dbpitr.html#GUID-54B59888-8683-4CD9-B144-B0BB68887572)
+ * [Oracle ALTER DATABASE command](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ALTER-DATABASE.html#GUID-8069872F-E680-4511-ADD8-A4E30AF67986)
+ * [Oracle LOG_ARCHIVE_DEST_n parameter](https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/LOG_ARCHIVE_DEST_n.html#GUID-10BD97BF-6295-4E85-A1A3-854E15E05A44)
+ * [Oracle ARCHIVE_LAG_TARGET parameter](https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/ARCHIVE_LAG_TARGET.html#GUID-405D335F-5549-4E02-AFB9-434A24465F0B)
+
## Delete the VM
-When you no longer need the VM, you can use the following command to remove the resource group, the VM, and all related resources:
+When you no longer need the VM, you can use the following commands to remove the resource group, the VM, and all related resources:
-```azurecli
-az group delete --name rg-oracle
-```
+1. Disable Soft Delete of backups in the vault
+
+ ```azurecli
+ az backup vault backup-properties set --name myVault --resource-group rg-oracle --soft-delete-feature-state disable
+ ```
+
+1. Stop protection for the VM and delete backups
+
+ ```azurecli
+ az backup protection disable --resource-group rg-oracle --vault-name myVault --container-name vmoracle19c --item-name vmoracle19c --delete-backup-data true --yes
+ ```
+
+1. Remove the resource group including all resources
+
+ ```azurecli
+ az group delete --name rg-oracle
+ ```
## Next steps
virtual-machines Oracle Database Backup Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-storage.md
This article demonstrates the use of Azure Storage as a media to back up and res
10. Set database environment variables for fast recovery area:
    ```bash
- SQL> system set db_recovery_file_dest_size=4096M scope=both;
+ SQL> alter system set db_recovery_file_dest_size=4096M scope=both;
SQL> alter system set db_recovery_file_dest='/u02/fast_recovery_area' scope=both;
```
While using RMAN and Azure File storage for database backup has many advantages,
ORACLE instance shut down. ```
-2. Remove the datafiles and backups:
+2. Remove the database datafiles:
```bash
cd /u02/oradata/TEST
virtual-machines Oracle Database Quick Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-database-quick-create.md
After you create the VM, Azure CLI displays information similar to the following
## Create and attach a new disk for Oracle datafiles and FRA

```bash
-az vm disk attach --name oradata01 --new --resource-group rg-oracle --size-gb 128 --sku StandardSSD_LRS --vm-name vmoracle19c
+az vm disk attach --name oradata01 --new --resource-group rg-oracle --size-gb 64 --sku StandardSSD_LRS --vm-name vmoracle19c
```

## Open ports for connectivity
virtual-network Virtual Networks Viewing And Modifying Hostnames https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-networks-viewing-and-modifying-hostnames.md
ms.devlang: na
na Previously updated : 10/30/2018 Last updated : 05/14/2021
From a REST client, follow these instructions:
1. Ensure that you have a client certificate to connect to the Azure portal. To obtain a client certificate, follow the steps presented in [How to: Download and Import Publish Settings and Subscription Information](/previous-versions/dynamicsnav-2013/dn385850(v=nav.70)).
2. Set a header entry named x-ms-version with a value of 2013-11-01.
-3. Send a request in the following format: https:\//management.core.windows.net/\<subscrition-id\>/services/hostedservices/\<service-name\>?embed-detail=true
+3. Send a request in the following format: `https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>?embed-detail=true`
4. Look for the **HostName** element for each **RoleInstance** element.

> [!WARNING]
You can modify the host name for any virtual machine or role instance by uploadi
[Azure Virtual Network Configuration Schema](/previous-versions/azure/reference/jj157100(v=azure.100))
-[Specify DNS settings using network configuration files](/previous-versions/azure/virtual-network/virtual-networks-specifying-a-dns-settings-in-a-virtual-network-configuration-file)
+[Specify DNS settings using network configuration files](/previous-versions/azure/virtual-network/virtual-networks-specifying-a-dns-settings-in-a-virtual-network-configuration-file)