Updates from: 11/17/2022 02:15:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Previously updated : 04/04/2022 Last updated : 11/12/2022
# Troubleshoot on-premises application provisioning ## Troubleshoot test connection issues
-After you configure the provisioning agent and ECMA host, it's time to test connectivity from the Azure Active Directory (Azure AD) provisioning service to the provisioning agent, the ECMA host, and the application. To perform this end-to-end test, select **Test connection** in the application in the Azure portal. When the test connection fails, try the following troubleshooting steps:
+After you configure the provisioning agent and ECMA host, it's time to test connectivity from the Azure Active Directory (Azure AD) provisioning service to the provisioning agent, the ECMA host, and the application. To perform this end-to-end test, select **Test connection** in the application in the Azure portal. Be sure to wait 10 to 20 minutes after assigning an initial agent or changing the agent before testing the connection. If the test connection still fails after this time, try the following troubleshooting steps:
1. Check that the agent and ECMA host are running: 1. On the server with the agent installed, open **Services** by going to **Start** > **Run** > **Services.msc**.
After you configure the provisioning agent and ECMA host, it's time to test conn
6. After you assign an agent, you need to wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes. 7. Ensure that you're using a valid certificate. Go to the **Settings** tab of the ECMA host to generate a new certificate. 8. Restart the provisioning agent: in the taskbar on your VM, search for the Microsoft Azure AD Connect provisioning agent, right-click it, select **Stop**, and then select **Start**.
- 9. When you provide the tenant URL in the Azure portal, ensure that it follows the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL does not follow the expected format.
+ 1. If you continue to see `The ECMA host is currently importing data from the target application` even after restarting the ECMA Connector Host and the provisioning agent, and waiting for the initial import to complete, you may need to cancel and restart configuring provisioning to the application in the Azure portal.
+ 1. When you provide the tenant URL in the Azure portal, ensure that it matches the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL doesn't follow the expected format.
``` https://localhost:8585/ecma2host_connectorName/scim
After the ECMA Connector Host schema mapping has been configured, start the serv
| Error | Resolution | | -- | -- | | Could not load file or assembly 'file:///C:\Program Files\Microsoft ECMA2Host\Service\ECMA\Cache\8b514472-c18a-4641-9a44-732c296534e8\Microsoft.IAM.Connector.GenericSql.dll' or one of its dependencies. Access is denied. | Ensure that the network service account has 'full control' permissions over the cache folder. |
-| Invalid LDAP style of object's DN. DN: username@domain.com" | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.|
+| Invalid LDAP style of object's DN. DN: username@domain.com or `Target Site: ValidByLdapStyle` | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.|
## Understand incoming SCIM requests
By using Azure AD, you can monitor the provisioning service in the cloud and col
``` ### I am getting an Invalid LDAP style DN error when trying to configure the ECMA Connector Host with SQL
-By default, the genericSQL connector expects the DN to be populated using the LDAP style (when the 'DN is anchor' attribute is left unchecked in the first connectivity page). In the error message above, you can see that the DN is a UPN, rather than an LDAP style DN that the connector expects.
+By default, the generic SQL connector expects the DN to be populated using the LDAP style (when the 'DN is anchor' attribute is left unchecked in the first connectivity page). In the error message `Invalid LDAP style DN` or `Target Site: ValidByLdapStyle`, you may see that the DN field contains a user principal name (UPN), rather than an LDAP style DN that the connector expects.
To resolve this, ensure that **Autogenerated** is selected on the object types page when you configure the connector.
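For reference, the two formats look like this (illustrative values):

```
LDAP-style DN: CN=Jane Doe,OU=Sales,DC=contoso,DC=com
UPN:           jane.doe@contoso.com
```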
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Previously updated : 06/23/2022 Last updated : 11/16/2022
Users may have a combination of up to five OATH hardware tokens or authenticator
> > When two methods are required, users can reset using either a notification or verification code in addition to any other enabled methods. +
+## FIPS 140 compliant for Azure AD authentication
+
+Beginning with version 6.6.8, Microsoft Authenticator for iOS is compliant with [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/3/final?azure-portal=true) for all Azure AD authentications that use push multi-factor authentication (MFA) notifications, passwordless Phone Sign-In (PSI), and time-based one-time passcodes (TOTP).
+
+Consistent with the guidelines outlined in [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html?azure-portal=true), authenticators are required to use FIPS 140 validated cryptography. This helps federal agencies meet the requirements of [Executive Order (EO) 14028](https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/?azure-portal=true) and healthcare organizations working with [Electronic Prescriptions for Controlled Substances (EPCS)](/azure/compliance/offerings/offering-epcs-us). 
+
+FIPS 140 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. Testing against the FIPS 140 standard is maintained by the [Cryptographic Module Validation Program (CMVP)](https://csrc.nist.gov/Projects/cryptographic-module-validation-program?azure-portal=true).
+
+No configuration changes are required in Microsoft Authenticator or the Azure portal to enable FIPS 140 compliance. Beginning with Microsoft Authenticator for iOS version 6.6.8, Azure AD authentications are FIPS 140 compliant by default.
+
+Authenticator uses native Apple cryptography to achieve FIPS 140, Security Level 1 compliance on Apple iOS devices, beginning with Microsoft Authenticator version 6.6.8. For more information about the certifications being used, see the [Apple CoreCrypto module](https://support.apple.com/guide/sccc/security-certifications-for-ios-scccfa917cb49/web?azure-portal=true).
+
+FIPS 140 compliance for Microsoft Authenticator on Android is in progress and will follow soon.
+ ## Next steps - To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md). - Learn more about configuring authentication methods using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview).+
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 11/14/2022 Last updated : 11/16/2022
The MFA Server Migration utility targets a single Azure AD group for all migrati
To begin the migration process, enter the name or GUID of the Azure AD group you want to migrate. When you're done, press Tab or click outside the window, and the utility will begin searching for the appropriate group. The window will populate with all users in the group. A large group can take several minutes to load.
-To view user attribute data for a user, highlight the user, and select **View**:
+To view attribute data for a user, highlight the user, and select **View**:
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/view-user.png" alt-text="Screenshot of how to view use settings.":::
The settings option allows you to change the settings for the migration process:
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/settings.png" alt-text="Screenshot of settings."::: - Migrate ΓÇô This setting allows you to specify which method(s) should be migrated for the selection of users-- User Match ΓÇô Allows you to specify a different on-premises Active Directory attribute for matching Azure AD UPN instead of the default match to userPrincipalName
+- User Match ΓÇô Allows you to specify a different on-premises Active Directory attribute for matching Azure AD UPN instead of the default match to userPrincipalName:
+ - The migration utility tries direct matching to UPN before using the on-premises Active Directory attribute.
+ - If no match is found, it calls a Windows API to find the Azure AD UPN and get the SID, which it uses to search the MFA Server user list.
+ - If the Windows API doesn't find the user or the SID isn't found in the MFA Server, it uses the configured Active Directory attribute to find the user in the on-premises Active Directory, and then uses that user's SID to search the MFA Server user list (see the sketch after this list).
- Automatic synchronization – Starts a background service that will continually monitor any authentication method changes to users in the on-premises MFA Server, and write them to Azure AD at the specified time interval. The migration process can be an automatic process, or a manual process.
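A rough sketch of that matching order follows; the data shapes and lookup helpers here are illustrative stand-ins, not the utility's actual API:

```javascript
// Hedged sketch of the user-match order described above. mfaServerUsers,
// lookupSidByUpn (standing in for the Windows API call), and
// lookupAdUserByAttribute (the configured AD attribute search) are hypothetical.
function findMfaServerUser(azureAdUpn, mfaServerUsers, lookupSidByUpn, lookupAdUserByAttribute) {
  // 1. Direct match on UPN.
  const direct = mfaServerUsers.find(
    (u) => u.upn.toLowerCase() === azureAdUpn.toLowerCase()
  );
  if (direct) return direct;

  // 2. Resolve the Azure AD UPN to a SID, then search the MFA Server list by SID.
  const sid = lookupSidByUpn(azureAdUpn);
  const bySid = sid && mfaServerUsers.find((u) => u.sid === sid);
  if (bySid) return bySid;

  // 3. Fall back to the configured on-premises AD attribute, then match by that user's SID.
  const adUser = lookupAdUserByAttribute(azureAdUpn);
  return adUser ? mfaServerUsers.find((u) => u.sid === adUser.sid) : undefined;
}
```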
Content-Type: application/json
} ```
-Set the **Staged Rollout for Azure MFA** to **Off**. Users will once again be redirected to your on-premises federation server for MFA.
+Users will no longer be redirected to your on-premises federation server for MFA, whether they're targeted by the Staged Rollout tool or not. Note this can take up to 24 hours to take effect.
>[!NOTE] >The update of the domain federation setting can take up to 24 hours to take effect.
If the upgrade had issues, follow these steps to roll back:
} ```
-Users will no longer be redirected to your on-premises federation server for MFA, whether they're targeted by the Staged Rollout tool or not. Note this can take up to 24 hours to take effect.
+
+Set the **Staged Rollout for Azure MFA** to **Off**. Users will once again be redirected to your on-premises federation server for MFA.
## Next steps
active-directory Overview Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/overview-authentication.md
Persistent session tokens are stored as persistent cookies on the web browser's
| ESTSAUTHPERSISTENT | Common | Contains user's session information to facilitate SSO. Persistent. | | ESTSAUTHLIGHT | Common | Contains Session GUID Information. Lite session state cookie used exclusively by client-side JavaScript in order to facilitate OIDC sign-out. Security feature. | | SignInStateCookie | Common | Contains list of services accessed to facilitate sign-out. No user information. Security feature. |
-| CCState | Common | Contains session information state to be used between Azure AD and the [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). |
+| CCState | Common | Contains session information state to be used between Azure AD and the [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md). |
| buid | Common | Tracks browser related information. Used for service telemetry and protection mechanisms. | | fpc | Common | Tracks browser related information. Used for tracking requests and throttling. | | esctx | Common | Session context cookie information. For CSRF protection. Binds a request to a specific browser instance so the request can't be replayed outside the browser. No user information. |
Persistent session tokens are stored as persistent cookies on the web browser's
| wlidperf | Common | Client-side cookie (set by JavaScript) that tracks local time for performance purposes. | | x-ms-gateway-slice | Common | Azure AD Gateway cookie used for tracking and load balance purposes. | | stsservicecookie | Common | Azure AD Gateway cookie also used for tracking purposes. |
-| x-ms-refreshtokencredential | Specific | Available when [Primary Refresh Token (PRT)](/azure/active-directory/devices/concept-primary-refresh-token) is in use. |
+| x-ms-refreshtokencredential | Specific | Available when [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md) is in use. |
| estsStateTransient | Specific | Applicable to new session information model only. Transient. | | estsStatePersistent | Specific | Same as estsStateTransient, but persistent. | | ESTSNCLOGIN | Specific | National Cloud Login related Cookie. | | UsGovTraffic | Specific | US Gov Cloud Traffic Cookie. | | ESTSWCTXFLOWTOKEN | Specific | Saves flowToken information when redirecting to ADFS. |
-| CcsNtv | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). Native flows. |
-| CcsWeb | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). Web flows. |
-| Ccs* | Specific | Cookies with prefix Ccs*, have the same purpose as the ones without prefix, but only apply when [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults) is in use. |
+| CcsNtv | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md). Native flows. |
+| CcsWeb | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md). Web flows. |
+| Ccs* | Specific | Cookies with prefix Ccs*, have the same purpose as the ones without prefix, but only apply when [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md) is in use. |
| threxp | Specific | Used for throttling control. | | rrc | Specific | Cookie used to identify a recent B2B invitation redemption. | | debug | Specific | Cookie used to track if user's browser session is enabled for DebugMode. |
To learn more about multi-factor authentication concepts, see [How Azure AD Mult
[tutorial-sspr]: tutorial-enable-sspr.md [tutorial-azure-mfa]: tutorial-enable-azure-mfa.md [concept-sspr]: concept-sspr-howitworks.md
-[concept-mfa]: concept-mfa-howitworks.md
+[concept-mfa]: concept-mfa-howitworks.md
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Applications must have the Intune SDK with policy assurance implemented and must
The following client apps are confirmed to support this setting. This list isn't exhaustive and is subject to change:
+- iAnnotate for Office 365
- Microsoft Cortana - Microsoft Edge - Microsoft Excel
The following client apps are confirmed to support this setting, this list isn't
- Microsoft PowerApps - Microsoft PowerPoint - Microsoft SharePoint
+- Microsoft Stream Mobile Native 2.0
- Microsoft Teams - Microsoft To Do - Microsoft Word
+- Microsoft Whiteboard Services
- Microsoft Field Service (Dynamics 365) - MultiLine for Intune - Nine Mail - Email and Calendar
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
Previously updated : 05/18/2021 Last updated : 11/15/2022 # Developer guide to Conditional Access authentication context
-[Conditional Access](../conditional-access/overview.md) is the Zero Trust control plane that allows you to target policies for access to all your apps – old or new, private, or public, on-premises, or multi-cloud. With [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context), you can apply different policies within those apps.
+[Conditional Access](../conditional-access/overview.md) is the Zero Trust control plane that allows you to target policies for access to all your apps – old or new, private, or public, on-premises, or multicloud. With [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context), you can apply different policies within those apps.
Conditional Access authentication context (auth context) allows you to apply granular policies to sensitive data and actions instead of just at the app level. You can refine your Zero Trust policies for least privileged access while minimizing user friction and keeping users more productive and your resources more secure. Today, it can be used by applications using [OpenId Connect](https://openid.net/specs/openid-connect-core-1_0.html) for authentication developed by your company to protect sensitive resources, like high-value transactions or viewing employee personal data.
Use the Azure AD Conditional Access engine's new auth context feature to trigger
## Problem statement
-The IT administrators and regulators often struggle between balancing prompting their users with additional factors of authentication too frequently and achieving adequate security and policy adherence for applications and services where parts of them contain sensitive data and operations. It can be a choice between a strong policy that impacts users' productivity when they access most data and actions or a policy that is not strong enough for sensitive resources.
+IT administrators and regulators often struggle to balance prompting their users for additional authentication factors too frequently against achieving adequate security and policy adherence for applications and services where parts of them contain sensitive data and operations. It can be a choice between a strong policy that impacts users' productivity when they access most data and actions or a policy that isn't strong enough for sensitive resources.
So, what if apps could mix both approaches, functioning with relatively lower security and less frequent prompts for most users and operations, yet conditionally stepping up the security requirement when users access more sensitive parts?
For example, while users may sign in to SharePoint using multi-factor authentica
**Second**, [Conditional Access](../conditional-access/overview.md) requires Azure AD Premium P1 licensing. More information about licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
-**Third**, today it is only available to applications that sign-in users. Applications that authenticate as themselves are not supported. Use the [Authentication flows and application scenarios guide](authentication-flows-app-scenarios.md) to learn about the supported authentication app types and flows in the Microsoft Identity Platform.
+**Third**, today it's only available to applications that sign in users. Applications that authenticate as themselves aren't supported. Use the [Authentication flows and application scenarios guide](authentication-flows-app-scenarios.md) to learn about the supported authentication app types and flows in the Microsoft Identity Platform.
## Integration steps
Create or modify your Conditional Access policies to use the Conditional Access
1. Identify actions in the code that can be made available to map against auth context IDs. 1. Build a screen in the admin portal of the app (or an equivalent functionality) that IT admins can use to map sensitive actions against an available auth context ID.
-1. See the code sample, [Use the Conditional Access Auth Context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md) for an example on how it is done.
+1. See the code sample, [Use the Conditional Access Auth Context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md) for an example on how it's done.
These steps are the changes that you need to carry in your code base. The steps broadly comprise
These steps are the changes that you need to carry in your code base. The steps
- Checks if the application's action being called requires step-up authentication. It does so by checking its database for a saved mapping for this method - If this action indeed requires an elevated auth context, it checks the **acrs** claim for an existing, matching Auth Context ID.
- - If a matching Auth Context ID is not found, it raises a [claims challenge](claims-challenge.md#claims-challenge-header-format).
+ - If a matching Auth Context ID isn't found, it raises a [claims challenge](claims-challenge.md#claims-challenge-header-format).
```csharp public void CheckForRequiredAuthContext(string method)
These steps are the changes that you need to carry in your code base. The steps
## Caveats and recommendations
-Do not hard-code Auth Context values in your app. Apps should read and apply auth context [using MS Graph calls](/graph/api/resources/authenticationcontextclassreference). This practice is critical for [multi-tenant applications](howto-convert-app-to-be-multi-tenant.md). The Auth Context values will vary between Azure AD tenants will not available in Azure AD free edition. For more information on how an app should query, set, and use auth context in their code, see the code sample, [Use the Conditional Access auth context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md) as how an app should query, set and use auth context in their code.
+Don't hard-code Auth Context values in your app. Apps should read and apply auth context [using MS Graph calls](/graph/api/resources/authenticationcontextclassreference). This practice is critical for [multi-tenant applications](howto-convert-app-to-be-multi-tenant.md). Auth Context values vary between Azure AD tenants and aren't available in the Azure AD free edition. For more information on how an app should query, set, and use auth context in its code, see the code sample, [Use the Conditional Access auth context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md).
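For instance, rather than hard-coding IDs such as `c1`, an app can enumerate the tenant's auth context definitions at startup. A minimal sketch using Microsoft Graph, assuming a token carrying the `Policy.Read.ConditionalAccess` permission is already available (use the `beta` endpoint if the resource isn't yet exposed to your tenant on `v1.0`):

```javascript
// Sketch: read the tenant's authentication context class references from
// Microsoft Graph instead of hard-coding them. `accessToken` is assumed
// to carry the Policy.Read.ConditionalAccess permission.
async function listAuthContexts(accessToken) {
  const resp = await fetch(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/authenticationContextClassReferences",
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!resp.ok) throw new Error(`Graph request failed: ${resp.status}`);
  const { value } = await resp.json();
  return value; // e.g. [{ id: "c1", displayName: "...", isAvailable: true }, ...]
}
```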
-Do not use auth context where the app itself is going to be a target of Conditional Access policies. The feature works best when parts of the application require the user to meet a higher bar of authentication.
+Don't use auth context where the app itself is going to be a target of Conditional Access policies. The feature works best when parts of the application require the user to meet a higher bar of authentication.
## Code samples - [Use the Conditional Access auth context to perform step-up authentication for high-privilege operations in a web app](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) - [Use the Conditional Access auth context to perform step-up authentication for high-privilege operations in a web API](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md)
+## Authentication context (ACRS) in Conditional Access: expected behavior
+
+## Explicit auth context satisfaction in requests
+
+A client can explicitly ask for a token with an Auth Context (ACRS) through the claims in the request's body. If an ACRS was requested, Conditional Access will allow issuing the token with the requested ACRS if all challenges were completed.
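For example, with MSAL Node the ACRS can be requested through the OpenID Connect claims request parameter. A minimal sketch, where the client ID, tenant, scopes, and the `c1` auth context ID are placeholder assumptions:

```javascript
// Sketch: explicitly request auth context "c1" via the claims request parameter.
// Client ID, authority, scopes, and "c1" are placeholders; `account` is a
// previously cached MSAL account object.
const msal = require("@azure/msal-node");

const pca = new msal.PublicClientApplication({
  auth: {
    clientId: "<client-id>",
    authority: "https://login.microsoftonline.com/<tenant-id>",
  },
});

async function getTokenWithAcrs(account) {
  return pca.acquireTokenSilent({
    account,
    scopes: ["User.Read"],
    // OIDC claims request: ask Azure AD to satisfy and stamp ACRS "c1".
    claims: JSON.stringify({
      access_token: { acrs: { essential: true, value: "c1" } },
    }),
  });
}
```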
+
+## Expected behavior when an auth context isn't protected by Conditional Access in the tenant
+
+Conditional Access may issue an ACRS in a token's claims when all Conditional Access policies assigned to the ACRS value have been satisfied. If no Conditional Access policy is assigned to an ACRS value, the claim may still be issued, because without assigned policies there are no policy requirements left to satisfy.
+
+## Summary table for expected behavior when ACRS are explicitly requested
+
+| ACRS requested | Policy applied | Control satisfied | ACRS added to claims |
+|--|--|--|--|
+|Yes | No | Yes | Yes |
+|Yes | Yes | No | No |
+|Yes | Yes | Yes | Yes |
+|Yes | No policies configured with ACRS | Yes | Yes |
+
+## Implicit auth context satisfaction by opportunistic evaluation
+
+A resource provider may opt in to the optional 'acrs' claim. Conditional Access will try to add ACRS to the token claims opportunistically, to avoid extra round trips to Azure AD to acquire new tokens. In that evaluation, Conditional Access checks whether the policies protecting auth context challenges are already satisfied and, if so, adds the ACRS to the token claims.
+
+> [!NOTE]
+> Each token type will need to be individually opted-in (ID token, Access token).
+>
+> If a resource provider doesn't opt in to the optional 'acrs' claim, the only way to get an ACRS in the token is to explicitly ask for it in a token request. The resource provider won't get the benefits of the opportunistic evaluation, so whenever the required ACRS is missing from the token claims, the resource provider will challenge the client to acquire a new token containing it in the claims.
+
+## Expected behavior with auth context and session controls for implicit ACRS opportunistic evaluation
+
+### Sign-in frequency by interval
+
+Conditional Access will consider "sign-in frequency by interval" as satisfied for opportunistic ACRS evaluation when all the present authentication factors auth instants are within the sign-in frequency interval. In case that the first factor auth instant is stale, or if the second factor (MFA) is present and its auth instant is stale, the sign-in frequency by interval won't be satisfied and the ACRS won't be issued in the token opportunistically.
+
+### Cloud App Security (CAS)
+
+Conditional Access will consider the CAS session control satisfied for opportunistic ACRS evaluation when a CAS session was established during that request. For example, if any Conditional Access policy applied to an incoming request enforced a CAS session, and another Conditional Access policy also requires a CAS session, the enforced CAS session satisfies the CAS session control for the opportunistic evaluation.
+
+## Expected behavior when a tenant contains Conditional Access policies protecting auth context
+
+The following table shows all corner cases where ACRS is added to the token's claims by opportunistic evaluation.
+
+**Policy A**: Require MFA from all users, excluding the user "Ariel", when asking for "c1" acrs.
+**Policy B**: Block all users, excluding user "Jay", when asking for "c2", or "c3" acrs.
+
+| Flow | ACRS requested | Policy applied | Control satisfied | ACRS added to claims |
+|--|--|--|--|--|
+| Ariel requests an access token | "c1" | None | Yes for "c1". No for "c2" and "c3" | "c1" (requested) |
+| Ariel requests an access token | "c2" | Policy B | Blocked by policy B | None |
+| Ariel requests an access token | None | None | Yes for "c1". No for "c2" and "c3" | "c1" (opportunistically added from policy A) |
+| Jay requests an access token (without MFA) | "c1" | Policy A | No | None |
+| Jay requests an access token (with MFA) | "c1" | Policy A | Yes | "c1" (requested), "c2" (opportunistically added from policy B), "c3" (opportunistically added from policy B)|
+| Jay requests an access token (without MFA) | "c2" | None | Yes for "c2" and "c3". No for "c1" | "c2" (requested), "c3" (opportunistically added from policy B) |
+| Jay requests an access token (with MFA) | "c2" | None | Yes for "c1", "c2" and "c3" | "c1" (best effort from A), "c2" (requested), "c3" (opportunistically added from policy B) |
+| Jay requests an access token (with MFA) | None | None | Yes for "c1", "c2" and "c3" | "c1", "c2", "c3" all opportunistically added |
+| Jay requests an access token (without MFA) | None | None | Yes for "c2" and "c3". No for "c1"| "c2", "c3" all opportunistically added |
+ ## Next steps - [Granular Conditional Access for sensitive data and actions (Blog)](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/granular-conditional-access-for-sensitive-data-and-actions/ba-p/1751775)
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md
## Update app registration settings
-When working with ADAL Node, you were likely using the **Azure AD v1.0 endpoint**. Apps migrating from ADAL to MSAL should also consider switching to **Azure AD v2.0 endpoint**.
+When working with ADAL Node, you were likely using the **Azure AD v1.0 endpoint**. Apps migrating from ADAL to MSAL should switch to the **Azure AD v2.0 endpoint**.
1. Review the [differences between v1 and v2 endpoints](../azuread-dev/azure-ad-endpoint-comparison.md) 1. Update, if necessary, your existing app registrations accordingly.
-> [!NOTE]
-> In order to ensure backward compatibility, MSAL Node supports both v1.0 end v2.0 endpoints.
- ## Install and import MSAL 1. install MSAL Node package via NPM:
authenticationContext.acquireTokenWithAuthorizationCode(
); ```
-MSAL Node supports both **v1.0** and **v2.0** endpoints. The v2.0 endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource:
+The v2.0 endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource:
```javascript const tokenRequest = {
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
The error message displayed will be: "Due to a configuration change made by your
When a request to add a verified publisher is made, many signals are used to make a security risk assessment. If the user risk state is determined to be ‘AtRisk’, an error, “You're unable to add a verified publisher to this application. Contact your administrator for assistance” will be returned. Please investigate the user risk and take the appropriate steps to remediate the risk (guidance below):
-> [Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk#risky-users)
+> [Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users)
-> [Remediate risk/unblock users](/azure/active-directory/identity-protection/howto-identity-protection-remediate-unblock)
+> [Remediate risk/unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
-> [Self-remediation guidance](/azure/active-directory/identity-protection/howto-identity-protection-remediate-unblock)
+> [Self-remediation guidance](../identity-protection/howto-identity-protection-remediate-unblock.md)
> Self-serve password reset (SSPR): If the organization allows SSPR, use aka.ms/sspr to reset the password for remediation. Please choose a strong password; choosing a weak password may not reset the risk state.
If you've reviewed all of the previous information and are still receiving an er
- TenantId where app is registered - MPN ID - REST request being made -- Error code and message being returned
+- Error code and message being returned
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md
Title: Microsoft identity platform overview description: Learn about the components of the Microsoft identity platform and how they can help you build identity and access management (IAM) support into your applications. Previously updated : 10/18/2022 Last updated : 11/16/2022 # Customer intent: As an application developer, I want a quick introduction to the Microsoft identity platform so I can decide if this platform meets my application development requirements.
Learn how core authentication and Azure AD concepts apply to the Microsoft ident
[Azure AD B2B](../external-identities/what-is-b2b.md) - Invite external users into your Azure AD tenant as "guest" users, and assign permissions for authorization while they use their existing credentials for authentication.
-[Azure Active Directory for developers (v1.0)](../azuread-dev/v1-overview.md) - Exclusively for developers with existing apps that use the older v1.0 endpoint. **Do not** use v1.0 for new projects.
- ## Next steps If you have an Azure account, then you have access to an Azure Active Directory tenant. However, most Microsoft identity platform developers need their own Azure AD tenant for use while developing applications, known as a *dev tenant*.
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
The value of `{tenant}` varies based on the application's sign-in audience as sh
| `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` | Only users from a specific Azure AD tenant (directory members with a work or school account or directory guests with a personal Microsoft account) can sign in to the application. <br/><br/>The value can be the domain name of the Azure AD tenant or the tenant ID in GUID format. You can also use the consumer tenant GUID, `9188040d-6c67-4c5b-b112-36a304b66dad`, in place of `consumers`. | > [!TIP]
-> Note that when using the `common` or `consumers` authority for personal Microsoft accounts, the consuming resource application must be configured to support such type of accounts in accordance with [signInAudience](/azure/active-directory/develop/supported-accounts-validation).
+> Note that when using the `common` or `consumers` authority for personal Microsoft accounts, the consuming resource application must be configured to support such type of accounts in accordance with [signInAudience](./supported-accounts-validation.md).
You can also find your app's OpenID configuration document URI in its app registration in the Azure portal.
When you redirect the user to the `end_session_endpoint`, the Microsoft identity
* Review the [UserInfo endpoint documentation](userinfo.md). * [Populate claim values in a token](active-directory-claims-mapping.md) with data from on-premises systems.
-* [Include your own claims in tokens](active-directory-optional-claims.md).
+* [Include your own claims in tokens](active-directory-optional-claims.md).
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
az identity federated-credential delete --name $ficId --identity-name $uaId --re
::: zone pivot="identity-wif-mi-methods-powershell" ## Prerequisites -- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](/azure/active-directory/managed-identities-azure-resources/overview). Be sure to review the [difference between a system-assigned and user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types).
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
- To run the example scripts, you have two options: - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks. - Run scripts locally with Azure PowerShell, as described in the next section.-- [Create a user-assigned manged identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2)
+- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2)
- Find the object ID of the user-assigned managed identity, which you need in the following steps. ### Configure Azure PowerShell locally
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
The following scenarios are supported for accessing Azure AD protected resources
Create a trust relationship between the external IdP and an app registration or user-assigned managed identity in Azure AD. The federated identity credential is used to indicate which token from the external IdP should be trusted by your application or managed identity. You configure a federated identity either: -- On an Azure AD [App registration](/azure/active-directory/develop/quickstart-register-app) in the Azure portal or through Microsoft Graph. This configuration allows you to get an access token for your application without needing to manage secrets outside Azure. For more information, learn how to [configure an app to trust an external identity provider](workload-identity-federation-create-trust.md).
+- On an Azure AD [App registration](./quickstart-register-app.md) in the Azure portal or through Microsoft Graph. This configuration allows you to get an access token for your application without needing to manage secrets outside Azure. For more information, learn how to [configure an app to trust an external identity provider](workload-identity-federation-create-trust.md).
- On a user-assigned managed identity through the Azure portal, Azure CLI, Azure PowerShell, Azure SDK, and Azure Resource Manager (ARM) templates. The external workload uses the access token to access Azure AD protected resources without needing to manage secrets (in supported scenarios). The [steps for configuring the trust relationship](workload-identity-federation-create-trust-user-assigned-managed-identity.md) will differ, depending on the scenario and external IdP. The workflow for exchanging an external token for an access token is the same, however, for all scenarios. The following diagram shows the general workflow of a workload exchanging an external token for an access token and then accessing Azure AD protected resources.
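Under the hood, the exchange is a standard OAuth 2.0 client credentials request in which the external token is presented as a client assertion. A minimal sketch; the tenant ID, client ID, and scope are placeholders, and `externalIdpToken` is assumed to be the token issued by the trusted external IdP:

```javascript
// Sketch: exchange an external IdP token for an Azure AD access token
// using the client credentials grant with a federated credential.
async function exchangeFederatedToken(externalIdpToken) {
  const params = new URLSearchParams({
    client_id: "<app-client-id>",
    grant_type: "client_credentials",
    scope: "https://graph.microsoft.com/.default",
    client_assertion_type: "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    client_assertion: externalIdpToken, // token from the trusted external IdP
  });
  const resp = await fetch(
    "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token",
    { method: "POST", body: params }
  );
  if (!resp.ok) throw new Error(`Token exchange failed: ${resp.status}`);
  const { access_token } = await resp.json();
  return access_token;
}
```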
Learn more about how workload identity federation works:
- How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust.md) on an app registration. - How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust-user-assigned-managed-identity.md) on a user-assigned managed identity. - Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Clean Up Stale Guest Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md
As users collaborate with external partners, it's possible that many guest accounts get created in Azure Active Directory (Azure AD) tenants over time. When collaboration ends and the users no longer access your tenant, the guest accounts may become stale. Admins can use Access Reviews to automatically review inactive guest users and block them from signing in, and later, delete them from the directory.
-Learn more about [how to manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
+Learn more about [how to manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md).
There are a few recommended patterns that are effective at cleaning up stale guest accounts: 1. Create a multi-stage review whereby guests self-attest whether they still need access. A second-stage reviewer assesses results and makes a final decision. Guests with denied access are disabled and later deleted.
-2. Create a review to remove inactive external guests. Admins define inactive as period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts).
+2. Create a review to remove inactive external guests. Admins define inactive as a period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](../reports-monitoring/howto-manage-inactive-user-accounts.md#how-to-detect-inactive-user-accounts).
Use the following instructions to learn how to create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment. ## Create a multi-stage review for guests to self-attest continued access
-1. Create a [dynamic group](/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](./groups-create-rule.md) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an Access Review](/azure/active-directory/governance/create-access-review)
+2. To [create an Access Review](../governance/create-access-review.md)
for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**. 3. Select **New access review**.
Use the following instructions to learn how to create Access Reviews that follow
## Create a review to remove inactive external guests
-1. Create a [dynamic group](/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](./groups-create-rule.md) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an access review](/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
+2. To [create an access review](../governance/create-access-review.md) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
3. Select **New access review**.
Use the following instructions to learn how to create Access Reviews that follow
Guest users who don't sign into the tenant for the number of days you configured are disabled for 30 days, then deleted. After deletion, you can restore guests for up to 30 days, after which a new invitation is
-needed.
+needed.
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
B2B collaboration is enabled by default, but comprehensive admin settings let yo
- Use [external collaboration settings](external-collaboration-settings-configure.md) to define who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory. -- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](/azure/azure-government) or [Microsoft Azure China 21Vianet](/azure/china).
+- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](../../azure-government/index.yml) or [Microsoft Azure China 21Vianet](/azure/china).
## Easily invite guest users from the Azure AD portal
You can [enable integration with SharePoint and OneDrive](/sharepoint/sharepoint
- [External Identities pricing](external-identities-pricing.md) - [Add B2B collaboration guest users in the portal](add-users-administrator.md)-- [Understand the invitation redemption process](redemption-experience.md)
+- [Understand the invitation redemption process](redemption-experience.md)
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
Azure Active Directory (Azure AD B2B) allows external users to collaborate using their own identities. However, it isn't uncommon for organizations to issue local usernames and passwords to external users. This approach isn't recommended, because the bring-your-own-identity (BYOI) capabilities provided by Azure AD B2B offer better security, lower cost, and reduced complexity when compared to local account creation. Learn more
-[here.](/azure/active-directory/fundamentals/secure-external-access-resources)
+[here.](./secure-external-access-resources.md)
If your organization currently issues local credentials that external users have to manage and would like to migrate to using Azure AD B2B instead, this document provides a guide to make the transition as seamlessly as possible.
If your organization currently issues local credentials that external users have
Before migrating local accounts to Azure AD B2B, admins should understand what applications and workloads these external users need to access. For example, if external users need access to an application that is hosted on-premises, admins will need to validate that the application is integrated with Azure AD and that a provisioning process is implemented to provision the user from Azure AD to the application. The existence and use of on-premises applications could be a reason why local accounts are created in the first place. Learn more about [provisioning B2B guests to on-premises
-applications.](/azure/active-directory/external-identities/hybrid-cloud-to-on-premises)
+applications.](../external-identities/hybrid-cloud-to-on-premises.md)
All external-facing applications should have single-sign on (SSO) and provisioning integrated with Azure AD for the best end user experience.
External users should be notified that the migration will be taking place and wh
## Migrate local guest accounts to Azure AD B2B
-Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](/azure/active-directory/external-identities/invite-internal-users)
+Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](../external-identities/invite-internal-users.md)
This can be done in the UX or programmatically via PowerShell or the Microsoft Graph API. Once complete, the users will no longer authenticate with their local password, but will instead authenticate with their home identity/email that was populated in the user.mail attribute. You've successfully migrated to Azure AD B2B.
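A hedged sketch of the Graph call involved, using the invitation API; the IDs, email, redirect URL, and permission named here are placeholder assumptions:

```javascript
// Sketch: convert (invite) an existing local account to B2B via the
// Microsoft Graph invitation API. `accessToken` is assumed to carry the
// User.Invite.All permission; all other values are placeholders.
async function inviteInternalUser(accessToken, userObjectId, externalEmail) {
  const resp = await fetch("https://graph.microsoft.com/v1.0/invitations", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      invitedUserEmailAddress: externalEmail, // the user.mail value populated earlier
      inviteRedirectUrl: "https://myapps.microsoft.com",
      invitedUser: { id: userObjectId }, // ties the invitation to the existing local account
      sendInvitationMessage: false,
    }),
  });
  if (!resp.ok) throw new Error(`Invitation failed: ${resp.status}`);
  return resp.json();
}
```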
See the following articles on securing external access to resources. We recommen
1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) 1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md) 1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
-1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here)
+1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here)
active-directory Active Directory Users Assign Role Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md
# Assign user roles with Azure Active Directory
-The ability to manage Azure resources is granted by assigning roles that provide the required permissions. Roles can be assigned to individual users or groups. To align with the [Zero Trust guiding principles](/azure/security/fundamentals/zero-trust), use Just-In-Time and Just-Enough-Access policies when assigning roles.
+The ability to manage Azure resources is granted by assigning roles that provide the required permissions. Roles can be assigned to individual users or groups. To align with the [Zero Trust guiding principles](../../security/fundamentals/zero-trust.md), use Just-In-Time and Just-Enough-Access policies when assigning roles.
Before assigning roles to users, review the following Microsoft Learn articles:
You can remove role assignments from the **Administrative roles** page for a sel
- [Add guest users from another directory](../external-identities/what-is-b2b.md) -- [Explore other user management tasks](../enterprise-users/index.yml)
+- [Explore other user management tasks](../enterprise-users/index.yml)
active-directory Automate Provisioning To Applications Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-solutions.md
Use the numbered sections in the next two section to cross reference the followi
As customers transition identity management to the cloud, more users and groups are created directly in Azure AD. However, they still need a presence on-premises in AD DS to access various resources.
-3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](/azure/active-directory/external-identities/hybrid-cloud-to-on-premises). Alternatively, customers can user [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
+3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](../external-identities/hybrid-cloud-to-on-premises.md). Alternatively, customers can use [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
1. When a group is created in Azure AD, it can be automatically synchronized to AD DS using [Azure AD Connect sync](../hybrid/how-to-connect-group-writeback-v2.md).
As customers transition identity management to the cloud, more users and groups
|No.| What | From | To | Technology | | - | - | - | - | - |
-| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](/azure/active-directory/cloud-sync/what-is-cloud-sync) |
-| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](/azure/active-directory/hybrid/whatis-azure-ad-connect) |
+| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](../cloud-sync/what-is-cloud-sync.md) |
+| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](../hybrid/whatis-azure-ad-connect.md) |
| 3 |Groups| Azure AD| AD DS| [Azure AD Connect Sync](../hybrid/how-to-connect-group-writeback-v2.md) | | 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario), [PowerShell](https://github.com/Azure-Samples/B2B-to-AD-Sync)| | 5 |Users, groups| Azure AD| Managed AD| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) |
After users are provisioned into Azure AD, use Lifecycle Workflows (LCW) to auto
* **Leaver**: When users leave the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner.
-[Learn more about Azure AD Lifecycle Workflows](/azure/active-directory/governance/what-are-lifecycle-workflows)
+[Learn more about Azure AD Lifecycle Workflows](../governance/what-are-lifecycle-workflows.md)
> [!Note] > For scenarios not covered by LCW, customers can leverage the extensibility of [Logic Applications](../..//logic-apps/logic-apps-overview.md).
Organizations often need a complete audit trail of what users have access to app
1. Automate provisioning with any of your applications that are in the [Azure AD app gallery](../saas-apps/tutorial-list.md), support [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md), [SQL](../app-provisioning/on-premises-sql-connector-configure.md), or [LDAP](../app-provisioning/on-premises-ldap-connector-configure.md). 2. Evaluate [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md) for synchronization between AD DS and Azure AD
-3. Use the [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for complex provisioning scenarios
+3. Use the [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for complex provisioning scenarios
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
When a requirement exists to deploy IaaS workloads to Azure that require identit
![Diagram that shows Azure AD authentication to Azure VMs.](media/secure-with-azure-ad-resource-management/sign-into-vm.png)
-**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux).
+**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../devices/howto-vm-sign-in-azure-ad-linux.md).
**Credentials**: One of the key benefits of signing into virtual machines in Azure using Azure AD authentication is the ability to use the same federated or managed Azure AD credentials that you normally use for access to Azure AD services for sign-in to the virtual machine.

>[!NOTE]
>The Azure AD tenant that is used for sign-in in this scenario is the Azure AD tenant that is associated with the subscription that the virtual machine has been provisioned into. This Azure AD tenant can be one that has identities synchronized from on-premises AD DS. Organizations should make an informed choice that aligns with their isolation principles when choosing which subscription and Azure AD tenant they wish to use for sign-in to these servers.
-**Network Requirements**: These virtual machines will need to access Azure AD for authentication so you must ensure that the virtual machines network configuration permits outbound access to Azure AD endpoints on 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux) for more information.
+**Network Requirements**: These virtual machines will need to access Azure AD for authentication, so you must ensure that the virtual machine's network configuration permits outbound access to Azure AD endpoints on port 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../devices/howto-vm-sign-in-azure-ad-linux.md) for more information.
**Role-based Access Control (RBAC)**: Two RBAC roles are available to provide the appropriate level of access to these virtual machines. These RBAC roles can be configured via the Azure AD Portal or via the Azure Cloud Shell Experience. For more information, see [Configure role assignments for the VM](../devices/howto-vm-sign-in-azure-ad-windows.md).
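As a quick sketch of both checks, assuming the Az PowerShell module and placeholder resource names: verify outbound reachability from the VM, then assign one of the two sign-in roles at the VM's scope:

```powershell
# On the VM: confirm outbound HTTPS (port 443) to an Azure AD endpoint
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443

# From an admin session: grant a user administrator-level sign-in to the VM.
# User, resource group, and VM names are placeholders; the other available
# role is "Virtual Machine User Login".
Connect-AzAccount
$vm = Get-AzVM -ResourceGroupName "rg-example" -Name "vm-example"
New-AzRoleAssignment -SignInName "user@contoso.com" `
                     -RoleDefinitionName "Virtual Machine Administrator Login" `
                     -Scope $vm.Id
```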
For this isolated model, it's assumed that there's no connectivity to the VNet t
* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Group writeback allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can use this feature to manage groups in the cloud, while controlling access to on-premises applications and resources.

> [!NOTE]
-> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
+> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](#understand-limitations-of-public-preview) before you enable this functionality.
There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities:
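Enabling the expanded version is done with the ADSync PowerShell module on the Azure AD Connect server. A hedged sketch follows; treat the feature flag as an assumption and review the limitations linked in the note above before enabling:

```powershell
# Run on the Azure AD Connect server. Pause the sync cycle, turn on the
# expanded group writeback preview, then resume syncing. Follow the official
# enablement article for the full, supported sequence.
Import-Module ADSync
Set-ADSyncScheduler -SyncCycleEnabled $false
Set-ADSyncAADCompanyFeature -GroupWritebackV2 $true
Set-ADSyncScheduler -SyncCycleEnabled $true
```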
active-directory How To Upgrade Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-upgrade-previous-version.md
This method is preferred when you have a single server and less than about 100,0
![In-place upgrade](./media/how-to-upgrade-previous-version/inplaceupgrade.png)
-If you've made changes to the out-of-box synchronization rules, then these rules are set back to the default configuration on upgrade. To make sure that your configuration is kept between upgrades, make sure that you make changes as they're described in [Best practices for changing the default configuration](how-to-connect-sync-best-practices-changing-default-configuration.md). If you already changed the default sync rules, please see how to [Fix modified default rules in Azure AD Connect](/azure/active-directory/hybrid/how-to-connect-sync-best-practices-changing-default-configuration), before starting the upgrade process.
+If you've made changes to the out-of-box synchronization rules, then these rules are set back to the default configuration on upgrade. To keep your configuration between upgrades, make sure you apply changes as described in [Best practices for changing the default configuration](how-to-connect-sync-best-practices-changing-default-configuration.md). If you already changed the default sync rules, see how to [Fix modified default rules in Azure AD Connect](./how-to-connect-sync-best-practices-changing-default-configuration.md) before starting the upgrade process.
During in-place upgrade, there may be changes introduced that require specific synchronization activities (including Full Import step and Full Synchronization step) to be executed after upgrade completes. To defer such activities, refer to section [How to defer full synchronization after upgrade](#how-to-defer-full-synchronization-after-upgrade).
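Before starting an in-place upgrade, you can inventory non-standard rules from the ADSync module so you know what to re-apply or fix afterward. A sketch, assuming the `IsStandardRule` property distinguishes the shipped rule set:

```powershell
# On the Azure AD Connect server: list synchronization rules that aren't
# part of the out-of-box rule set.
Import-Module ADSync
Get-ADSyncRule |
    Where-Object { -not $_.IsStandardRule } |
    Select-Object Name, Direction, Precedence
```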
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t
### Functional changes
 - We added a new attribute 'employeeLeaveDateTime' for syncing to Azure AD. To learn more about how to use this attribute to manage your users' life cycles, please refer to [this article](../governance/how-to-lifecycle-workflow-sync-attributes.md).
### Bug fixes
You can use these cmdlets to retrieve the TLS 1.2 enablement status or set it as
- We added the following new user properties to sync from on-premises Active Directory to Azure AD:
  - employeeType
  - employeeHireDate
+ >[!NOTE]
+ > There's no corresponding EmployeeHireDate or EmployeeLeaveDateTime attribute in Active Directory. If you're importing from on-premises AD, you'll need to identify an attribute in AD that can be used. This attribute must be a string. For more information, see [Synchronizing lifecycle workflow attributes](../governance/how-to-lifecycle-workflow-sync-attributes.md).
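For cloud-only users, where no on-premises source attribute applies, these properties can also be set directly through Microsoft Graph PowerShell. A sketch with placeholder values; the `User-LifeCycleInfo` permission scope is an assumption to verify against the linked article:

```powershell
# Set lifecycle attributes on a cloud-only user. The UPN, date, and
# employee type are placeholders.
Connect-MgGraph -Scopes "User.ReadWrite.All", "User-LifeCycleInfo.ReadWrite.All"

Update-MgUser -UserId "b.simon@contoso.com" `
              -EmployeeHireDate (Get-Date "2023-01-15") `
              -EmployeeType "Employee"
```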
+
- This release requires PowerShell version 5.0 or newer to be installed on the Windows server. This version is part of Windows Server 2016 and newer.
- We increased the group sync membership limits to 250,000 with the new V2 endpoint.
- We updated the Generic LDAP Connector and the Generic SQL Connector to the latest versions. To learn more about these connectors, see the reference documentation for:
This is a bug fix release. There are no functional changes in this release.
## Next steps
-Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory Concept Identity Protection Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-policies.md
Previously updated : 10/04/2022 Last updated : 11/11/2022
If risks are detected on a sign-in, users can perform the required access contro
Identity Protection analyzes signals about user accounts and calculates a risk score based on the probability that the user has been compromised. If a user has risky sign-in behavior, or their credentials have been leaked, Identity Protection will use these signals to calculate the user risk level. Administrators can configure user risk-based Conditional Access policies to enforce access controls based on user risk, including requirements such as:
- Block access
-- Allow access but require a secure password change using [Azure AD self-service password reset](../authentication/howto-sspr-deployment.md).
+- Allow access but require a secure password change.
A secure password change will remediate the user risk and close the risky user event to prevent unnecessary noise for administrators.
-> [!NOTE]
-> Users must have previously registered for self-service password reset before triggering the user risk policy.
-
## Identity Protection policies

While Identity Protection also offers a user interface for creating user risk policy and sign-in risk policy, we highly recommend that you [use Azure AD Conditional Access to create risk-based policies](howto-identity-protection-configure-risk-policies.md) for the following benefits:
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Previously updated : 08/16/2022 Last updated : 11/10/2022
Premium detections are visible only to Azure AD Premium P2 customers. Customers
| Risk detection | Detection type | Description |
| - | - | - |
| Possible attempt to access Primary Refresh Token (PRT) | Offline | This risk detection type is detected by Microsoft Defender for Endpoint (MDE). A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10, Windows Server 2016, and later versions, iOS, and Android devices. A PRT is a JSON Web Token (JWT) that's specially issued to Microsoft first-party token brokers to enable single sign-on (SSO) across the applications used on those devices. Attackers can attempt to access this resource to move laterally into an organization or perform credential theft. This detection will move users to high risk and will only fire in organizations that have deployed MDE. This detection is low-volume and will be seen infrequently by most organizations. However, when it does occur it's high risk and users should be remediated. |
-| Anomalous user activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated user. The post-authentication behavior of users is assessed for anomalies. This behavior is based on actions occurring for the account, along with any sign-in risk detected. |
+| Anomalous user activity | Offline | This risk detection baselines normal administrative user behavior in Azure AD, and spots anomalous patterns of behavior like suspicious changes to the directory. The detection is triggered against the administrator making the change or the object that was changed. |
+
#### Nonpremium user risk detections
active-directory Concept Identity Protection User Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-user-experience.md
Previously updated : 01/21/2022 Last updated : 11/11/2022
When an administrator has configured a policy for sign-in risks, affected users
### Risky sign-in self-remediation
-1. The user is informed that something unusual was detected about their sign-in. This could be something like, such as signing in from a new location, device, or app.
+1. The user is informed that something unusual was detected about their sign-in, such as signing in from a new location, device, or app.
![Something unusual prompt](./media/concept-identity-protection-user-experience/120.png)
If your organization has users who are delegated access to another tenant and th
1. An organization has a managed service provider (MSP) or cloud solution provider (CSP) who takes care of configuring their cloud environment.
1. One of the MSP technicians' credentials is leaked and triggers high risk. That technician is blocked from signing in to other tenants.
1. The technician can self-remediate and sign in if the home tenant has enabled the appropriate policies [requiring password change for high risk users](../conditional-access/howto-conditional-access-policy-risk-user.md) or [MFA for risky users](../conditional-access/howto-conditional-access-policy-risk.md).
- 1. If the home tenant hasn't enabled self-remediation policies, an administrator in the technician's home tenant will have to [remediate the risk](howto-identity-protection-remediate-unblock.md#remediation).
+ 1. If the home tenant hasn't enabled self-remediation policies, an administrator in the technician's home tenant will have to [remediate the risk](howto-identity-protection-remediate-unblock.md#risk-remediation).
## See also
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
Previously updated : 02/07/2022 Last updated : 11/10/2022
We detect risk on workload identities across sign-in behavior and offline indica
| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the account's risk history (via UI or API). |
| Leaked Credentials | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks in the credentials in public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
| Malicious application | Offline | This detection indicates that Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
-| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that may be violating our terms of service, but has not disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
-| Anomalous service principal activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated service principal. The post-authentication behavior of service principals is assessed for anomalies. This behavior is based on actions occurring for the account, along with any sign-in risk detected. |
+| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that may be violating our terms of service, but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
+| Anomalous service principal activity | Offline | This risk detection baselines normal administrative service principal behavior in Azure AD, and spots anomalous patterns of behavior like suspicious changes to the directory. The detection is triggered against the administrative service principal making the change or the object that was changed. |
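The riskyServicePrincipals API referenced in the table can be queried directly. A hedged sketch using a raw Graph request (beta endpoint at the time of writing; the permission scope is an assumption):

```powershell
# List risky workload identities and their current risk state.
Connect-MgGraph -Scopes "IdentityRiskyServicePrincipal.Read.All"

$risky = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/identityProtection/riskyServicePrincipals"

$risky.value | ForEach-Object {
    "{0} - riskLevel: {1}, riskState: {2}" -f $_.displayName, $_.riskLevel, $_.riskState
}
```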
## Identify risky workload identities
active-directory Howto Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md
Previously updated : 01/24/2022 Last updated : 11/11/2022
Administrators can then choose to return to the user's risk or sign-ins report t
Organizations may use the following frameworks to begin their investigation into any suspicious activity. Investigations may require having a conversation with the user in question, review of the [sign-in logs](../reports-monitoring/concept-sign-ins.md), or review of the [audit logs](../reports-monitoring/concept-audit-logs.md) to name a few.

1. Check the logs and validate whether the suspicious activity is normal for the given user.
- 1. Look at the userΓÇÖs past activities including at least the following properties to see if they are normal for the given user.
+ 1. Look at the user's past activities including at least the following properties to see if they're normal for the given user.
    1. Application
    1. Device - Is the device registered or compliant?
    1. Location - Is the user traveling to a different location or accessing devices from multiple locations?
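One way to pull the user's recent activity for this review is the Graph reports API. A sketch with a placeholder UPN, assuming the Microsoft.Graph.Reports module:

```powershell
# Retrieve a user's recent sign-ins to review app, device, IP, and
# location patterns against their normal behavior.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'b.simon@contoso.com'" -Top 50 |
    Select-Object CreatedDateTime, AppDisplayName, IPAddress,
                  @{ n = 'City'; e = { $_.Location.City } }
```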
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Previously updated : 02/17/2022 Last updated : 11/11/2022
# Remediate risks and unblock users
-After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risk or unblock users. Organizations can enable automated remediation using their [risk policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to close all risk detections that they're presented in a time period your organization is comfortable with. Microsoft recommends closing events quickly, because time matters when working with risk.
+After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risky users or unblock them. Organizations can enable automated remediation by setting up [risk-based policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to investigate and remediate all risky users in a time period that your organization is comfortable with. Microsoft recommends acting quickly, because time matters when working with risks.
-## Remediation
+## Risk remediation
-All active risk detections contribute to the calculation of a value called user risk level. The user risk level is an indicator (low, medium, high) for the probability that an account has been compromised. As an administrator, you want to get all risk detections closed, so that the affected users are no longer at risk.
+All active risk detections contribute to the calculation of the user's risk level. The user risk level is an indicator (low, medium, high) of the probability that the user's account has been compromised. As an administrator, after a thorough investigation of the risky users and the corresponding risky sign-ins and detections, you want to remediate the risky users so that they're no longer at risk and won't be blocked.
-Some risks detections may be marked by Identity Protection as "Closed (system)" because the events were no longer determined to be risky.
+Some risk detections and the corresponding risky sign-ins may be marked by Identity Protection as dismissed with risk state "Dismissed" and risk detail "Azure AD Identity Protection assessed sign-in safe" because those events were no longer determined to be risky.
Administrators have the following options to remediate:
-
-- Self-remediation with risk policy
+- Set up [risk-based policies](howto-identity-protection-configure-risk-policies.md) to allow users to self-remediate their risks
- Manual password reset
- Dismiss user risk
-- Close individual risk detections manually
-### Remediation framework
+### Self-remediation with risk-based policy
-1. If the account is confirmed compromised:
- 1. Select the event or user in the **Risky sign-ins** or **Risky users** reports and choose "Confirm compromised".
- 1. If a risk policy or a Conditional Access policy wasn't triggered at part of the risk detection, and the risk wasn't [self-remediated](#self-remediation-with-risk-policy), then:
- 1. [Request a password reset](#manual-password-reset).
- 1. Block the user if you suspect the attacker can reset the password or do multi-factor authentication for the user.
- 1. Revoke refresh tokens.
- 1. [Disable any devices](../devices/device-management-azure-portal.md) considered compromised.
- 1. If using [continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md), revoke all access tokens.
+You can allow users to self-remediate their sign-in risks and user risks by setting up [risk-based policies](howto-identity-protection-configure-risk-policies.md). If users pass the required access control, such as Azure AD multifactor authentication (MFA) or secure password change, then their risks are automatically remediated. The corresponding risk detections, risky sign-ins, and risky users will be reported with the risk state "Remediated" instead of "At risk".
-For more information about what happens when confirming compromise, see the section [How should I give risk feedback and what happens under the hood?](howto-identity-protection-risk-feedback.md#how-should-i-give-risk-feedback-and-what-happens-under-the-hood).
+Here are the prerequisites a user must meet before risk-based policies can be applied to them to allow self-remediation of risks:
+- To perform MFA to self-remediate a sign-in risk:
+ - The user must have registered for Azure AD MFA.
+- To perform secure password change to self-remediate a user risk:
+ - The user must have registered for Azure AD MFA.
+ - For hybrid users that are synced from on-premises to cloud, password writeback must have been enabled on them.
+
+If a risk-based policy is applied to a user during sign-in before the above prerequisites are met, then the user will be blocked because they aren't able to perform the required access control, and admin intervention will be required to unblock the user.
-### Self-remediation with risk policy
+Risk-based policies are configured based on risk levels and will only apply if the risk level of the sign-in or user matches the configured level. Some detections may not raise risk to the level where the policy will apply, and administrators will need to handle those risky users manually. Administrators may determine that extra measures are necessary like [blocking access from locations](../conditional-access/howto-conditional-access-policy-location.md) or lowering the acceptable risk in their policies.
-If you allow users to self-remediate, with Azure AD multifactor authentication (MFA) and self-service password reset (SSPR) in your risk policies, they can unblock themselves when risk is detected. These detections are then considered closed. Users must have previously registered for Azure AD MFA and SSPR for use when risk is detected.
+### Self-remediation with self-service password reset
-Some detections may not raise risk to the level where a user self-remediation would be required but administrators should still evaluate these detections. Administrators may determine that extra measures are necessary like [blocking access from locations](../conditional-access/howto-conditional-access-policy-location.md) or lowering the acceptable risk in their policies.
+If a user has registered for self-service password reset (SSPR), then they can also remediate their own user risk by performing a self-service password reset.
### Manual password reset
-If requiring a password reset using a user risk policy isn't an option, administrators can close all risk detections for a user with a manual password reset.
+If requiring a password reset using a user risk policy isn't an option, administrators can remediate a risky user by requiring a password reset.
Administrators are given two options when resetting a password for their users:
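Whichever option is chosen, the reset itself can also be scripted. A hedged sketch using Graph PowerShell with a placeholder user and temporary password; the permission scope shown is an assumption, and an appropriate administrator role is required:

```powershell
# Reset a risky user's password and force a change at next sign-in.
Connect-MgGraph -Scopes "User.ReadWrite.All"

Update-MgUser -UserId "b.simon@contoso.com" -PasswordProfile @{
    Password                      = "TempP@ssw0rd!2022"  # placeholder temporary password
    ForceChangePasswordNextSignIn = $true
}
```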
### Dismiss user risk
-If a password reset isn't an option for you, you can choose to dismiss user risk detections.
+If, after investigation, you confirm that the user account isn't at risk of being compromised, then you can choose to dismiss the risky user.
-When you select **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method doesn't have an impact on the existing password, it doesn't bring the related identity back into a safe state.
+To **Dismiss user risk**, search for and select **Azure AD Risky users** in the Azure portal or the Entra portal, select the affected user, and select **Dismiss user(s) risk**.
-To **Dismiss user risk**, search for and select **Azure AD Risky users**, select the affected user, and select **Dismiss user(s) risk**.
+When you select **Dismiss user risk**, the user will no longer be at risk, and all the risky sign-ins of this user and corresponding risk detections will be dismissed as well.
-### Close individual risk detections manually
+Because this method doesn't have an impact on the user's existing password, it doesn't bring their identity back into a safe state.
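The same dismissal can be scripted through Graph PowerShell. A sketch with a placeholder object ID:

```powershell
# Dismiss risk for one or more users, then verify the resulting risk state.
Connect-MgGraph -Scopes "IdentityRiskyUser.ReadWrite.All"

$userId = "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"  # placeholder object ID
Invoke-MgDismissRiskyUser -UserIds @($userId)

Get-MgRiskyUser -RiskyUserId $userId | Select-Object RiskState, RiskDetail
```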
-You can close individual risk detections manually. By closing risk detections manually, you can lower the user risk level. Typically, risk detections are closed manually in response to a related investigation. For example, when talking to a user reveals that an active risk detection isn't required anymore.
-
-When closing risk detections manually, you can choose to take any of the following actions to change the status of a risk detection:
+#### Risk state and detail based on dismissal of risk
-- Confirm user compromised
-- Dismiss user risk
-- Confirm sign-in safe
-- Confirm sign-in compromised
+- Risky user:
+ - Risk state: "At risk" -> "Dismissed"
+ - Risk detail (the risk remediation detail): "-" -> "Admin dismissed all risk for user"
+- All the risky sign-ins of this user and the corresponding risk detections:
+ - Risk state: "At risk" -> "Dismissed"
+ - Risk detail (the risk remediation detail): "-" -> "Admin dismissed all risk for user"
-#### Deleted users
+### Confirm a user to be compromised
+
+If after investigation, an account is confirmed compromised:
+ 1. Select the event or user in the **Risky sign-ins** or **Risky users** reports and choose "Confirm compromised".
+ 2. If a risk-based policy wasn't triggered, and the risk wasn't [self-remediated](#self-remediation-with-risk-based-policy), then do one or more of the following:
+ 1. [Request a password reset](#manual-password-reset).
+ 1. Block the user if you suspect the attacker can reset the password or do multi-factor authentication for the user.
+ 1. Revoke refresh tokens.
+ 1. [Disable any devices](../devices/device-management-azure-portal.md) that are considered compromised.
+ 1. If using [continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md), revoke all access tokens.
+
+For more information about what happens when confirming compromise, see the section [How should I give risk feedback and what happens under the hood?](howto-identity-protection-risk-feedback.md#how-should-i-give-risk-feedback-and-what-happens-under-the-hood).
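A sketch of confirming compromise and revoking refresh tokens from Graph PowerShell (the object ID is a placeholder, and the revoke-sessions permission scope is an assumption):

```powershell
# Confirm a user compromised (moves them to high risk), then revoke
# refresh tokens so new sign-ins are required.
Connect-MgGraph -Scopes "IdentityRiskyUser.ReadWrite.All", "User.RevokeSessions.All"

$userId = "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"  # placeholder object ID
Confirm-MgRiskyUserCompromised -UserIds @($userId)
Revoke-MgUserSignInSession -UserId $userId
```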
+
+### Deleted users
It isn't possible for administrators to dismiss risk for users who have been deleted from the directory. To remove deleted users, open a Microsoft support case.
An administrator may choose to block a sign-in based on their risk policy or inv
To unblock an account blocked because of user risk, administrators have the following options:
-1. **Reset password** - You can reset the user's password.
-1. **Dismiss user risk** - The user risk policy blocks a user if the configured user risk level for blocking access has been reached. You can reduce a user's risk level by dismissing user risk or manually closing reported risk detections.
-1. **Exclude the user from policy** - If you think that the current configuration of your sign-in policy is causing issues for specific users, you can exclude the users from it. For more information, see the section Exclusions in the article [How To: Configure and enable risk policies](howto-identity-protection-configure-risk-policies.md#exclusions).
+1. **Reset password** - You can reset the user's password. If a user has been compromised or is at risk of being compromised, the user's password should be reset to protect their account and your organization.
+1. **Dismiss user risk** - The user risk policy blocks a user if the configured user risk level for blocking access has been reached. If after investigation you're confident that the user isn't at risk of being compromised, and it's safe to allow their access, then you can reduce a user's risk level by dismissing their user risk.
+1. **Exclude the user from policy** - If you think that the current configuration of your sign-in policy is causing issues for specific users, and it's safe to grant access to these users without applying this policy to them, then you can exclude them from this policy. For more information, see the section Exclusions in the article [How To: Configure and enable risk policies](howto-identity-protection-configure-risk-policies.md#exclusions).
1. **Disable policy** - If you think that your policy configuration is causing issues for all your users, you can disable the policy. For more information, see the article [How To: Configure and enable risk policies](howto-identity-protection-configure-risk-policies.md).

### Unblocking based on sign-in risk
active-directory Datawiza Azure Ad Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md
The scenario solution has the following components:
- **Oracle PeopleSoft application**: Legacy application to be protected by Azure AD and DAB.
-Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
+Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](./datawiza-with-azure-ad.md#datawiza-with-azure-ad-authentication-architecture).
## Prerequisites
Ensure the following prerequisites are met.
- An Azure AD tenant linked to the Azure subscription.
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
+ - See, [Quickstart: Create a new tenant in Azure Active Directory.](../fundamentals/active-directory-access-create-new-tenant.md)
- Docker and Docker Compose
Ensure the following prerequisites are met.
- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.
- - See, [Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis).
+ - See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md).
- An account with Azure AD and the Application administrator role
- - See, [Azure AD built-in roles, all roles](/azure/active-directory/roles/permissions-reference#all-roles).
+ - See, [Azure AD built-in roles, all roles](../roles/permissions-reference.md#all-roles).
- An Oracle PeopleSoft environment
For the Oracle PeopleSoft application to recognize the user correctly, there's a
## Enable Azure AD Multi-Factor Authentication

To provide an extra level of security for sign-ins, enforce multi-factor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure
-portal](/azure/active-directory/authentication/tutorial-enable-azure-mfa).
+portal](../authentication/tutorial-enable-azure-mfa.md).
1. Sign in to the Azure portal as a **Global Administrator**.
To confirm Oracle PeopleSoft application access occurs correctly, a prompt appea
- [Watch the video - Enable SSO/MFA for Oracle PeopleSoft with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90).

-- [Configure Datawiza and Azure AD for secure hybrid access](/azure/active-directory/manage-apps/datawiza-with-azure-ad)
+- [Configure Datawiza and Azure AD for secure hybrid access](./datawiza-with-azure-ad.md)
-- [Configure Datawiza with Azure AD B2C](/azure/active-directory-b2c/partner-datawiza)
+- [Configure Datawiza with Azure AD B2C](../../active-directory-b2c/partner-datawiza.md)
-- [Datawiza documentation](https://docs.datawiza.com/)
+- [Datawiza documentation](https://docs.datawiza.com/)
active-directory How Managed Identities Work Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md
The following table shows the differences between the system-assigned and user-a
2. Azure Resource Manager creates a service principal in Azure AD for the identity of the VM. The service principal is created in the Azure AD tenant that's trusted by the subscription.
-3. Azure Resource Manager updates the VM identity using the Azure Instance Metadata Service identity endpoint (for [Windows](/azure/virtual-machines/windows/instance-metadata-service) and [Linux](/azure/virtual-machines/linux/instance-metadata-service)), providing the endpoint with the service principal client ID and certificate.
+3. Azure Resource Manager updates the VM identity using the Azure Instance Metadata Service identity endpoint (for [Windows](../../virtual-machines/windows/instance-metadata-service.md) and [Linux](../../virtual-machines/linux/instance-metadata-service.md)), providing the endpoint with the service principal client ID and certificate.
4. After the VM has an identity, use the service principal information to grant the VM access to Azure resources. To call Azure Resource Manager, use Azure Role-Based Access Control (Azure RBAC) to assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the specific secret or key in Key Vault.
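For context, code running on the VM obtains tokens from that same Instance Metadata Service endpoint without any stored credentials. A minimal example requesting an Azure Resource Manager token and using it (subscription ID is a placeholder):

```powershell
# From inside the VM: request an access token for Azure Resource Manager
# through the Instance Metadata Service.
$token = Invoke-RestMethod -Method GET -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"

# Use the bearer token against ARM, for example to list resource groups
$subscriptionId = "00000000-0000-0000-0000-000000000000"
Invoke-RestMethod -Headers @{ Authorization = "Bearer $($token.access_token)" } `
    -Uri "https://management.azure.com/subscriptions/$subscriptionId/resourcegroups?api-version=2021-04-01"
```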
The following table shows the differences between the system-assigned and user-a
Get started with the managed identities for Azure resources feature with the following quickstarts:

* [Use a Windows VM system-assigned managed identity to access Resource Manager](tutorial-windows-vm-access-arm.md)
-* [Use a Linux VM system-assigned managed identity to access Resource Manager](tutorial-linux-vm-access-arm.md)
+* [Use a Linux VM system-assigned managed identity to access Resource Manager](tutorial-linux-vm-access-arm.md)
active-directory Adstream Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adstream-tutorial.md
+
+ Title: 'Azure Active Directory SSO integration with Adstream'
+description: Learn how to configure single sign-on between Azure Active Directory and Adstream.
+ Last updated : 11/16/2022
+# Azure Active Directory SSO integration with Adstream
+
+In this article, you'll learn how to integrate Adstream with Azure Active Directory (Azure AD). Adstream provides the safest and easiest to use business solution for sending and receiving files. When you integrate Adstream with Azure AD, you can:
+
+* Control in Azure AD who has access to Adstream.
+* Enable your users to be automatically signed-in to Adstream with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Adstream in a test environment. Adstream supports **SP** initiated single sign-on.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Adstream, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Adstream single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Adstream application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Adstream from the Azure AD gallery
+
+Add Adstream from the Azure AD application gallery to configure single sign-on with Adstream. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Adstream** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Reply URL** textbox, type the URL:
+ `https://msft.adstream.com/saml/assert`
+
+ b. In the **Sign on URL** textbox, type the URL:
+ `https://msft.adstream.com`
+
+ c. In the **Relay State** textbox, type the URL:
+ `https://a5.adstream.com/projects#/projects/projects`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Adstream** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
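If you prefer to script the metadata download above, the same XML is available from the tenant's app federation metadata endpoint. A sketch with placeholder tenant and application IDs; copy the real **App Federation Metadata Url** from the **SAML Signing Certificate** section if it differs:

```powershell
# Download the Federation Metadata XML for the Adstream app registration.
$tenantId = "00000000-0000-0000-0000-000000000000"  # placeholder tenant ID
$appId    = "11111111-1111-1111-1111-111111111111"  # placeholder application ID

Invoke-WebRequest -OutFile "adstream-federation-metadata.xml" `
    -Uri "https://login.microsoftonline.com/$tenantId/federationmetadata/2007-06/federationmetadata.xml?appid=$appId"
```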
+
+## Configure Adstream SSO
+
+To configure single sign-on on **Adstream** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Adstream support team](mailto:support@adstream.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Adstream test user
+
+In this section, you create a user called Britta Simon in Adstream. Work with [Adstream support team](mailto:support@adstream.com) to add the users in the Adstream platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Adstream Sign-on URL where you can initiate the login flow.
+
+* Go to Adstream Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Adstream tile in the My Apps, this will redirect to Adstream Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Adstream you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Boomi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/boomi-tutorial.md
Previously updated : 02/25/2021 Last updated : 11/14/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to your Boomi company site as an administrator.
-1. Navigate to **Company Name** and go to **Set up**.
-
-1. Click the **SSO Options** tab and perform below steps.
+1. Go to **Settings**, click **SSO Options** in the security options, and perform the below steps.
![Configure Single Sign-On On App Side](./media/boomi-tutorial/import.png)
- a. Check **Enable SAML Single Sign-On** checkbox.
+ a. Select **Enabled** in **Enable SAML Single Sign-On**.
b. Click **Import** to upload the downloaded certificate from Azure AD to **Identity Provider Certificate**.
- c. In the **Identity Provider Login URL** textbox, put the value of **Login URL** from Azure AD application configuration window.
+ c. In the **Identity Provider Sign In URL** textbox, paste the value of **Login URL** from Azure AD application configuration window.
d. For **Federation Id Location**, select the **Federation Id is in FEDERATION_ID Attribute element** radio button.
- e. Copy the **AtomSphere MetaData URL**, go to the **MetaData URL** via the browser of your choice, and save the output to a file. Upload the **MetaData URL** in the **Basic SAML Configuration** section in the Azure portal.
+ e. For **SAML Authentication Context**, select the **Password Protected Transport** radio button.
+
+ f. Copy the **AtomSphere Sign In URL**, paste this value into the **Sign on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ g. Copy the **AtomSphere MetaData URL**, go to the **MetaData URL** via the browser of your choice, and save the output to a file. Upload the **MetaData URL** in the **Basic SAML Configuration** section in the Azure portal.
- f. Click **Save** button.
+ h. Click the **Save** button.
### Create Boomi test user
In order to enable Azure AD users to sign in to Boomi, they must be provisioned
1. Sign in to your Boomi company site as an administrator.
-1. After logging in, navigate to **User Management** and go to **Users**.
-
- ![Screenshot shows the User Management page with Users selected.](./media/boomi-tutorial/user.png "Users")
+1. After logging in, navigate to **User Management** ->**Users**.
1. Click **+** icon and the **Add/Maintain User Roles** dialog opens.

 ![Screenshot shows the + icon selected.](./media/boomi-tutorial/add.png "Users")
- ![Screenshot shows the Add / Maintain User Roles where you configure a user.](./media/boomi-tutorial/roles.png "Users")
- a. In the **User e-mail address** textbox, type the email of user like B.Simon@contoso.com. b. In the **First name** textbox, type the First name of user like B.
active-directory Dx Netops Portal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dx-netops-portal-tutorial.md
+
+ Title: 'Azure Active Directory SSO integration with DX NetOps Portal'
+description: Learn how to configure single sign-on between Azure Active Directory and DX NetOps Portal.
+ Last updated : 11/07/2022
+# Azure Active Directory SSO integration with DX NetOps Portal
+
+In this article, you'll learn how to integrate DX NetOps Portal with Azure Active Directory (Azure AD). DX NetOps Portal provides network observability, topology with fault correlation and root-cause analysis at telecom carrier level scale, over traditional and software defined networks, internal and external. When you integrate DX NetOps Portal with Azure AD, you can:
+
+* Control in Azure AD who has access to DX NetOps Portal.
+* Enable your users to be automatically signed-in to DX NetOps Portal with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for DX NetOps Portal in a test environment. DX NetOps Portal supports **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with DX NetOps Portal, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* DX NetOps Portal single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the DX NetOps Portal application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add DX NetOps Portal from the Azure AD gallery
+
+Add DX NetOps Portal from the Azure AD application gallery to configure single sign-on with DX NetOps Portal. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **DX NetOps Portal** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `<DX NetOps Portal hostname>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<DX NetOps Portal FQDN>:<SSO port>/sso/saml2/UserAssertionService`
+
+ c. In the **Relay State** textbox, type a URL using the following pattern:
+ `SsoProductCode=pc&SsoRedirectUrl=https://<DX NetOps Portal FQDN>:<https port>/pc/desktop/page`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Relay State URL. Contact [DX NetOps Portal Client support team](https://support.broadcom.com/web/ecx/contact-support) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your DX NetOps Portal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but DX NetOps Portal expects this to be mapped with the user's email address. For that you can use the **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Attributes")
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up DX NetOps Portal** section, copy the appropriate URL(s) as per your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure DX NetOps Portal SSO
+
+To configure single sign-on on **DX NetOps Portal** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [DX NetOps Portal support team](https://support.broadcom.com/web/ecx/contact-support). The support team will use the copied URLs to configure the single sign-on on the application.
+
+### Create DX NetOps Portal test user
+
+To be able to test and use single sign-on, you have to create and activate users in the DX NetOps Portal application.
+
+In this section, you create a user called Britta Simon in DX NetOps Portal that corresponds with the Azure AD user you already created in the previous section. Work with [DX NetOps Portal support team](https://support.broadcom.com/web/ecx/contact-support) to add the user in the DX NetOps Portal platform.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the DX NetOps Portal for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the DX NetOps Portal tile in the My Apps, you should be automatically signed in to the DX NetOps Portal for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure DX NetOps Portal you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Factset Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/factset-tutorial.md
Previously updated : 10/10/2022 Last updated : 11/15/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure FactSet SSO
-To configure single sign-on on **FactSet** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to FactSet account support representatives or to [FactSet Support Team](https://www.factset.com/contact-us). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the FactSet side, you will need to visit FactSet's [Control Center](https://controlcenter.factset.com) and configure the **Federation Metadata XML** and appropriate copied URLs from Azure portal under **Control Center's Security > Single Sign-On (SSO)** page. If you require access to this page, please contact [FactSet Support Team](https://www.factset.com/contact-us) and request FactSet product 8514 (Control Center - Source IPs, Security + Authentication).
### Create FactSet test user
-In this section, you create a user called Britta Simon in FactSet. Work with your FactSet account support representatives or contact [FactSet Support Team](https://www.factset.com/contact-us) to add the users in the FactSet platform. Users must be created and activated before you use single sign-on.
+Work with your FactSet account support representatives or contact [FactSet Support Team](https://www.factset.com/contact-us) to add the users in the FactSet platform. Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with following option.
-* Click on Test this application in Azure portal and you should be automatically signed in to the FactSet for which you set up the SSO.
-
-* You can use Microsoft My Apps. When you click the FactSet tile in the My Apps, you should be automatically signed in to the FactSet for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* FactSet only supports SP-initiated SAML. You may test SSO by visiting any authenticated FactSet URL such as [Issue Tracker](https://issuetracker.factset.com) or [FactSet-Web](https://my.factset.com), clicking **Single Sign-On (SSO)** on the logon portal, and supplying your email address in the subsequent page. Please see the supplied [documentation](https://download.factset.com/documents/web/FactSet_Single_Sign-On.pdf) for additional information and usage.
## Next steps
active-directory Icertisicm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/icertisicm-tutorial.md
- Title: 'Tutorial: Azure Active Directory integration with Icertis Contract Management Platform | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Icertis Contract Management Platform.
-------- Previously updated : 04/22/2021--
-# Tutorial: Azure Active Directory integration with Icertis Contract Management Platform
-
-In this tutorial, you'll learn how to integrate Icertis Contract Management Platform with Azure Active Directory (Azure AD). When you integrate Icertis Contract Management Platform with Azure AD, you can:
-
-* Control in Azure AD who has access to Icertis Contract Management Platform.
-* Enable your users to be automatically signed-in to Icertis Contract Management Platform with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Icertis Contract Management Platform single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-
-* Icertis Contract Management Platform supports **SP** initiated SSO.
-
-## Add Icertis Contract Management Platform from the gallery
-
-To configure the integration of Icertis Contract Management Platform into Azure AD, you need to add Icertis Contract Management Platform from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Icertis Contract Management Platform** in the search box.
-1. Select **Icertis Contract Management Platform** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-
-## Configure and test Azure AD SSO for Icertis Contract Management Platform
-
-Configure and test Azure AD SSO with Icertis Contract Management Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Icertis Contract Management Platform.
-
-To configure and test Azure AD SSO with Icertis Contract Management Platform, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Icertis Contract Management Platform SSO](#configure-icertis-contract-management-platform-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Icertis Contract Management Platform test user](#create-icertis-contract-management-platform-test-user)** - to have a counterpart of B.Simon in Icertis Contract Management Platform that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **Icertis Contract Management Platform** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<company name>.icertis.com`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<company name>.icertis.com`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Icertis Contract Management Platform Client support team](https://www.icertis.com/company/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-6. On the **Set up Icertis Contract Management Platform** section, copy the appropriate URL(s) as per your requirement. For **Login URL**, use the value with the following pattern: `https://login.microsoftonline.com/_my_directory_id_/wsfed`
-
- > [!Note]
- > _my_directory_id_ is the tenant id of Azure AD subscription.
-
- ![Copy configuration URLs](media/icertisicm-tutorial/configurls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Icertis Contract Management Platform.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Icertis Contract Management Platform**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Icertis Contract Management Platform SSO
-
-To configure single sign-on on **Icertis Contract Management Platform** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Icertis Contract Management Platform support team](https://www.icertis.com/company/contact/). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create Icertis Contract Management Platform test user
-
-In this section, you create a user called Britta Simon in Icertis Contract Management Platform. Work with [Icertis Contract Management Platform support team](https://www.icertis.com/company/contact/) to add the users in the Icertis Contract Management Platform platform. Users must be created and activated before you use single sign-on.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-* Click on **Test this application** in Azure portal. This will redirect to Icertis Contract Management Platform Sign-on URL where you can initiate the login flow.
-
-* Go to Icertis Contract Management Platform Sign-on URL directly and initiate the login flow from there.
-
-* You can use Microsoft My Apps. When you click the Icertis Contract Management Platform tile in the My Apps, this will redirect to Icertis Contract Management Platform Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Next steps
-
-Once you configure Icertis Contract Management Platform you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory New Relic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/new-relic-tutorial.md
Previously updated : 02/02/2021 Last updated : 11/14/2022
In this tutorial, you'll learn how to integrate New Relic by Account with Azure
* Enable your users to be automatically signed-in to New Relic by Account with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
+> [!NOTE]
+> This document is only relevant if you're using the [Original User Model](https://docs.newrelic.com/docs/accounts/original-accounts-billing/original-users-roles/overview-user-models/) in New Relic. Please refer to [New Relic (By Organization)](new-relic-limited-release-tutorial.md) if you're using New Relic's newer user model.
+ ## Prerequisites To get started, you need the following items:
active-directory Tableauserver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableauserver-tutorial.md
Previously updated : 01/25/2021 Last updated : 11/14/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
> [!NOTE] > Customers have to upload a PEM-encoded x509 certificate file with a .crt extension and an RSA or DSA private key file with the .key extension as the Certificate Key file. For more information on the certificate file and certificate key file, refer to [this](https://help.tableau.com/current/server/en-us/saml_requ.htm) document. If you need help configuring SAML on Tableau Server, refer to the article [Configure Server Wide SAML](https://help.tableau.com/current/server/en-us/config_saml.htm).
+ > [!NOTE]
+ > The SAML Certificate and SAML Key files are generated separately and uploaded to the Tableau Server Manager. For example, in the Linux shell, use openssl to generate the cert and key like so: `openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out saml.crt`, then upload the `saml.crt` and `private.key` files via the TSM Configuration GUI (as shown in the screenshot at the start of this step) or via the [command line according to the Tableau docs](https://help.tableau.com/current/server-linux/en-us/config_saml.htm). If you are in a production environment, you may want to find a more secure way to handle SAML certs and keys.
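+
+ As an illustration only, here's a minimal sketch of generating the certificate and key and registering them with Tableau Services Manager (TSM) on Linux. The entity ID, metadata path, and file names below are placeholder assumptions, not values from this tutorial:
+
+ ```bash
+ # Generate a self-signed SAML certificate and private key, valid for one year.
+ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
+   -keyout private.key -out saml.crt
+
+ # Register the certificate, key, and IdP metadata with TSM,
+ # then apply the pending configuration changes.
+ tsm authentication saml configure \
+   --idp-entity-id "https://tableau.example.com" \
+   --idp-metadata /path/to/federation-metadata.xml \
+   --cert-file saml.crt \
+   --key-file private.key
+ tsm pending-changes apply
+ ```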
+ ### Create Tableau Server test user The objective of this section is to create a user called B.Simon in Tableau Server. You need to provision all the users in the Tableau server.
active-directory Timetabling Solutions Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timetabling-solutions-tutorial.md
Previously updated : 06/04/2022 Last updated : 11/16/2022
In this section, you create a user called Britta Simon in the Timetabling Soluti
> [!NOTE]
-> Work with [Timetabling Solutions support team](https://www.timetabling.com.au/contact-us/) to add the users in the Timetabling Solutions platform. Users must be created and activated before you use single sign-on.
+> Users must be added to the Timetabling Solutions platform, and they must be created and activated before you use single sign-on.
## Test SSO
active-directory Tranxfer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tranxfer-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Tranxfer
+description: Learn how to configure single sign-on between Azure Active Directory and Tranxfer.
++++++++ Last updated : 11/07/2022++++
+# Azure Active Directory SSO integration with Tranxfer
+
+In this article, you'll learn how to integrate Tranxfer with Azure Active Directory (Azure AD). Tranxfer provides the safest and easiest-to-use business solution for sending and receiving files. When you integrate Tranxfer with Azure AD, you can:
+
+* Control in Azure AD who has access to Tranxfer.
+* Enable your users to be automatically signed-in to Tranxfer with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Tranxfer in a test environment. Tranxfer supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Tranxfer, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Tranxfer single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Tranxfer application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Tranxfer from the Azure AD gallery
+
+Add Tranxfer from the Azure AD application gallery to configure single sign-on with Tranxfer. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Tranxfer** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.tranxfer.com`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.tranxfer.com/SAMLResponse`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.tranxfer.com/saml/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tranxfer Client support team](mailto:soporte@tranxfer.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Tranxfer application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Attributes")
+
+1. In addition to the attributes above, the Tranxfer application expects a few more attributes to be passed back in the SAML response; they're shown below. These attributes are also pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | groups | user.groups [All] |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Tranxfer SSO
+
+To configure single sign-on on the **Tranxfer** side, you need to send the **App Federation Metadata Url** to the [Tranxfer support team](mailto:soporte@tranxfer.com). The support team uses the copied URL to configure single sign-on on the application.
+
+### Create Tranxfer test user
+
+In this section, a user called B.Simon is created in Tranxfer. Tranxfer supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Tranxfer, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Tranxfer Sign-on URL, where you can initiate the login flow.
+
+* Go to the Tranxfer Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Tranxfer tile in My Apps, you'll be redirected to the Tranxfer Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Tranxfer you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
Before you can continue with the steps below you need to meet the following requ
## Scenario description
-When onboarding users you can remove the need for error prone manual onboarding steps by using Verified ID with A10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](./plan-verification-solution.md#account-onboarding).
User flow is specific to your application or website. However if you are using o
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
You can use Entra Verified ID with LexisNexis Risk Solutions to enable faster on
## Scenario description
-Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](./plan-verification-solution.md#account-onboarding).
:::image type="content" source="media/verified-id-partner-au10tix/vc-solution-architecture-diagram.png" alt-text="Diagram of the verifiable credential solution.":::
User flow is specific to your application or website. However if you are using [
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
VU Identity Card works as a link between users who need to access an application
Verifiable credentials can be used to enable faster and easier user onboarding by replacing some human interactions. For example, a user or employee who wants to create or remotely access an account can use a Verified ID through VU Identity Card to verify their identity without using vulnerable or overly complex passwords or the requirement to be on-site.
-Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Learn more about [account onboarding](./plan-verification-solution.md#account-onboarding).
In this account onboarding scenario, Vu plays the Trusted ID proofing issuer role.
User flow is specific to your application or website. However if you are using o
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Now that you have a new credential, you're going to gather some information abou
1. Copy your **Tenant ID**, and record it for later. The Tenant ID is the guid in the manifest URL highlighted in red above.
- >[!NOTE]
- > When setting up access policies for Azure Key Vault, you must add the access policies for both **Verifiable Credentials Service Request** and **Verifiable Credentials Service**.
- ## Download the sample code The sample application is available in .NET, and the code is maintained in a GitHub repository. Download the sample code from [GitHub](https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet), or clone the repository to your local machine:
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
To add the required permissions, follow these steps:
1. Select **APIs my organization uses**.
-1. Search for the **Verifiable Credentials Service Request** and **Verifiable Credentials Service** service principals, and select them.
+1. Search for the **Verifiable Credentials Service Request** service principal and select it.
:::image type="content" source="media/verifiable-credentials-configure-tenant/add-app-api-permissions-select-service-principal.png" alt-text="Screenshot that shows how to select the service principal.":::
You can choose to grant issuance and presentation permissions separately if you
1. Navigate to the Verified ID service in the Azure portal. 1. Select **Registration**. 1. Notice that there are two sections:
- 1. Website ID registration
- 1. Domain verification.
+ 1. DID registration
+ 1. Domain ownership verification.
1. Select each section and download the JSON file under each. 1. Create a website that you can use to distribute the files. If you specified **https://contoso.com** as your domain, the URLs for each of the files would look as shown below: - `https://contoso.com/.well-known/did.json` - `https://contoso.com/.well-known/did-configuration.json` Once you have successfully completed the verification steps, you are ready to continue to the next tutorial.
+If you selected ION as the trust system, you won't see the DID registration section because it isn't applicable to ION; you only have to distribute the did-configuration.json file.
## Next steps
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
You can get these recommendations on the **Operational Excellence** tab of the A
1. On the **Advisor** dashboard, select the **Operational Excellence** tab.
-## Spring Cloud
+## Azure Spring Apps
### Update your outdated Azure Spring Apps SDK to the latest version We have identified API calls from an outdated Azure Spring Apps SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-Learn more about [Spring Cloud Service - SpringCloudUpgradeOutdatedSDK (Update your outdated Azure Spring Apps SDK to the latest version)](../spring-apps/index.yml).
+Learn more about the [Azure Spring Apps service](../spring-apps/index.yml).
### Update Azure Spring Apps API Version
-We have identified API calls from outdated Azure Spring Apps API for resources under this subscription. We recommend switching to the latest Spring Cloud API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements.
+We have identified API calls from outdated Azure Spring Apps API for resources under this subscription. We recommend switching to the latest Azure Spring Apps API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements.
-Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azure Spring Apps API Version)](../spring-apps/index.yml).
+Learn more about the [Azure Spring Apps service](../spring-apps/index.yml).
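+
+For example, here's a sketch of moving your local tooling to the latest versions; run whichever commands apply to your environment:
+
+```azurecli
+# Upgrade the Azure CLI itself to the latest release.
+az upgrade
+
+# Update the Azure Spring Apps CLI extension to its latest version.
+az extension update --name spring
+```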
## Automation
Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number
### Add Azure Monitor to your virtual machine (VM) labeled as production
-Azure Monitor for VMs monitors your Azure virtual machines (VM) and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider.
+Azure Monitor for VMs monitors your Azure virtual machines (VM) and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider.
Learn more about [Virtual machine - AddMonitorProdVM (Add Azure Monitor to your virtual machine (VM) labeled as production)](/azure/azure-monitor/insights/vminsights-overview).
aks Aks Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-diagnostics.md
+
+ Title: Azure Kubernetes Service (AKS) Diagnostics Overview
+description: Learn about self-diagnosing clusters in Azure Kubernetes Service.
++ Last updated : 11/15/2022++
+# Azure Kubernetes Service Diagnostics (preview) overview
+
+Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics (preview) is an intelligent, self-diagnostic experience that:
+
+* Helps you identify and resolve problems in your cluster.
+* Is cloud-native.
+* Requires no extra configuration or billing cost.
++
+## Open AKS Diagnostics
+
+To access AKS Diagnostics:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. From **All services** in the Azure portal, select **Kubernetes Service**.
+1. Select **Diagnose and solve problems** in the left navigation, which opens AKS Diagnostics.
+1. Choose a category that best describes the issue with your cluster, like _Cluster Node Issues_, by:
+
+ * Using the keywords in the homepage tile.
+ * Typing a keyword that best describes your issue in the search bar.
+
+![Homepage](./media/concepts-diagnostics/aks-diagnostics-homepage.png)
+
+## View a diagnostic report
+
+After you click on a category, you can view a diagnostic report specific to your cluster. Diagnostic reports intelligently call out any issues in your cluster with status icons. You can drill down on each topic by clicking **More Info** to see a detailed description of:
+
+* Issues
+* Recommended actions
+* Links to helpful docs
+* Related metrics
+* Logging data
+
+Diagnostic reports are generated based on the current state of your cluster after various checks are run. They can be useful for pinpointing the problem with your cluster and understanding the next steps to resolve the issue.
+
+![Diagnostic Report](./media/concepts-diagnostics/diagnostic-report.png)
+
+![Expanded Diagnostic Report](./media/concepts-diagnostics/node-issues.png)
+
+## Cluster insights
+
+The following diagnostic checks are available in **Cluster Insights**.
+
+### Cluster Node Issues
+
+Cluster Node Issues checks for node-related issues that cause your cluster to behave unexpectedly. Specifically:
+
+- Node readiness issues
+- Node failures
+- Insufficient resources
+- Node missing IP configuration
+- Node CNI failures
+- Node not found
+- Node power off
+- Node authentication failure
+- Node kube-proxy stale
+
+### Create, read, update & delete (CRUD) operations
+
+CRUD Operations checks for any CRUD operations that cause issues in your cluster. Specifically:
+
+- In-use subnet delete operation error
+- Network security group delete operation error
+- In-use route table delete operation error
+- Referenced resource provisioning error
+- Public IP address delete operation error
+- Deployment failure due to deployment quota
+- Operation error due to organization policy
+- Missing subscription registration
+- VM extension provisioning error
+- Subnet capacity
+- Quota exceeded error
+
+### Identity and security management
+
+Identity and Security Management detects authentication and authorization errors that prevent communication with your cluster. Specifically:
+
+- Node authorization failures
+- 401 errors
+- 403 errors
+
+## Next steps
+
+* Collect logs to help you further troubleshoot your cluster issues by using [AKS Periscope](https://aka.ms/aksperiscope); a CLI sketch follows this list.
+
+* Read the [triage practices section](/azure/architecture/operator-guides/aks/aks-triage-practices) of the AKS day-2 operations guide.
+
+* Post your questions or feedback at [UserVoice](https://feedback.azure.com/d365community/forum/aabe212a-f724-ec11-b6e6-000d3a4f0da0) by adding "[Diag]" in the title.
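+
+As a companion to the portal experience, here's a minimal sketch of collecting diagnostic logs with AKS Periscope through the Azure CLI; the resource group, cluster, and storage account names are placeholders:
+
+```azurecli
+# Run AKS Periscope on the cluster and upload the collected logs
+# to the specified storage account for offline troubleshooting.
+az aks kollect \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --storage-account myStorageAccount
+```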
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Azure Kubernetes Service (AKS)
-description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage (preview) in an Azure Kubernetes Service (AKS) cluster.
+description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster.
Previously updated : 08/10/2022 Last updated : 11/16/2022
-# Use Azure Blob storage Container Storage Interface (CSI) driver (preview)
+# Use Azure Blob storage Container Storage Interface (CSI) driver
-The Azure Blob storage Container Storage Interface (CSI) driver (preview) is a [CSI specification][csi-specification]-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Blob storage. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes.
+The Azure Blob storage Container Storage Interface (CSI) driver is a [CSI specification][csi-specification]-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Blob storage. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes.
By adopting and using CSI, AKS now can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes. Using CSI drivers in AKS avoids having to touch the core Kubernetes code and wait for its release cycles.
Mounting Azure Blob storage as a file system into a container or pod, enables yo
* Images, documents, and streaming video or audio * Disaster recovery data
-The data on the object storage can be accessed by applications using BlobFuse or Network File System (NFS) 3.0 protocol. Before the introduction of the Azure Blob storage CSI driver (preview), the only option was to manually install an unsupported driver to access Blob storage from your application running on AKS. When the Azure Blob storage CSI driver (preview) is enabled on AKS, there are two built-in storage classes: *azureblob-fuse-premium* and *azureblob-nfs-premium*.
+The data on the object storage can be accessed by applications using BlobFuse or Network File System (NFS) 3.0 protocol. Before the introduction of the Azure Blob storage CSI driver, the only option was to manually install an unsupported driver to access Blob storage from your application running on AKS. When the Azure Blob storage CSI driver is enabled on AKS, there are two built-in storage classes: *azureblob-fuse-premium* and *azureblob-nfs-premium*.
+
+> [!NOTE]
+> The Azure Blob CSI driver only supports the NFS 3.0 protocol for Kubernetes version 1.25 (preview) on AKS.
To create an AKS cluster with CSI drivers support, see [CSI drivers on AKS][csi-drivers-aks]. To learn more about the differences in access between each of the Azure storage types using the NFS protocol, see [Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS][compare-access-with-nfs].
-## Azure Blob storage CSI driver (preview) features
+## Azure Blob storage CSI driver features
-Azure Blob storage CSI driver (preview) supports the following features:
+Azure Blob storage CSI driver supports the following features:
- BlobFuse and Network File System (NFS) version 3.0 protocol ## Before you begin -- The Azure CLI version 2.37.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].--- Install the aks-preview Azure CLI extension version 0.5.85 or later.--- If the open-source CSI Blob storage driver is installed on your cluster, uninstall it before enabling the preview driver.
+Review the prerequisites listed in the [CSI storage drivers overview][csi-storage-driver-overview] article to verify the requirements before using this feature.
### Uninstall open-source driver Perform the steps in this [link][csi-blob-storage-open-source-driver-uninstall-steps] if you previously installed the [CSI Blob Storage open-source driver][csi-blob-storage-open-source-driver] to access Azure Blob storage from your cluster.
-## Install the Azure CLI aks-preview extension
-
-The following steps are required to install and register the Azure CLI aks-preview extension and driver in your subscription.
-
-1. To use the Azure CLI aks-preview extension for enabling the Blob storage CSI driver (preview) on your AKS cluster, run the following command to install it:
-
- ```azurecli
- az extension add --name aks-preview
- ```
-
-2. Run the following command to register the CSI driver (preview):
-
- ```azurecli
- az feature register --name EnableBlobCSIDriver --namespace Microsoft.ContainerService
- ```
-
-3. To register the provider, run the following command:
-
- ```azurecli
- az provider register -n Microsoft.ContainerService
- ```
-
-When newer versions of the extension are released, run the following command to upgrade the extension to the latest release:
-
-```azurecli
-az extension update --name aks-preview
-```
-
-## Enable CSI driver on a new or existing AKS cluster
-
-Using the Azure CLI, you can enable the Blob storage CSI driver (preview) on a new or existing AKS cluster before you configure a persistent volume for use by pods in the cluster.
-
-To enable the driver on a new cluster, include the `--enable-blob-driver` parameter with the `az aks create` command as shown in the following example:
-
-```azurecli
-az aks create --enable-blob-driver -n myAKSCluster -g myResourceGroup
-```
-
-To enable the driver on an existing cluster, include the `--enable-blob-driver` parameter with the `az aks update` command as shown in the following example:
-
-```azurecli
-az aks update --enable-blob-driver -n myAKSCluster -g myResourceGroup
-```
-
-You're prompted to confirm there isn't an open-source Blob CSI driver installed. After confirming, it may take several minutes to complete this action. Once it's complete, you should see in the output the status of enabling the driver on your cluster. The following example resembles the section indicating the results of the previous command:
-
-```output
-"storageProfile": {
- "blobCsiDriver": {
- "enabled": true
- },
-```
-
-## Disable CSI driver on an existing AKS cluster
-
-Using the Azure CLI, you can disable the Blob storage CSI driver on an existing AKS cluster after you remove the persistent volume from the cluster.
-
-To disable the driver on an existing cluster, include the `--disable-blob-driver` parameter with the `az aks update` command as shown in the following example:
-
-```azurecli
-az aks update --disable-blob-driver -n myAKSCluster -g myResourceGroup
-```
- ## Use a persistent volume with Azure Blob storage A [persistent volume][persistent-volume] (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Blob storage to connect by using the Network File System (NFS) or blobfuse. This article shows you how to dynamically create an Azure Blob storage container for use by multiple pods in an AKS cluster.
A storage class is used to define how an Azure Blob storage container is created
* **Standard_GRS**: Standard geo-redundant storage * **Standard_RAGRS**: Standard read-access geo-redundant storage
-When you use storage CSI drivers on AKS, there are two additional built-in StorageClasses that use the Azure Blob CSI storage driver (preview).
+When you use storage CSI drivers on AKS, there are two additional built-in StorageClasses that use the Azure Blob CSI storage driver.
The reclaim policy on both storage classes ensures that the underlying Azure Blob storage is deleted when the respective PV is deleted. The storage classes also configure the container to be expandable by default, as the `allowVolumeExpansion` parameter is set to **true**.
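
As an illustration, here's a minimal sketch of a persistent volume claim that dynamically provisions a Blob storage container through the built-in `azureblob-fuse-premium` storage class; the claim name and requested size are placeholders:

```bash
# Request a 5 GiB blobfuse-backed volume that multiple pods can mount read/write.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-blob-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-fuse-premium
  resources:
    requests:
      storage: 5Gi
EOF
```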
To have a storage volume persist for your workload, you can use a StatefulSet. T
- To learn how to manually set up a static persistent volume, see [Create and use a volume with Azure Blob storage][azure-csi-blob-storage-static]. - To learn how to dynamically set up a persistent volume, see [Create and use a dynamic persistent volume with Azure Blob storage][azure-csi-blob-storage-dynamic].-- To learn how to use CSI driver for Azure Disks, see [Use Azure Disks with CSI driver](azure-disk-csi.md).-- To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver](azure-files-csi.md).
+- To learn how to use CSI driver for Azure Disks, see [Use Azure Disks with CSI driver][azure-disk-csi-driver].
+- To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver][azure-files-csi-driver].
- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. <!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
[csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md [csi-blob-storage-open-source-driver]: https://github.com/kubernetes-sigs/blob-csi-driver [csi-blob-storage-open-source-driver-uninstall-steps]: https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/install-csi-driver-master.md#clean-up-blob-csi-driver <!-- LINKS - internal -->
-[install-azure-cli]: /cli/azure/install-azure-cli
-[azure-disk-volume]: azure-disk-volume.md
-[azure-files-pvc]: azure-files-dynamic-pv.md
-[premium-storage]: ../virtual-machines/disks-types.md
[compare-access-with-nfs]: ../storage/common/nfs-comparison.md
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
[operator-best-practices-storage]: operator-best-practices-storage.md [concepts-storage]: concepts-storage.md [persistent-volume]: concepts-storage.md#persistent-volumes [csi-drivers-aks]: csi-storage-drivers.md
-[storage-class-concepts]: concepts-storage.md#storage-classes
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
-[storage-skus]: ../storage/common/storage-redundancy.md
-[use-tags]: use-tags.md
-[az-tags]: ../azure-resource-manager/management/tag-resources.md
[azure-csi-blob-storage-dynamic]: azure-csi-blob-storage-dynamic.md [azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md
+[csi-storage-driver-overview]: csi-storage-drivers.md
+[azure-disk-csi-driver]: azure-disk-csi.md
+[azure-files-csi-driver]: azure-files-csi.md
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Scaling your workload based on relevant business metrics such as HTTP requests,
Removing state from your design reduces the in-memory or on-disk data required by the workload to function.
-* Consider [stateless design](/azure/aks/operator-best-practices-multi-region#remove-service-state-from-inside-containers) to reduce unnecessary network load, data processing, and compute resources.
+* Consider [stateless design](./operator-best-practices-multi-region.md#remove-service-state-from-inside-containers) to reduce unnecessary network load, data processing, and compute resources.
## Application platform
Explore this section to learn how to make better informed platform-related decis
An up-to-date cluster avoids unnecessary performance issues and ensures you benefit from the latest performance improvements and compute optimizations.
-* Enable [cluster auto-upgrade](/azure/aks/auto-upgrade-cluster) and [apply security updates to nodes automatically using GitHub Actions](/azure/aks/node-upgrade-github-actions), to ensure your cluster has the latest improvements.
+* Enable [cluster auto-upgrade](./auto-upgrade-cluster.md) and [apply security updates to nodes automatically using GitHub Actions](./node-upgrade-github-actions.md), to ensure your cluster has the latest improvements.
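+
+For example, here's a hedged sketch of enrolling an existing cluster in an auto-upgrade channel; the cluster and resource group names are placeholders:
+
+```azurecli
+# Enroll the cluster in the stable auto-upgrade channel so it
+# automatically receives supported Kubernetes version upgrades.
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --auto-upgrade-channel stable
+```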
### Install supported add-ons and extensions
-Add-ons and extensions covered by the [AKS support policy](/azure/aks/support-policies) provide additional and supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
+Add-ons and extensions covered by the [AKS support policy](./support-policies.md) provide additional and supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
-* Ensure you install [KEDA](/azure/aks/integrations#available-add-ons) as an add-on and [GitOps & Dapr](/azure/aks/cluster-extensions?tabs=azure-cli#currently-available-extensions) as extensions.
+* Ensure you install [KEDA](./integrations.md#available-add-ons) as an add-on and [GitOps & Dapr](./cluster-extensions.md?tabs=azure-cli#currently-available-extensions) as extensions.
### Containerize your workload where applicable Containers allow for reducing unnecessary resource allocation and making better use of the resources deployed as they allow for bin packing and require less compute resources than virtual machines.
-* Use [Draft](/azure/aks/draft) to simplify application containerization by generating Dockerfiles and Kubernetes manifests.
+* Use [Draft](./draft.md) to simplify application containerization by generating Dockerfiles and Kubernetes manifests.
### Use energy efficient hardware
Ampere's Cloud Native Processors are uniquely designed to meet both the high per
An oversized cluster does not maximize utilization of compute resources and can lead to a waste of energy. Separate your applications into different node pools to allow for cluster right sizing and independent scaling according to the application requirements. As you run out of capacity in your AKS cluster, grow from AKS to ACI to scale out additional pods to serverless nodes and ensure your workload uses all the allocated resources efficiently.
-* Size your cluster to match the scalability needs of your application and [use cluster autoscaler](/azure/aks/cluster-autoscaler) in combination with [virtual nodes](/azure/aks/virtual-nodes) to rapidly scale and maximize compute resource utilization. Additionally, [enforce resource quotas](/azure/aks/operator-best-practices-scheduler#enforce-resource-quotas) at the namespace level and [scale user node pools to 0](/azure/aks/scale-cluster?tabs=azure-cli#scale-user-node-pools-to-0) when there is no demand.
+* Size your cluster to match the scalability needs of your application and [use cluster autoscaler](./cluster-autoscaler.md) in combination with [virtual nodes](./virtual-nodes.md) to rapidly scale and maximize compute resource utilization. Additionally, [enforce resource quotas](./operator-best-practices-scheduler.md#enforce-resource-quotas) at the namespace level and [scale user node pools to 0](./scale-cluster.md?tabs=azure-cli#scale-user-node-pools-to-0) when there is no demand.
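+
+For example, here's a sketch of enabling the cluster autoscaler on an existing cluster; the resource names and count limits are placeholders. For clusters with multiple node pools, the same flags are available on `az aks nodepool update`:
+
+```azurecli
+# Enable the cluster autoscaler so the node count grows and shrinks with demand.
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --enable-cluster-autoscaler \
+  --min-count 1 \
+  --max-count 5
+```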
### Turn off workloads and node pools outside of business hours Workloads may not need to run continuously and could be turned off to reduce energy waste, hence carbon emissions. You can completely turn off (stop) your node pools in your AKS cluster, allowing you to also save on compute costs.
-* Use the [node pool stop / start](/azure/aks/start-stop-nodepools) to turn off your node pools outside of business hours, and [KEDA CRON scaler](https://keda.sh/docs/2.7/scalers/cron/) to scale down your workloads (pods) based on time.
+* Use the [node pool stop / start](./start-stop-nodepools.md) to turn off your node pools outside of business hours, and [KEDA CRON scaler](https://keda.sh/docs/2.7/scalers/cron/) to scale down your workloads (pods) based on time.
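+
+For instance, here's a sketch of stopping a user node pool outside business hours and a KEDA `ScaledObject` that uses the cron trigger; the node pool, deployment name, time zone, and schedule below are assumptions for illustration:
+
+```azurecli
+# Stop a user node pool when it isn't needed; restart it later with 'az aks nodepool start'.
+az aks nodepool stop \
+  --resource-group myResourceGroup \
+  --cluster-name myAKSCluster \
+  --nodepool-name userpool
+```
+
+```bash
+# Scale a hypothetical deployment up to 5 replicas on weekdays between
+# 08:00 and 18:00, and back down to the minimum outside that window.
+kubectl apply -f - <<EOF
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: business-hours-scaler
+spec:
+  scaleTargetRef:
+    name: my-deployment
+  triggers:
+    - type: cron
+      metadata:
+        timezone: Europe/Amsterdam
+        start: 0 8 * * 1-5
+        end: 0 18 * * 1-5
+        desiredReplicas: "5"
+EOF
+```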
## Operational procedures
Explore this section to set up your environment for measuring and continuously i
Unused resources such as unreferenced images and storage resources should be identified and deleted as they have a direct impact on hardware and energy efficiency. Identifying and deleting unused resources must be treated as a process, rather than a point-in-time activity to ensure continuous energy optimization.
-* Use [Azure Advisor](/azure/advisor/advisor-cost-recommendations) to identify unused resources and [ImageCleaner](/azure/aks/image-cleaner?tabs=azure-cli) to clean up stale images and remove an area of risk in your cluster.
+* Use [Azure Advisor](../advisor/advisor-cost-recommendations.md) to identify unused resources and [ImageCleaner](./image-cleaner.md?tabs=azure-cli) to clean up stale images and remove an area of risk in your cluster.
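+
+As an illustration, here's a sketch of enabling ImageCleaner on an existing cluster; at the time of writing the feature may require the aks-preview CLI extension, and the names and scan interval are placeholders:
+
+```azurecli
+# Enable ImageCleaner and have it scan for stale images every 48 hours.
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --enable-image-cleaner \
+  --image-cleaner-interval-hours 48
+```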
### Tag your resources Getting the right information and insights at the right time is important for producing reports about performance and resource utilization.
-* Set [Azure tags on your cluster](/azure/aks/use-tags) to enable monitoring of your workloads.
+* Set [Azure tags on your cluster](./use-tags.md) to enable monitoring of your workloads.
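+
+For example, here's a minimal sketch of tagging an existing cluster; the tag names and values are arbitrary examples:
+
+```azurecli
+# Apply tags to the cluster so monitoring and reporting can group resources.
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --tags dept=IT env=production
+```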
## Storage
Explore this section to learn how to design a more sustainable data storage arch
The data retrieval and data storage operations can have a significant impact on both energy and hardware efficiency. Designing solutions with the correct data access pattern can reduce energy consumption and embodied carbon.
-* Understand the needs of your application to [choose the appropriate storage](/azure/aks/operator-best-practices-storage#choose-the-appropriate-storage-type) and define it using [storage classes](/azure/aks/operator-best-practices-storage#create-and-use-storage-classes-to-define-application-needs) to avoid storage underutilization. Additionally, consider [provisioning volumes dynamically](/azure/aks/operator-best-practices-storage#dynamically-provision-volumes) to automatically scale the number of storage resources.
+* Understand the needs of your application to [choose the appropriate storage](./operator-best-practices-storage.md#choose-the-appropriate-storage-type) and define it using [storage classes](./operator-best-practices-storage.md#create-and-use-storage-classes-to-define-application-needs) to avoid storage underutilization. Additionally, consider [provisioning volumes dynamically](./operator-best-practices-storage.md#dynamically-provision-volumes) to automatically scale the number of storage resources.
## Network and connectivity
The distance from a data center to the users has a significant impact on energy
Placing nodes in a single region or a single availability zone reduces the physical distance between the instances. However, for business critical workloads, you need to ensure your cluster is spread across multiple availability-zones, which may result in more network traversal and increase in your carbon footprint.
-* Consider deploying your nodes within a [proximity placement group](/azure/virtual-machines/co-location) to reduce the network traversal by ensuring your compute resources are physically located close to each other. For critical workloads configure [proximity placement groups with availability zones](/azure/aks/reduce-latency-ppg#configure-proximity-placement-groups-with-availability-zones).
+* Consider deploying your nodes within a [proximity placement group](../virtual-machines/co-location.md) to reduce the network traversal by ensuring your compute resources are physically located close to each other. For critical workloads configure [proximity placement groups with availability zones](./reduce-latency-ppg.md#configure-proximity-placement-groups-with-availability-zones).
### Evaluate using a service mesh A service mesh deploys additional containers for communication, typically in a [sidecar pattern](/azure/architecture/patterns/sidecar), to provide more operational capabilities leading to an increase in CPU usage and network traffic. Nevertheless, it allows you to decouple your application from these capabilities as it moves them out from the application layer, and down to the infrastructure layer.
-* Carefully consider the increase in CPU usage and network traffic generated by [service mesh](/azure/aks/servicemesh-about) communication components before making the decision to use one.
+* Carefully consider the increase in CPU usage and network traffic generated by [service mesh](./servicemesh-about.md) communication components before making the decision to use one.
### Optimize log collection Sending and storing all logs from all possible sources (workloads, services, diagnostics and platform activity) can considerably increase storage and network traffic, which would impact higher costs and carbon emissions.
-* Make sure you are collecting and retaining only the log data necessary to support your requirements. [Configure data collection rules for your AKS workloads](/azure/azure-monitor/containers/container-insights-agent-config#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
+* Make sure you are collecting and retaining only the log data necessary to support your requirements. [Configure data collection rules for your AKS workloads](../azure-monitor/containers/container-insights-agent-config.md#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
### Cache static data Using Content Delivery Network (CDN) is a sustainable approach to optimizing network traffic because it reduces the data movement across a network. It minimizes latency through storing frequently read static data closer to users, and helps reduce network traffic and server load.
-* Ensure you [follow best practices](/azure/architecture/best-practices/cdn) for CDN and consider using [Azure CDN](/azure/cdn/cdn-how-caching-works?toc=%2Fazure%2Ffrontdoor%2FTOC.json) to lower the consumed bandwidth and keep costs down.
+* Ensure you [follow best practices](/azure/architecture/best-practices/cdn) for CDN and consider using [Azure CDN](../cdn/cdn-how-caching-works.md?toc=%2fazure%2ffrontdoor%2fTOC.json) to lower the consumed bandwidth and keep costs down.
## Security
Explore this section to learn more about the recommendations leading to a sustai
Transport Layer Security (TLS) ensures that all data passed between the web server and web browsers remains private and encrypted. However, terminating and re-establishing TLS increases CPU utilization and might be unnecessary in certain architectures. A balanced level of security can offer a more sustainable and energy-efficient workload, while a higher level of security may increase the compute resource requirements.
-* Review the information on TLS termination when using [Application Gateway](/azure/application-gateway/ssl-overview) or [Azure Front Door](/azure/application-gateway/ssl-overview). Consider if you can terminate TLS at your border gateway and continue with non-TLS to your workload load balancer and onwards to your workload.
+* Review the information on TLS termination when using [Application Gateway](../application-gateway/ssl-overview.md) or [Azure Front Door](../application-gateway/ssl-overview.md). Consider if you can terminate TLS at your border gateway and continue with non-TLS to your workload load balancer and onwards to your workload.
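As a hedged sketch only, terminating TLS at an Application Gateway while forwarding plain HTTP to the backend pool (names, certificate, and addresses are hypothetical; validate the security trade-off for your workload first):

```azurecli
# HTTPS frontend terminates TLS at the gateway; backend settings use HTTP.
az network application-gateway create --name myAppGateway --resource-group myResourceGroup \
    --vnet-name myVNet --subnet myAGSubnet \
    --frontend-port 443 --cert-file ./mycert.pfx --cert-password "<pfx-password>" \
    --http-settings-port 80 --http-settings-protocol Http \
    --servers 10.0.0.4
```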
### Use cloud native network security tools and controls
Azure Front Door and Application Gateway help manage traffic from web application
Many attacks on cloud infrastructure seek to misuse deployed resources for the attacker's direct gain leading to an unnecessary spike in usage and cost. Vulnerability scanning tools help minimize the window of opportunity for attackers and mitigate any potential malicious usage of resources.
-* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management) and run automated vulnerability scanning tools such as [Defender for Containers](/azure/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure) to avoid unnecessary resource usage by identifying vulnerabilities in your images and minimizing the window of opportunity for attackers.
+* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management) and run automated vulnerability scanning tools such as [Defender for Containers](../defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md) to avoid unnecessary resource usage by identifying vulnerabilities in your images and minimizing the window of opportunity for attackers.
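For example, a minimal sketch of turning on the Defender profile for an existing cluster (the cluster and resource group names are hypothetical; the Defender for Containers plan itself is enabled through Microsoft Defender for Cloud):

```azurecli
# Enable the Microsoft Defender profile on an existing AKS cluster.
az aks update --name myAKSCluster --resource-group myResourceGroup --enable-defender
```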
## Next steps > [!div class="nextstepaction"]
-> [Azure Well-Architected Framework review of AKS](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service)
+> [Azure Well-Architected Framework review of AKS](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service)
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AK
description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster Previously updated : 09/18/2022- Last updated : 11/16/2022
The CSI storage driver support on AKS allows you to natively use:
## Prerequisites
-You need the Azure CLI version 2.40 installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+- You need the Azure CLI version 2.42 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+- If the open-source CSI Blob storage driver is installed on your cluster, uninstall it before enabling the Azure Blob storage driver.
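If you need to confirm or update your local CLI, a minimal sketch (`az upgrade` is available in Azure CLI 2.11.0 and later):

```azurecli
# Check the installed Azure CLI version, then upgrade it in place if needed.
az --version
az upgrade
```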
-## Disable CSI storage drivers on a new cluster
+## Disable CSI storage drivers on a new or existing cluster
-`--disable-disk-driver` allows you to disable the [Azure Disks CSI driver][azure-disk-csi]. `--disable-file-driver` allows you to disable the [Azure Files CSI driver][azure-files-csi]. `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller ].
+To disable CSI storage drivers on a new cluster, include one or more of the following parameters, depending on the storage system:
-To disable CSI storage drivers on a new cluster, use `--disable-disk-driver`, `--disable-file-driver`, and `--disable-snapshot-controller`.
+* `--disable-disk-driver` allows you to disable the [Azure Disks CSI driver][azure-disk-csi].
+* `--disable-file-driver` allows you to disable the [Azure Files CSI driver][azure-files-csi].
+* `--disable-blob-driver` allows you to disable the [Azure Blob storage CSI driver][azure-blob-csi].
+* `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller].
```azurecli
-az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-snapshot-controller
+az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
```
-## Disable CSI storage drivers on an existing cluster
-
-To disable CSI storage drivers on an existing cluster, use `--disable-disk-driver`, `--disable-file-driver`, and `--disable-snapshot-controller`.
+To disable CSI storage drivers on an existing cluster, use one or more of the parameters listed earlier, depending on the storage system:
```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-snapshot-controller
+az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
``` ## Enable CSI storage drivers on an existing cluster
-`--enable-disk-driver` allows you enable the [Azure Disks CSI driver][azure-disk-csi]. `--enable-file-driver` allows you to enable the [Azure Files CSI driver][azure-files-csi]. `--enable-snapshot-controller` allows you to enable the [snapshot controller][snapshot-controller].
+To enable CSI storage drivers on an existing cluster, include one or more of the following parameters, depending on the storage system:
-To enable CSI storage drivers on an existing cluster with CSI storage drivers disabled, use `--enable-disk-driver`, `--enable-file-driver`, and `--enable-snapshot-controller`.
+* `--enable-disk-driver` allows you to enable the [Azure Disks CSI driver][azure-disk-csi].
+* `--enable-file-driver` allows you to enable the [Azure Files CSI driver][azure-files-csi].
+* `--enable-blob-driver` allows you to enable the [Azure Blob storage CSI driver][azure-blob-csi].
+* `--enable-snapshot-controller` allows you to enable the [snapshot controller][snapshot-controller].
```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-snapshot-controller
+az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-blob-driver --enable-snapshot-controller
``` ## Migrate custom in-tree storage classes to CSI
If you have in-tree Azure File persistent volumes, get `secretName`, `shareName`
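As a hedged illustration, assuming a hypothetical in-tree persistent volume named `pvAzureFile`, the two values can be read straight from the volume spec:

```azurecli
# Print the secretName and shareName fields of an in-tree azureFile persistent volume.
kubectl get pv pvAzureFile -o jsonpath='{.spec.azureFile.secretName}{"\n"}{.spec.azureFile.shareName}{"\n"}'
```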
## Next steps -- To use the CSI driver for Azure Disks, see [Use Azure Disks with CSI drivers](azure-disk-csi.md).-- To use the CSI driver for Azure Files, see [Use Azure Files with CSI drivers](azure-files-csi.md).-- To use the CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI drivers](azure-blob-csi.md)
+- To use the CSI driver for Azure Disks, see [Use Azure Disks with CSI drivers][azure-disk-csi].
+- To use the CSI driver for Azure Files, see [Use Azure Files with CSI drivers][azure-files-csi].
+- To use the CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI drivers][azure-blob-csi].
- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. - For more information on CSI migration, see [Kubernetes In-Tree to CSI Volume Migration][csi-migration-community]. <!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
[csi-migration-community]: https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
-[azure-disk-csi]: https://github.com/kubernetes-sigs/azuredisk-csi-driver
-[azure-files-csi]: https://github.com/kubernetes-sigs/azurefile-csi-driver
[snapshot-controller]: https://kubernetes-csi.github.io/docs/snapshot-controller.html <!-- LINKS - internal -->
-[azure-disk-volume]: azure-disk-volume.md
[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-a-volume [azure-file-static-mount]: azure-files-volume.md#mount-file-share-as-a-persistent-volume
-[azure-files-pvc]: azure-files-dynamic-pv.md
-[premium-storage]: ../virtual-machines/disks-types.md
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[storage-class-concepts]: concepts-storage.md#storage-classes
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[install-azure-cli]: ../cli/azure/install-azure-cli
+[azure-blob-csi]: azure-blob-csi.md
+[azure-disk-csi]: azure-disk-csi.md
+[azure-files-csi]: azure-files-csi.md
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md
This feature can only be set at cluster creation or node pool creation time.
### Prerequisites - Ensure you have the CLI extension v2.23 or higher version installed.-- Ensure you have the `EncryptionAtHost` feature flag under `Microsoft.Compute` enabled.-
-### Register `EncryptionAtHost` feature
-
-To create an AKS cluster that uses host-based encryption, you must enable the `EncryptionAtHost` feature flags on your subscription.
-
-Register the `EncryptionAtHost` feature flag using the [az feature register][az-feature-register] command as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost"
-```
-
-It takes a few minutes for the status to show *Registered*. You can check on the registration status using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.Compute/EncryptionAtHost')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the `Microsoft.Compute` resource providers using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.Compute
-```
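For example, a minimal sketch of enabling host-based encryption at creation time (the cluster, node pool, and VM size names are hypothetical; the VM size must support encryption at host):

```azurecli
# Create a cluster with encryption at host enabled on the initial node pool.
az aks create --name myAKSCluster --resource-group myResourceGroup \
    --node-vm-size Standard_DS2_v2 --enable-encryption-at-host

# Or add a new encrypted node pool to an existing cluster.
az aks nodepool add --cluster-name myAKSCluster --resource-group myResourceGroup \
    --name hostenc --enable-encryption-at-host
```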
### Limitations
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
Title: Tutorial - Use a workload identity with an application on Azure Kubernete
description: In this Azure Kubernetes Service (AKS) tutorial, you deploy an Azure Kubernetes Service cluster and configure an application to use a workload identity. Previously updated : 09/29/2022 Last updated : 11/16/2022 # Tutorial: Use a workload identity with an application on Azure Kubernetes Service (AKS)
spec:
- image: ghcr.io/azure/azure-workload-identity/msal-go name: oidc env:
- - name: KEYVAULT_NAME
- value: ${KEYVAULT_NAME}
+ - name: KEYVAULT_URL
+ value: ${KEYVAULT_URL}
- name: SECRET_NAME value: ${KEYVAULT_SECRET_NAME} nodeSelector:
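If you're following along, one hedged way to populate the new `KEYVAULT_URL` variable is to read the vault URI from the key vault created earlier (this assumes the `KEYVAULT_NAME` shell variable from the preceding steps):

```azurecli
# Derive the vault URI, for example https://<vault-name>.vault.azure.net/.
export KEYVAULT_URL="$(az keyvault show --name "${KEYVAULT_NAME}" --query properties.vaultUri --output tsv)"
```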
This tutorial is for introductory purposes. For guidance on a creating full solu
[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create [aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
To upgrade from *1.12.x* -> *1.14.x*:
Skipping multiple versions can only be done when upgrading from an unsupported version back to the minimum supported version. For example, you can upgrade from an unsupported *1.10.x* to a supported *1.15.x* if *1.15* is the minimum supported minor version.
+ When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
+ **Can I create a new 1.xx.x cluster during its 30-day support window?** No. Once a version is deprecated or removed, you cannot create a cluster with that version. As the change rolls out, you'll start to see the old version removed from your version list. The rollout proceeds progressively by region and may take up to two weeks from the announcement.
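To see which versions you can currently create or upgrade to in a region, you can list them with the CLI (the region here is only an example):

```azurecli
# List the AKS-supported Kubernetes versions available in a region.
az aks get-versions --location eastus --output table
```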
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Title: Kubernetes on Azure tutorial - Upgrade a cluster
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Previously updated : 05/24/2021 Last updated : 11/15/2022 #Customer intent: As a developer or IT pro, I want to learn how to upgrade an Azure Kubernetes Service (AKS) cluster so that I can use the latest version of Kubernetes and features. # Tutorial: Upgrade Kubernetes in Azure Kubernetes Service (AKS)
-As part of the application and cluster lifecycle, you may wish to upgrade to the latest available version of Kubernetes and use new features. An Azure Kubernetes Service (AKS) cluster can be upgraded using the Azure CLI.
+As part of the application and cluster lifecycle, you may want to upgrade to the latest available version of Kubernetes. You can upgrade your Azure Kubernetes Service (AKS) cluster by using the Azure CLI, Azure PowerShell, or the Azure portal.
-In this tutorial, part seven of seven, a Kubernetes cluster is upgraded. You learn how to:
+In this tutorial, part seven of seven, you learn how to:
> [!div class="checklist"]
-> * Identify current and available Kubernetes versions
-> * Upgrade the Kubernetes nodes
-> * Validate a successful upgrade
+> * Identify current and available Kubernetes versions.
+> * Upgrade your Kubernetes nodes.
+> * Validate a successful upgrade.
## Before you begin
-In previous tutorials, an application was packaged into a container image. This image was uploaded to Azure Container Registry, and you created an AKS cluster. The application was then deployed to the AKS cluster. If you have not done these steps, and would like to follow along, start with [Tutorial 1 – Create container images][aks-tutorial-prepare-app].
+In previous tutorials, an application was packaged into a container image, and this container image was uploaded to Azure Container Registry (ACR). You also created an AKS cluster. The application was then deployed to the AKS cluster. If you have not done these steps and would like to follow along, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
-### [Azure CLI](#tab/azure-cli)
-
-This tutorial requires that you are running the Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-This tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
--
+* If you're using Azure CLI, this tutorial requires that you're running Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Get available cluster versions ### [Azure CLI](#tab/azure-cli)
-Before you upgrade a cluster, use the [az aks get-upgrades][] command to check which Kubernetes releases are available for upgrade:
+Before you upgrade a cluster, use the [az aks get-upgrades][] command to check which Kubernetes releases are available.
```azurecli az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster ```
-In the following example, the current version is *1.18.10*, and the available versions are shown under *upgrades*.
+In the following example output, the current version is *1.18.10*, and the available versions are shown under *upgrades*.
-```json
+```output
{ "agentPoolProfiles": null, "controlPlaneProfile": {
In the following example, the current version is *1.18.10*, and the available ve
### [Azure PowerShell](#tab/azure-powershell)
-Before you upgrade a cluster, use the [Get-AzAksCluster][get-azakscluster] cmdlet to determine which Kubernetes version you're running and what region it resides in:
+Before you upgrade a cluster, use the [Get-AzAksCluster][get-azakscluster] cmdlet to check which Kubernetes version you're running and the region in which it resides.
```azurepowershell Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster | Select-Object -Property Name, KubernetesVersion, Location ```
-In the following example, the current version is *1.19.9*:
+In the following example output, the current version is *1.19.9*.
```output
-Name KubernetesVersion Location
-- -- --
-myAKSCluster 1.19.9 eastus
+Name KubernetesVersion Location
+- -- --
+myAKSCluster 1.19.9 eastus
```
-Use the [Get-AzAksVersion][get-azaksversion] cmdlet to determine which Kubernetes upgrade releases are available in the region where your AKS cluster resides:
+Use the [Get-AzAksVersion][get-azaksversion] cmdlet to check which Kubernetes upgrade releases are available in the region where your AKS cluster resides.
```azurepowershell Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion -gt 1.19.9
Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion -gt 1.19.9
The available versions are shown under *OrchestratorVersion*. ```output
-OrchestratorType : Kubernetes
-OrchestratorVersion : 1.20.2
-DefaultProperty :
-IsPreview :
-Upgrades : {Microsoft.Azure.Commands.Aks.Models.PSOrchestratorProfile}
-
-OrchestratorType : Kubernetes
-OrchestratorVersion : 1.20.5
-DefaultProperty :
-IsPreview :
-Upgrades : {}
+Default IsPreview OrchestratorType OrchestratorVersion
+- - -
+ Kubernetes 1.20.2
+ Kubernetes 1.20.5
```
+### [Azure portal](#tab/azure-portal)
+
+To check which Kubernetes releases are available for your cluster:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to your AKS cluster.
+3. Under **Settings**, select **Cluster configuration**.
+4. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+5. In **Kubernetes version**, select the version to check for available upgrades.
+
+If no upgrades are available, upgrading the cluster to a newer Kubernetes version isn't supported. Instead, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster.
+ ## Upgrade a cluster
-To minimize disruption to running applications, AKS nodes are carefully cordoned and drained. In this process, the following steps are performed:
+AKS nodes are carefully cordoned and drained to minimize any potential disruptions to running applications. The following steps are performed during this process:
-1. The Kubernetes scheduler prevents additional pods being scheduled on a node that is to be upgraded.
+1. The Kubernetes scheduler prevents additional pods from being scheduled on a node that is to be upgraded.
1. Running pods on the node are scheduled on other nodes in the cluster.
-1. A node is created that runs the latest Kubernetes components.
-1. When the new node is ready and joined to the cluster, the Kubernetes scheduler begins to run pods on it.
+1. A new node is created that runs the latest Kubernetes components.
+1. When the new node is ready and joined to the cluster, the Kubernetes scheduler begins to run pods on the new node.
1. The old node is deleted, and the next node in the cluster begins the cordon and drain process. [!INCLUDE [alias minor version callout](./includes/aliasminorversion/alias-minor-version-upgrade.md)] ### [Azure CLI](#tab/azure-cli)
-Use the [az aks upgrade][] command to upgrade the AKS cluster.
+Use the [az aks upgrade][] command to upgrade your AKS cluster.
```azurecli az aks upgrade \
az aks upgrade \
``` > [!NOTE]
-> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
+> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, you must first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
-The following condensed example output shows the result of upgrading to *1.19.1*. Notice the *kubernetesVersion* now reports *1.19.1*:
+The following example output shows the result of upgrading to *1.19.1*. Notice the *kubernetesVersion* now reports *1.19.1*.
-```json
+```output
{ "agentPoolProfiles": [ {
The following condensed example output shows the result of upgrading to *1.19.1*
### [Azure PowerShell](#tab/azure-powershell)
-Use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade the AKS cluster.
+Use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade your AKS cluster.
```azurepowershell Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION> ``` > [!NOTE]
-> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
+> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
-The following condensed example output shows the result of upgrading to *1.19.9*. Notice the *kubernetesVersion* now reports *1.20.2*:
+The following example output shows the result of upgrading to *1.20.2*. Notice the *kubernetesVersion* now reports *1.20.2*.
```output ProvisioningState : Succeeded
Location : eastus
Tags : {} ```
+### [Azure portal](#tab/azure-portal)
+
+To upgrade your AKS cluster:
+
+1. In the Azure portal, navigate to your AKS cluster.
+2. Under **Settings**, select **Cluster configuration**.
+3. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+4. In **Kubernetes version**, select your desired version and then select **Save**.
+
+It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
+ ## View the upgrade events
-When you upgrade your cluster, the following Kubenetes events may occur on each node:
+When you upgrade your cluster, the following Kubernetes events may occur on the nodes:
-* Surge – Create surge node.
-* Drain – Pods are being evicted from the node. Each pod has a 5 minute timeout to complete the eviction.
-* Update – Update of a node has succeeded or failed.
-* Delete – Deleted a surge node.
+* **Surge**: Create surge node.
+* **Drain**: Pods are being evicted from the node. Each pod has a *5-minute timeout* to complete the eviction.
+* **Update**: Update of a node has succeeded or failed.
+* **Delete**: Delete a surge node.
-Use `kubectl get events` to show events in the default namespaces while running an upgrade. For example:
+Use `kubectl get events` to show events in the default namespaces while running an upgrade.
```azurecli-interactive kubectl get events ```
-The following example output shows some of the above events listed during an upgrade.
+The following example output shows some of the above events listed during an upgrade.
```output ...
default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surg
... ``` ++ ## Validate an upgrade ### [Azure CLI](#tab/azure-cli)
-Confirm that the upgrade was successful using the [az aks show][] command as follows:
+Confirm that the upgrade was successful using the [az aks show][] command.
```azurecli az aks show --resource-group myResourceGroup --name myAKSCluster --output table
az aks show --resource-group myResourceGroup --name myAKSCluster --output table
The following example output shows the AKS cluster runs *KubernetesVersion 1.19.1*: ```output
-Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
- - - - -
-myAKSCluster eastus myResourceGroup 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
+Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn
+ - - - -
+myAKSCluster eastus myResourceGroup 1.19.1 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
``` ### [Azure PowerShell](#tab/azure-powershell)
-Confirm that the upgrade was successful using the [Get-AzAksCluster][get-azakscluster] cmdlet as follows:
+Confirm that the upgrade was successful using the [Get-AzAksCluster][get-azakscluster] cmdlet.
```azurepowershell Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
The following example output shows the AKS cluster runs *KubernetesVersion 1.20.2*: ```output
-Name Location KubernetesVersion ProvisioningState
-- -- -- --
-myAKSCluster eastus 1.20.2 Succeeded
+Name Location KubernetesVersion ProvisioningState
+- -- -- --
+myAKSCluster eastus 1.20.2 Succeeded
```
+### [Azure portal](#tab/azure-portal)
+
+To confirm that the upgrade was successful, navigate to your AKS cluster in the Azure portal. On the **Overview** page, select the **Kubernetes version** and ensure it's the latest version you installed in the previous step.
+ ## Delete the cluster
+As this tutorial is the last part of the series, you may want to delete your AKS cluster. The Kubernetes nodes run on Azure virtual machines and continue incurring charges even if you don't use the cluster.
+ ### [Azure CLI](#tab/azure-cli)
-As this tutorial is the last part of the series, you may want to delete the AKS cluster. As the Kubernetes nodes run on Azure virtual machines (VMs), they continue to incur charges even if you don't use the cluster. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
```azurecli-interactive az group delete --name myResourceGroup --yes --no-wait ```+ ### [Azure PowerShell](#tab/azure-powershell)
-As this tutorial is the last part of the series, you may want to delete the AKS cluster. As the Kubernetes nodes run on Azure virtual machines (VMs), they continue to incur charges even if you don't use the cluster. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
```azurepowershell-interactive Remove-AzResourceGroup -Name myResourceGroup ```
+### [Azure portal](#tab/azure-portal)
+
+To delete your AKS cluster:
+
+1. In the Azure portal, navigate to your AKS cluster.
+2. On the **Overview** page, select **Delete**.
+3. A popup appears asking you to confirm the deletion of the cluster. Select **Yes**.
+ > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and does not require you to provision or rotate any secrets.
+> When you delete the cluster, the Azure Active Directory (Azure AD) service principal used by the AKS cluster isn't removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and it doesn't require that you provision or rotate any secrets.
## Next steps In this tutorial, you upgraded Kubernetes in an AKS cluster. You learned how to: > [!div class="checklist"]
-> * Identify current and available Kubernetes versions
-> * Upgrade the Kubernetes nodes
-> * Validate a successful upgrade
+> * Identify current and available Kubernetes versions.
+> * Upgrade your Kubernetes nodes.
+> * Validate a successful upgrade.
-For more information on AKS, see [AKS overview][aks-intro]. For guidance on a creating full solutions with AKS, see [AKS solution guidance][aks-solution-guidance].
+For more information on AKS, see [AKS overview][aks-intro]. For guidance on how to create full solutions with AKS, see [AKS solution guidance][aks-solution-guidance].
<!-- LINKS - external --> [kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --outpu
> [!NOTE] > When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed. >
-> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available.
+> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
To check which Kubernetes releases are available for your cluster, use the [Get-
> [!NOTE] > When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed. >
-> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available.
+> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
The following policy is the minimal form of the `validate-azure-ad-token` policy
The following policy checks that the audience is the hostname of the API Management instance and that the `ctry` claim is `US`. The hostname is provided using a policy expression, and the Azure AD tenant ID and client application ID are provided using named values. The decoded JWT is provided in the `jwt` variable after validation.
-For more details on optional claims, read [Provide optional claims to your app](/azure/active-directory/develop/active-directory-optional-claims).
+For more details on optional claims, read [Provide optional claims to your app](../active-directory/develop/active-directory-optional-claims.md).
```xml <validate-azure-ad-token tenant-id="{{aad-tenant-id}}" output-token-variable-name="jwt">
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** inbound - **Policy scopes:** all scopes
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
To follow the steps in this article, you must have:
+ An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). ## What is capacity
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
With a private endpoint and Private Link, you can:
- Limit incoming traffic only to private endpoints, preventing data exfiltration.
-> [!IMPORTANT]
-> * API Management support for private endpoints is currently in **preview**.
-> * To enable private endpoints, the API Management instance can't already be configured with an external or internal [virtual network](virtual-network-concepts.md).
-> * A private endpoint connection supports only incoming traffic to the API Management instance.
+ [!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
With a private endpoint and Private Link, you can:
* Use policy to distinguish traffic that comes from the private endpoint. * Limit incoming traffic only to private endpoints, preventing data exfiltration.
-> [!IMPORTANT]
-> * API Management support for private endpoints is currently in preview.
-> * During the preview period, a private endpoint connection supports only incoming traffic to the API Management managed gateway.
For more information, see [Connect privately to API Management using a private endpoint](private-endpoint.md).
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
When no longer needed, you can delete the resource group, App service, and all r
## Manage the MySQL flexible server, username, or password -- The MySQL Flexible Server is created behind a private [Virtual Network](/azure/virtual-network/virtual-networks-overview) and can't be accessed directly. To access or manage the database, use phpMyAdmin that's deployed with the WordPress site. You can access phpMyAdmin by following these steps:
+- The MySQL Flexible Server is created behind a private [Virtual Network](../virtual-network/virtual-networks-overview.md) and can't be accessed directly. To access or manage the database, use phpMyAdmin that's deployed with the WordPress site. You can access phpMyAdmin by following these steps:
- Navigate to the URL: https://`<sitename>`.azurewebsites.net/phpmyadmin - Log in with the flexible server's username and password
Congratulations, you've successfully completed this quickstart!
> [Tutorial: PHP app with MySQL](tutorial-php-mysql-app.md) > [!div class="nextstepaction"]
-> [Configure PHP app](configure-language-php.md)
+> [Configure PHP app](configure-language-php.md)
app-service Reference Dangling Subdomain Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-dangling-subdomain-prevention.md
The risks of subdomain takeover include:
- Phishing campaigns - Further risks of classic attacks such as XSS, CSRF, CORS bypass
-Learn more about Subdomain Takeover at [Dangling DNS and subdomain takeover](/azure/security/fundamentals/subdomain-takeover).
+Learn more about Subdomain Takeover at [Dangling DNS and subdomain takeover](../security/fundamentals/subdomain-takeover.md).
Azure App Service provides [Name Reservation Service](#how-app-service-prevents-subdomain-takeovers) and [domain verification tokens](#how-you-can-prevent-subdomain-takeovers) to prevent subdomain takeovers. ## How App Service prevents subdomain takeovers
These records prevent the creation of another App Service app using the same nam
DNS records should be updated before the site deletion to ensure bad actors can't take over the domain between the period of deletion and re-creation.
-To get a domain verification ID, see the [Map a custom domain tutorial](app-service-web-tutorial-custom-domain.md#2-get-a-domain-verification-id)
+To get a domain verification ID, see the [Map a custom domain tutorial](app-service-web-tutorial-custom-domain.md#2-get-a-domain-verification-id)
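Alternatively, a hedged sketch of reading the ID with the Azure CLI (the app and resource group names are hypothetical):

```azurecli
# Print the domain verification ID used for the asuid TXT record.
az webapp show --name myWebApp --resource-group myResourceGroup \
    --query customDomainVerificationId --output tsv
```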
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
See how data, including name, job title, address, email, and company name, is ex
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice. -
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
The following resources are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
| _**Composed model**_ |<ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>JavaScript SDK</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>| ::: moniker-end
Learn to create and compose custom models:
> [!div class="nextstepaction"] > [**Build a custom model**](how-to-guides/build-a-custom-model.md)
-> [**Compose custom models**](how-to-guides/compose-custom-models.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | Model ID| |||:|
-|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
+|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
### Try building a custom model
Explore Form Recognizer quickstarts and REST APIs:
| Quickstart | REST API| |--|--| |[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-08-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|
-| [v2.1 quickstart](quickstarts/get-started-v2-1-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
-
+| [v2.1 quickstart](quickstarts/get-started-v2-1-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end ## Input requirements
Below are the fields extracted per document type. The Azure Form Recognizer ID m
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
The JSON output has three parts:
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
Layout API also extracts selection marks from documents. Extracted selection mar
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Receipt digitization is the process of converting scanned receipts into digital
::: moniker range="form-recog-2.1.0"
-**Sample invoice processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/connection)**:
+**Sample receipt processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/connection)**:
::: moniker-end
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
See how data, including time and date of transactions, merchant information, and
1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
- :::image type="content" source="media/invoice-example-new.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
+ :::image type="content" source="media/receipts-example.jpg" alt-text="Screenshot of the receipt model analyze results operation.":::
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
The receipt model supports all English receipts and the following locales:
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 10/20/2022 Last updated : 11/16/2022 monikerRange: '>=form-recog-2.1.0' recommendations: false
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
>[!NOTE] > With the release of the 2022-08-31 GA API, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview or the 2022-01-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved, for more information, _see_ the [migration guide](v3-migration-guide.md). ## October 2022
+### Form Recognizer versioned content
+
+Form Recognizer documentation has been updated to present a versioned experience. Now, you can choose to view content targeting the v3.0 GA experience or the v2.1 GA experience. The v3.0 experience is the default.
++ ### Form Recognizer Studio Sample Code
-Sample code the Form Recgonizer Studio labeling experience is now available on github - https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX. Customers can develop and integrate Form Recognizer into their own UX or build their own new UX using the Form Recognizer Studio sample code.
+Sample code for the [Form Recognizer Studio labeling experience](https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX) is now available on GitHub. Customers can develop and integrate Form Recognizer into their own UX or build their own new UX using the Form Recognizer Studio sample code.
### Language expansion
Use the REST API parameter `api-version=2022-06-30-preview` when using the API o
### New Prebuilt Contract model
-A new prebuilt that extracts information from contracts such as parties, title, contract ID, execution date and more. Contracts is currently in preview, please request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
+A new prebuilt model that extracts information from contracts, such as parties, title, contract ID, execution date, and more. The contracts model is currently in preview. Request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
### Region expansion for training custom neural models
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
-## November 8, 2022
-
-### Image tag
-
-`v1.13.0_2022-11-08`
-
-For complete release version information, see [Version log](version-log.md#november-8-2022).
-
-New for this release:
--- Azure Arc data controller
- - Support database as resource in Azure Arc data resource provider
--- Arc-enabled PostgreSQL server
- - Add support for automated backups
--- `arcdata` Azure CLI extension
- - CLI support for automated backups: Setting the `--storage-class-backups` parameter for the create command will enable automated backups
- ## October 11, 2022 ### Image tag
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
} ```
-1. Update the application's group membership claims. Run the commands in the same directory as `oauth2-permissions.json` file. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](/azure/active-directory/develop/supported-accounts-validation):
+1. Update the application's group membership claims. Run the commands in the same directory as `oauth2-permissions.json` file. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](../../active-directory/develop/supported-accounts-validation.md):
```azurecli az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
az ad app show --id "${SERVER_APP_ID}" --query "api.oauth2PermissionScopes[0].id" -o tsv ```
-4. Grant the required permissions for the client application. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](/azure/active-directory/develop/supported-accounts-validation):
+4. Grant the required permissions for the client application. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](../../active-directory/develop/supported-accounts-validation.md):
```azurecli az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions <oAuthPermissionId>=Scope
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
## Next steps > [!div class="nextstepaction"]
-> Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).
+> Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
If everything is working correctly, your pods should all be in the `Running` sta
### Still having problems?
-The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) so we can investigate the problem further.
+The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) so we can investigate the problem further.
To generate the troubleshooting log file, run the following command:
To generate the troubleshooting log file, run the following command:
az connectedk8s troubleshoot -g <myResourceGroup> -n <myK8sCluster> ```
-When you [create your support request](/azure/azure-portal/supportability/how-to-create-azure-support-request), in the **Additional details** section, use the **File upload** option to upload the generated log file.
+When you [create your support request](../../azure-portal/supportability/how-to-create-azure-support-request.md), in the **Additional details** section, use the **File upload** option to upload the generated log file.
## Connections with a proxy server If you are using a proxy server on at least one machine, complete the first five steps of the non-proxy flowchart (through resource provider registration) for basic troubleshooting steps. Then, if you are still encountering issues, review the next flowchart for additional troubleshooting steps. More details about each step are provided below. ### Is the machine executing commands behind a proxy server?
+If the machine is executing commands behind a proxy server, you'll need to set any necessary environment variables, [explained below](#set-environment-variables).
+
+### Set environment variables
+ Be sure you have set all of the necessary environment variables. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
+For example:
+
+```bash
+export HTTP_PROXY="http://<proxyIP>:<proxyPort>"
+export HTTPS_PROXY="https://<proxyIP>:<proxyPort>"
+export NO_PROXY="<service CIDR>,kubernetes.default.svc,.svc.cluster.local,.svc"
+```
+ ### Does the proxy server only accept trusted certificates? Be sure to include the certificate file path by including `--proxy-cert <path-to-cert-file>` when running the `az connectedk8s connect` command.
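For example, a connection command combining the proxy options might look like the following sketch; the values are placeholders to substitute for your environment:

```azurecli
az connectedk8s connect --name <clusterName> --resource-group <resourceGroupName> --proxy-https https://<proxyIP>:<proxyPort> --proxy-http http://<proxyIP>:<proxyPort> --proxy-skip-range <service CIDR>,kubernetes.default.svc,.svc.cluster.local,.svc --proxy-cert <path-to-cert-file>
```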
If everything is working correctly, your pods should all be in the `Running` sta
### Still having problems?
-The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) so we can investigate the problem further.
+The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) so we can investigate the problem further.
To generate the troubleshooting log file, run the following command:
To generate the troubleshooting log file, run the following command:
az connectedk8s troubleshoot -g <myResourceGroup> -n <myK8sCluster> ```
-When you [create your support request](/azure/azure-portal/supportability/how-to-create-azure-support-request), in the **Additional details** section, use the **File upload** option to upload the generated log file.
-
+When you [create your support request](../../azure-portal/supportability/how-to-create-azure-support-request.md), in the **Additional details** section, use the **File upload** option to upload the generated log file.
## Next steps
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
To resolve the error, one or more network misconfigurations may need to be addre
If a request times out, the deployment machine is not able to communicate with the IP(s). This could be caused by a closed port, network misconfiguration or a firewall block. Work with your network administrator to allow communication between the deployment machine to the Control Plane IP and Appliance VM IP.
-1. Appliance VM IP and Control Plane IP must be able to communicate with the deployment machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script) for Arc resource bridge.
+1. Appliance VM IP and Control Plane IP must be able to communicate with the deployment machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) for Arc resource bridge.
1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#restricted-outbound-connectivity). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs.
If you don't see your problem here or you can't resolve your issue, try one of t
- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. -- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-fluid-relay Use Audience In Fluid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/use-audience-in-fluid.md
++
+description: Learn how to use audience features in the Fluid Framework
+ Title: 'How to: Use audience features in the Fluid Framework'
+ Last updated : 11/04/2022++++
+# How to: Use audience features in the Fluid Framework
+
+In this tutorial, you'll learn about using the Fluid Framework [Audience](https://fluidframework.com/docs/build/audience/) with [React](https://reactjs.org/) to create a visual demonstration of users connecting to a container. The audience object holds information related to all users connected to the container. In this example, the Azure Client library will be used to create the container and audience.
+
+To jump ahead into the finished demo, check out the [Audience demo in our FluidExamples repo](https://github.com/microsoft/FluidExamples/tree/main/audience-demo).
+
+The following image shows ID buttons and a container ID input field. Leaving the container ID field blank and clicking a user ID button will create a new container and join as the selected user. Alternatively, the end-user can input a container ID and choose a user ID to join an existing container as the selected user.
++
+The next image shows multiple users connected to a container represented by boxes. The box outlined in blue represents the user who is viewing the client while the boxes outlined in black represent the other connected users. As new users attach to the container with unique IDs, the number of boxes will increase.
++
+> [!NOTE]
+> This tutorial assumes that you are familiar with the [Fluid Framework Overview](https://fluidframework.com/docs/) and that you have completed the [QuickStart](https://fluidframework.com/docs/start/quick-start/). You should also be familiar with the basics of [React](https://reactjs.org/), [creating React projects](https://reactjs.org/docs/create-a-new-react-app.html#create-react-app), and [React Hooks](https://reactjs.org/docs/hooks-intro.html).
+
+## Create the project
+
+1. Open a Command Prompt and navigate to the parent folder where you want to create the project; e.g., `C:\My Fluid Projects`.
+1. Run the following command at the prompt. (Note that the CLI is np**x**, not npm. It was installed when you installed Node.js.)
+
+ ```dotnetcli
+ npx create-react-app fluid-audience-tutorial
+ ```
+
+1. The project is created in a subfolder named `fluid-audience-tutorial`. Navigate to it with the command `cd fluid-audience-tutorial`.
+
+1. The project uses the following Fluid libraries:
+
+ | Library | Description |
+ |-|-|
+ | `fluid-framework` | Contains the SharedMap [distributed data structure](../concepts/data-structures.md) that synchronizes data across clients. |
+ | `@fluidframework/azure-client` | Defines the connection to a Fluid service server and defines the starting schema for the [Fluid container](../concepts/architecture.md#container). |
+ | `@fluidframework/test-client-utils` | Defines the [InsecureTokenProvider](../concepts/authentication-authorization.md#the-token-provider) needed to create the connection to a Fluid Service. |
+
+ Run the following command to install the libraries.
+
+ ```dotnetcli
+ npm install @fluidframework/azure-client @fluidframework/test-client-utils fluid-framework
+ ```
+
+## Code the project
+
+### Set up state variables and component view
+
+1. Open the file `\src\App.js` in the code editor. Delete all the default `import` statements. Then delete all the markup from the `return` statement. Then add import statements for components and React hooks. Note that we will be implementing the imported **AudienceDisplay** and **UserIdSelection** components in the later steps. The file should look like the following:
+
+ ```js
+ import { useState, useCallback } from "react";
+ import { AudienceDisplay } from "./AudienceDisplay";
+ import { UserIdSelection } from "./UserIdSelection";
+
+ export const App = () => {
+ // TODO 1: Define state variables to handle view changes and user input
+ return (
+ // TODO 2: Return view components
+ );
+ }
+ ```
+
+1. Replace `TODO 1` with the following code. This code initializes local state variables that will be used within the application. The `displayAudience` value determines if we render the **AudienceDisplay** component or the **UserIdSelection** component (see `TODO 2`). The `userId` value is the user identifier to connect to the container with and the `containerId` value is the container to load. The `handleSelectUser` and `handleContainerNotFound` functions are passed in as callbacks to the two views and manage state transitions. `handleSelectUser` is called when attempting to create/load a container. `handleContainerNotFound` is called when creating/loading a container fails.
+
+ Note that the values `userId` and `containerId` will come from the **UserIdSelection** component through the `handleSelectUser` function.
+
+ ```js
+ const [displayAudience, setDisplayAudience] = useState(false);
+ const [userId, setUserId] = useState();
+ const [containerId, setContainerId] = useState();
+
+ const handleSelectUser = useCallback((userId, containerId) => {
+ setDisplayAudience(true)
+ setUserId(userId);
+ setContainerId(containerId);
+ }, [displayAudience, userId, containerId]);
+
+ const handleContainerNotFound = useCallback(() => {
+ setDisplayAudience(false)
+ }, [setDisplayAudience]);
+ ```
+
+1. Replace `TODO 2` with the following code. As stated above, the `displayAudience` variable will determine if we render the **AudienceDisplay** component or the **UserIdSelection** component. Also, functions to update the state variables are passed into components as properties.
+
+ ```js
+ (displayAudience) ?
+ <AudienceDisplay userId={userId} containerId={containerId} onContainerNotFound={handleContainerNotFound}/> :
+ <UserIdSelection onSelectUser={handleSelectUser}/>
+ ```
+
+### Set up AudienceDisplay component
+
+1. Create and open a file `\src\AudienceDisplay.js` in the code editor. Add the following `import` statements:
+
+ ```js
+ import { useEffect, useState } from "react";
+ import { SharedMap } from "fluid-framework";
+ import { AzureClient } from "@fluidframework/azure-client";
+ import { InsecureTokenProvider } from "@fluidframework/test-client-utils";
+ ```
+
+ Note that the objects imported from the Fluid Framework library are required for defining users and containers. In the following steps, **AzureClient** and **InsecureTokenProvider** will be used to configure the client service (see `TODO 1`) while the **SharedMap** will be used to configure a `containerSchema` needed to create a container (see `TODO 2`).
+
+1. Add the following functional components and helper functions:
+
+ ```js
+ const tryGetAudienceObject = async (userId, userName, containerId) => {
+ // TODO 1: Create container and return audience object
+ }
+
+ export const AudienceDisplay = (props) => {
+ //TODO 2: Configure user ID, user name, and state variables
+ //TODO 3: Set state variables and set event listener on component mount
+ //TODO 4: Return list view
+ }
+
+ const AudienceList = (data) => {
+ //TODO 5: Append view elements to list array for each member
+ //TODO 6: Return list of member elements
+ }
+ ```
+
+ Note that the **AudienceDisplay** and **AudienceList** are functional components which handle getting and rendering audience data while the `tryGetAudienceObject` method handles the creation of container and audience services.
+
+### Getting container and audience
+
+You can use a helper function to get the Fluid data from the Audience object into the view layer (the React state). The `tryGetAudienceObject` method is called when the view component loads after a user ID is selected. The returned value is assigned to a React state property.
+
+1. Replace `TODO 1` with the following code. Note that the values for `userId`, `userName`, and `containerId` will be passed in from the **App** component. If there is no `containerId`, a new container is created. Also, note that the `containerId` is stored on the URL hash. A user entering a session from a new browser may copy the URL from an existing session browser or navigate to `localhost:3000` and manually input the container ID. With this implementation, we want to wrap the `getContainer` call in a try...catch in case the user inputs a container ID that doesn't exist. Visit the [React demo](https://fluidframework.com/docs/recipes/react/) and [Containers](../concepts/architecture.md#container) documentation for more information.
+
+ ```js
+ const userConfig = {
+ id: userId,
+ name: userName,
+ additionalDetails: {
+ email: userName.replace(/\s/g, "") + "@example.com",
+ date: new Date().toLocaleDateString("en-US"),
+ },
+ };
+
+ const serviceConfig = {
+ connection: {
+ type: "local",
+ tokenProvider: new InsecureTokenProvider("", userConfig),
+ endpoint: "http://localhost:7070",
+ },
+ };
+
+ const client = new AzureClient(serviceConfig);
+
+ const containerSchema = {
+ initialObjects: { myMap: SharedMap },
+ };
+
+ let container;
+ let services;
+ if (!containerId) {
+ ({ container, services } = await client.createContainer(containerSchema));
+ const id = await container.attach();
+ location.hash = id;
+ } else {
+ try {
+ ({ container, services } = await client.getContainer(containerId, containerSchema));
+ } catch (e) {
+ return;
+ }
+ }
+ return services.audience;
+ ```
+
+### Getting the audience on component mount
+
+Now that we've defined how to get the Fluid audience, we need to tell React to call `tryGetAudienceObject` when the Audience Display component is mounted.
+
+1. Replace `TODO 2` with the following code. Note that the user ID will come from the parent component as either `user1`, `user2`, or `random`. If the ID is `random`, we use `Math.random()` to generate a random number as the ID. Additionally, a name will be mapped to the user based on their ID as specified in `userNameList`. Lastly, we define the state variables which will store the connected members as well as the current user. `fluidMembers` will store a list of all members connected to the container, whereas `currentMember` will contain the member object representing the current user viewing the browser context.
+
+ ```js
+ const userId = props.userId == "random" ? Math.random() : props.userId;
+ const userNameList = {
+ "user1" : "User One",
+ "user2" : "User Two",
+ "random" : "Random User"
+ };
+ const userName = userNameList[props.userId];
+
+ const [fluidMembers, setFluidMembers] = useState();
+ const [currentMember, setCurrentMember] = useState();
+ ```
+
+1. Replace `TODO 3` with the following code. This will call `tryGetAudienceObject` when the component is mounted and set the returned audience members to `fluidMembers` and `currentMember`. Note that we check whether an audience object is returned, in case a user inputs a container ID that doesn't exist and we need to return them to the **UserIdSelection** view (`props.onContainerNotFound()` will handle switching the view). Also, it's good practice to deregister event handlers when the React component unmounts by returning `audience.off`.
+
+ ```js
+ useEffect(() => {
+ tryGetAudienceObject(userId, userName, props.containerId).then(audience => {
+ if(!audience) {
+ props.onContainerNotFound();
+ alert("error: container id not found.");
+ return;
+ }
+
+ const updateMembers = () => {
+ setFluidMembers(audience.getMembers());
+ setCurrentMember(audience.getMyself());
+ }
+
+ updateMembers();
+
+ audience.on("membersChanged", updateMembers);
+
+ return () => { audience.off("membersChanged", updateMembers) };
+ });
+ }, []);
+ ```
+
+1. Replace `TODO 4` with the following code. Note that if `fluidMembers` or `currentMember` hasn't been initialized, a blank screen is rendered. The **AudienceList** component will render the member data with styling (to be implemented in the next section).
+
+ ```js
+ if (!fluidMembers || !currentMember) return (<div/>);
+
+ return (
+ <AudienceList fluidMembers={fluidMembers} currentMember={currentMember}/>
+ )
+ ```
+
+ > [!NOTE]
+ > Connection transitions can result in short timing windows where `getMyself` returns `undefined`. This is because the current client connection will not have been added to the audience yet, so a matching connection ID cannot be found. To prevent React from rendering a page with no audience members, we add a listener to call `updateMembers` on `membersChanged`. This works since the service audience emits a `membersChanged` event when the container is connected.
+
+### Create the view
+
+1. Replace `TODO 5` with the following code. Note that we render a list component for each member passed from the **AudienceDisplay** component. For each member, we first compare `member.userId` to `currentMember.userId` to check if that member `isSelf`. This way, we can differentiate the client user from the other users and display the component with a different color. We then push the list component to a `list` array. Each component will display member data such as `userId`, `userName`, and `additionalDetails`.
+
+ ```js
+ const currentMember = data.currentMember;
+ const fluidMembers = data.fluidMembers;
+
+ const list = [];
+ fluidMembers.forEach((member, key) => {
+ const isSelf = (member.userId === currentMember.userId);
+ const outlineColor = isSelf ? 'blue' : 'black';
+
+ list.push(
+ <div style={{
+ padding: '1rem',
+ margin: '1rem',
+ display: 'flex',
+ outline: 'solid',
+ flexDirection: 'column',
+ maxWidth: '25%',
+ outlineColor
+ }} key={key}>
+ <div style={{fontWeight: 'bold'}}>Name</div>
+ <div>
+ {member.userName}
+ </div>
+ <div style={{fontWeight: 'bold'}}>ID</div>
+ <div>
+ {member.userId}
+ </div>
+ <div style={{fontWeight: 'bold'}}>Connections</div>
+ {
+ member.connections.map((data, key) => {
+ return (<div key={key}>{data.id}</div>);
+ })
+ }
+ <div style={{fontWeight: 'bold'}}>Additional Details</div>
+ { JSON.stringify(member.additionalDetails, null, '\t') }
+ </div>
+ );
+ });
+ ```
+
+1. Replace `TODO 6` with the following code. This renders each of the member elements we pushed into the `list` array.
+
+ ```js
+ return (
+ <div>
+ {list}
+ </div>
+ );
+ ```
+
+### Set up the UserIdSelection component
+
+1. Create and open a file `\src\UserIdSelection.js` in the code editor. This component will include user ID buttons and container ID input fields which allow end-users to choose their user ID and collaborative session. Add the following `import` statements and functional components:
+
+ ```js
+ import { useState } from 'react';
+
+ export const UserIdSelection = (props) => {
+ // TODO 1: Define styles and handle user inputs
+ return (
+ // TODO 2: Return view components
+ );
+ }
+ ```
+
+1. Replace `TODO 1` with the following code. Note that the `onSelectUser` function will update the state variables in the parent **App** component and prompt a view change. The `handleSubmit` method is triggered by the button elements that will be implemented in `TODO 2`. The `handleChange` method updates the `containerId` state variable by getting the value from the HTML element with the ID `containerIdInput`; it's called from an input element event listener implemented in `TODO 2`.
+
+ ```js
+ const selectionStyle = {
+ marginTop: '2rem',
+ marginRight: '2rem',
+ width: '150px',
+ height: '30px',
+ };
+
+ const [containerId, setContainerId] = useState(location.hash.substring(1));
+
+ const handleSubmit = (userId) => {
+ props.onSelectUser(userId, containerId);
+ }
+
+ const handleChange = () => {
+ setContainerId(document.getElementById("containerIdInput").value);
+ };
+ ```
+
+1. Replace `TODO 2` with the following code. This will render the user ID buttons as well as the container ID input field.
+
+ ```js
+ <div style={{display: 'flex', flexDirection:'column'}}>
+ <div style={{marginBottom: '2rem'}}>
+ Enter Container Id:
+ <input type="text" id="containerIdInput" value={containerId} onChange={() => handleChange()} style={{marginLeft: '2rem'}}></input>
+ </div>
+ {
+ (containerId) ?
+ (<div style={{}}>Select a User to join container ID: {containerId} as the user</div>)
+ : (<div style={{}}>Select a User to create a new container and join as the selected user</div>)
+ }
+ <nav>
+ <button type="submit" style={selectionStyle} onClick={() => handleSubmit("user1")}>User 1</button>
+ <button type="submit" style={selectionStyle} onClick={() => handleSubmit("user2")}>User 2</button>
+ <button type="submit" style={selectionStyle} onClick={() => handleSubmit("random")}>Random User</button>
+ </nav>
+ </div>
+ ```
+
+## Start the Fluid server and run the application
+
+> [!NOTE]
+> To match the rest of this how-to, this section uses `npx` and `npm` commands to start a Fluid server. However, the code in this article can also run against an Azure Fluid Relay server. For more information, see [How to: Provision an Azure Fluid Relay service](provision-fluid-azure-portal.md) and [How to: Connect to an Azure Fluid Relay service](connect-fluid-azure-service.md)
+
+In the Command Prompt, run the following command to start the Fluid service.
+
+```dotnetcli
+npx @fluidframework/azure-local-service@latest
+```
+
+Open a new Command Prompt and navigate to the root of the project; for example, `C:/My Fluid Projects/fluid-audience-tutorial`. Start the application server with the following command. The application opens in the browser. This may take a few minutes.
+
+```dotnetcli
+npm run start
+```
+
+Navigate to `localhost:3000` on a browser tab to view the running application. To create a new container, select a user ID button while leaving the container ID input blank. To simulate a new user joining the container session, open a new browser tab and navigate to `localhost:3000`. This time, input the container ID value, which can be found in the first browser tab's URL following `http://localhost:3000/#`.
+
+> [!NOTE]
+> You may need to install an additional dependency to make this demo compatible with Webpack 5. If you receive a compilation error related to a "buffer" or "url" package, please run `npm install -D buffer url` and try again. This will be resolved in a future release of Fluid Framework.
+
+## Next steps
+
+- Try extending the demo with more key/value pairs in the `additionalDetails` field in `userConfig`.
+- Consider integrating audience into a collaborative application which utilizes distributed data structures such as SharedMap or SharedString.
+- Learn more about [Audience](https://fluidframework.com/docs/build/audience/).
+
+> [!TIP]
+> When you make changes to the code the project will automatically rebuild and the application server will reload. However, if you make changes to the container schema, they will only take effect if you close and restart the application server. To do this, give focus to the Command Prompt and press Ctrl-C twice. Then run `npm run start` again.
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
You can exclude certain types of telemetry from sampling. In this example, data
For more information, see [Sampling in Application Insights](../azure-monitor/app/sampling.md).
+## Enable SQL query collection
+
+Application Insights automatically collects data on dependencies for HTTP requests, database calls, and for several bindings. For more information, see [Dependencies](./functions-monitoring.md#dependencies). For SQL calls, the name of the server and database is always collected and stored, but SQL query text isn't collected by default. You can use `dependencyTrackingOptions.enableSqlCommandTextInstrumentation` to enable SQL query text logging by setting (at minimum) the following in your [host.json file](./functions-host-json.md#applicationinsightsdependencytrackingoptions):
+
+```json
+"logging": {
+ "applicationInsights": {
+ "enableDependencyTracking": true,
+ "dependencyTrackingOptions": {
+ "enableSqlCommandTextInstrumentation": true
+ }
+ }
+}
+```
+
+For more information, see [Advanced SQL tracking to get full SQL query](../azure-monitor/app/asp-net-dependencies.md#advanced-sql-tracking-to-get-full-sql-query).
+ ## Configure scale controller logs _This feature is in preview._
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
There are other function app configuration options in the [host.json](functions-
Example connection string values are truncated for readability. > [!NOTE]
-> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
-
-> [!IMPORTANT]
-> Do not use an [instrumentation key](../azure-monitor/app/separate-resources.md#about-resources-and-instrumentation-keys) and a [connection string](../azure-monitor/app/sdk-connection-string.md#overview) simultaneously. Whichever was set last will take precedence.
+> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
## APPINSIGHTS_INSTRUMENTATIONKEY
-The instrumentation key for Application Insights. Only use one of `APPINSIGHTS_INSTRUMENTATIONKEY` or `APPLICATIONINSIGHTS_CONNECTION_STRING`. When Application Insights runs in a sovereign cloud, use `APPLICATIONINSIGHTS_CONNECTION_STRING`. For more information, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
+The instrumentation key for Application Insights. Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. When possible, use `APPLICATIONINSIGHTS_CONNECTION_STRING`. When Application Insights runs in a sovereign cloud, you must use `APPLICATIONINSIGHTS_CONNECTION_STRING`. For more information, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
|Key|Sample value| |||
The instrumentation key for Application Insights. Only use one of `APPINSIGHTS_I
## APPLICATIONINSIGHTS_CONNECTION_STRING
-The connection string for Application Insights. Use `APPLICATIONINSIGHTS_CONNECTION_STRING` instead of `APPINSIGHTS_INSTRUMENTATIONKEY` in the following cases:
+The connection string for Application Insights. When possible, use `APPLICATIONINSIGHTS_CONNECTION_STRING` instead of `APPINSIGHTS_INSTRUMENTATIONKEY`. Using `APPLICATIONINSIGHTS_CONNECTION_STRING` is required in the following cases:
+ When your function app requires the added customizations supported by using the connection string. + When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint.
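For illustration only, a connection string value has roughly the following shape; the key and regional ingestion endpoint shown here are placeholders, not real values:

```json
{
  "APPLICATIONINSIGHTS_CONNECTION_STRING": "InstrumentationKey=<instrumentation-key>;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/"
}
```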
azure-functions Functions Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-concurrency.md
Using dynamic concurrency provides the following benefits:
### Dynamic concurrency configuration
-Dynamic concurrency can be enabled at the host level in the host.json file. When, enabled any binding extensions used by your function app that support dynamic concurrency will adjust concurrency dynamically as needed. Dynamic concurrency settings override any manually configured concurrency settings for triggers that support dynamic concurrency.
+Dynamic concurrency can be enabled at the host level in the host.json file. When enabled, any binding extensions used by your function app that support dynamic concurrency adjust concurrency dynamically as needed. Dynamic concurrency settings override any manually configured concurrency settings for triggers that support dynamic concurrency.
By default, dynamic concurrency is disabled. With dynamic concurrency enabled, concurrency starts at 1 for each function, and is adjusted up to an optimal value, which is determined by the host.
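As a minimal sketch, enabling dynamic concurrency in host.json uses the same `concurrency` section shown in the [host.json reference](functions-host-json.md#concurrency):

```json
{
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
```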
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
Title: host.json reference for Azure Functions 2.x
description: Reference documentation for the Azure Functions host.json file with the v2 runtime. Previously updated : 04/28/2020 Last updated : 11/16/2022 # host.json reference for Azure Functions 2.x and later
The following sample *host.json* file for version 2.x+ has all possible options
"batchSize": 1000, "flushTimeout": "00:00:30" },
+ "concurrency": {
+ "dynamicConcurrencyEnabled": true,
+ "snapshotPersistenceEnabled": true
+ },
"extensions": { "blobs": {}, "cosmosDb": {},
The following sample *host.json* file for version 2.x+ has all possible options
"excludedTypes" : "Dependency;Event", "includedTypes" : "PageView;Trace" },
+ "dependencyTrackingOptions": {
+ "enableSqlCommandTextInstrumentation": true
+ },
"enableLiveMetrics": true, "enableDependencyTracking": true, "enablePerformanceCountersCollection": true,
For the complete JSON structure, see the earlier [example host.json file](#sampl
| Property | Default | Description | | | | | | samplingSettings | n/a | See [applicationInsights.samplingSettings](#applicationinsightssamplingsettings). |
+| dependencyTrackingOptions | n/a | See [applicationInsights.dependencyTrackingOptions](#applicationinsightsdependencytrackingoptions). |
| enableLiveMetrics | true | Enables live metrics collection. | | enableDependencyTracking | true | Enables dependency tracking. | | enablePerformanceCountersCollection | true | Enables Kudu performance counters collection. |
For more information about these settings, see [Sampling in Application Insights
| enableW3CDistributedTracing | true | Enables or disables support of W3C distributed tracing protocol (and turns on legacy correlation schema). Enabled by default if `enableHttpTriggerExtendedInfoCollection` is true. If `enableHttpTriggerExtendedInfoCollection` is false, this flag applies to outgoing requests only, not incoming requests. | | enableResponseHeaderInjection | true | Enables or disables injection of multi-component correlation headers into responses. Enabling injection allows Application Insights to construct an Application Map to when several instrumentation keys are used. Enabled by default if `enableHttpTriggerExtendedInfoCollection` is true. This setting doesn't apply if `enableHttpTriggerExtendedInfoCollection` is false. |
+### applicationInsights.dependencyTrackingOptions
+
+|Property | Default | Description |
+| | | |
+| enableSqlCommandTextInstrumentation | false | Enables collection of the full text of SQL queries, which is disabled by default. For more information on collecting SQL query text, see [Advanced SQL tracking to get full SQL query](../azure-monitor/app/asp-net-dependencies.md#advanced-sql-tracking-to-get-full-sql-query). |
+ ### applicationInsights.snapshotConfiguration For more information on snapshots, see [Debug snapshots on exceptions in .NET apps](../azure-monitor/app/snapshot-debugger.md) and [Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](../azure-monitor/app/snapshot-debugger-troubleshoot.md).
Configuration settings for a custom handler. For more information, see [Azure Fu
Configuration setting can be found in [bindings for Durable Functions](durable/durable-functions-bindings.md#host-json).
+## concurrency
+
+Enables dynamic concurrency for specific bindings in your function app. For more information, see [Dynamic concurrency](./functions-concurrency.md#dynamic-concurrency).
+
+```json
+ {
+ "concurrency": {
+ "dynamicConcurrencyEnabled": true,
+ "snapshotPersistenceEnabled": true
+ }
+ }
+```
+
+|Property | Default | Description |
+| | | |
+| dynamicConcurrencyEnabled | false | Enables dynamic concurrency behaviors for all triggers supported by this feature, which is off by default. |
+| snapshotPersistenceEnabled | true | Learned concurrency values are periodically persisted to storage so new instances start from those values instead of starting from 1 and having to redo the learning. |
+ ## eventHub Configuration settings can be found in [Event Hub triggers and bindings](functions-bindings-event-hubs.md#host-json).
An array of one or more names of files that are monitored for changes that requi
## Override host.json values
-There may be instances where you wish to configure or modify specific settings in a host.json file for a specific environment, without changing the host.json file itself. You can override specific host.json values by creating an equivalent value as an application setting. When the runtime finds an application setting in the format `AzureFunctionsJobHost__path__to__setting`, it overrides the equivalent host.json setting located at `path.to.setting` in the JSON. When expressed as an application setting, the dot (`.`) used to indicate JSON hierarchy is replaced by a double underscore (`__`).
+There may be instances where you wish to configure or modify specific settings in a host.json file for a specific environment, without changing the host.json file itself. You can override specific host.json values by creating an equivalent value as an application setting. When the runtime finds an application setting in the format `AzureFunctionsJobHost__path__to__setting`, it overrides the equivalent host.json setting located at `path.to.setting` in the JSON. When expressed as an application setting, the dot (`.`) used to indicate JSON hierarchy is replaced by a double underscore (`__`).
For example, say that you wanted to disable Application Insight sampling when running locally. If you changed the local host.json file to disable Application Insights, this change might get pushed to your production app during deployment. The safer way to do this is to instead create an application setting as `"AzureFunctionsJobHost__logging__applicationInsights__samplingSettings__isEnabled":"false"` in the `local.settings.json` file. You can see this in the following `local.settings.json` file, which doesn't get published:
For example, say that you wanted to disable Application Insight sampling when ru
} ```
+Overriding host.json settings using environment variables follows the ASP.NET Core naming conventions. When the element structure includes an array, the numeric array index should be treated as an additional element name in this path. For more information, see [Naming of environment variables](/aspnet/core/fundamentals/configuration/#naming-of-environment-variables).
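As a hedged illustration of that array convention, the following `local.settings.json` sketch overrides the first element of the `watchDirectories` array in host.json; the directory name here is only an example:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureFunctionsJobHost__watchDirectories__0": "Shared"
  }
}
```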
+ ## Next steps > [!div class="nextstepaction"]
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
To download the publishing profile of your function app:
1. In [GitHub](https://github.com/), go to your repository.
-1. Select **Security > Secrets and variables > Actions**.
+1. Select **Settings > Secrets > Actions**.
1. Select **New repository secret**.
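After the secret is created, a workflow can reference it when deploying. The following is a minimal sketch, not taken from this article; it assumes the secret is named `AZURE_FUNCTIONAPP_PUBLISH_PROFILE` and uses the `Azure/functions-action` action:

```yaml
name: Deploy function app

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Get the code to deploy.
      - uses: actions/checkout@v3

      # Deploy using the publishing profile stored as a repository secret.
      - uses: Azure/functions-action@v1
        with:
          app-name: <APP_NAME>
          package: .
          publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
```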
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
The [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation.
[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account [authentication]: azure-maps-authentication.md
-[Host daemon]: /azure/azure-maps/how-to-secure-daemon-app#host-a-daemon-on-non-azure-resources
+[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
[.NET standard]: /dotnet/standard/net-standard?tabs=net-standard-2-0 [Rest API]: /rest/api/maps/ [.NET Standard versions]: https://dotnet.microsoft.com/platform/dotnet-standard#versions
The [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation.
[search-api]: /dotnet/api/azure.maps.search [Identity library .NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet [defaultazurecredential.NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet#defaultazurecredential
-[NuGet]: https://www.nuget.org/
+[NuGet]: https://www.nuget.org/
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
description: How to develop applications that incorporate Azure Maps using the JavaScript SDK Developers Guide. Previously updated : 11/07/2021 Last updated : 11/15/2021
The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searchin
> az maps account create --kind "Gen2" --account-name "myMapAccountName" --resource-group "<resource group>" --sku "G2" > ```
+## Create a Node.js project
+
+The example below creates a new directory and then a Node.js program named _mapsDemo_ using npm:
+
+```powershell
+mkdir mapsDemo
+cd mapsDemo
+npm init
+```
## Install the search package To use Azure Maps JavaScript SDK, you'll need to install the search package. Each of the Azure Maps services, including search, routing, rendering, and geolocation, is in its own package.
mapsDemo
+-- search.js ```
-### Azure Maps search service
+### Azure Maps services
-| Service Name  | NPM package  | Samples  |
+| Service Name  | npm packages | Samples  |
||-|--|
-| [Search][search readme] | [Azure.Maps.Search][search package] | [search samples][search sample] |
+| [Search][search readme] | [@azure/maps-search][search package] | [search samples][search sample] |
| [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
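For example, the search package listed in the table above is installed with npm:

```powershell
npm install @azure/maps-search
```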
-## Create a Node.js project
-
-The example below creates a new directory then a Node.js program named _mapsDemo_ using NPM:
-
-```powershell
-mkdir mapsDemo
-cd mapsDemo
-npm init
-```
- ## Create and authenticate a MapsSearchClient You'll need a `credential` object for authentication when creating the `MapsSearchClient` object used to access the Azure Maps search APIs. You can use either an Azure Active Directory (Azure AD) credential or an Azure subscription key to authenticate. For more information on authentication, see [Authentication with Azure Maps][authentication].
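For instance, a key-based credential setup might look like the following sketch; it assumes the beta `@azure/maps-search` package and an environment variable named `MAPS_SUBSCRIPTION_KEY`, both of which are illustrative rather than taken from this article:

```js
const { AzureKeyCredential } = require("@azure/core-auth");
const { MapsSearchClient } = require("@azure/maps-search");

// Authenticate with an Azure Maps subscription key (assumed env var name).
const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);
const client = new MapsSearchClient(credential);
```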
main().catch((err) => {
[authentication]: azure-maps-authentication.md [Identity library]: /javascript/api/overview/azure/identity-readme
-[Host daemon]: /azure/azure-maps/how-to-secure-daemon-app#host-a-daemon-on-non-azure-resources
+[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
[dotenv]: https://github.com/motdotla/dotenv#readme [search package]: https://www.npmjs.com/package/@azure/maps-search
main().catch((err) => {
[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md [js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
-[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
+[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
To learn more, please see:
> [Show route on the map](./map-route.md) > [!div class="nextstepaction"]
-> [Azure Maps NPM Package](https://www.npmjs.com/package/azure-maps-rest )
+> [Azure Maps npm Package](https://www.npmjs.com/package/azure-maps-rest )
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
See also the [Azure Maps Glossary](./glossary.md) for an in-depth list of termin
## Web SDK side-by-side examples
-The following is a collection of code samples for each platform that cover common use cases to help you migrate your web application from Bing Maps V8 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript; however, Azure Maps also provides TypeScript definitions as an additional option through an [NPM module](./how-to-use-map-control.md).
+The following is a collection of code samples for each platform that cover common use cases to help you migrate your web application from Bing Maps V8 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript; however, Azure Maps also provides TypeScript definitions as an additional option through an [npm module](./how-to-use-map-control.md).
**Topics**
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
In addition to this API, Azure Maps provides many time zone APIs. These APIs con
Azure Maps provides client libraries for the following programming languages:
-* JavaScript, TypeScript, Node.js ΓÇô [documentation](how-to-use-services-module.md) \| [NPM package](https://www.npmjs.com/package/azure-maps-rest)
+* JavaScript, TypeScript, Node.js – [documentation](how-to-use-services-module.md) \| [npm package](https://www.npmjs.com/package/azure-maps-rest)
These Open-source client libraries are for other programming languages:
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
Title: Add a bubble layer to an Azure Maps Power BI visual
-description: In this article, you will learn how to use the bubble layer in an Azure Maps Power BI visual.
+description: In this article, you'll learn how to use the bubble layer in an Azure Maps Power BI visual.
Previously updated : 11/29/2021 Last updated : 11/14/2022
Initially all bubbles have the same fill color. If a field is passed into the **
| Setting | Description | |--|-|
-| Size | The size of each bubble. This option is hidden when a field is passed into the **Size** bucket of the **Fields** pane. Additional options will appear as outlined in the [Bubble size scaling](#bubble-size-scaling) topic further down in this article. |
+| Size | The size of each bubble. This option is hidden when a field is passed into the **Size** bucket of the **Fields** pane. More options will appear as outlined in the [Bubble size scaling](#bubble-size-scaling) section further down in this article. |
| Fill color | Color of each bubble. This option is hidden when a field is passed into the **Legend** bucket of the **Fields** pane and a separate **Data colors** section will appear in the **Format** pane. | | Fill transparency | Transparency of each bubble. | | High-contrast outline | Makes the outline color contrast with the fill color for better accessibility by using a high-contrast variant of the fill color. | | Outline color | Color that outlines the bubble. This option is hidden when the **High-contrast outline** option is enabled. | | Outline transparency | Transparency of the outline. | | Outline width | Width of the outline in pixels. |
-| Blur | Amount of blur applied to the outline. A value of 1 blurs the bubbles such that only the center point has no transparency. A value of 0 apply any blur effect. |
+| Blur | Amount of blur applied to the outline. A value of one blurs the bubbles such that only the center point has no transparency. A value of zero doesn't apply any blur effect. |
| Pitch alignment | Specifies how the bubbles look when the map is pitched. <br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;• Viewport - Bubbles appear on their edge on the map relative to viewport. (default)<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Map - Bubbles are rendered flat on the surface of the map. |
-| Zoom scale | Amount the bubbles should scale relative to the zoom level. A zoom scale of one means no scaling. Large values will make bubbles smaller when zoomed out and larger when zoomed in. This helps to reduce the clutter on the map when zoomed out, yet ensures points stand out more when zoomed in. A value of 1 does not apply any scaling. |
+| Zoom scale | Amount the bubbles should scale relative to the zoom level. A zoom scale of one means no scaling. Large values make bubbles smaller when zoomed out and larger when zoomed in. This helps to reduce clutter on the map when zoomed out, yet ensures points stand out more when zoomed in. |
| Min zoom | Minimum zoom level tiles are available. | | Max zoom | Maximum zoom level tiles are available. | | Layer position | Specifies the position of the layer relative to other map layers. |
If a field is passed into the **Size** bucket of the **Fields** pane, the bubble
||--| | Min size | Minimum bubble size when scaling the data.| | Max size | Maximum bubble size when scaling the data.|
-| Size scaling method | Scaling algorithm used to determine relative bubble size.<br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;ΓÇó Linear - Range of input data linearly mapped to the min and max size. (default)<br/>&nbsp;&nbsp;&nbsp;&nbsp;ΓÇó Log - Range of input data logarithmically mapped to the min and max size.<br/>&nbsp;&nbsp;&nbsp;&nbsp;ΓÇó Cubic-Bezier - Specify X1, Y1, X2, Y2 values of a Cubic-Bezier curve to create a custom scaling method. |
+| Size scaling method | Scaling algorithm used to determine relative bubble size.<br/><br/>&nbsp;• Linear: Range of input data linearly mapped to the min and max size. (default)<br/>&nbsp;• Log: Range of input data logarithmically mapped to the min and max size.<br/>&nbsp;• Cubic-Bezier: Specify X1, Y1, X2, Y2 values of a Cubic-Bezier curve to create a custom scaling method. |
When the **Size scaling method** is set to **Log**, the following options will be made available.
When the **Size scaling method** is set to **Cubic-Bezier**, the following optio
> [!TIP] > [https://cubic-bezier.com/](https://cubic-bezier.com/) has a handy tool for creating the parameters for Cubic-Bezier curves.
+## Category labels
+
+When displaying a **bubble layer** map, the **Category labels** settings will become active in the **Format visual** pane.
++
+The **Category labels** settings enable you to customize font settings such as font type, size, and color, as well as the category labels' background color and transparency.
++ ## Next steps Change how your data is displayed on the map:
azure-maps Power Bi Visual Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-get-started.md
Title: Get started with Azure Maps Power BI visual
-description: In this article, you will learn how to use Azure Maps Power BI visual.
+description: In this article, you'll learn how to use Azure Maps Power BI visual.
Last updated 11/29/2021
This article shows how to use the Microsoft Azure Maps Power BI visual.
> [!NOTE] > This visual can be created and viewed in both Power BI Desktop and the Power BI service. The steps and illustrations in this article are from Power BI Desktop.
-The Azure Maps Power BI visual provides a rich set of data visualizations for spatial data on top of a map. It is estimated that over 80% of business data has a location context. The Azure Maps Power BI visual can be used to gain insights into how this location context relates to and influences your business data.
+The Azure Maps Power BI visual provides a rich set of data visualizations for spatial data on top of a map. It's estimated that over 80% of business data has a location context. The Azure Maps Power BI visual can be used to gain insights into how this location context relates to and influences your business data.
-![Power BI desktop with the Azure Maps Power BI visual displaying business data](media/power-bi-visual/azure-maps-visual-hero.png)
## What is sent to Azure?
To learn more about privacy and terms of use related to the Azure Maps Power BI
There are a few considerations and requirements for the Azure Maps Power BI visual: -- The Azure Maps Power BI visual must be enabled in Power BI Desktop. To enable Azure Maps Power BI visual, select **File** &gt; **Options and Settings** &gt; **Options** &gt; **Preview features**, then select the **Azure Maps Visual** checkbox. If the Azure Maps visual is not available after enabling this setting, it's likely that a tenant admin switch in the Admin Portal needs to be enabled.
+- The Azure Maps Power BI visual must be enabled in Power BI Desktop. To enable Azure Maps Power BI visual, select **File** &gt; **Options and Settings** &gt; **Options** &gt; **Preview features**, then select the **Azure Maps Visual** checkbox. If the Azure Maps visual isn't available after enabling this setting, it's likely that a tenant admin switch in the Admin Portal needs to be enabled.
- The data set must have fields that contain **latitude** and **longitude** information. ## Use the Azure Maps Power BI visual Once the Azure Maps Power BI visual is enabled, select the **Azure Maps** icon from the **Visualizations** pane. Power BI creates an empty Azure Maps visual design canvas. While in preview, another disclaimer is displayed. Take the following steps to load the Azure Maps visual: 1. In the **Fields** pane, drag data fields that contain latitude and longitude coordinate information into the **Latitude** and/or **Longitude** buckets. This is the minimal data needed to load the Azure Maps visual.
- :::image type="content" source="media/power-bi-visual/bubble-layer.png" alt-text="Azure Maps visual displaying points as bubbles on the map after latitude and longitude fields provided.":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer.png" alt-text="A screenshot of the Azure Maps visual displaying points as bubbles on the map after latitude and longitude fields are provided." lightbox="media/power-bi-visual/bubble-layer.png":::
2. To color the data based on categorization, drag a categorical field into the **Legend** bucket of the **Fields** pane. In this example, we're using the **AdminDistrict** column (also known as state or province).
- :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color.png" alt-text="Azure Maps visual displaying points as colored bubbles on the map after legend field provided.":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color.png" alt-text="A screenshot of the Azure Maps visual displaying points as colored bubbles on the map after legend field is provided." lightbox="media/power-bi-visual/bubble-layer-with-legend-color.png":::
> [!NOTE] > The built-in legend control for Power BI does not currently appear in this preview. 3. To scale the data relatively, drag a measure into the **Size** bucket of the **Fields** pane. In this example, we're using **Sales** column.
- :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png" alt-text="Azure Maps visual displaying points as colored and scaled bubbles on the map after size field provided.":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png" alt-text="A screenshot of the Azure Maps visual displaying points as colored and scaled bubbles on the map demonstrating the size field." lightbox="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png":::
4. Use the options in the **Format** pane to customize how data is rendered. The following image is the same map as above, but with the bubble layers fill transparency option set to 50% and the high-contrast outline option enabled.
- :::image type="content" source="media/power-bi-visual/bubble-layer-styled.png" alt-text="Azure Maps visual displaying points as bubbles on the map with a custom style.":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer-styled.png" alt-text="A screenshot of the Azure Maps visual displaying points as bubbles on the map with a custom style." lightbox="media/power-bi-visual/bubble-layer-styled.png":::
+
+5. You can also show or hide labels in the **Format** pane. The following two images show maps with the **Show labels** setting turned on and off:
+
+ :::image type="content" source="media/power-bi-visual/show-labels-on.png" alt-text="A screenshot of the Azure Maps visual displaying a map with the show labels setting turned on in the style section of the format pane in Power BI." lightbox="media/power-bi-visual/show-labels-on.png":::
+
+ :::image type="content" source="media/power-bi-visual/show-labels-off.png" alt-text="A screenshot of the Azure Maps visual displaying a map with the show labels setting turned off in the style section of the format pane in Power BI." lightbox="media/power-bi-visual/show-labels-off.png":::
## Fields pane buckets
The following data buckets are available in the **Fields** pane of the Azure Map
## Map settings
-The **Map settings** section of the Format pane provide options for customizing how the map is displayed and reacts to updates.
+The **Map settings** section of the **Format** pane provides options for customizing how the map is displayed and reacts to updates.
+
+The **Map settings** section is divided into three subsections: [style](#style), [view](#view), and [controls](#controls).
+
+### Style
-| Setting | Description |
-||--|
-| Auto zoom | Automatically zooms the map into the data loaded through the **Fields** pane of the visual. As the data changes, the map will update its position accordingly. When the slider is in the **Off** position, more map view settings are displayed for the default map view. |
-| World wrap | Allows the user to pan the map horizontally infinitely. |
-| Style picker | Adds a button to the map that allows the report readers to change the style of the map. |
-| Navigation controls | Adds buttons to the map as another method to allow the report readers to zoom, rotate, and change the pitch of the map. See this document on [Navigating the map](map-accessibility.md#navigating-the-map) for details on all the different ways users can navigate the map. |
-| Map style | The style of the map. See the [supported map styles](supported-map-styles.md) document for more information. |
-| Selection control | Adds a button that allows the user to choose between different modes to select data on the map; circle, rectangle, polygon (lasso), or travel time or distance. When drawing a polygon, to complete the drawing; click on the first point, or double-click the map on the last point, or press the `c` key. |
+The following settings are available in the **Style** section:
-### Map view settings
+| Setting | Description |
+|-|--|
+| Style | The style of the map. The dropdown list contains [greyscale light][gs-light], [greyscale dark][gs-dark], [night][night], [road shaded relief][RSR], [satellite][satellite], and [satellite road labels][satellite RL]. |
+| Show labels | A toggle switch that enables you to either show or hide map labels. For more information, see list item number five in the previous section. |
-If the **Auto zoom** slider is in the **Off** position, the following settings are displayed and allow the user to specify the default map view information.
+### View
+
+The following settings in the **View** section enable the user to specify the default map view when the **Auto zoom** setting is set to **Off**.
| Setting | Description | |||
+| Auto zoom | Automatically zooms the map into the data loaded through the **Fields** pane of the visual. As the data changes, the map will update its position accordingly. When **Auto zoom** is set to **Off**, the remaining settings in this section become active, enabling the user to define the default map view. |
| Zoom | The default zoom level of the map. Can be a number between 0 and 22. |
-| Center latitude | The default latitude at the center of the map. |
-| Center longitude | The default longitude at the center of the map. |
+| Center latitude | The default latitude of the center of the map. |
+| Center longitude | The default longitude of the center of the map. |
| Heading | The default orientation of the map in degrees, where 0 is north, 90 is east, 180 is south, and 270 is west. Can be any number between 0 and 360. | | Pitch | The default tilt of the map in degrees between 0 and 60, where 0 is looking straight down at the map. |
+### Controls
+
+The following settings are available in the **Controls** section:
+
+| Setting | Description |
+|--|--|
+| World wrap | Allows the user to pan the map horizontally infinitely. |
+| Style picker | Adds a button to the map that allows the report readers to change the style of the map. |
+| Navigation | Adds buttons to the map as another method to allow the report readers to zoom, rotate, and change the pitch of the map. See this document on [Navigating the map](map-accessibility.md#navigating-the-map) for details on all the different ways users can navigate the map. |
+| Selection | Adds a button that allows the user to choose between different modes to select data on the map: circle, rectangle, polygon (lasso), or travel time or distance. To complete drawing a polygon, select the first point, double-click the last point on the map, or press the `c` key. |
+| Geocoding culture | The default, **Auto**, refers to the Western address system. The only other option, **JA**, refers to the Japanese address system. In the Western address system, you begin with the address details and then proceed to larger categories such as city, state, and postal code. In the Japanese address system, the larger categories are listed first and the address details come last. |
+ ## Considerations and Limitations The Azure Maps Power BI visual is available in the following services and applications:
The Azure Maps Power BI visual is available in the following services and applic
**Where is Azure Maps available?**
-At this time, Azure Maps is currently available in all countries and regions except the following:
+Azure Maps is currently available in all countries and regions except:
- China - South Korea
Customize the visual:
> [!div class="nextstepaction"] > [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)+
+[gs-light]: supported-map-styles.md#grayscale_light
+[gs-dark]: supported-map-styles.md#grayscale_dark
+[night]: supported-map-styles.md#night
+[RSR]: supported-map-styles.md#road_shaded_relief
+[satellite]: supported-map-styles.md#satellite
+[satellite RL]: supported-map-styles.md#satellite_road_labels
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Azure Maps Python SDK supports Python version 3.7 or later. Check the [Azure S
Azure Maps JavaScript/TypeScript SDK supports LTS versions of [Node.js][Node.js] including versions in Active status and Maintenance status.
-| Service Name | npm package | Samples |
+| Service Name | npm packages | Samples |
||-|--| | [Search][js search readme] | [@azure/maps-search][js search package] | [search samples][js search sample] | | [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
For security best practices, see [Authentication and authorization best practice
The Azure Maps SDKs go through regular security testing along with any external dependency libraries that may be used by the SDKs. Any known security issue is fixed in a timely manner and released to production. If your application points to the latest major version of the hosted version of the Azure Maps Web SDK, it will automatically receive all minor version updates, including security-related fixes.
-If self-hosting the Azure Maps Web SDK via the NPM module, be sure to use the caret (^) symbol to in combination with the Azure Maps NPM package version number in your `package.json` file so that it will always point to the latest minor version.
+If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the caret (^) symbol in combination with the Azure Maps npm package version number in your `package.json` file so that it always points to the latest minor version.
```json "dependencies": {
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
The OMS Agent has limited customization and hardening support for Linux.
The following are currently supported: - SELinux (Marketplace images for CentOS and RHEL with their default settings)
+- FIPS (Marketplace images for CentOS and RHEL 6/7 with their default settings)
The following aren't supported: - CIS
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
To install DCR Config Generator, you need:
1. PowerShell version 5.1 or higher. We recommend using PowerShell version 7.1.3 or higher. 1. Read access for the specified workspace resources. 1. The `Az Powershell` module to pull workspace agent configuration information.
-1. The Azure credentials for running `Connect-AzAccount` and `Select-AzSubscription`, which set the context for the script to run.
+1. The Azure credentials for running `Connect-AzAccount` and `Select-AzContext`, which set the context for the script to run.
To install DCR Config Generator:
azure-monitor Diagnostics Extension Windows Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-windows-install.md
The protected settings are defined in the [PrivateConfig element](diagnostics-ex
{ "storageAccountName": "mystorageaccount", "storageAccountKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
- "storageAccountEndPoint": "https://mystorageaccount.blob.core.windows.net"
+ "storageAccountEndPoint": "https://core.windows.net"
} ```
The following minimal example of a configuration file enables collection of diag
"PrivateConfig": { "storageAccountName": "mystorageaccount", "storageAccountKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
- "storageAccountEndPoint": "https://mystorageaccount.blob.core.windows.net"
+ "storageAccountEndPoint": "https://core.windows.net"
} } ```
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
And then defining these elements for the resulting alert actions using:
1. In the **Conditions** pane, select the **Chart period**. 1. The **Preview** chart shows you the results of your selection.
- 1. In the **Alert logic** section:
+ 1. Select values for each of these fields in the **Alert logic** section:
|Field |Description | |||
- |Event level| Select the level of the events that this alert rule monitors. Values are: **Critical**, **Error**, **Warning**, **Informational**, **Verbose** and **All**.|
- |Status|Select the status levels for which the alert is evaluated.|
+ |Event level| Select the level of the events for this alert rule. Values are: **Critical**, **Error**, **Warning**, **Informational**, **Verbose** and **All**.|
+ |Status|Select the status levels for the alert.|
|Event initiated by|Select the user or service principal that initiated the event.|
+ ### [Resource Health alert](#tab/resource-health)
+
+ 1. In the **Conditions** pane, select values for each of these fields:
+
+ |Field |Description |
+ |||
+ |Event status| Select the statuses of Resource Health events. Values are: **Active**, **In Progress**, **Resolved**, and **Updated**.|
+ |Current resource status|Select the current resource status. Values are: **Available**, **Degraded**, and **Unavailable**.|
+ |Previous resource status|Select the previous resource status. Values are: **Available**, **Degraded**, **Unavailable**, and **Unknown**.|
+ |Reason type|Select the cause(s) of the Resource Health events. Values are: **Platform Initiated**, **Unknown**, and **User Initiated**.|
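For reference, these portal selections correspond to condition clauses on the underlying activity log alert rule resource. The following is a minimal, illustrative sketch of such a condition in ARM template JSON; the field paths (`properties.currentHealthStatus`, `properties.cause`) are assumptions based on the Resource Health activity log event schema, so verify them before use:

```json
{
  "condition": {
    "allOf": [
      { "field": "category", "equals": "ResourceHealth" },
      {
        "anyOf": [
          { "field": "properties.currentHealthStatus", "equals": "Unavailable" },
          { "field": "properties.currentHealthStatus", "equals": "Degraded" }
        ]
      },
      { "field": "properties.cause", "equals": "PlatformInitiated" }
    ]
  }
}
```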
+ ### [Service Health alert](#tab/service-health)
+
+ 1. In the **Conditions** pane, select values for each of these fields:
+
+ |Field |Description |
+ |||
+ |Services| Select the Azure services.|
+ |Regions|Select the Azure regions.|
+ |Event types|Select the type(s) of Service Health events. Values are: **Service issue**, **Planned maintenance**, **Health advisories**, and **Security advisories**.|
+ From this point on, you can select the **Review + create** button at any time.
And then defining these elements for the resulting alert actions using:
1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to the alert payload to include additional information. In the **Custom properties** section, add the **Name** and **Value** for each custom property you want included in the payload. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot of the actions tab when creating a new activity log alert rule.":::
+ ### [Resource Health alert](#tab/resource-health)
+
+ 1. Enter values for the **Alert rule name** and the **Alert rule description**.
+ 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
+ ### [Service Health alert](#tab/service-health)
+
+ 1. Enter values for the **Alert rule name** and the **Alert rule description**.
+ 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
### [Activity log alert](#tab/activity-log)
- To create an activity log alert rule, use the **az monitor activity-log alert create** command. You can see detailed documentation on the metric alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
- To create a new activity log alert rule, use the following commands: - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource. - [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule. - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
-
+ You can find detailed documentation on the activity log alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+ ### [Resource Health alert](#tab/resource-health)
+
+ To create a new activity log alert rule, use the following commands with the `Resource Health` category:
+ - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
+ - [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
+ - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
+
+ You can find detailed documentation on the alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+
+ ### [Service Health alert](#tab/service-health)
+
+ To create a new activity log alert rule, use the following commands with the `Service Health` category:
+ - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
+ - [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
+ - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
+
+ You can find detailed documentation on the alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+
+
+ ## Create a new alert rule using PowerShell - To create a metric alert rule using PowerShell, use this cmdlet: [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2)
azure-monitor Alerts Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-logic-apps.md
This article shows you how to create a Logic App and integrate it with an Azure Monitor Alert.
-[Azure Logic Apps](https://docs.microsoft.com/azure/logic-apps/logic-apps-overview) allows you to build and customize workflows for integration. Use Logic Apps to customize your alert notifications.
+[Azure Logic Apps](../../logic-apps/logic-apps-overview.md) allows you to build and customize workflows for integration. Use Logic Apps to customize your alert notifications.
+ Customize the alert email, using your own email subject and body format. + Customize the alert metadata by looking up tags for affected resources or fetching a log query search result.
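When designing the workflow, it helps to know the shape of the payload the Logic App receives when an alert fires. Here's a truncated sketch, assuming the common alert schema is enabled on the action; the field values shown are hypothetical:

```json
{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {
    "essentials": {
      "alertRule": "my-alert-rule",
      "severity": "Sev3",
      "monitorCondition": "Fired",
      "firedDateTime": "2022-11-12T01:04:04Z",
      "description": "Alert rule description",
      "alertTargetIDs": [
        "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.compute/virtualmachines/<vm-name>"
      ]
    },
    "alertContext": {}
  }
}
```

A custom email subject and body can then be composed from fields such as `data.essentials.alertRule` and `data.essentials.severity`.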
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
You can see all alert instances in all your Azure resources generated in the las
## Types of alerts
-There are four types of alerts. This table provides a brief description of each alert type.
+This table provides a brief description of each alert type.
See [this article](alerts-types.md) for detailed information about each alert type and how to choose which alert type best suits your needs. |Alert type|Description| |:|:| |[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics or Application Insights metrics. Metric alerts have several additional features, such as the ability to apply multiple conditions and dynamic thresholds.| |[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.|
-|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches the defined conditions.|
+|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches defined conditions. **Resource Health** alerts and **Service Health** alerts are activity log alerts that report on your service and resource health.|
|[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.|
+|[Prometheus alerts (preview)](alerts-types.md#prometheus-alerts-preview)|Prometheus alerts are used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language.|
## Out-of-the-box alert rules (preview)
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
This article describes the kinds of Azure Monitor alerts you can create, and helps you understand when to use each type of alert.
-There are five types of alerts:
+There are four types of alerts:
- [Metric alerts](#metric-alerts)-
+- [Log alerts](#log-alerts)
- [Activity log alerts](#activity-log-alerts)
+ - [Service Health alerts](#service-health-alerts)
+ - [Resource Health alerts](#resource-health-alerts)
- [Smart detection alerts](#smart-detection-alerts) - [Prometheus alerts](#prometheus-alerts-preview) (preview)+ ## Choosing the right alert type This table can help you decide when to use what type of alert. For more detailed information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
This table can help you decide when to use what type of alert. For more detailed
|||| |Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. We recommend using metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time-series that are monitored. | |Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for log alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
-|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
+|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource. Service Health alerts and Resource Health alerts can let you know when there is an issue with one of your services or resources.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
|Prometheus alerts (preview)| Prometheus alerts are primarily used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language. | There is no charge for Prometheus alerts during the preview period. | ## Metric alerts
Activity log alert rules are Azure resources, so they can be created by using an
An activity log alert only monitors events in the subscription in which the alert is created.
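Because activity log alert rules are Azure resources, one way to express them is as an ARM template resource. The following is a minimal sketch following the `Microsoft.Insights/activityLogAlerts` schema; the alert name, operation name, and scope values are hypothetical placeholders:

```json
{
  "type": "Microsoft.Insights/activityLogAlerts",
  "apiVersion": "2020-10-01",
  "name": "vm-restart-alert",
  "location": "Global",
  "properties": {
    "scopes": [ "/subscriptions/<subscription-id>" ],
    "condition": {
      "allOf": [
        { "field": "category", "equals": "Administrative" },
        { "field": "operationName", "equals": "Microsoft.Compute/virtualMachines/restart/action" }
      ]
    },
    "actions": {
      "actionGroups": [
        { "actionGroupId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group-name>" }
      ]
    }
  }
}
```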
+### Service Health alerts
+
+Service Health alerts are a type of activity log alert. [Service Health](../../service-health/overview.md) lets you know about outages, planned maintenance activities, and other health advisories because the authenticated Service Health experience knows which services and resources you currently use.
+
+The best way to use Service Health is to set up Service Health alerts to notify you using your preferred communication channels when service issues, planned maintenance, or other changes may affect the Azure services and regions you use.
+
+### Resource Health alerts
+
+Resource Health alerts are a type of activity log alert. [Resource Health overview](../../service-health/resource-health-overview.md) helps you diagnose and get support for service problems that affect your Azure resources. It reports on the current and past health of your resources. Resource Health relies on signals from different Azure services to assess whether a resource is healthy. If a resource is unhealthy, Resource Health analyzes additional information to determine the source of the problem. It also reports on actions that Microsoft is taking to fix the problem and identifies things that you can do to address it.
+ ## Smart Detection alerts After setting up Application Insights for your project, when your app generates a certain minimum amount of data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others, and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and in particular the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 10/31/2022 Last updated : 11/15/2022 ms.devlang: csharp, java, javascript, vb
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
Title: Filtering and preprocessing in the Application Insights SDK | Microsoft Docs description: Write telemetry processors and telemetry initializers for the SDK to filter or add properties to the data before the telemetry is sent to the Application Insights portal. Previously updated : 11/23/2016 Last updated : 11/14/2022 ms.devlang: csharp, javascript, python
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 09/20/2022 Last updated : 11/14/2022 # Application Insights overview
Application Insights provides other features including, but not limited to:
- [Live Metrics](live-stream.md) – observe activity from your deployed application in real time with no effect on the host environment - [Availability](availability-overview.md) – also known as "Synthetic Transaction Monitoring", probe your application's external endpoint(s) to test the overall availability and responsiveness over time-- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](https://learn.microsoft.com/training/paths/github-administration-products/) or [Azure DevOps](https://learn.microsoft.com/azure/devops/?view=azure-devops) work items in context of Application Insights data
+- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/?view=azure-devops) work items in context of Application Insights data
- [Usage](usage-overview.md) – understand which features are popular with users and how users interact and use your application - [Smart Detection](proactive-diagnostics.md) – automatic failure and anomaly detection through proactive telemetry analysis
-In addition, Application Insights supports [Distributed Tracing](distributed-tracing.md), also known as "distributed component correlation". This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a given execution or transaction. The ability to trace activity end-to-end is increasingly important for applications that have been built as distributed components or [microservices](https://learn.microsoft.com/azure/architecture/guide/architecture-styles/microservices).
+In addition, Application Insights supports [Distributed Tracing](distributed-tracing.md), also known as "distributed component correlation". This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a given execution or transaction. The ability to trace activity end-to-end is increasingly important for applications that have been built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices).
The [Application Map](app-map.md) allows a high level top-down view of the application architecture and at-a-glance visual references to component health and responsiveness.
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Title: Application Map in Azure Application Insights | Microsoft Docs description: Monitor complex application topologies with Application Map and Intelligent view. Previously updated : 05/16/2022 Last updated : 11/15/2022 ms.devlang: csharp, java, javascript, python
# Application Map: Triage distributed applications
+Application maps represent the logical structure of a distributed application. Individual components of the application are determined by their "roleName" or "name" property in recorded telemetry. These components are represented as circles on the map and are referred to as "nodes." HTTP calls between nodes are represented as arrows connecting these nodes, referred to as "connectors" or "edges." The node that makes the call is the "source" of the call, and the receiving node is the "target" of the call.
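As an illustration of where that property comes from, a node's role name typically surfaces in the telemetry envelope as cloud role tags. A minimal fragment with hypothetical values; the `ai.cloud.role` and `ai.cloud.roleInstance` tag names follow the Application Insights envelope convention:

```json
{
  "tags": {
    "ai.cloud.role": "frontend",
    "ai.cloud.roleInstance": "frontend_vm_001"
  }
}
```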
+ Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies and has health KPI and alerts status. You can select any component to get more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also select Azure diagnostics, such as SQL Database Advisor recommendations. Application Map also features [Intelligent view](#application-map-intelligent-view-public-preview) to assist with fast service health investigations.
For the [official definitions](https://github.com/Microsoft/ApplicationInsights-
``` Alternatively, *cloud role instance* can be helpful for scenarios where a cloud role name tells you the problem is somewhere in your web front end. But you might be running multiple load-balanced servers across your web front end. Being able to drill in a layer deeper via Kusto queries and knowing if the issue is affecting all web front-end servers or instances or just one can be important.-
A scenario when you might want to override the value for cloud role instance could be if your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a specific issue. For more information about how to override the cloud role name property with telemetry initializers, see [Add properties: ITelemetryInitializer](api-filtering-sampling.md#addmodify-properties-itelemetryinitializer). +
+## Application Map Filters
+
+Application Map filters allow the user to reduce the number of nodes and edges shown by applying one or more filters. These filters can be used to reduce the scope of the map, showing a smaller and more focused map.
+
+### Creating Application Map filters
+
+To create a filter, select the "Add filter" button in the application map's toolbar.
++
+This opens a dialog with three sections: 1) Select filter type, 2) Choose filter parameters, and 3) Review.
+++
+The first section has two options:
+
+1. Node filter
+1. Connector (edge) filter
+
+The contents in the other sections change based on the option selected.
+
+#### Node filters
+
+Node filters allow the user to keep only selected nodes on the map and hide the rest. A node filter checks whether each node contains a property (its name, for example) with a value that matches a search value through a given operator. If a node is removed by a node filter, all of its connectors (edges) are also removed.
+
+There are three parameters available for nodes:
+
+- "**Nodes included**" allows the user to select only nodes with
+ matching properties or to also include source nodes, target nodes,
+ or both in the resulting map.
+
+ - "Nodes and sources, targets"--This means nodes that match the search parameters will be included in the resulting map, and nodes that are sources or targets for the matching node will also be included, even if they don't have property values that
+ match the search. Source and target nodes are collectively referred to as "Connected" nodes.
+
+ - "Nodes and sources"--Same as above, but target nodes aren't automatically included in the results.
+
+ - "Nodes and targets"--Same as above, but source nodes aren't automatically included.
+
+ - "Nodes only"--All nodes in the resulting map must have a property value that matches.
+
+- "**Operator**" is the type of check that will be performed on each
+ node's property values:
+
+ - contains
+
+ - !contains (not contains)
+
+ - == (equals)
+
+ - != (not equals)
+
+- "**Search value**" is the text that has to be contained, not
+ contained, equal, or not equal to a node property value. Some of the
+ values found in nodes that are on the map are shown in a drop-down.
+ Any arbitrary value can be entered by clicking "Create option ..."
+ in the drop-down.
+
+For example, in the screenshot below, the filter is being configured to
+select **Node(s)** that **contain(s)** the text **"-west"**. **Source**
+and **target** nodes will also be included in the resulting map. In the
+same screenshot, the user is able to select one of the values found in
+the map or to create an option that isn't an exact match to one found
+in the map.
++
+#### Connector (edge) filters
+
+Connector filters examine the properties of a connector to match a value. Connectors that don't match the filter are removed from the map. The same happens to nodes with no connectors left.
+
+Connector filters require three parameters:
+
+- "**Filter connectors by**" allows the user to choose which property
+ of a connector to use:
+
+ - "**Error connector (highlighted red)**" selects connectors based
+ on their color (red or not). A value can't be entered for this
+ type of filter, only an operator that is "==" or "!=" meaning
+ "connector with errors" and "connector without errors."
+
+ - "**Error rate**" uses the average error rate for the
+ connectorthe number of failed calls divided by the number of
+ all callsexpressed as a percentage. For example, a value of
+ "1" would refer to 1% failed calls.
+
+ - "**Average call duration (****ms)**" uses just that: the average
+ duration of all calls represented by the connector, in
+ milliseconds. For example, a value of "1000" would refer to
+ calls that averaged 1 second.
+
+ - "**Calls count**" uses the total number of calls represented by
+ the connector.
+
+- **"Operator"** is the comparison that will be applied between the
+ connector property and the value entered below. The options change:
+ "Error connector" has equals/not equals options; all others have
+ greater/less than.
+
+- **"Value"** is the comparison value for the filter. There's only
+ one option for the "Error connector" filter: "Errors." Other filter
+ types require a numeric value and offer a drop-down with some
+ pre-populated entries relevant to the map.
+
+ - Some of these entries have a designation "(Pxx)", which are
+ percentile levels. For example, the "Average call duration" filter
+ may have the value "200 (P90)", which indicates 90% of all
+ connectors (regardless of the number of calls they represent)
+ have less than 200 ms call duration.
+
+ - When a specific number isn't shown in the drop-down, it can be
+ typed, and created by clicking on "Create option." Typing "P"
+ shows all the percentile values in the drop-down.
+
+### Review section
+
+The Review section contains textual and visual descriptions of what the filter will do, which should be helpful when learning how filters work:
+++
+### Using filters in Application Map
+
+#### Filter interactivity
+
+After configuring a filter in the "Add filter" pop-up, select "Apply" to create the filter. Several filters can be applied, and they work sequentially, from left to right. Each filter can remove further nodes and connectors, but can't add them back to the map.
+
+The filters show up as rounded buttons above the application map:
++
+Clicking the :::image type="content" source="media/app-map/image-8.png" alt-text="A screenshot of a rounded X button."::: on a filter will remove that filter. Clicking elsewhere on the button allows the user to edit the filter's values. As the user changes values in the filter, the new values are applied so that the map is a preview of the change. Clicking "Cancel" restores the filter as it was before editing.
++
+### Reusing filters
+
+Filters can be reused in two ways:
+
+- The "Copy link" button on the toolbar above the map encodes the
+ filter information in the copied URL. This link can be saved in the
+ browser's bookmarks or shared with others. "Copy link" preserves the
+ duration value, but not the absolute time, so the map shown at a
+ later time may be different from the one observed when the link was
+ created.
+
+- The dashboard pin :::image type="content" source="media/app-map/image-10.png" alt-text="A screenshot displaying the dashboard pin button."::: is located next to the title bar of the Application Map blade. This button pins the map to a dashboard, along with the filters applied to it. This action can be useful for filters that are frequently interesting. As an example, the user can pin a map with "Error connector" filter applied to it, and the dashboard view will only show nodes that have errors in their HTTP calls.
+
+#### Filter usage scenarios
+
+There are many filter combinations. Here are some suggestions that apply to most maps and may be useful to pin on a dashboard:
+
+- Show only errors that appear significant by using the "Error connector" filter along with "Intelligent view":\
+ :::image type="content" source="media/app-map/image-11.png" alt-text="A screenshot displaying the Last 24 hours and Highlighted Errors filters.":::
+ :::image type="content" source="media/app-map/image-12.png" alt-text="A screenshot displaying the Intelligent Overview toggle.":::
+
+- Hide low-traffic connectors with no errors to quickly focus on issues that have higher impact:
+ :::image type="content" source="media/app-map/image-13.png" alt-text="A screenshot displaying the Last 24 hours, calls greater than 876, and highlihgted errors filters.":::
+
+- Show high-traffic connectors with high average duration to focus on potential performance issues:
+ :::image type="content" source="media/app-map/image-14.png" alt-text="A screenshot displaying the Last 24 hours, calls greater than 3057, and average time greater than 467 filters.":::
+
+- Show a specific portion of a distributed application (requires suitable roleName naming convention):
+ :::image type="content" source="media/app-map/image-15.png" alt-text="A screenshot displaying the Last 24 hours and Connected Contains West filters.":::
+
+- Hide a dependency type that is too noisy:
+ :::image type="content" source="media/app-map/image-16.png" alt-text="A screenshot displaying the Last 24 hours and Nodes Contains Storage Accounts filters.":::
+
+- Show only connectors that have higher error rates than a specific value:
+ :::image type="content" source="media/app-map/image-17.png" alt-text="A screenshot displaying the Last 24 hours and Errors greater than 0.01 filters.":::
+++ ## Application Map Intelligent view (public preview) The following sections discuss Intelligent view.
Intelligent view has some limitations:
To provide feedback, see [Portal feedback](#portal-feedback). ++ ## Troubleshooting If you're having trouble getting Application Map to work as expected, try these steps.
Common troubleshooting questions about Intelligent view.
A dependency might appear to be failing but the model doesn't indicate it's a potential incident:
-* If this dependency has been failing for a while, the model might believe it's a regular state and not highlight the edge for you. It focuses on problem-solving in RT.
+* If this dependency has been failing for a while, the model might believe it's a regular state, and not highlight the edge for you. It focuses on problem-solving in RT.
* If this dependency has a minimal effect on the overall performance of the app, that can also make the model ignore it. * If none of the above is correct, use the **Feedback** option and describe your experience. You can help us improve future model versions.
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Title: Dependency tracking in Application Insights | Microsoft Docs description: Monitor dependency calls from your on-premises or Azure web application with Application Insights. Previously updated : 08/26/2020 Last updated : 11/15/2022 ms.devlang: csharp
For webpages, the Application Insights JavaScript SDK automatically collects AJA
## Advanced SQL tracking to get full SQL query > [!NOTE]
-> Azure Functions requires separate settings to enable SQL text collection. Within [host.json](../../azure-functions/functions-host-json.md#applicationinsights), set `"EnableDependencyTracking": true,` and `"DependencyTrackingOptions": { "enableSqlCommandTextInstrumentation": true }` in `applicationInsights`.
+> Azure Functions requires separate settings to enable SQL text collection. For more information, see [Enable SQL query collection](../../azure-functions/configure-monitoring.md#enable-sql-query-collection).
For SQL calls, the name of the server and database is always collected and stored as the name of the collected `DependencyTelemetry`. Another field, called data, can contain the full SQL query text.
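As a reference for the Azure Functions note above, the SQL text collection settings live under `applicationInsights` in host.json. A minimal sketch, assuming the standard host.json nesting; confirm the exact layout against the linked Functions article:

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "enableDependencyTracking": true,
      "dependencyTrackingOptions": {
        "enableSqlCommandTextInstrumentation": true
      }
    }
  }
}
```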
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
description: Capture exceptions from ASP.NET apps along with request telemetry.
ms.devlang: csharp Previously updated : 08/19/2022 Last updated : 11/15/2022
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
description: Search logs generated by Trace, NLog, or Log4Net.
ms.devlang: csharp Previously updated : 05/08/2019 Last updated : 11/15/2022
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
Title: Configure monitoring for ASP.NET with Azure Application Insights | Microsoft Docs description: Configure performance, availability, and user behavior analytics tools for your ASP.NET website hosted on-premises or in Azure. Previously updated : 10/12/2021 Last updated : 11/15/2022 ms.devlang: csharp
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
description: Application Insights automatically collect and visualize dependenci
ms.devlang: csharp, java, javascript Previously updated : 08/22/2022 Last updated : 11/15/2022
azure-monitor Availability Multistep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-multistep.md
Title: Monitor with multi-step web tests - Azure Application Insights
-description: Set up multi-step web tests to monitor your web applications with Azure Application Insights
+ Title: Monitor with multistep web tests - Application Insights
+description: Set up multistep web tests to monitor your web applications with Application Insights.
Last updated 07/21/2021
-# Multi-step web tests
+# Multistep web tests
-You can monitor a recorded sequence of URLs and interactions with a website via multi-step web tests. This article will walk you through the process of creating a multi-step web test with Visual Studio Enterprise.
+You can monitor a recorded sequence of URLs and interactions with a website via multistep web tests. This article walks you through the process of creating a multistep web test with Visual Studio Enterprise.
> [!IMPORTANT]
-> [Multi-step web tests have been deprecated](https://azure.microsoft.com/updates/retirement-notice-transition-to-custom-availability-tests-in-application-insights/). We recommend using [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](availability-azure-functions.md) instead of multi-step web tests. With TrackAvailability() and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
+> [Multistep web tests have been deprecated](https://azure.microsoft.com/updates/retirement-notice-transition-to-custom-availability-tests-in-application-insights/). We recommend using [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](availability-azure-functions.md) instead of multistep web tests. With `TrackAvailability()` and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
-> [!NOTE]
-> Multi-step web tests **are not supported** in the [Azure Government](../../azure-government/index.yml) cloud.
+Multistep web tests are categorized as classic tests and can be found under **Add Classic Test** on the **Availability** pane.
+> [!NOTE]
+> Multistep web tests *aren't supported* in the [Azure Government](../../azure-government/index.yml) cloud.
-Multi-step web tests are categorized as classic tests and can be found under **Add Classic Test** in the Availability pane.
+## Multistep web test alternative
-## Multi-step webtest alternative
+Multistep web tests depend on Visual Studio web test files. It was [announced](https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/) that Visual Studio 2019 will be the last version with web test functionality. Although no new features will be added, web test functionality in Visual Studio 2019 is still supported and will continue to be supported during the support lifecycle of the product.
-Multi-step web tests depend on Visual Studio webtest files. It was [announced](https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/) that Visual Studio 2019 will be the last version with webtest functionality. It's important to understand that while no new features will be added, webtest functionality in Visual Studio 2019 is still currently supported and will continue to be supported during the support lifecycle of the product.
+We recommend using [TrackAvailability](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](./availability-azure-functions.md) instead of multistep web tests. This option is the long-term supported solution for multi-request or authentication test scenarios. With `TrackAvailability()` and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
-We recommend using the [TrackAvailability](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](./availability-azure-functions.md) instead of Multi-step web tests. This is the long term supported solution for multi request or authentication test scenarios. With TrackAvailability() and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
+## Prerequisites
-## Pre-requisites
+You need:
* Visual Studio 2017 Enterprise or greater. * Visual Studio web performance and load testing tools.
-To locate the testing tools pre-requisite. Launch the **Visual Studio Installer** > **Individual components** > **Debugging and testing** > **Web performance and load testing tools**.
+To locate the testing tools prerequisite, select **Visual Studio Installer** > **Individual components** > **Debugging and testing** > **Web performance and load testing tools**.
-![Screenshot of the Visual Studio installer UI with Individual components selected with a checkbox next to the item for Web performance and load testing tools](./media/availability-multistep/web-performance-load-testing.png)
+![Screenshot that shows the Visual Studio installer UI with individual components selected with a checkbox next to the item for web performance and load testing tools.](./media/availability-multistep/web-performance-load-testing.png)
> [!NOTE]
-> Multi-step web tests have additional costs associated with them. To learn more consult the [official pricing guide](https://azure.microsoft.com/pricing/details/application-insights/).
+> Multistep web tests have extra costs associated with them. To learn more, see the [official pricing guide](https://azure.microsoft.com/pricing/details/application-insights/).
-## Record a multi-step web test
+## Record a multistep web test
> [!WARNING]
-> We no longer recommend using the multi-step recorder. The recorder was developed for static HTML pages with basic interactions, and does not provide a functional experience for modern web pages.
+> We no longer recommend using the multistep recorder. The recorder was developed for static HTML pages with basic interactions. It doesn't provide a functional experience for modern webpages.
-For guidance on creating Visual Studio web tests consult the [official Visual Studio 2019 documentation](/visualstudio/test/how-to-create-a-web-service-test).
+For guidance on how to create Visual Studio web tests, see the [official Visual Studio 2019 documentation](/visualstudio/test/how-to-create-a-web-service-test).
## Upload the web test
-1. In the Application Insights portal on the Availability pane select **Add Classic test**, then select **Multi-step** as the *SKU*.
-2. Upload your multi-step web test.
-3. Set the test locations, frequency, and alert parameters.
-4. Select **Create**.
+1. In the Application Insights portal on the **Availability** pane, select **Add Classic test**. Then select **Multi-step** as the **SKU**.
+1. Upload your multistep web test.
+1. Set the test locations, frequency, and alert parameters.
+1. Select **Create**.
-### Frequency & location
+### Frequency and location
-|Setting| Explanation
-|-|-|-|
-|**Test frequency**| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.|
-|**Test locations**| Are the places from where our servers send web requests to your URL. **Our minimum number of recommended test locations is five** in order to insure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
+|Setting| Description |
+|-|-|
+|Test frequency| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.|
+|Test locations| The places from where our servers send web requests to your URL. *Our minimum number of recommended test locations is five* to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
### Success criteria
-|Setting| Explanation
-|-|-|-|
-| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site have not been received within this period. If you selected **Parse dependent requests**, then all the images, style files, scripts, and other dependent resources must have been received within this period.|
-| **HTTP response** | The returned status code that is counted as a success. 200 is the code that indicates that a normal web page has been returned.|
-| **Content match** | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes you might have to update it. **Only English characters are supported with content match** |
+|Setting| Description|
+|-||
+| Test timeout |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, all the images, style files, scripts, and other dependent resources must have been received within this period.|
+| HTTP response | The returned status code that's counted as a success. The code 200 indicates that a normal webpage has been returned.|
+| Content match | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes, you might have to update it. *Only English characters are supported with content match.* |
### Alerts
-|Setting| Explanation
-|-|-|-|
-|**Near-realtime (Preview)** | We recommend using Near-realtime alerts. Configuring this type of alert is done after your availability test is created. |
-|**Alert location threshold**|We recommend a minimum of 3/5 locations. The optimal relationship between alert location threshold and the number of test locations is **alert location threshold** = **number of test locations - 2, with a minimum of five test locations.**|
+|Setting| Description|
+|-||
+|Near real time (preview) | We recommend using near real time alerts. Configuring this type of alert is done after your availability test is created. |
+|Alert location threshold|We recommend a minimum of 3/5 locations. The optimal relationship between alert location threshold and the number of test locations is **alert location threshold** = **number of test locations - 2**, with a minimum of five test locations. For example, with five test locations, set the alert location threshold to 3.|
## Configuration
-### Plugging time and random numbers into your test
+Follow these configuration steps.
+
+### Plug time and random numbers into your test
-Suppose you're testing a tool that gets time-dependent data such as stocks from an external feed. When you record your web test, you have to use specific times, but you set them as parameters of the test, StartTime and EndTime.
+Suppose you're testing a tool that gets time-dependent data, such as stock prices, from an external feed. When you record your web test, you have to use specific times, but you set them as parameters of the test, `StartTime` and `EndTime`.
-![My awesome stock app screenshot](./media/availability-multistep/app-insights-72webtest-parameters.png)
+![Screenshot that shows a stock app.](./media/availability-multistep/app-insights-72webtest-parameters.png)
-When you run the test, you'd like EndTime always to be the present time, and StartTime should be 15 minutes ago.
+When you run the test, you want `EndTime` always to be the present time. `StartTime` should be 15 minutes prior.
-The Web Test Date Time Plugin provides the way to handle parameterize times.
+The Web Test Date Time Plug-in provides a way to parameterize times.
-1. Add a web test plug-in for each variable parameter value you want. In the web test toolbar, choose **Add Web Test Plugin**.
-
- ![Add Web Test Plug-in](./media/availability-multistep/app-insights-72webtest-plugin-name.png)
-
- In this example, we use two instances of the Date Time Plug-in. One instance is for "15 minutes ago" and another for "now."
+1. Add a Web Test Plug-in for each variable parameter value you want. On the web test toolbar, select **Add Web Test Plug-in**.
-2. Open the properties of each plug-in. Give it a name and set it to use the current time. For one of them, set Add Minutes = -15.
+ ![Screenshot that shows the Add Web Test Plug-in.](./media/availability-multistep/app-insights-72webtest-plugin-name.png)
- ![Context Parameters](./media/availability-multistep/app-insights-72webtest-plugin-parameters.png)
+ In this example, we use two instances of the Date Time Plug-in. One instance is for "15 minutes ago" and another is for "now."
-3. In the web test parameters, use {{plug-in name}} to reference a plug-in name.
+1. Open the properties of each plug-in. Give it a name and set it to use the current time. For one of them, set **Add Minutes = -15**.
- ![StartTime](./media/availability-multistep/app-insights-72webtest-plugins.png)
+ ![Screenshot that shows context parameters.](./media/availability-multistep/app-insights-72webtest-plugin-parameters.png)
-Now, upload your test to the portal. It will use the dynamic values on every run of the test.
+1. In the web test parameters, use `{{plug-in name}}` to reference a plug-in name.
-### Dealing with sign-in
+ ![Screenshot that shows StartTime.](./media/availability-multistep/app-insights-72webtest-plugins.png)
+
+Now, upload your test to the portal. It will use dynamic values on every run of the test.
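
Conceptually, the two plug-in instances compute the same values as this small PowerShell sketch; the variable names are illustrative only and aren't part of the web test:

```powershell
# Illustrative only: the Date Time Plug-in performs this date math inside the web test.
$endTime   = Get-Date                  # "now"
$startTime = $endTime.AddMinutes(-15)  # "15 minutes ago" (Add Minutes = -15)

"StartTime = $startTime"
"EndTime   = $endTime"
```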
+
+### Consider sign-in
If your users sign in to your app, you have various options for simulating sign-in so that you can test pages behind the sign-in. The approach you use depends on the type of security provided by the app.
-In all cases, you should create an account in your application just for the purpose of testing. If possible, restrict the permissions of this test account so that there's no possibility of the web tests affecting real users.
+In all cases, create an account in your application only for testing. If possible, restrict the permissions of this test account so that there's no possibility of the web tests affecting real users.
-**Simple username and password**
+#### Simple username and password
Record a web test in the usual way. Delete cookies first.
-**SAML authentication**
+#### SAML authentication
|Property name| Description|
-|-|--|
-| Audience Uri | The audience URI for the SAML token. This is the URI for the Access Control Service (ACS) – including ACS namespace and host name. |
-| Certificate Password | The password for the client certificate which will grant access to the embedded private key. |
-| Client Certificate | The client certificate value with private key in Base64 encoded format. |
-| Name Identifier | The name identifier for the token |
-| Not After | The timespan for which the token will be valid. The default is 5 minutes. |
-| Not Before | The timespan for which a token created in the past will be valid (to address time skews). The default is (negative) 5 minutes. |
-| Target Context Parameter Name | The context parameter that will receive the generated assertion. |
-
+|-||
+| Audience URI | The audience URI for the SAML token. This URI is for the Access Control service, including the Access Control namespace and host name. |
+| Certificate password | The password for the client certificate, which will grant access to the embedded private key. |
+| Client certificate | The client certificate value with private key in Base64-encoded format. |
+| Name identifier | The name identifier for the token. |
+| Not after | The timespan for which the token will be valid. The default is 5 minutes. |
+| Not before | The timespan for which a token created in the past will be valid (to address time skews). The default is (negative) 5 minutes. |
+| Target context parameter name | The context parameter that will receive the generated assertion. |
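
If you need to produce the Base64-encoded client certificate value, a minimal PowerShell sketch like the following works; the *.pfx* path is a placeholder:

```powershell
# Read a client certificate that includes the private key, and emit it as Base64.
# "C:\certs\test-client.pfx" is a placeholder path.
$pfxBytes = [System.IO.File]::ReadAllBytes("C:\certs\test-client.pfx")
[System.Convert]::ToBase64String($pfxBytes)
```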
-**Client secret**
-If your app has a sign-in route that involves a client secret, use that route. Azure Active Directory (AAD) is an example of a service that provides a client secret sign-in. In AAD, the client secret is the App Key.
+#### Client secret
+If your app has a sign-in route that involves a client secret, use that route. Azure Active Directory (Azure AD) is an example of a service that provides a client secret sign-in. In Azure AD, the client secret is the app key.
-Here's a sample web test of an Azure web app using an app key:
+Here's a sample web test of an Azure web app using an app key.
-![Sample screenshot](./media/availability-multistep/client-secret.png)
+![Screenshot that shows a sample.](./media/availability-multistep/client-secret.png)
-Get token from AAD using client secret (AppKey).
-Extract bearer token from response.
-Call API using bearer token in the authorization header.
-Make sure that the web test is an actual client - that is, it has its own app in AAD - and use its clientId + app key. Your service under test also has its own app in AAD: the appID URI of this app is reflected in the web test in the resource field.
+1. Get a token from Azure AD by using the client secret (the app key).
+1. Extract a bearer token from the response.
+1. Call the API by using the bearer token in the authorization header.
+1. Make sure that the web test is an actual client. That is, it has its own app in Azure AD. Use its client ID and app key. Your service under test also has its own app in Azure AD. The app ID URI of this app is reflected in the web test in the resource field. A minimal sketch of this token flow follows these steps.
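
The following PowerShell sketch illustrates these steps under stated assumptions: the tenant ID, client ID, app key, resource URI, and API URL are placeholders rather than values from this article, and the sketch uses the Azure AD v1.0 token endpoint, which accepts an app key and a resource.

```powershell
# Placeholders: supply your own tenant, app registration, and API values.
$tenantId     = "<tenant-id>"
$clientId     = "<client-id>"     # the web test's own app in Azure AD
$clientSecret = "<app-key>"       # the client secret (app key)
$resource     = "<app-id-uri-of-service-under-test>"

# 1. Get a token from Azure AD by using the client secret.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
    -Body @{
        grant_type    = "client_credentials"
        client_id     = $clientId
        client_secret = $clientSecret
        resource      = $resource
    }

# 2. Extract the bearer token from the response.
$bearerToken = $tokenResponse.access_token

# 3. Call the API by using the bearer token in the authorization header.
Invoke-RestMethod -Uri "https://<my-api>.azurewebsites.net/api/values" `
    -Headers @{ Authorization = "Bearer $bearerToken" }
```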
-### Open Authentication
-An example of open authentication is signing in with your Microsoft or Google account. Many apps that use OAuth provide the client secret alternative, so your first tactic should be to investigate that possibility.
+### Open authentication
+An example of open authentication is the act of signing in with your Microsoft or Google account. Many apps that use OAuth provide the client secret alternative, so your first tactic should be to investigate that possibility.
-If your test must sign in using OAuth, the general approach is:
+If your test must sign in by using OAuth, the general approach is:
-Use a tool such as Fiddler to examine the traffic between your web browser, the authentication site, and your app.
-Perform two or more sign-ins using different machines or browsers, or at long intervals (to allow tokens to expire).
-By comparing different sessions, identify the token passed back from the authenticating site, that is then passed to your app server after sign-in.
-Record a web test using Visual Studio.
-Parameterize the tokens, setting the parameter when the token is returned from the authenticator, and using it in the query to the site. (Visual Studio attempts to parameterize the test, but does not correctly parameterize the tokens.)
+1. Use a tool such as Fiddler to examine the traffic between your web browser, the authentication site, and your app.
+1. Perform two or more sign-ins using different machines or browsers, or at long intervals (to allow tokens to expire).
+1. By comparing different sessions, identify the token passed back from the authenticating site that's then passed to your app server after sign-in.
+1. Record a web test by using Visual Studio.
+1. Parameterize the tokens. Set the parameter when the token is returned from the authenticator, and use it in the query to the site. (Visual Studio attempts to parameterize the test, but doesn't correctly parameterize the tokens.)
## Troubleshooting
-Dedicated [troubleshooting article](troubleshoot-availability.md).
+For troubleshooting help, see the dedicated [troubleshooting](troubleshoot-availability.md) article.
## Next steps
-* [Availability Alerts](availability-alerts.md)
-* [Url ping web tests](monitor-web-app-availability.md)
+* [Availability alerts](availability-alerts.md)
+* [URL ping web tests](monitor-web-app-availability.md)
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
Title: Application Insights availability tests description: Set up recurring web tests to monitor availability and responsiveness of your app or website. Previously updated : 07/13/2021 Last updated : 11/15/2022
azure-monitor Availability Private Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md
Title: Private availability testing - Azure Monitor Application Insights description: Learn how to use availability tests on internal servers that run behind a firewall with private testing. Previously updated : 05/14/2021 Last updated : 11/15/2022
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
Title: Availability Standard test - Azure Monitor Application Insights description: Set up Standard tests in Application Insights to check for availability of a website with a single request test. Previously updated : 07/13/2021 Last updated : 11/15/2022 # Standard test
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Azure Application Insights
-description: Application performance monitoring for Azure VM and Azure virtual machine scale sets. Chart load and response time, dependency information, and set alerts on performance.
+ Title: Monitor performance on Azure VMs - Application Insights
+description: Application performance monitoring for Azure Virtual Machines and Azure Virtual Machine Scale Sets. Chart load and response time, dependency information, and set alerts on performance.
Previously updated : 10/31/2022 Last updated : 11/15/2022 ms.devlang: csharp, java, javascript, python
-# Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets
+# Deploy Application Insights Agent on virtual machines and virtual machine scale sets
-Enabling monitoring for your .NET or Java based web applications running on [Azure virtual machines](https://azure.microsoft.com/services/virtual-machines/) and [Azure virtual machine scale sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
+Enabling monitoring for your .NET or Java-based web applications running on [Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) and [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
-This article walks you through enabling Application Insights monitoring using the Application Insights Agent and provides preliminary guidance for automating the process for large-scale deployments.
-> [!IMPORTANT]
-> **Java** based applications running on Azure VMs and VMSS are monitored with the **[Application Insights Java 3.0 agent](./java-in-process-agent.md)**, which is generally available.
+This article walks you through enabling Application Insights monitoring by using Application Insights Agent. It also provides preliminary guidance for automating the process for large-scale deployments.
+
+Java-based applications running on Azure Virtual Machines and Azure Virtual Machine Scale Sets are monitored with the [Application Insights Java 3.0 agent](./java-in-process-agent.md), which is generally available.
> [!IMPORTANT]
-> Azure Application Insights Agent for ASP.NET and ASP.NET Core applications running on **Azure VMs and VMSS** is currently in public preview. For monitoring your ASP.NET applications running **on-premises**, use the [Azure Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
-> The preview version for Azure VMs and VMSS is provided without a service-level agreement, and we don't recommend it for production workloads. Some features might not be supported, and some might have constrained capabilities.
+> Application Insights Agent for ASP.NET and ASP.NET Core applications running on Azure Virtual Machines and Azure Virtual Machine Scale Sets is currently in public preview. For monitoring your ASP.NET applications running on-premises, use [Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
+>
+> The preview version for Azure Virtual Machines and Azure Virtual Machine Scale Sets is provided without a service-level agreement. We don't recommend it for production workloads. Some features might not be supported, and some might have constrained capabilities.
+>
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Enable Application Insights
-Auto-instrumentation is easy to enable with no advanced configuration required.
+Auto-instrumentation is easy to enable. Advanced configuration isn't required.
For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). > [!NOTE]
-> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications and Java. Use an SDK to instrument Node.js and Python applications hosted on an Azure virtual machines and virtual machine scale sets.
+> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure Virtual Machines and Azure Virtual Machine Scale Sets.
### [.NET Framework](#tab/net)
-The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
+The Application Insights Agent autocollects the same dependency signals out-of-the-box as the SDK. To learn more, see [Dependency autocollection](./auto-collect-dependencies.md#net).
-### [.NET Core / .NET](#tab/core)
+### [.NET Core/.NET](#tab/core)
-The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
+The Application Insights Agent autocollects the same dependency signals out-of-the-box as the SDK. To learn more, see [Dependency autocollection](./auto-collect-dependencies.md#net).
### [Java](#tab/Java)
-For Java, **[Application Insights Java 3.0 agent](./java-in-process-agent.md)** is the recommended approach. The most popular libraries and frameworks, as well as logs and dependencies are [auto-collected](./java-in-process-agent.md#autocollected-requests), with a multitude of [other configurations](./java-standalone-config.md)
+We recommend [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. The most popular libraries, frameworks, logs, and dependencies are [autocollected](./java-in-process-agent.md#autocollected-requests) along with many [other configurations](./java-standalone-config.md).
### [Node.js](#tab/nodejs)
To monitor Python apps, use the [SDK](./opencensus-python.md).
-## Manage Application Insights Agent for .NET applications on Azure virtual machines using PowerShell
+## Manage Application Insights Agent for .NET applications on virtual machines by using PowerShell
-> [!NOTE]
-> Before installing the Application Insights Agent, you'll need a connection string. [Create a new Application Insights Resource](./create-new-resource.md) or copy the connection string from an existing application insights resource.
+Before you install Application Insights Agent, you'll need a connection string. [Create a new Application Insights resource](./create-new-resource.md) or copy the connection string from an existing Application Insights resource.
> [!NOTE]
-> New to PowerShell? Check out the [Get Started Guide](/powershell/azure/get-started-azureps).
+> If you're new to PowerShell, see the [Get Started Guide](/powershell/azure/get-started-azureps).
+
+Install or update Application Insights Agent as an extension for virtual machines:
-Install or update the Application Insights Agent as an extension for Azure virtual machines
```powershell $publicCfgJsonString = ' {
Set-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>"
``` > [!NOTE]
-> You may install or update the Application Insights Agent as an extension across multiple Virtual Machines at-scale using a PowerShell loop.
+> You can install or update Application Insights Agent as an extension across multiple virtual machines at scale by using a PowerShell loop.
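
As a hedged sketch, a loop like the following applies one `Set-AzVMExtension` call per machine. The VM names, resource group, location, and extension parameters here are assumptions; copy the parameter values from the single-VM command that already works in your environment:

```powershell
# Assumed values: reuse $publicCfgJsonString and the extension parameters
# from your working single-VM Set-AzVMExtension command.
$resourceGroup = "<myVmResourceGroup>"
$vmNames       = @("<myVmName1>", "<myVmName2>", "<myVmName3>")

foreach ($vmName in $vmNames) {
    # Publisher, type, and version are assumptions for illustration.
    Set-AzVMExtension -ResourceGroupName $resourceGroup `
        -VMName $vmName `
        -Location "<myVmLocation>" `
        -Name "ApplicationMonitoring" `
        -Publisher "Microsoft.Azure.Diagnostics" `
        -ExtensionType "ApplicationMonitoringWindows" `
        -TypeHandlerVersion "2.8" `
        -SettingString $publicCfgJsonString
}
```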
+
+Uninstall Application Insights Agent extension from a virtual machine:
-Uninstall Application Insights Agent extension from Azure virtual machine
```powershell Remove-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name "ApplicationMonitoring" ```
-Query Application Insights Agent extension status for Azure virtual machine
+Query Application Insights Agent extension status for a virtual machine:
+ ```powershell Get-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name ApplicationMonitoring -Status ```
-Get list of installed extensions for Azure virtual machine
+Get a list of installed extensions for a virtual machine:
+ ```powershell Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions"
Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<my
# Location : southcentralus # ResourceId : /subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions/ApplicationMonitoring ```
-You may also view installed extensions in the [Azure virtual machine section](../../virtual-machines/extensions/overview.md) in the Portal.
+
+You can also view installed extensions in the [Azure Virtual Machine section](../../virtual-machines/extensions/overview.md) of the Azure portal.
> [!NOTE]
-> Verify installation by clicking on Live Metrics Stream within the Application Insights Resource associated with the connection string you used to deploy the Application Insights Agent Extension. If you are sending data from multiple Virtual Machines, select the target Azure virtual machines under Server Name. It may take up to a minute for data to begin flowing.
+> Verify installation by selecting **Live Metrics Stream** within the Application Insights resource associated with the connection string you used to deploy the Application Insights Agent extension. If you're sending data from multiple virtual machines, select the target virtual machines under **Server Name**. It might take up to a minute for data to begin flowing.
+
+## Manage Application Insights Agent for .NET applications on virtual machine scale sets by using PowerShell
-## Manage Application Insights Agent for .NET applications on Azure virtual machine scale sets using PowerShell
+Install or update Application Insights Agent as an extension for a virtual machine scale set:
-Install or update the Application Insights Agent as an extension for Azure virtual machine scale set
```powershell $publicCfgHashtable = @{
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitoringWi
Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
-# Note: depending on your update policy, you might need to run Update-AzVmssInstance for each instance
+# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance.
```
-Uninstall application monitoring extension from Azure virtual machine scale sets
+Uninstall the application monitoring extension from virtual machine scale sets:
+ ```powershell $vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
Remove-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitorin
Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
-# Note: depending on your update policy, you might need to run Update-AzVmssInstance for each instance
+# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance.
```
-Query application monitoring extension status for Azure virtual machine scale sets
+Query the application monitoring extension status for virtual machine scale sets:
+ ```powershell # Not supported by extensions framework ```
-Get list of installed extensions for Azure virtual machine scale sets
+Get a list of installed extensions for virtual machine scale sets:
+ ```powershell Get-AzResource -ResourceId /subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions
Get-AzResource -ResourceId /subscriptions/<mySubscriptionId>/resourceGroups/<myR
## Troubleshooting
-Find troubleshooting tips for Application Insights Monitoring Agent Extension for .NET applications running on Azure virtual machines and virtual machine scale sets.
+Find troubleshooting tips for the Application Insights Monitoring Agent extension for .NET applications running on Azure virtual machines and virtual machine scale sets.
> [!NOTE]
-> The steps below do not apply to Node.js and Python applications, which require SDK instrumentation.
+> The following steps don't apply to Node.js and Python applications, which require SDK instrumentation.
Extension execution output is logged to files found in the following directories:
+
```Windows C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWindows\<version>\ ```
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWi
### 2.8.44 -- Updated ApplicationInsights .NET/.NET Core SDK to 2.20.1 - red field.-- Enabled SQL query collection.-- Enabled support for Azure Active Directory authentication.
+- Updated Application Insights .NET/.NET Core SDK to 2.20.1 - red field
+- Enabled SQL query collection
+- Enabled support for Azure Active Directory authentication
### 2.8.42 -- Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1 - red field.
+Updated Application Insights .NET/.NET Core SDK to 2.18.1 - red field
### 2.8.41 -- Added ASP.NET Core Auto-Instrumentation feature.
+Added ASP.NET Core auto-instrumentation feature
## Next steps
-* Learn how to [deploy an application to an Azure virtual machine scale set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
-* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
+
+* Learn how to [deploy an application to a virtual machine scale set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
+* [Set up availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Title: Monitor Azure app services performance Java | Microsoft Docs description: Application performance monitoring for Azure app services using Java. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 11/09/2022 Last updated : 11/15/2022 ms.devlang: csharp
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
Title: Monitor Azure app services performance Node.js | Microsoft Docs description: Application performance monitoring for Azure app services using Node.js. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 11/15/2022 ms.devlang: javascript
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
Title: Monitor Azure App Service performance | Microsoft Docs description: Application performance monitoring for Azure App Service. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 11/15/2022
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn about the steps required to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 08/23/2022 Last updated : 11/15/2022
No. Migration won't affect existing API access to data. After migration, you'll
### Will there be any impact on Live Metrics or other monitoring experiences?
-No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor--diagnose-with-1-second-latency) or other monitoring experiences.
+No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and-diagnose-with-1-second-latency) or other monitoring experiences.
### What happens with continuous export after migration?
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
Title: Create a new Azure Application Insights resource | Microsoft Docs description: Manually set up Application Insights monitoring for a new live application. Previously updated : 02/10/2021 Last updated : 11/15/2022
azure-monitor Custom Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-endpoints.md
Title: Azure Application Insights override default SDK endpoints description: Modify default Azure Monitor Application Insights SDK endpoints for regions like Azure Government. Previously updated : 07/26/2019 Last updated : 11/14/2022 ms.devlang: csharp, java, javascript, python
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor | Microsoft Docs description: This article discusses server firewall exceptions that are required by Azure Monitor Previously updated : 08/19/2022 Last updated : 11/15/2022
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
Title: Application Insights IP address collection | Microsoft Docs description: Understand how Application Insights handles IP addresses and geolocation. Previously updated : 09/23/2020 Last updated : 11/15/2022
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Title: 'Quickstart: Java web app analytics with Azure Application Insights' description: 'Application Performance Monitoring for Java web apps with Application Insights. ' Previously updated : 11/22/2020 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java In Process Agent Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent-redirect.md
Title: Azure Monitor Application Insights Java (redirect to OpenTelemetry) description: Redirect to OpenTelemetry agent Previously updated : 07/22/2022 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Title: Azure Monitor Application Insights Java description: Application performance monitoring for Java applications running in any environment without requiring code modification. Distributed tracing and application map. Previously updated : 11/12/2022 Last updated : 11/14/2022 ms.devlang: java
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 11/12/2022 Last updated : 11/14/2022 ms.devlang: java
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Title: Adding the JVM arg - Azure Monitor Application Insights for Java description: How to add the JVM arg that enables Azure Monitor Application Insights for Java Previously updated : 11/12/2022 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 11/12/2022 Last updated : 11/14/2022 ms.devlang: java
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
Title: Java Profiler for Azure Monitor Application Insights description: How to configure the Azure Monitor Application Insights for Java Profiler Previously updated : 07/19/2022 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
Title: Sampling overrides (preview) - Azure Monitor Application Insights for Java description: Learn to configure sampling overrides in Azure Monitor Application Insights for Java. Previously updated : 03/22/2021 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 11/12/2022 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md
In JavaScript correlation is turned off by default in order to minimize the tele
- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [NPM Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto-Collection Plugin. - Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions. - Find click data under content field within customDimensions attribute in CustomEvents table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). For more information, see [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871).-- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md#integrating-queries) to create custom visualizations of click data.
+- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md#integrate-queries) to create custom visualizations of click data.
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Title: Azure Application Insights for JavaScript web apps description: Get page view and session counts, web client data, and single-page applications and track usage patterns. Detect exceptions and performance issues in JavaScript webpages. Previously updated : 08/06/2020 Last updated : 11/15/2022 ms.devlang: javascript
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
Title: Monitor applications on Azure Kubernetes Service (AKS) with Application Insights - Azure Monitor | Microsoft Docs description: Azure Monitor seamlessly integrates with your application running on Kubernetes, and allows you to spot the problems with your apps in no time. Previously updated : 05/13/2020 Last updated : 11/15/2022
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics - Application Insights - Azure Monitor description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Previously updated : 05/31/2022 Last updated : 11/15/2022 ms.devlang: csharp
-# Live Metrics: Monitor & Diagnose with 1-second latency
+# Live Metrics: Monitor and diagnose with 1-second latency
-Monitor your live, in-production web application by using Live Metrics (also known as QuickPulse) from [Application Insights](./app-insights-overview.md). Select and filter metrics and performance counters to watch in real time, without any disturbance to your service. Inspect stack traces from sample failed requests and exceptions. Together with [Profiler](./profiler.md) and [Snapshot debugger](./snapshot-debugger.md), Live Metrics provides a powerful and non-invasive diagnostic tool for your live website.
+Monitor your live, in-production web application by using Live Metrics (also known as QuickPulse) from [Application Insights](./app-insights-overview.md). You can select and filter metrics and performance counters to watch in real time, without any disturbance to your service. You can also inspect stack traces from sample failed requests and exceptions. Together with [Profiler](./profiler.md) and [Snapshot Debugger](./snapshot-debugger.md), Live Metrics provides a powerful and noninvasive diagnostic tool for your live website.
> [!NOTE] > Live Metrics only supports TLS 1.2. For more information, see [Troubleshooting](#troubleshooting). With Live Metrics, you can:
-* Validate a fix while it's released, by watching performance and failure counts.
-* Watch the effect of test loads, and diagnose issues live.
-* Focus on particular test sessions or filter out known issues, by selecting and filtering the metrics you want to watch.
+* Validate a fix while it's released by watching performance and failure counts.
+* Watch the effect of test loads and diagnose issues live.
+* Focus on particular test sessions or filter out known issues by selecting and filtering the metrics you want to watch.
* Get exception traces as they happen. * Experiment with filters to find the most relevant KPIs. * Monitor any Windows performance counter live.
-* Easily identify a server that is having issues, and filter all the KPI/live feed to just that server.
+* Easily identify a server that's having issues and filter all the KPI/live feed to just that server.
-![Live Metrics tab](./media/live-stream/live-metric.png)
+![Screenshot that shows the Live Metrics tab.](./media/live-stream/live-metric.png)
-Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions, Java, and Node.js apps.
+Live Metrics is currently supported for ASP.NET, ASP.NET Core, Azure Functions, Java, and Node.js apps.
> [!NOTE]
-> The number of monitored server instances displayed by Live Metrics may be lower than the actual number of instances allocated for the application. This is because many modern web servers will unload applications that do not receive requests over a period of time in order to conserve resources. Since Live Metrics only counts servers that are currently running the application, servers that have already unloaded the process will not be included in that total.
+> The number of monitored server instances displayed by Live Metrics might be lower than the actual number of instances allocated for the application. This mismatch is because many modern web servers will unload applications that don't receive requests over a period of time to conserve resources. Because Live Metrics only counts servers that are currently running the application, servers that have already unloaded the process won't be included in that total.
## Get started
-1. Follow language specific guidelines to enable Live Metrics.
- * [ASP.NET](./asp-net.md) - Live Metrics is enabled by default.
- * [ASP.NET Core](./asp-net-core.md) - Live Metrics is enabled by default.
- * [.NET/.NET Core Console/Worker](./worker-service.md) - Live Metrics is enabled by default.
- * [.NET Applications - Enable using code](#enable-live-metrics-using-code-for-any-net-application).
- * [Java](./java-in-process-agent.md) - Live Metrics is enabled by default.
+> [!IMPORTANT]
+> Monitoring ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications requires Application Insights version 2.8.0 or above. To enable Application Insights, ensure that it's activated in the Azure portal and that the Application Insights NuGet package is included. Without the NuGet package, some telemetry is sent to Application Insights, but that telemetry won't show in Live Metrics.
+
+1. Follow language-specific guidelines to enable Live Metrics:
+ * [ASP.NET](./asp-net.md): Live Metrics is enabled by default.
+ * [ASP.NET Core](./asp-net-core.md): Live Metrics is enabled by default.
+ * [.NET/.NET Core Console/Worker](./worker-service.md): Live Metrics is enabled by default.
+ * [.NET Applications: Enable using code](#enable-live-metrics-by-using-code-for-any-net-application).
+ * [Java](./java-in-process-agent.md): Live Metrics is enabled by default.
* [Node.js](./nodejs.md#live-metrics)
-2. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app, then open Live Stream.
-
-3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data such as customer names in your filters.
+1. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app. Then open Live Stream.
-> [!IMPORTANT]
-> Monitoring ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications require Application Insights version 2.8.0 or above. To enable Application Insights ensure it is both activated in the Azure Portal and that the Application Insights NuGet package is included. Without the NuGet package some telemetry is sent to Application Insights but that telemetry will not show in Live Metrics.
+1. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data like customer names in your filters.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-### Enable Live Metrics using code for any .NET application
+### Enable Live Metrics by using code for any .NET application
> [!NOTE]
-> Live Metrics is enabled by default when onboarding using the recommended instructions for .NET Applications.
-
-How to manually set up Live Metrics:
+> Live Metrics is enabled by default when you onboard it by using the recommended instructions for .NET applications.
-1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector)
-2. The following sample console app code shows setting up Live Metrics.
+To manually set up Live Metrics:
-```csharp
-using Microsoft.ApplicationInsights;
-using Microsoft.ApplicationInsights.Extensibility;
-using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
-using System;
-using System.Threading.Tasks;
+1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector).
+1. The following sample console app code shows setting up Live Metrics:
-namespace LiveMetricsDemo
-{
- class Program
+ ```csharp
+ using Microsoft.ApplicationInsights;
+ using Microsoft.ApplicationInsights.Extensibility;
+ using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
+ using System;
+ using System.Threading.Tasks;
+
+ namespace LiveMetricsDemo
{
- static void Main(string[] args)
+ class Program
{
- // Create a TelemetryConfiguration instance.
- TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
- config.InstrumentationKey = "INSTRUMENTATION-KEY-HERE";
- QuickPulseTelemetryProcessor quickPulseProcessor = null;
- config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
- .Use((next) =>
- {
- quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
- return quickPulseProcessor;
- })
- .Build();
-
- var quickPulseModule = new QuickPulseTelemetryModule();
-
- // Secure the control channel.
- // This is optional, but recommended.
- quickPulseModule.AuthenticationApiKey = "YOUR-API-KEY-HERE";
- quickPulseModule.Initialize(config);
- quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
-
- // Create a TelemetryClient instance. It is important
- // to use the same TelemetryConfiguration here as the one
- // used to setup Live Metrics.
- TelemetryClient client = new TelemetryClient(config);
-
- // This sample runs indefinitely. Replace with actual application logic.
- while (true)
+ static void Main(string[] args)
{
- // Send dependency and request telemetry.
- // These will be shown in Live Metrics.
- // CPU/Memory Performance counter is also shown
- // automatically without any additional steps.
- client.TrackDependency("My dependency", "target", "http://sample",
- DateTimeOffset.Now, TimeSpan.FromMilliseconds(300), true);
- client.TrackRequest("My Request", DateTimeOffset.Now,
- TimeSpan.FromMilliseconds(230), "200", true);
- Task.Delay(1000).Wait();
+ // Create a TelemetryConfiguration instance.
+ TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
+ config.InstrumentationKey = "INSTRUMENTATION-KEY-HERE";
+ QuickPulseTelemetryProcessor quickPulseProcessor = null;
+ config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
+ .Use((next) =>
+ {
+ quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
+ return quickPulseProcessor;
+ })
+ .Build();
+
+ var quickPulseModule = new QuickPulseTelemetryModule();
+
+ // Secure the control channel.
+ // This is optional, but recommended.
+ quickPulseModule.AuthenticationApiKey = "YOUR-API-KEY-HERE";
+ quickPulseModule.Initialize(config);
+ quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
+
+ // Create a TelemetryClient instance. It is important
+ // to use the same TelemetryConfiguration here as the one
+ // used to set up Live Metrics.
+ TelemetryClient client = new TelemetryClient(config);
+
+ // This sample runs indefinitely. Replace with actual application logic.
+ while (true)
+ {
+ // Send dependency and request telemetry.
+ // These will be shown in Live Metrics.
+ // CPU/Memory Performance counter is also shown
+ // automatically without any additional steps.
+ client.TrackDependency("My dependency", "target", "http://sample",
+ DateTimeOffset.Now, TimeSpan.FromMilliseconds(300), true);
+ client.TrackRequest("My Request", DateTimeOffset.Now,
+ TimeSpan.FromMilliseconds(230), "200", true);
+ Task.Delay(1000).Wait();
+ }
} } }
-}
-```
+ ```
-While the above sample is for a console app, the same code can be used in any .NET applications. If any other TelemetryModules are enabled which auto-collects telemetry, it's important to ensure the same configuration used for initializing those modules is used for Live Metrics module as well.
+The preceding sample is for a console app, but the same code can be used in any .NET application. If any other telemetry modules are enabled to autocollect telemetry, it's important to ensure that the same configuration used for initializing those modules is used for the Live Metrics module.
-## How does Live Metrics differ from Metrics Explorer and Analytics?
+## How does Live Metrics differ from metrics explorer and Log Analytics?
-| |Live Stream | Metrics Explorer and Analytics |
+| Capabilities |Live Stream | Metrics explorer and Log Analytics |
||||
-|**Latency**|Data displayed within one second|Aggregated over minutes|
-|**No retention**|Data persists while it's on the chart, and is then discarded|[Data retained for 90 days](./data-retention-privacy.md#how-long-is-the-data-kept)|
-|**On demand**|Data is only streamed while the Live Metrics pane is open |Data is sent whenever the SDK is installed and enabled|
-|**Free**|There's no charge for Live Stream data|Subject to [pricing](../logs/cost-logs.md#application-insights-billing)
-|**Sampling**|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events may be [sampled](./api-filtering-sampling.md)|
-|**Control channel**|Filter control signals are sent to the SDK. We recommend you secure this channel.|Communication is one way, to the portal|
+|Latency|Data displayed within one second.|Aggregated over minutes.|
+|No retention|Data persists while it's on the chart and is then discarded.|[Data retained for 90 days.](./data-retention-privacy.md#how-long-is-the-data-kept)|
+|On demand|Data is only streamed while the Live Metrics pane is open. |Data is sent whenever the SDK is installed and enabled.|
+|Free|There's no charge for Live Stream data.|Subject to [pricing](../logs/cost-logs.md#application-insights-billing).
+|Sampling|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events can be [sampled](./api-filtering-sampling.md).|
+|Control channel|Filter control signals are sent to the SDK. We recommend you secure this channel.|Communication is one way, to the portal.|
## Select and filter your metrics
-(Available with ASP.NET, ASP.NET Core, and Azure Functions (v2).)
+These capabilities are available with ASP.NET, ASP.NET Core, and Azure Functions (v2).
-You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Select the filter control that shows when you mouse-over any of the charts. The following chart is plotting a custom Request count KPI with filters on URL and Duration attributes. Validate your filters with the Stream Preview section that shows a live feed of telemetry that matches the criteria you've specified at any point in time.
+You can monitor custom KPIs live by applying arbitrary filters on any Application Insights telemetry from the portal. Select the filter control that shows when you mouse over any of the charts. The following chart plots a custom **Request** count KPI with filters on **URL** and **Duration** attributes. Validate your filters with the stream preview section that shows a live feed of telemetry that matches the criteria you've specified at any point in time.
-![Filter request rate](./media/live-stream/filter-request.png)
+![Screenshot that shows the Filter request rate.](./media/live-stream/filter-request.png)
-You can monitor a value different from Count. The options depend on the type of stream, which could be any Application Insights telemetry: requests, dependencies, exceptions, traces, events, or metrics. It can be your own [custom measurement](./api-custom-events-metrics.md#properties):
+You can monitor a value different from **Count**. The options depend on the type of stream, which could be any Application Insights telemetry like requests, dependencies, exceptions, traces, events, or metrics. It can also be your own [custom measurement](./api-custom-events-metrics.md#properties).
-![Query builder on request rate with custom metric](./media/live-stream/query-builder-request.png)
+![Screenshot that shows the Query Builder on Request Rate with a custom metric.](./media/live-stream/query-builder-request.png)
-In addition to Application Insights telemetry, you can also monitor any Windows performance counter by selecting that from the stream options, and providing the name of the performance counter.
+Along with Application Insights telemetry, you can also monitor any Windows performance counter. Select it from the stream options and provide the name of the performance counter.
-Live Metrics are aggregated at two points: locally on each server, and then across all servers. You can change the default at either by selecting other options in the respective drop-downs.
+Live Metrics are aggregated at two points: locally on each server and then across all servers. You can change the default at either one by selecting other options in the respective dropdown lists.
-## Sample Telemetry: Custom Live Diagnostic Events
+## Sample telemetry: Custom live diagnostic events
By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Select the filter icon to see the applied criteria at any point in time.
-![Filter button](./media/live-stream/filter.png)
+![Screenshot that shows the Filter button.](./media/live-stream/filter.png)
-As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we're selecting specific request failures, and events.
+As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we're selecting specific request failures and events.
-![Query Builder](./media/live-stream/query-builder.png)
+![Screenshot that shows the Query Builder.](./media/live-stream/query-builder.png)
> [!NOTE]
-> Currently, for Exception message-based criteria, use the outermost exception message. In the preceding example, to filter out the benign exception with inner exception message (follows the "<--" delimiter) "The client disconnected." use a message not-contains "Error reading request content" criteria.
+> Currently, for exception message-based criteria, use the outermost exception message. In the preceding example, to filter out the benign exception with an inner exception message (follows the "<--" delimiter) "The client disconnected," use a message not-contains "Error reading request content" criteria.
-See the details of an item in the live feed by clicking it. You can pause the feed either by clicking **Pause** or simply scrolling down, or clicking an item. Live feed will resume after you scroll back to the top, or by clicking the counter of items collected while it was paused.
+To see the details of an item in the live feed, select it. You can pause the feed either by selecting **Pause** or by scrolling down and selecting an item. Live feed resumes after you scroll back to the top, or when you select the counter of items collected while it was paused.
-![Screenshot shows the Sample telemetry window with an exception selected and the exception details displayed at the bottom of the window.](./media/live-stream/sample-telemetry.png)
+![Screenshot that shows the Sample telemetry window with an exception selected and the exception details displayed at the bottom of the window.](./media/live-stream/sample-telemetry.png)
## Filter by server instance
-If you want to monitor a particular server role instance, you can filter by server. To filter, select the server name under *Servers*.
+If you want to monitor a particular server role instance, you can filter by server. To filter, select the server name under **Servers**.
-![Sampled live failures](./media/live-stream/filter-by-server.png)
+![Screenshot that shows the Sampled live failures.](./media/live-stream/filter-by-server.png)
## Secure the control channel
-Live Metrics custom filters allow you to control which of your application's telemetry is streamed to the Live Metrics view in Azure portal. The filters criteria is sent to the apps that are instrumented with the Application Insights SDK. The filter value could potentially contain sensitive information such as CustomerID. To keep this value secured and prevent potential disclosure to unauthorized applications, you have two options:
+Live Metrics custom filters allow you to control which of your application's telemetry is streamed to the Live Metrics view in the Azure portal. The filters criteria is sent to the apps that are instrumented with the Application Insights SDK. The filter value could potentially contain sensitive information, such as the customer ID. To keep this value secured and prevent potential disclosure to unauthorized applications, you have two options:
-- Recommended: Secure Live Metrics channel using [Azure AD authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication)-- Legacy (no longer recommended): Set up an authenticated channel by configuring a secret API key as explained below
+- **Recommended:** Secure the Live Metrics channel by using [Azure Active Directory (Azure AD) authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication).
+- **Legacy (no longer recommended):** Set up an authenticated channel by configuring a secret API key as explained in the "Legacy option" section.
> [!NOTE]
-> On 30 September 2025, API keys used to stream live metrics telemetry into application insights will be retired. After that date, applications which use API keys will no longer be able to send live metrics data to your application insights resource. Authenticated telemetry ingestion for live metrics streaming to application insights will need to be done with [Azure AD authentication for application insights](./azure-ad-authentication.md).
+> On September 30, 2025, API keys used to stream Live Metrics telemetry into Application Insights will be retired. After that date, applications that use API keys won't be able to send Live Metrics data to your Application Insights resource. Authenticated telemetry ingestion for Live Metrics streaming to Application Insights will need to be done with [Azure AD authentication for Application Insights](./azure-ad-authentication.md).
-It's possible to try custom filters without having to set up an authenticated channel. Simply click on any of the filter icons and authorize the connected servers. Notice that if you choose this option, you'll have to authorize the connected servers once every new session or when a new server comes online.
+It's possible to try custom filters without having to set up an authenticated channel. Select any of the filter icons and authorize the connected servers. If you choose this option, you'll have to authorize the connected servers once every new session or whenever a new server comes online.
> [!WARNING]
-> We strongly discourage the use of unsecured channels and will disable this option 6 months after you start using it. The "Authorize connected servers" dialog displays the date (highlighted below) after which this option will be disabled.
+> We strongly discourage the use of unsecured channels and will disable this option six months after you start using it. The **Authorize connected servers** dialog displays the date after which this option will be disabled.
++
+### Legacy option: Create an API key
+
+1. Select the **API Access** tab and then select **Create API key**.
+
+ ![Screenshot that shows selecting the API Access tab and the Create API key button.](./media/live-stream/api-key.png)
+1. Select the **Authenticate SDK control channel** checkbox and then select **Generate key**.
-### Legacy option: Create API key
+ ![Screenshot that shows the Create API key pane. Select Authenticate SDK control channel checkbox and then select Generate key.](./media/live-stream/create-api-key.png)
-![API key > Create API key](./media/live-stream/api-key.png)
-![Create API Key tab. Select "authenticate SDK control channel" then "generate key"](./media/live-stream/create-api-key.png)
+### Add an API key to configuration
-### Add API key to Configuration
+You can add an API key to configuration for ASP.NET, ASP.NET Core, WorkerService, and Azure Functions apps.
#### ASP.NET
-In the applicationinsights.config file, add the AuthenticationApiKey to the QuickPulseTelemetryModule:
+In the *applicationinsights.config* file, add `AuthenticationApiKey` to `QuickPulseTelemetryModule`:
```xml <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
In the applicationinsights.config file, add the AuthenticationApiKey to the Quic
#### ASP.NET Core
-For [ASP.NET Core](./asp-net-core.md) applications, follow the instructions below.
+For [ASP.NET Core](./asp-net-core.md) applications, follow these instructions.
-Modify `ConfigureServices` of your Startup.cs file as follows:
+Modify `ConfigureServices` of your *Startup.cs* file as shown.
-Add the following namespace.
+Add the following namespace:
```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
```
-Then modify `ConfigureServices` method as below.
+Then modify the `ConfigureServices` method:
```csharp
public void ConfigureServices(IServiceCollection services)
{
- // existing code which include services.AddApplicationInsightsTelemetry() to enable Application Insights.
+ // Existing code which includes services.AddApplicationInsightsTelemetry() to enable Application Insights.
    services.ConfigureTelemetryModule<QuickPulseTelemetryModule>((module, o) => module.AuthenticationApiKey = "YOUR-API-KEY-HERE");
}
```
-More information on configuring ASP.NET Core applications can be found in our guidance on [configuring telemetry modules in ASP.NET Core](./asp-net-core.md#configuring-or-removing-default-telemetrymodules).
+For more information on how to configure ASP.NET Core applications, see [Configuring telemetry modules in ASP.NET Core](./asp-net-core.md#configuring-or-removing-default-telemetrymodules).
#### WorkerService
-For [WorkerService](./worker-service.md) applications, follow the instructions below.
+For [WorkerService](./worker-service.md) applications, follow these instructions.
-Add the following namespace.
+Add the following namespace:
```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
```
-Next, add the following line before the call `services.AddApplicationInsightsTelemetryWorkerService`.
+Next, add the following line before the call to `services.AddApplicationInsightsTelemetryWorkerService`:
```csharp
services.ConfigureTelemetryModule<QuickPulseTelemetryModule>((module, o) => module.AuthenticationApiKey = "YOUR-API-KEY-HERE");
```
-More information on configuring WorkerService applications can be found in our guidance on [configuring telemetry modules in WorkerServices](./worker-service.md#configure-or-remove-default-telemetry-modules).
+For more information on how to configure WorkerService applications, see [Configuring telemetry modules in WorkerServices](./worker-service.md#configure-or-remove-default-telemetry-modules).
-#### Azure Function Apps
+#### Azure Functions apps
-For Azure Function Apps (v2), securing the channel with an API key can be accomplished with an environment variable.
+For Azure Functions apps (v2), you can secure the channel with an API key by using an environment variable.
-Create an API key from within your Application Insights resource and go to **Settings > Configuration** for your Function App. Select **New application setting** and enter a name of `APPINSIGHTS_QUICKPULSEAUTHAPIKEY` and a value that corresponds to your API key.
+Create an API key from within your Application Insights resource and go to **Settings** > **Configuration** for your Azure Functions app. Select **New application setting**, enter a name of `APPINSIGHTS_QUICKPULSEAUTHAPIKEY`, and enter a value that corresponds to your API key.
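+
+If you prefer to script this setting instead of using the portal, you can set it with the Azure CLI. This is a sketch; the function app and resource group names are placeholders for your own:
+
+```azurecli
+# Store the Live Metrics API key as an app setting on the function app.
+az functionapp config appsettings set \
+  --name MyFunctionApp \
+  --resource-group MyResourceGroup \
+  --settings "APPINSIGHTS_QUICKPULSEAUTHAPIKEY=<your-API-key>"
+```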
## Supported features table
-| Language | Basic Metrics | Performance metrics | Custom filtering | Sample telemetry | CPU split by process |
+| Language | Basic metrics | Performance metrics | Custom filtering | Sample telemetry | CPU split by process |
|-|:--|:--|:--|:--|:--|
| .NET Framework | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) |
| .NET Core (target=.NET Framework)| Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) |
-| .NET Core (target=.NET Core) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported* | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | **Not Supported** |
-| Azure Functions v2 | Supported | Supported | Supported | Supported | **Not Supported** |
-| Java | Supported (V2.0.0+) | Supported (V2.0.0+) | **Not Supported** | Supported (V3.2.0+) | **Not Supported** |
-| Node.js | Supported (V1.3.0+) | Supported (V1.3.0+) | **Not Supported** | Supported (V1.3.0+) | **Not Supported** |
+| .NET Core (target=.NET Core) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported* | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | **Not supported** |
+| Azure Functions v2 | Supported | Supported | Supported | Supported | **Not supported** |
+| Java | Supported (V2.0.0+) | Supported (V2.0.0+) | **Not supported** | Supported (V3.2.0+) | **Not supported** |
+| Node.js | Supported (V1.3.0+) | Supported (V1.3.0+) | **Not supported** | Supported (V1.3.0+) | **Not supported** |
Basic metrics include request, dependency, and exception rate. Performance metrics (performance counters) include memory and CPU. Sample telemetry shows a stream of detailed information for failed requests and dependencies, exceptions, events, and traces.
- \* PerfCounters support varies slightly across versions of .NET Core that don't target the .NET Framework:
+ \* PerfCounters support varies slightly across versions of .NET Core that don't target the .NET Framework:
-- PerfCounters metrics are supported when running in Azure App Service for Windows. (AspNetCore SDK Version 2.4.1 or higher)-- PerfCounters are supported when app is running in ANY Windows machines (VM or Cloud Service or on-premises etc.) (AspNetCore SDK Version 2.7.1 or higher), but for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.-- PerfCounters are supported when app is running ANYWHERE (Linux, Windows, app service for Linux, containers, etc.) in the latest versions, but only for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+- PerfCounters metrics are supported when running in Azure App Service for Windows (ASP.NET Core SDK version 2.4.1 or higher).
+- PerfCounters are supported when the app is running on *any* Windows machine (VM, Azure Cloud Service, or on-premises) with ASP.NET Core SDK version 2.7.1 or higher, but only for apps that target .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+- PerfCounters are supported when the app is running *anywhere* (such as Linux, Windows, App Service for Linux, or containers) in the latest versions, but only for apps that target .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
## Troubleshooting
-Live Metrics uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check the [outgoing ports for Live Metrics](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
+Live Metrics uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check that [outgoing ports for Live Metrics](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
-As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you're using an older version of TLS, Live Metrics won't display any data. For applications based on .NET Framework 4.5.1, refer to [How to enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support newer TLS version.
+As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you're using an older version of TLS, Live Metrics won't display any data. For applications based on .NET Framework 4.5.1, see [Enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support the newer TLS version.
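+
+As an illustration only, a .NET Framework 4.5.1 app can also opt in to TLS 1.2 in code at startup. The registry-based configuration in the linked article is the more durable fix; this sketch assumes you control the app's startup path:
+
+```csharp
+using System.Net;
+
+public static class TlsConfig
+{
+    // Call once at application startup (for example, from Application_Start or Main)
+    // so outbound connections, including Live Metrics, can negotiate TLS 1.2.
+    public static void EnableTls12()
+    {
+        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
+    }
+}
+```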
### Missing configuration for .NET
-1. Verify you're using the latest version of the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector)
-2. Edit the `ApplicationInsights.config` file
- * Verify that the connection string points to the Application Insights resource you're using
- * Locate the `QuickPulseTelemetryModule` configuration option; if it isn't there, add it
- * Locate the `QuickPulseTelemetryProcessor` configuration option; if it isn't there, add it
-
- ```xml
-<TelemetryModules>
-<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.
-QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector"/>
-</TelemetryModules>
-
-<TelemetryProcessors>
-<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.
-QuickPulse.QuickPulseTelemetryProcessor, Microsoft.AI.PerfCounterCollector"/>
-<TelemetryProcessors>
-````
-3. Restart the application
+1. Verify that you're using the latest version of the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector).
+1. Edit the `ApplicationInsights.config` file:
+ * Verify that the connection string points to the Application Insights resource you're using.
+ * Locate the `QuickPulseTelemetryModule` configuration option. If it isn't there, add it.
+ * Locate the `QuickPulseTelemetryProcessor` configuration option. If it isn't there, add it.
+
+ ```xml
+ <TelemetryModules>
+ <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector"/>
+ </TelemetryModules>
+
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryProcessor, Microsoft.AI.PerfCounterCollector"/>
+ </TelemetryProcessors>
+ ```
+1. Restart the application.
## Next steps
-* [Monitoring usage with Application Insights](./usage-overview.md)
-* [Using Diagnostic Search](./diagnostic-search.md)
+* [Monitor usage with Application Insights](./usage-overview.md)
+* [Use Diagnostic Search](./diagnostic-search.md)
* [Profiler](./profiler.md)
-* [Snapshot debugger](./snapshot-debugger.md)
+* [Snapshot Debugger](./snapshot-debugger.md)
azure-monitor Mobile Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/mobile-center-quickstart.md
Title: Monitor mobile or universal Windows apps with Azure Monitor Application Insights description: Provides instructions to quickly set up a mobile or universal Windows app for monitoring with Azure Monitor Application Insights and App Center Previously updated : 07/21/2022 Last updated : 11/15/2022 ms.devlang: java, swift
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Title: Monitor applications running on Azure Functions with Application Insights - Azure Monitor | Microsoft Docs description: Azure Monitor seamlessly integrates with your application running on Azure Functions, and allows you to monitor the performance and spot the problems with your apps in no time. Previously updated : 08/27/2021 Last updated : 11/14/2022
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-web-app-availability.md
Title: Monitor availability with URL ping tests - Azure Monitor description: Set up ping tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly. Previously updated : 07/13/2021 Last updated : 11/15/2022
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Title: Monitor Node.js services with Application Insights | Microsoft Docs description: Monitor performance and diagnose problems in Node.js services with Application Insights. Previously updated : 10/12/2021 Last updated : 11/15/2022 ms.devlang: javascript
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor. Previously updated : 8/19/2022 Last updated : 11/15/2022 ms.devlang: python
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 10/21/2022 Last updated : 11/15/2022 ms.devlang: csharp, javascript, python
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
Title: Application Insights Overview dashboard | Microsoft Docs description: Monitor applications with Application Insights and Overview dashboard functionality. Previously updated : 06/03/2019 Last updated : 11/15/2022 # Application Insights Overview dashboard
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
Title: 'Application Insights: Languages, platforms, and integrations | Microsoft Docs' description: Languages, platforms, and integrations that are available for Application Insights. Previously updated : 10/24/2022 Last updated : 11/15/2022
azure-monitor Powershell Azure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell-azure-diagnostics.md
- Title: Using PowerShell to setup Application Insights in an Azure | Microsoft Docs
-description: Automate configuring Azure Diagnostics to pipe data to Application Insights.
- Previously updated : 08/06/2019 -
-ms.reviwer: cogoodson
--
-# Using PowerShell to set up Application Insights for Azure Cloud Services
-
-[Microsoft Azure](https://azure.com) can be [configured to send Azure Diagnostics](../agents/diagnostics-extension-to-application-insights.md) to [Azure Application Insights](./app-insights-overview.md). The diagnostics relate to Azure Cloud Services and Azure VMs. They complement the telemetry that you send from within the app using the Application Insights SDK. As part of automating the process of creating new resources in Azure, you can configure diagnostics using PowerShell.
-
-## Azure template
-If the web app is in Azure and you create your resources using an Azure Resource Manager template, you can configure Application Insights by adding this to the resources node:
-
-```json
-{
- resources: [
- /* Create Application Insights resource */
- {
- "apiVersion": "2015-05-01",
- "type": "microsoft.insights/components",
- "name": "nameOfAIAppResource",
- "location": "centralus",
- "kind": "web",
- "properties": { "ApplicationId": "nameOfAIAppResource" },
- "dependsOn": [
- "[concat('Microsoft.Web/sites/', myWebAppName)]"
- ]
- }
- ]
-}
-```
-
-* `nameOfAIAppResource` - a name for the Application Insights resource
-* `myWebAppName` - the ID of the web app
-
-## Enable diagnostics extension as part of deploying a Cloud Service
-The `New-AzureDeployment` cmdlet has a parameter `ExtensionConfiguration`, which takes an array of diagnostics configurations. These can be created using the `New-AzureServiceDiagnosticsExtensionConfig` cmdlet. For example:
-
-```azurepowershell
-$service_package = "CloudService.cspkg"
-$service_config = "ServiceConfiguration.Cloud.cscfg"
-$diagnostics_storagename = "myservicediagnostics"
-$webrole_diagconfigpath = "MyService.WebRole.PubConfig.xml"
-$workerrole_diagconfigpath = "MyService.WorkerRole.PubConfig.xml"
-
-$primary_storagekey = (Get-AzStorageKey `
- -StorageAccountName "$diagnostics_storagename").Primary
-$storage_context = New-AzStorageContext `
- -StorageAccountName $diagnostics_storagename `
- -StorageAccountKey $primary_storagekey
-
-$webrole_diagconfig = `
- New-AzureServiceDiagnosticsExtensionConfig `
- -Role "WebRole" -Storage_context $storageContext `
- -DiagnosticsConfigurationPath $webrole_diagconfigpath
-$workerrole_diagconfig = `
- New-AzureServiceDiagnosticsExtensionConfig `
- -Role "WorkerRole" `
- -StorageContext $storage_context `
- -DiagnosticsConfigurationPath $workerrole_diagconfigpath
-
- New-AzureDeployment `
- -ServiceName $service_name `
- -Slot Production `
- -Package $service_package `
- -Configuration $service_config `
- -ExtensionConfiguration @($webrole_diagconfig,$workerrole_diagconfig)
-```
-
-## Enable diagnostics extension on an existing Cloud Service
-On an existing service, use `Set-AzureServiceDiagnosticsExtension`.
-
-```azurepowershell
-$service_name = "MyService"
-$diagnostics_storagename = "myservicediagnostics"
-$webrole_diagconfigpath = "MyService.WebRole.PubConfig.xml"
-$workerrole_diagconfigpath = "MyService.WorkerRole.PubConfig.xml"
-$primary_storagekey = (Get-AzStorageKey `
- -StorageAccountName "$diagnostics_storagename").Primary
-$storage_context = New-AzStorageContext `
- -StorageAccountName $diagnostics_storagename `
- -StorageAccountKey $primary_storagekey
-
-Set-AzureServiceDiagnosticsExtension `
- -StorageContext $storage_context `
- -DiagnosticsConfigurationPath $webrole_diagconfigpath `
- -ServiceName $service_name `
- -Slot Production `
- -Role "WebRole"
-Set-AzureServiceDiagnosticsExtension `
- -StorageContext $storage_context `
- -DiagnosticsConfigurationPath $workerrole_diagconfigpath `
- -ServiceName $service_name `
- -Slot Production `
- -Role "WorkerRole"
-```
-
-## Get current diagnostics extension configuration
-
-```azurepowershell
-Get-AzureServiceDiagnosticsExtension -ServiceName "MyService"
-```
--
-## Remove diagnostics extension
-
-```azurepowershell
-Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService"
-```
-
-If you enabled the diagnostics extension using either `Set-AzureServiceDiagnosticsExtension` or `New-AzureServiceDiagnosticsExtensionConfig` without the Role parameter, then you can remove the extension using `Remove-AzureServiceDiagnosticsExtension` without the Role parameter. If the Role parameter was used when enabling the extension then it must also be used when removing the extension.
-
-To remove the diagnostics extension from each individual role:
-
-```azurepowershell
-Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService" -Role "WebRole"
-```
--
-## See also
-* [Monitor Azure Cloud Services apps with Application Insights](./azure-web-apps-net-core.md)
-* [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md)
--
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Other automation articles:
* [Create an Application Insights resource](./create-new-resource.md#creating-a-resource-automatically) - quick method without using a template. * [Create web tests](../alerts/resource-manager-alerts-metric.md#availability-test-with-metric-alert)
-* [Send Azure Diagnostics to Application Insights](powershell-azure-diagnostics.md)
+* [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md)
* [Create release annotations](annotations.md)
azure-monitor Resource Manager Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-web-app.md
Title: Resource Manager template samples for Azure App Service + Application Ins
description: Sample Azure Resource Manager templates to deploy an Azure App Service with an Application Insights resource. Previously updated : 07/11/2022 Last updated : 11/15/2022
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Title: Telemetry sampling in Azure Application Insights | Microsoft Docs description: How to keep the volume of telemetry under control. Previously updated : 08/26/2021 Last updated : 11/15/2022
builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Depend
### Configuring adaptive sampling for ASP.NET Core applications
-ASP.NET Core applications may be configured in code or through the `appsettings.json` file. For more information, see [Configuration in ASP.NET Core](https://learn.microsoft.com/aspnet/core/fundamentals/configuration).
+ASP.NET Core applications may be configured in code or through the `appsettings.json` file. For more information, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration).
Adaptive sampling is enabled by default for all ASP.NET Core applications. You can disable or customize the sampling behavior.
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Application Insights | Microsoft Docs description: This article shows how to use connection strings. Previously updated : 04/13/2022 Last updated : 11/15/2022
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
Title: Application Insights SDK support guidance
description: Support guidance for Application Insights legacy and preview SDKs Previously updated : 08/22/2022 Last updated : 11/15/2022
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Title: How to design your Application Insights deployment - One vs many resources?
+ Title: 'Design your Application Insights deployment: One vs. many resources?'
description: Direct telemetry to different resources for development, test, and production stamps. Previously updated : 11/01/2022 Last updated : 11/15/2022
-# How many Application Insights resources should I deploy
+# How many Application Insights resources should I deploy?
When you're developing the next version of a web application, you don't want to mix up the [Application Insights](../../azure-monitor/app/app-insights-overview.md) telemetry from the new version and the already released version.
-To avoid confusion, send the telemetry from different development stages to separate Application Insights resources, with separate instrumentation keys (ikeys).
+To avoid confusion, send the telemetry from different development stages to separate Application Insights resources with separate instrumentation keys.
-To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to [set the ikey dynamically in code](#dynamic-ikey) instead of in the configuration file.
+To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to [set the instrumentation key dynamically in code](#dynamic-instrumentation-key) instead of in the configuration file.
-(If your system is an Azure Cloud Service, there's [another method of setting separate ikeys](../../azure-monitor/app/azure-web-apps-net-core.md).)
+If your system is an instance of Azure Cloud Services, there's [another method of setting separate instrumentation keys](../../azure-monitor/app/azure-web-apps-net-core.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## About resources and instrumentation keys
-When you set up Application Insights monitoring for your web app, you create an Application Insights *resource* in Microsoft Azure. You open this resource in the Azure portal in order to see and analyze the telemetry collected from your app. The resource is identified by an *instrumentation key* (ikey). When you install the Application Insights package to monitor your app, you configure it with the instrumentation key, so that it knows where to send the telemetry.
+When you set up Application Insights monitoring for your web app, you create an Application Insights resource in Azure. You open this resource in the Azure portal to see and analyze the telemetry collected from your app. The resource is identified by an instrumentation key. When you install the Application Insights package to monitor your app, you configure it with the instrumentation key so that it knows where to send the telemetry.
-Each Application Insights resource comes with metrics that are available out-of-box. If separate components report to the same Application Insights resource, these metrics may not make sense to dashboard/alert on.
+Each Application Insights resource comes with metrics that are available out of the box. If separate components report to the same Application Insights resource, it might not make sense to alert on these metrics.
### When to use a single Application Insights resource
+Use a single Application Insights resource:
- For application components that are deployed together. These applications are usually developed by a single team and managed by the same set of DevOps/ITOps users.
-- If it makes sense to aggregate Key Performance Indicators (KPIs) such as response durations, failure rates in dashboard etc., across all of them by default (you can choose to segment by role name in the Metrics Explorer experience).
-- If there's no need to manage Azure role-based access control (Azure RBAC) differently between the application components.
+- If it makes sense to aggregate key performance indicators, such as response durations or failure rates in a dashboard, across all of them by default. You can choose to segment by role name in the metrics explorer.
+- If there's no need to manage Azure role-based access control differently between the application components.
- If you don't need metrics alert criteria that are different between the components.
- If you don't need to manage continuous exports differently between the components.
- If you don't need to manage billing/quotas differently between the components.
Each Application Insights resource comes with metrics that are available out-of-
- If it's okay to have the same smart detection and work item integration settings across all roles.

> [!NOTE]
-> If you want to consolidate multiple Application Insights Resources, you may point your existing application components to a new, consolidated Application Insights Resource. The telemetry stored in your old resource will not be transfered to the new resource, so only delete the old resource when you have enough telemetry in the new resource for business continuity.
+> If you want to consolidate multiple Application Insights resources, you can point your existing application components to a new, consolidated Application Insights resource. The telemetry stored in your old resource won't be transferred to the new resource. Only delete the old resource when you have enough telemetry in the new resource for business continuity.
+
+### Other considerations
-### Other things to keep in mind
+Be aware that:
-- You may need to add custom code to ensure that meaningful values are set into the [Cloud_RoleName](./app-map.md?tabs=net#set-or-override-cloud-role-name) attribute. Without meaningful values set for this attribute, *NONE* of the portal experiences will work.-- For Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure Role Environment and sets these. For all other types of apps, you'll likely need to set this explicitly.-- Live Metrics experience doesn't support splitting by role name.
+- You might need to add custom code to ensure that meaningful values are set into the [Cloud_RoleName](./app-map.md?tabs=net#set-or-override-cloud-role-name) attribute. Without meaningful values set for this attribute, none of the portal experiences will work.
+- For Azure Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure role environment and sets these values. For all other types of apps, you'll likely need to set them explicitly.
+- Live Metrics doesn't support splitting by role name.
-## <a name="dynamic-ikey"></a> Dynamic instrumentation key
+## <a name="dynamic-instrumentation-key"></a> Dynamic instrumentation key
-To make it easier to change the ikey as the code moves between stages of production, reference the key dynamically in code instead of using a hardcoded/static value.
+To make it easier to change the instrumentation key as the code moves between stages of production, reference the key dynamically in code instead of using a hardcoded or static value.
-Set the key in an initialization method, such as global.aspx.cs in an ASP.NET service:
+Set the key in an initialization method, such as `Application_Start` in `Global.asax.cs`, in an ASP.NET service:
```csharp protected void Application_Start()
protected void Application_Start()
... ```
-In this example, the ikeys for the different resources are placed in different versions of the web configuration file. Swapping the web configuration file - which you can do as part of the release script - will swap the target resource.
+In this example, the instrumentation keys for the different resources are placed in different versions of the web configuration file. Swapping the web configuration file, which you can do as part of the release script, will swap the target resource.
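+
+A minimal sketch of that pattern follows, assuming the active key is stored under a hypothetical `ikey` app setting in each configuration file:
+
+```csharp
+using System.Web;
+using System.Web.Configuration;
+using Microsoft.ApplicationInsights.Extensibility;
+
+public class Global : HttpApplication
+{
+    protected void Application_Start()
+    {
+        // Read the instrumentation key for the current stage from Web.config.
+        // Swapping the configuration file swaps the target Application Insights resource.
+        TelemetryConfiguration.Active.InstrumentationKey =
+            WebConfigurationManager.AppSettings["ikey"];
+    }
+}
+```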
-### Web pages
-The iKey is also used in your app's web pages, in the [script that you got from the quickstart pane](../../azure-monitor/app/javascript.md). Instead of coding it literally into the script, generate it from the server state. For example, in an ASP.NET app:
+### Webpages
+The instrumentation key is also used in your app's webpages, in the [script that you got from the quickstart pane](../../azure-monitor/app/javascript.md). Instead of coding it literally into the script, generate it from the server state. For example, in an ASP.NET app:
```javascript <script type="text/javascript">
-// Standard Application Insights web page script:
+// Standard Application Insights webpage script:
var appInsights = window.appInsights || function(config){ ... // Modify this part: }({instrumentationKey:
var appInsights = window.appInsights || function(config){ ...
//... ```
-## Create additional Application Insights resources
+## Create more Application Insights resources
-To create an Applications Insights resource follow the [resource creation guide](./create-new-resource.md).
+To create an Application Insights resource, see [Create an Application Insights resource](./create-new-resource.md).
-### Getting the instrumentation key
+### Get the instrumentation key
The instrumentation key identifies the resource that you created. You need the instrumentation keys of all the resources to which your app will send data.
-## Filter on build number
+## Filter on the build number
When you publish a new version of your app, you'll want to be able to separate the telemetry from different builds.
-You can set the Application Version property so that you can filter [search](../../azure-monitor/app/diagnostic-search.md) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results.
+You can set the **Application Version** property so that you can filter [search](../../azure-monitor/app/diagnostic-search.md) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results.
-There are several different methods of setting the Application Version property.
+There are several different methods of setting the **Application Version** property.
* Set directly: `telemetryClient.Context.Component.Version = typeof(MyProject.MyClass).Assembly.GetName().Version.ToString();`
-* Wrap that line in a [telemetry initializer](../../azure-monitor/app/api-custom-events-metrics.md#defaults) to ensure that all TelemetryClient instances are set consistently.
-* [ASP.NET] Set the version in `BuildInfo.config`. The web module will pick up the version from the BuildLabel node. Include this file in your project and remember to set the Copy Always property in Solution Explorer.
+* Wrap that line in a [telemetry initializer](../../azure-monitor/app/api-custom-events-metrics.md#defaults) to ensure that all `TelemetryClient` instances are set consistently (see the initializer sketch after this list).
+* ASP.NET: Set the version in `BuildInfo.config`. The web module will pick up the version from the `BuildLabel` node. Include this file in your project and remember to set the **Copy Always** property in Solution Explorer.
```xml <?xml version="1.0" encoding="utf-8"?>
There are several different methods of setting the Application Version property.
</DeploymentEvent> ```
-* [ASP.NET] Generate BuildInfo.config automatically in MSBuild. To do this, add a few lines to your `.csproj` file:
+
+* ASP.NET: Generate `BuildInfo.config` automatically in the Microsoft Build Engine. Add a few lines to your `.csproj` file:
```xml <PropertyGroup>
There are several different methods of setting the Application Version property.
</PropertyGroup> ```
- This generates a file called *yourProjectName*.BuildInfo.config. The Publish process renames it to BuildInfo.config.
+ This step generates a file called *yourProjectName*`.BuildInfo.config`. The Publish process renames it to `BuildInfo.config`.
- The build label contains a placeholder (AutoGen_...) when you build with Visual Studio. But when built with MSBuild, it's populated with the correct version number.
+ The build label contains a placeholder (*AutoGen_...*) when you build with Visual Studio. But when built with the Microsoft Build Engine, it's populated with the correct version number.
- To allow MSBuild to generate version numbers, set the version like `1.0.*` in AssemblyReference.cs
+ To allow the Microsoft Build Engine to generate version numbers, set the version like `1.0.*` in `AssemblyReference.cs`.
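+
+As referenced in the list above, a minimal telemetry initializer sketch follows. `MyProject.MyClass` stands in for any type in the assembly whose version you want to report:
+
+```csharp
+using Microsoft.ApplicationInsights.Channel;
+using Microsoft.ApplicationInsights.Extensibility;
+
+public class ComponentVersionInitializer : ITelemetryInitializer
+{
+    // Stamps every telemetry item with the assembly version, so search and
+    // metric explorer results can be filtered by Application Version.
+    public void Initialize(ITelemetry telemetry)
+    {
+        telemetry.Context.Component.Version =
+            typeof(MyProject.MyClass).Assembly.GetName().Version.ToString();
+    }
+}
+```
+
+Register the initializer once at startup, for example with `TelemetryConfiguration.Active.TelemetryInitializers.Add(new ComponentVersionInitializer());`.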
## Version and release tracking To track the application version, make sure `buildinfo.config` is generated by your Microsoft Build Engine process. In your `.csproj` file, add:
To track the application version, make sure `buildinfo.config` is generated by y
</PropertyGroup> ```
-When it has the build info, the Application Insights web module automatically adds **Application version** as a property to every item of telemetry. That allows you to filter by version when you perform [diagnostic searches](../../azure-monitor/app/diagnostic-search.md), or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md).
+When the Application Insights web module has the build information, it automatically adds **Application Version** as a property to every item of telemetry. For this reason, you can filter by version when you perform [diagnostic searches](../../azure-monitor/app/diagnostic-search.md) or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md).
-However, notice that the build version number is generated only by the Microsoft Build Engine, not by the developer build from Visual Studio.
+The build version number is generated only by the Microsoft Build Engine, not by the developer build from Visual Studio.
### Release annotations
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Title: Application Insights Agent overview | Microsoft Docs description: Learn how to use Application Insights Agent to monitor website performance without redeploying the website. It works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Previously updated : 09/16/2019 Last updated : 11/15/2022
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
Title: Application Insights transaction diagnostics | Microsoft Docs description: This article explains Application Insights end-to-end transaction diagnostics. Previously updated : 10/31/2022 Last updated : 11/15/2022
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md
Title: Create custom dashboards in Application Insights | Microsoft Docs description: This tutorial shows you how to create custom KPI dashboards using Application Insights. Previously updated : 09/30/2020 Last updated : 11/15/2022
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
description: Application Insights SDK tutorial to monitor ASP.NET Core web appli
ms.devlang: csharp Previously updated : 08/22/2022 Last updated : 11/15/2022
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md
Title: Diagnose performance issues using Application Insights | Microsoft Docs description: Tutorial to find and diagnose performance issues in your application by using Application Insights. Previously updated : 06/15/2020 Last updated : 11/15/2022
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md
Title: Application Insights Funnels
-description: Learn how you can use Funnels to discover how customers are interacting with your application.
+ Title: Application Insights funnels
+description: Learn how you can use funnels to discover how customers are interacting with your application.
Previously updated : 10/24/2022 Last updated : 11/15/2022
-# Discover how customers are using your application with Application Insights Funnels
+# Discover how customers are using your application with Application Insights funnels
-Understanding the customer experience is of the utmost importance to your business. If your application involves multiple stages, you need to know if most customers are progressing through the entire process, or if they're ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights Funnels to gain insights into your users, and monitor step-by-step conversion rates.
+Understanding the customer experience is of great importance to your business. If your application involves multiple stages, you need to know if customers are progressing through the entire process or ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights funnels to gain insights into your users and monitor step-by-step conversion rates.
## Create your funnel
-Before you create your funnel, decide on the question you want to answer. For example, you might want to know how many users are viewing the home page, viewing a customer profile, and creating a ticket.
+Before you create your funnel, decide on the question you want to answer. For example, you might want to know how many users view the home page, view a customer profile, and create a ticket.
To create a funnel:
-1. In the **Funnels** tab, select **Edit**.
-1. Choose your *Top step*.
+1. On the **Funnels** tab, select **Edit**.
+1. Choose your **Top Step**.
- :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot of the Funnel tab and selecting steps on the edit tab." lightbox="./media/usage-funnels/funnel.png":::
+ :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot that shows the Funnel tab and selecting steps on the Edit tab." lightbox="./media/usage-funnels/funnel.png":::
-1. To apply filters to the step select **Add filters**, which will appear after you choose an item for the top step.
-1. Then choose your *Second step* and so on.
+1. To apply filters to the step, select **Add filters**. This option appears after you choose an item for the top step.
+1. Then choose your **Second Step** and so on.
-> [!NOTE]
-> Funnels are limited to a maximum of six steps.
+ > [!NOTE]
+ > Funnels are limited to a maximum of six steps.
-1. Select the **View** tab to see your funnel results
+1. Select the **View** tab to see your funnel results.
- :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot of the funnel tab on view tab showing results from the top and second step." lightbox="./media/usage-funnels/funnel-2.png":::
+ :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot that shows the Funnels View tab that shows results from the top and second steps." lightbox="./media/usage-funnels/funnel-2.png":::
-1. To save your funnel to view at another time, select **Save** at the top. You can use **Open** to open your saved funnels.
+1. To save your funnel to view at another time, select **Save** at the top. Use **Open** to open your saved funnels.
### Funnels features
-- If your app is sampled, you'll see a sampling banner. Selecting the banner opens a context pane, explaining how to turn sampling off.
-- Select a step to see more details on the right.
-- The historical conversion graph shows the conversion rates over the last 90 days.
-- Understand your users better by accessing the users tool. You can use filters in each step.
+Funnels have the following features:
+
+- If your app is sampled, you'll see a sampling banner. Selecting the banner opens a context pane that explains how to turn off sampling.
+- Select a step to see more details on the right.
+- The historical conversion graph shows the conversion rates over the last 90 days.
+- Understand your users better by accessing the users tool. You can use filters in each step.
## Next steps
+
 * [Usage overview](usage-overview.md)
- * [Users, Sessions, and Events](usage-segmentation.md)
+ * [Users, sessions, and events](usage-segmentation.md)
* [Retention](usage-retention.md) * [Workbooks](../visualize/workbooks-overview.md) * [Add user context](./usage-overview.md)
- * [Export to Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md)
+ * [Export to Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md)
azure-monitor Web App Extension Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/web-app-extension-release-notes.md
Title: Release Notes for Azure web app extension - Application Insights description: Releases notes for Azure Web Apps Extension for runtime instrumentation with Application Insights. Previously updated : 06/26/2020 Last updated : 11/15/2022
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito
ms.devlang: csharp Previously updated : 05/12/2022 Last updated : 11/15/2022
azure-monitor Autoscale Common Scale Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-scale-patterns.md
Title: Overview of common autoscale patterns description: Learn some of the common patterns to auto scale your resource in Azure.+ Previously updated : 04/22/2022 Last updated : 11/17/2022 -++ # Overview of common autoscale patterns
-This article describes some of the common patterns to scale your resource in Azure.
-Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
+Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time.
-## Lets get started
+Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](./autoscale-overview.md#supported-services-for-autoscale).
-This article assumes that you are familiar with auto scale. You can [get started here to scale your resource][1]. The following are some of the common scale patterns.
+This article describes some of the common patterns you can use to scale your resources in Azure.
+## Prerequisites
-## Scale based on CPU
+This article assumes that you're familiar with autoscale. [Get started here to scale your resource](./autoscale-get-started.md).
-You have a web app (/VMSS/cloud service role) and
+## Scale based on metrics
-- You want to scale out/scale in based on CPU.-- Additionally, you want to ensure there is a minimum number of instances.-- Also, you want to ensure that you set a maximum limit to the number of instances you can scale to.
+Scale your resource based on metrics produced by the resource itself or by any other resource.
+For example:
+* Scale your Virtual Machine Scale Set based on the CPU usage of the virtual machine.
+* Ensure a minimum number of instances.
+* Set a maximum limit on the number of instances.
-[![Scale based on CPU](./media/autoscale-common-scale-patterns/scale-based-on-cpu.png)](./media/autoscale-common-scale-patterns/scale-based-on-cpu.png#lightbox)
+The image below shows a default scale condition for a Virtual Machine Scale Set. A CLI sketch of the same condition follows the list.
+ * The **Scale rule** tab shows that the metric source is the scale set itself and the metric used is Percentage CPU.
+ * The minimum number of instances running is set to 2.
+ * The maximum number of instances is set to 10.
+ * When the scale set starts, the default number of instances is 3.
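+
+As a sketch of the same condition with the Azure CLI (the resource names here are placeholders, not values from this article):
+
+```azurecli
+# Create the autoscale setting: default 3 instances, minimum 2, maximum 10.
+az monitor autoscale create \
+  --resource-group MyResourceGroup \
+  --resource MyScaleSet \
+  --resource-type Microsoft.Compute/virtualMachineScaleSets \
+  --name MyAutoscaleSetting \
+  --min-count 2 --max-count 10 --count 3
+
+# Scale out by 1 instance when average CPU exceeds 70% over 10 minutes.
+az monitor autoscale rule create \
+  --resource-group MyResourceGroup \
+  --autoscale-name MyAutoscaleSetting \
+  --condition "Percentage CPU > 70 avg 10m" \
+  --scale out 1
+
+# Scale in by 1 instance when average CPU drops below 30% over 10 minutes.
+az monitor autoscale rule create \
+  --resource-group MyResourceGroup \
+  --autoscale-name MyAutoscaleSetting \
+  --condition "Percentage CPU < 30 avg 10m" \
+  --scale in 1
+```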
-## Scale differently on weekdays vs weekends
-You have a web app (/VMSS/cloud service role) and
+## Scale based on another resource's metric
-- You want 3 instances by default (on weekdays)-- You don't expect traffic on weekends and hence you want to scale down to 1 instance on weekends.
+Scale a resource based on the metrics from a different resource.
+The image below shows a scale rule that scales a Virtual Machine Scale Set based on the number of allocated ports on a load balancer.
-[![Scale differently on weekdays vs weekends](./media/autoscale-common-scale-patterns/scale-differently-on-weekends.png)](./media/autoscale-common-scale-patterns/scale-differently-on-weekends.png#lightbox)
-## Scale differently during holidays
+## Scale differently on weekends
-You have a web app (/VMSS/cloud service role) and
+You can scale your resources differently on different days of the week.
+For example, you have a web app and want to:
+- Set a minimum of 3 instances on weekdays.
+- Scale down to 1 instance on weekends when there's less traffic (see the CLI sketch after this list).
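+
+A sketch of that schedule with the Azure CLI, assuming an existing autoscale setting named `MyAutoscaleSetting` and an example time zone:
+
+```azurecli
+# The default profile keeps the weekday minimum of 3 instances.
+# Add a recurring weekend profile that runs a single instance.
+az monitor autoscale profile create \
+  --resource-group MyResourceGroup \
+  --autoscale-name MyAutoscaleSetting \
+  --name Weekend \
+  --recurrence week sat sun \
+  --timezone "Pacific Standard Time" \
+  --start 00:01 --end 23:59 \
+  --count 1
+```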
-- You want to scale up/down based on CPU usage by default-- However, during holiday season (or specific days that are important for your business) you want to override the defaults and have more capacity at your disposal.
-[![Scale differently on holidays](./media/autoscale-common-scale-patterns/scale-for-holiday.png)](./media/autoscale-common-scale-patterns/scale-for-holiday.png#lightbox)
+## Scale differently during specific events
-## Scale based on custom metric
+You can set your scale rules and instance limits differently for specific events.
+For example:
+- Set a minimum of 3 instances by default
+- For the week of Black Friday, set the minimum number of instances to 10 to handle the anticipated traffic.
-You have a web front end and an API tier that communicates with the backend.
-- You want to scale the API tier based on custom events in the front end (example: You want to scale your checkout process based on the number of items in the shopping cart)
+## Scale based on custom metrics
+Scale by custom metrics generated by your application.
+For example, you have a web front end and an API tier that communicates with the backend, and you want to scale the API tier based on custom events in the front end.
-![Scale based on custom metric][5]
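+
+As a minimal sketch, the front end could emit such a metric with the Application Insights SDK for .NET; `ShoppingCartItemCount` is a hypothetical metric name:
+
+```csharp
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.Extensibility;
+
+var telemetryClient = new TelemetryClient(TelemetryConfiguration.CreateDefault());
+
+// Report the current cart size. Once this custom metric flows into
+// Application Insights, an autoscale rule on the API tier can target it.
+telemetryClient.GetMetric("ShoppingCartItemCount").TrackValue(42);
+```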
-<!--Reference-->
-[1]: ./autoscale-get-started.md
-[2]: ./media/autoscale-common-scale-patterns/scale-based-on-cpu.png
-[3]: ./media/autoscale-common-scale-patterns/weekday-weekend-scale.png
-[4]: ./media/autoscale-common-scale-patterns/holidays-scale.png
-[5]: ./media/autoscale-common-scale-patterns/custom-metric-scale.png
+## Next steps
+
+Learn more about autoscale by referring to the following articles:
+
+* [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
+* [Azure Monitor autoscale custom metrics](./autoscale-custom-metric.md)
+* [Autoscale with multiple profiles](./autoscale-multiprofile.md)
+* [Flapping in Autoscale](./autoscale-flapping.md)
+* [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)
+* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
The full list of configurable fields and descriptions is available in the [Autos
For code examples, see
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-template.md)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
## Horizontal vs vertical scaling
Autoscale scales horizontally, which is an increase or decrease of the number of resource instances. For example, in a Virtual Machine Scale Set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
In contrast, vertical scaling, keeps the same number of resources constant, but
The following services are supported by autoscale:
-| Service | Schema & Documentation |
-| | |
-| Azure Virtual machines scale sets |[Overview of autoscale with Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
-| Web apps |[Scaling Web Apps](autoscale-get-started.md) |
-| Azure API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md)
-| Azure Data Explorer Clusters|[Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling)|
-| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
-| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
- Spring Cloud |[Set up autoscale for microservice applications](../../spring-apps/how-to-setup-autoscale.md)|
-| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
-| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)|
-| Logic Apps - Integration Service Environment(ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
+| Service | Schema & Documentation |
+||--|
+| Azure Virtual machines scale sets | [Overview of autoscale with Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
+| Web apps | [Scaling Web Apps](autoscale-get-started.md) |
+| Azure API Management service | [Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) |
+| Azure Data Explorer Clusters | [Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling) |
+| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
+| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
+| Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/how-to-setup-autoscale.md) |
+| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
+| Service Bus | [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md) |
+| Logic Apps - Integration Service Environment(ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
## Next steps
To learn more about autoscale, see the following resources:
* [Azure Monitor autoscale common metrics](autoscale-common-metrics.md) * [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
-* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest)
-* [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
-* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
-* [REST API reference. Autoscale Settings](https://learn.microsoft.com/rest/api/monitor/autoscale-settings).
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-template.md)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
+* [Autoscale CLI reference](/cli/azure/monitor/autoscale)
+* [ARM template resource definition](/azure/templates/microsoft.insights/autoscalesettings)
+* [PowerShell Az.Monitor Reference](/powershell/module/az.monitor/#monitor)
+* [REST API reference. Autoscale Settings](/rest/api/monitor/autoscale-settings).
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Your retention requirement might be for compliance reasons or for occasional inv
You can configure retention and archiving for all tables in a workspace or configure each table separately. The options allow you to optimize your costs by setting only the retention you require for each data type.
-### Configure Basic Logs (preview)
+### Configure Basic Logs
You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as [Basic Logs](logs/basic-logs-configure.md).
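To make the trade-off concrete, here's a hedged sketch of the kind of troubleshooting query that still works against a table on the Basic Logs plan. Basic Logs support only a reduced set of KQL operators, so the query sticks to simple filters; the `AppTraces` table and its columns are used here as an illustrative assumption.

```Kusto
// Hedged sketch: a debugging query against a table configured as Basic Logs.
// AppTraces and its columns are assumed for illustration; Basic Logs allow
// only a reduced KQL operator set, so this sticks to filters and projection.
AppTraces
| where TimeGenerated > ago(1h)
| where Message has "timeout"
| project TimeGenerated, Message, SeverityLevel
```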
azure-monitor Change Analysis Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md
In this guide, you'll learn the two ways to enable Change Analysis for Azure Fun
For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see the [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool) section.
> [!NOTE]
-> You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+> You may not immediately see web app in-guest file changes and configuration changes. Prepare for downtime and restart your web app to view changes within 30 minutes. If you still can't see changes, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
1. Navigate to Azure Monitor's Change Analysis UI in the portal.
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
This error message is likely a temporary internet connectivity issue, since:
* The UI sent the resource provider registration request.
* You've resolved your [permissions issue](#you-dont-have-enough-permissions-to-register-microsoftchangeanalysis-resource-provider).
-Try refreshing the page and checking your internet connection. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+Try refreshing the page and checking your internet connection. If the error persists, [submit an Azure support ticket](https://azure.microsoft.com/support/).
### This is taking longer than expected.
-You'll receive this error message when the registration takes longer than 2 minutes. While unusual, it doesn't mean something went wrong. Restart your web app to see your registration changes. Changes should show up within a few hours of app restart.
+You'll receive this error message when the registration takes longer than 2 minutes. While unusual, it doesn't mean something went wrong.
-If your changes still don't show after 6 hours, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+1. Prepare for downtime.
+1. Restart your web app to see your registration changes.
+
+Changes should show up within a few hours of app restart. If your changes still don't show after 6 hours, [submit an Azure support ticket](https://azure.microsoft.com/support/).
## Azure Lighthouse subscription is not supported.
Often, this message includes: `Azure Lighthouse subscription is not supported, t
Azure Lighthouse allows for cross-tenant resource administration. However, cross-tenant support needs to be built for each resource provider. Currently, Change Analysis has not built this support. If you're signed into one tenant, you can't query for resource or subscription changes whose home is in another tenant.
-If this is a blocking issue for you, we'd like to hear your feedback! [Contact the Change Analysis help team](mailto:changeanalysishelp@microsoft.com) to describe how you're trying to use Change Analysis.
+If this is a blocking issue for you, [submit an Azure support ticket](https://azure.microsoft.com/support/) to describe how you're trying to use Change Analysis.
## An error occurred while getting changes. Please refresh this page or come back later to view changes.
When changes can't be loaded, Azure Monitor's Change Analysis service presents t
- Internet connectivity error from the client device.
- Change Analysis service being temporarily unavailable.
-Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+Refreshing the page after a few minutes usually fixes this issue. If the error persists, [submit an Azure support ticket](https://azure.microsoft.com/support/).
## Only partial data loaded.
This error message may occur in the Azure portal when loading change data via the Change Analysis home page. Typically, the Change Analysis service calculates and returns all change data. However, in a network failure or a temporary outage of service, you may receive an error message indicating only partial data was loaded.
-To load all change data, try waiting a few minutes and refreshing the page. If you are still only receiving partial data, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+To load all change data, try waiting a few minutes and refreshing the page. If you are still only receiving partial data, [submit an Azure support ticket](https://azure.microsoft.com/support/).
## You don't have enough permissions to view some changes. Contact your Azure subscription administrator.
This general unauthorized error message occurs when the current user doesn't hav
## Cannot see in-guest changes for newly enabled Web App.
-You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+You may not immediately see web app in-guest file changes and configuration changes.
+
+1. Prepare for brief downtime.
+1. Restart your web app.
+
+You should be able to view changes within 30 minutes. If not, [submit an Azure support ticket](https://azure.microsoft.com/support/).
## Diagnose and solve problems tool for virtual machines
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
Click into a change to view the full Resource Manager snippet and other properties.
:::image type="content" source="./media/change-analysis/change-details.png" alt-text="Screenshot of change details":::
-Send any feedback to the [Change Analysis team](mailto:changeanalysisteam@microsoft.com) from the Change Analysis blade:
+Send feedback from the Change Analysis blade:
:::image type="content" source="./media/change-analysis/change-analysis-feedback.png" alt-text="Screenshot of feedback button in Change Analysis tab":::
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
If you are utilizing [Prometheus metric scraping](container-insights-prometheus.
### Configure Basic Logs
-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs (preview)](../best-practices-cost.md#configure-basic-logs-preview). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
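For example, here's a minimal sketch of a ContainerLogV2 query you might run for troubleshooting; the column names follow the documented ContainerLogV2 schema, and the container name is hypothetical.

```Kusto
// Minimal sketch against the ContainerLogV2 schema; "my-app" is a
// hypothetical container name.
ContainerLogV2
| where TimeGenerated > ago(1h)
| where ContainerName has "my-app"
| project TimeGenerated, PodName, ContainerName, LogMessage
```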
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
Use a transformation to add information to data that provides business context o
## Supported tables
Transformations may be applied to the following tables in a Log Analytics workspace.
-- Any Azure table listed in [Tables that support time transformations in Azure Monitor Logs (preview)](../logs/tables-feature-support.md)
+- Any Azure table listed in [Tables that support transformations in Azure Monitor Logs (preview)](../logs/tables-feature-support.md)
- Any custom table
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The following table provides unique requirements for each destination including
| Destination | Requirements | |:|:| | Log Analytics workspace | The workspace doesn't need to be in the same region as the resource being monitored.|
-| Storage account | Don't use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](/azure/storage/common/storage-account-overview#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](/azure/storage/common/storage-redundancy#locally-redundant-storage) (locally redundant storage) storage accounts are not supported as a log or metric destination.|
+| Storage account | Don't use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](../../storage/common/storage-account-overview.md#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](../../storage/common/storage-redundancy.md#locally-redundant-storage) (locally redundant storage) storage accounts are not supported as a log or metric destination.|
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
Every effort is made to ensure all log data is sent correctly to your destinatio
## Next step
-[Read more about Azure platform logs](./platform-logs-overview.md)
+[Read more about Azure platform logs](./platform-logs-overview.md)
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Last updated 07/27/2022
The Diagnostic Settings Storage Retention feature is being deprecated. To configure retention for logs and metrics, use Azure Storage Lifecycle Management.
-This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal) for retention.
+This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal) for retention.
> [!IMPORTANT]
> **Deprecation Timeline.**
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
The only requirement to enable Azure Monitor managed service for Prometheus is t
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
## Rules and alerts
-Azure Monitor managed service for Prometheus supports recording rules and alert rules using PromQL queries. Metrics recorded by recording rules are stored back in the Azure Monitor workspace and can be queried by dashboard or by other rules. Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](/azure/azure-monitor/alerts/action-groups) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types. For your AKS cluster, a set of [predefined Prometheus alert rules](/azure/azure-monitor/containers/container-insights-metric-alerts) and [recording rules ](/azure/azure-monitor/essentials/prometheus-metrics-scrape-default#recording-rules)is provided to allow easy quick start.
+Azure Monitor managed service for Prometheus supports recording rules and alert rules using PromQL queries. Metrics recorded by recording rules are stored back in the Azure Monitor workspace and can be queried by dashboard or by other rules. Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](../alerts/action-groups.md) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types. For your AKS cluster, a set of [predefined Prometheus alert rules](../containers/container-insights-metric-alerts.md) and [recording rules](./prometheus-metrics-scrape-default.md#recording-rules) is provided to help you get started quickly.
## Limitations
See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance-related service limits for Azure Monitor workspaces.
Following are links to Prometheus documentation.
- [Enable Azure Monitor managed service for Prometheus](prometheus-metrics-enable.md).
- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).
-- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
-
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Log Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md
Title: Log Analytics integration with Power BI and Excel
-description: How to send results from Log Analytics to Power BI
+description: Learn how to send results from Log Analytics to Power BI.
Last updated 06/22/2022
# Log Analytics integration with Power BI
-This article focuses on ways to feed data from Log Analytics into Microsoft Power BI to create more visually appealing reports and dashboards.
+This article focuses on ways to feed data from Log Analytics into Power BI to create more visually appealing reports and dashboards.
-## Background
+## Background
-Azure Monitor Logs is a platform that provides an end-to-end solution for ingesting logs. [Azure Monitor Log Analytics](../data-platform.md) is the interface to query these logs. For more information on the entire Azure Monitor data platform including Log Analytics, see [Azure Monitor data platform](../data-platform.md).
-
-Microsoft Power BI is Microsoft's data visualization platform. For more information on how to get started, see [Power BI's homepage](https://powerbi.microsoft.com/).
+Azure Monitor Logs is a platform that provides an end-to-end solution for ingesting logs. [Azure Monitor Log Analytics](../data-platform.md) is the interface to query these logs. For more information on the entire Azure Monitor data platform including Log Analytics, see [Azure Monitor data platform](../data-platform.md).
+Power BI is the Microsoft data visualization platform. For more information on how to get started, see the [Power BI home page](https://powerbi.microsoft.com/).
In general, you can use free Power BI features to integrate and create visually appealing reports and dashboards.
-More advanced features may require purchasing a Power BI Pro or premium account. These features include:
+More advanced features might require purchasing a Power BI Pro or Premium account. These features include:
-For more information, see [learn more about Power BI pricing and features](https://powerbi.microsoft.com/pricing/)
+ - Sharing your work.
+ - Scheduled refreshes.
+ - Power BI apps.
+ - Dataflows and incremental refresh.
-## Integrating queries
+For more information, see [Learn more about Power BI pricing and features](https://powerbi.microsoft.com/pricing/).
-Power BI uses the [M query language](/powerquery-m/power-query-m-language-specification/) as its main querying language.
+## Integrate queries
-Log Analytics queries can be exported to M and used in Power BI directly. After running a successful query, select the **Export to Power BI (M query)** from the **Export** button in Log Analytics UI top action bar.
+Power BI uses the [M query language](/powerquery-m/power-query-m-language-specification/) as its main querying language.
+Log Analytics queries can be exported to M and used in Power BI directly. After you run a successful query, select **Export to Power BI (M query)** from the **Export** dropdown list in the Log Analytics top toolbar.
Log Analytics creates a .txt file containing the M code that can be used directly in Power BI.
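As a hedged illustration, any query that runs successfully in Log Analytics can be exported this way. For example, this sketch lists computers whose heartbeat has gone quiet; `Heartbeat` is the standard agent heartbeat table, and the one-hour threshold is illustrative.

```Kusto
// Example of a query you might run, then export to Power BI as M code:
// computers that haven't sent a heartbeat in the last hour.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(1h)
```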
-## Connecting your logs to a dataset
+## Connect your logs to a dataset
-A Power BI dataset is a source of data ready for reporting and visualization. To connect a Log Analytics query to a dataset, copy the M code exported from Log Analytics into a blank query in Power BI.
+A Power BI dataset is a source of data ready for reporting and visualization. To connect a Log Analytics query to a dataset, copy the M code exported from Log Analytics into a blank query in Power BI.
-For more information, see [Understanding Power BI datasets](/power-bi/service-datasets-understand/).
+For more information, see [Understanding Power BI datasets](/power-bi/service-datasets-understand/).
-## Collect data with Power BI dataflows
+## Collect data with Power BI dataflows
-Power BI dataflows also allow you to collect and store data. For more information, see [Power BI Dataflows](/power-bi/service-dataflows-overview).
+Power BI dataflows also allow you to collect and store data. For more information, see [Power BI dataflows](/power-bi/service-dataflows-overview).
A dataflow is a type of "cloud ETL" designed to help you collect and prep your data. A dataset is the "model" designed to help you connect different entities and model them for your needs.
-## Incremental refresh
-
-Both Power BI datasets and Power BI dataflows have an incremental refresh option. Power BI dataflows and Power BI datasets support this feature. To use incremental refresh on dataflows, you need Power BI Premium.
+## Incremental refresh
+Both Power BI datasets and Power BI dataflows have an incremental refresh option. To use incremental refresh on dataflows, you need Power BI Premium.
-Incremental refresh runs small queries and updates smaller amounts of data per run instead of ingesting all of the data again and again when you run the query. You have the option to save large amounts of data, but add a new increment of data every time the query is run. This behavior is ideal for longer running reports.
+Incremental refresh runs small queries and updates smaller amounts of data per run instead of ingesting all the data again and again when you run the query. You can save large amounts of data but add a new increment of data every time the query is run. This behavior is ideal for longer-running reports.
-Power BI incremental refresh relies on the existence of a *datetime* filed in the result set. Before configuring incremental refresh, make sure your Log Analytics query result set includes at least one *datetime* filed.
+Power BI incremental refresh relies on the existence of a **datetime** field in the result set. Before you configure incremental refresh, make sure your Log Analytics query result set includes at least one **datetime** field.
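For instance, here's a minimal sketch of a result set that keeps such a field; **TimeGenerated** is projected explicitly so Power BI has a datetime column to partition on, and the other columns are illustrative.

```Kusto
// Keep a datetime column (TimeGenerated) in the projection so Power BI
// can partition the data for incremental refresh.
AzureActivity
| where TimeGenerated > ago(7d)
| project TimeGenerated, OperationNameValue, ResourceGroup
```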
-To learn more and how to configure incremental refresh, see [Power BI Datasets and Incremental refresh](/power-bi/service-premium-incremental-refresh) and [Power BI dataflows and incremental refresh](/power-bi/service-dataflows-incremental-refresh).
+To learn more and how to configure incremental refresh, see [Power BI datasets and incremental refresh](/power-bi/service-premium-incremental-refresh) and [Power BI dataflows and incremental refresh](/power-bi/service-dataflows-incremental-refresh).
## Reports and dashboards
After your data is sent to Power BI, you can continue to use Power BI to create reports and dashboards.
-For more information, see [this guide on how to create your first Power BI model and report](/training/modules/build-your-first-power-bi-report/).
+For more information, see [Create and share your first Power BI report](/training/modules/build-your-first-power-bi-report/).
## Excel integration
-You can use the same M integration used in Power BI to integrate with an Excel spreadsheet. For more information, see this [guide on how to integrate with excel](https://support.microsoft.com/office/import-data-from-external-data-sources-power-query-be4330b3-5356-486c-a168-b68e9e616f5a) and then paste the M query exported from Log Analytics.
+You can use the same M integration used in Power BI to integrate with an Excel spreadsheet. For more information, see [Import data from data sources (Power Query)](https://support.microsoft.com/office/import-data-from-external-data-sources-power-query-be4330b3-5356-486c-a168-b68e9e616f5a). Then paste the M query exported from Log Analytics.
-Additional information can be found in [Integrate Log Analytics and Excel](log-excel.md)
+For more information, see [Integrate Log Analytics and Excel](log-excel.md).
## Next steps
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
Title: Export data from Log Analytics workspace to Azure Storage Account using Logic App
-description: Describes a method to use Azure Logic Apps to query data from a Log Analytics workspace and send to Azure Storage.
-
+ Title: Export data from a Log Analytics workspace to a storage account by using Logic Apps
+description: This article describes a method to use Azure Logic Apps to query data from a Log Analytics workspace and send it to Azure Storage.
+
Last updated 03/01/2022
-# Export data from Log Analytics workspace to Azure Storage Account using Logic App
-This article describes a method to use [Azure Logic App](../../logic-apps/index.yml) to query data from a Log Analytics workspace in Azure Monitor and send to Azure Storage. Use this process when you need to export your Azure Monitor Log data for auditing and compliance scenarios or to allow another service to retrieve this data.
+# Export data from a Log Analytics workspace to a storage account by using Logic Apps
+This article describes a method to use [Azure Logic Apps](../../logic-apps/index.yml) to query data from a Log Analytics workspace in Azure Monitor and send it to Azure Storage. Use this process when you need to export your Azure Monitor Logs data for auditing and compliance scenarios or to allow another service to retrieve this data.
## Other export methods
-The method described in this article describes a scheduled export from a log query using a Logic App. Other options to export data for particular scenarios include the following:
+The method discussed in this article describes a scheduled export from a log query by using a logic app. Other options to export data for particular scenarios include:
-- To export data from your Log Analytics workspace to an Azure Storage Account or Event Hubs, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor](logs-data-export.md)-- One time export using a Logic App. See [Azure Monitor Logs connector for Logic Apps and Power Automate](logicapp-flow-connector.md).-- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
+- To export data from your Log Analytics workspace to a storage account or Azure Event Hubs, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor](logs-data-export.md).
+- One-time export by using a logic app. See [Azure Monitor Logs connector for Logic Apps and Power Automate](logicapp-flow-connector.md).
+- One-time export to a local machine by using a PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
## Overview
-This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs) which lets you run a log query from a Logic App and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob) is used in this procedure to send the query output to Azure storage.
+This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs), which lets you run a log query from a logic app and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob) is used in this procedure to send the query output to storage.
-[![Logic app overview](media/logs-export-logic-app/logic-app-overview.png "Screenshot of Logic app flow.")](media/logs-export-logic-app/logic-app-overview.png#lightbox)
+[![Screenshot that shows a Logic Apps overview.](media/logs-export-logic-app/logic-app-overview.png "Screenshot that shows a Logic Apps flow.")](media/logs-export-logic-app/logic-app-overview.png#lightbox)
-When you export data from a Log Analytics workspace, you should limit the amount of data processed by your Logic App workflow, by filtering and aggregating your log data in query, to reduce to the required data. For example, if you need to export sign-in events, you should filter for required events and project only the required fields. For example:
+When you export data from a Log Analytics workspace, limit the amount of data processed by your Logic Apps workflow. Filter and aggregate your log data in the query to reduce the required data. For example, if you need to export sign-in events, filter for required events and project only the required fields. For example:
```Kusto
SecurityEvent
| where EventID == 4624   // for example, keep only successful sign-in events
| project TimeGenerated , Account , AccountType , Computer
```
-When you export the data on a schedule, use the ingestion_time() function in your query to ensure that you don't miss late arriving data. If data is delayed due to network or platform issues, using the ingestion time ensures that data is included in the next Logic App execution. See *Add Azure Monitor Logs action* under [Logic App procedure](#logic-app-procedure) for an example.
+When you export the data on a schedule, use the `ingestion_time()` function in your query to ensure that you don't miss late-arriving data. If data is delayed because of network or platform issues, using the ingestion time ensures that data is included in the next Logic Apps execution. For an example, see the step "Add Azure Monitor Logs action" in the [Logic Apps procedure](#logic-apps-procedure) section.
## Prerequisites
-Following are prerequisites that must be completed before this procedure.
-
-- Log Analytics workspace--The user who creates the Logic App must have at least read permission to the workspace. 
-- Azure Storage Account--The Storage Account doesn't have to be in the same subscription as your Log Analytics workspace. The user who creates the Logic App must have write permission to the Storage Account.
+The following prerequisites must be completed before you start this procedure:
+- **Log Analytics workspace**: The user who creates the logic app must have at least read permission to the workspace.
+- **Storage account**: The storage account doesn't have to be in the same subscription as your Log Analytics workspace. The user who creates the logic app must have write permission to the storage account.
## Connector limits
-Log Analytics workspace and log queries in Azure Monitor are multitenancy services that include limits, to protect and isolate customers, and maintain quality of service. When querying for a large amount of data, you should consider the following limits, which can affect how you configure the Logic App recurrence and your log query:
-- Log queries cannot return more than 500,000 rows.
-- Log queries cannot return more than 64,000,000 bytes.
-- Log queries cannot run longer than 10 minutes by default. 
-- Log Analytics connector is limited to 100 call per minute.
-
-## Logic App procedure
-
-1. **Create container in the Storage Account**
-
- Use the procedure in [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to add a container to your Storage Account to hold the exported data. The name used for the container in this article is **loganalytics-data**, but you can use any name.
-
-1. **Create Logic App**
-
- 1. Go to **Logic Apps** in the Azure portal and click **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new Logic App and then give it a unique name. You can turn on **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
-\
- [![Create Logic App](media/logs-export-logic-app/create-logic-app.png "Screenshot of Logic App resource create.")](media/logs-export-logic-app/create-logic-app.png#lightbox)
-
- 2. Click **Review + create** and then **Create**. When the deployment is complete, click **Go to resource** to open the **Logic Apps Designer**.
-
-2. **Create a trigger for the Logic App**
-
- 1. Under **Start with a common trigger**, select **Recurrence**. This creates a Logic App that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.
- \
- [![Recurrence action](media/logs-export-logic-app/recurrence-action.png "Screenshot of recurrence action create.")](media/logs-export-logic-app/recurrence-action.png#lightbox)
-
-3. **Add Azure Monitor Logs action**
-
- The Azure Monitor Logs action lets you specify the query to run. The log query used in this example is optimized for hourly recurrence and collects the data ingested for the particular execution time. For example, if the workflow runs at 4:35, the time range would be 3:00 to 4:00. If you change the Logic App to run at a different frequency, you need the change the query as well. For example, if you set the recurrence to run daily, you would set startTime in the query to startofday(make_datetime(year,month,day,0,0)).
-
- You will be prompted to select a tenant to grant access to the Log Analytics workspace with the account that the workflow will use to run the query.
-
- 1. Click **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.
- \
- [![Azure Monitor Logs action](media/logs-export-logic-app/select-azure-monitor-connector.png "Screenshot of Azure Monitor Logs action create.")](media/logs-export-logic-app/select-azure-monitor-connector.png#lightbox)
-
- 1. Click **Azure Log Analytics – Run query and list results**.
- \
- [![Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png "Screenshot of a new action being added to a step in the Logic App Designer.")](media/logs-export-logic-app/select-query-action-list.png#lightbox)
-
- 2. Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
-
- 3. Add the following log query to the **Query** window.
-
- ```Kusto
- let dt = now();
- let year = datetime_part('year', dt);
- let month = datetime_part('month', dt);
- let day = datetime_part('day', dt);
- let hour = datetime_part('hour', dt);
- let startTime = make_datetime(year,month,day,hour,0)-1h;
- let endTime = startTime + 1h - 1tick;
- AzureActivity
- | where ingestion_time() between(startTime .. endTime)
- | project
- TimeGenerated,
- BlobTime = startTime,
- OperationName ,
- OperationNameValue ,
- Level ,
- ActivityStatus ,
- ResourceGroup ,
- SubscriptionId ,
- Category ,
- EventSubmissionTimestamp ,
- ClientIpAddress = parse_json(HTTPRequest).clientIpAddress ,
- ResourceId = _ResourceId
- ```
-
- 4. The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. This should be set to a value greater than the time range selected in the query. Since this query isn't using the **TimeGenerated** column, then **Set in query** option isn't available. See [Query scope](./scope.md) for more details about the time range. Select **Last 4 hours** for the **Time Range**. This will ensure that any records with an ingestion time larger than **TimeGenerated** will be included in the results.
- \
- [![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png "of the settings for the new Azure Monitor Logs action named Run query and visualize results.")](media/logs-export-logic-app/run-query-list-action.png#lightbox)
-
-4. **Add Parse JSON activity (optional)**
-
- The output from the **Run query and list results** action is formatted in JSON. You can parse this data and manipulate it as part of the preparation for **Compose** action.
-
- You can provide a JSON schema that describes the payload you expect to receive. The designer parses JSON content by using this schema and generates user-friendly tokens that represent the properties in your JSON content. You can then easily reference and use those properties throughout your Logic App's workflow.
-
- You can use a sample output from **Run query and list results** step. Click **Run Trigger** in Logic App ribbon, then **Run**, download and save an output record. For the sample query in previous stem, you can use the following sample output:
+Log Analytics workspaces and log queries in Azure Monitor are multitenant services that include limits to protect and isolate customers and maintain quality of service. When you query for a large amount of data, consider the following limits, which can affect how you configure the Logic Apps recurrence and your log query (a short query sketch follows this list):
+
+- Log queries can't return more than 500,000 rows.
+- Log queries can't return more than 64,000,000 bytes.
+- Log queries can't run longer than 10 minutes by default.
+- The Log Analytics connector is limited to 100 calls per minute.
+
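One hedged way to stay inside these limits is to aggregate in the query rather than exporting raw rows, as in this sketch; the grouping columns are illustrative.

```Kusto
// Aggregating before export keeps each run well under the row and
// payload limits listed above; grouping columns are illustrative.
AzureActivity
| where ingestion_time() > ago(1h)
| summarize Events = count() by OperationNameValue, ResourceGroup, bin(TimeGenerated, 5m)
```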
+## Logic Apps procedure
+
+The following sections walk you through the procedure.
+
+### Create a container in the storage account
+
+Use the procedure in [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to add a container to your storage account to hold the exported data. The name used for the container in this article is **loganalytics-data**, but you can use any name.
+
+### Create a logic app
+
+1. Go to **Logic Apps** in the Azure portal and select **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new Logic App. Then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor Logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
+
+ [![Screenshot that shows creating a logic app.](media/logs-export-logic-app/create-logic-app.png "Screenshot that shows creating a Logic Apps resource.")](media/logs-export-logic-app/create-logic-app.png#lightbox)
+
+1. Select **Review + create** and then select **Create**. After the deployment is finished, select **Go to resource** to open the **Logic Apps Designer**.
+
+### Create a trigger for the logic app
+
+Under **Start with a common trigger**, select **Recurrence**. This setting creates a logic app that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day**. In the **Interval** box, enter **1** to run the workflow once per day.
+
+[![Screenshot that shows a Recurrence action.](media/logs-export-logic-app/recurrence-action.png "Screenshot that shows creating a recurrence action.")](media/logs-export-logic-app/recurrence-action.png#lightbox)
+
+### Add an Azure Monitor Logs action
+
+The Azure Monitor Logs action lets you specify the query to run. The log query used in this example is optimized for hourly recurrence. It collects the data ingested for the particular execution time. For example, if the workflow runs at 4:35, the time range would be 3:00 to 4:00. If you change the logic app to run at a different frequency, you need to change the query too. For example, if you set the recurrence to run daily, you set `startTime` in the query to `startofday(make_datetime(year,month,day,0,0))`.
+
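For instance, here's a hedged sketch of the daily variant of the window calculation; subtracting one day so the window covers the previous complete day is an assumption that mirrors the hourly pattern in the sample query shown later in this step.

```Kusto
// Daily variant of the time window; the -1d offset (previous complete
// day) is an assumption mirroring the hourly query's -1h offset.
let dt = now();
let year = datetime_part('year', dt);
let month = datetime_part('month', dt);
let day = datetime_part('day', dt);
let startTime = startofday(make_datetime(year, month, day, 0, 0)) - 1d;
let endTime = startTime + 1d - 1tick;
AzureActivity
| where ingestion_time() between (startTime .. endTime)
```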
+You're prompted to select a tenant to grant access to the Log Analytics workspace with the account that the workflow will use to run the query.
+
+1. Select **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, enter **azure monitor**. Then select **Azure Monitor Logs**.
+
+ [![Screenshot that shows an Azure Monitor Logs action.](media/logs-export-logic-app/select-azure-monitor-connector.png "Screenshot that shows creating a Azure Monitor Logs action.")](media/logs-export-logic-app/select-azure-monitor-connector.png#lightbox)
+
+1. Select **Azure Log Analytics – Run query and list results**.
+
+ [![Screenshot that shows Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png "Screenshot that shows a new action being added to a step in the Logic Apps Designer.")](media/logs-export-logic-app/select-query-action-list.png#lightbox)
+
+1. Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select **Log Analytics Workspace** for the **Resource Type**. Then select the workspace name under **Resource Name**.
+
+1. Add the following log query to the **Query** window:
+
+ ```Kusto
+ let dt = now();
+ let year = datetime_part('year', dt);
+ let month = datetime_part('month', dt);
+ let day = datetime_part('day', dt);
+ let hour = datetime_part('hour', dt);
+ let startTime = make_datetime(year,month,day,hour,0)-1h;
+ let endTime = startTime + 1h - 1tick;
+ AzureActivity
+ | where ingestion_time() between(startTime .. endTime)
+ | project
+ TimeGenerated,
+ BlobTime = startTime,
+ OperationName ,
+ OperationNameValue ,
+ Level ,
+ ActivityStatus ,
+ ResourceGroup ,
+ SubscriptionId ,
+ Category ,
+ EventSubmissionTimestamp ,
+ ClientIpAddress = parse_json(HTTPRequest).clientIpAddress ,
+ ResourceId = _ResourceId
+ ```
+
+1. The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. The value should be greater than the time range selected in the query. Because this query isn't using the **TimeGenerated** column, the **Set in query** option isn't available. For more information about the time range, see [Query scope](./scope.md). Select **Last 4 hours** for the **Time Range**. This setting ensures that any records with an ingestion time larger than **TimeGenerated** will be included in the results.
+
+ [![Screenshot that shows the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png "Screenshot that shows the settings for the Azure Monitor Logs action named Run query.")](media/logs-export-logic-app/run-query-list-action.png#lightbox)
+
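To see how much headroom the wider **Time Range** buys, here's a hedged diagnostic sketch that measures how far ingestion lags **TimeGenerated**; rows with a large lag are exactly the ones a too-narrow window would drop.

```Kusto
// Measure ingestion lag; rows with a large lag would be missed if the
// portal Time Range only just covered the query's one-hour window.
AzureActivity
| where ingestion_time() > ago(1h)
| extend IngestionLagMinutes = datetime_diff('minute', ingestion_time(), TimeGenerated)
| summarize MaxLagMinutes = max(IngestionLagMinutes)
```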
+### Add a Parse JSON action (optional)
+
+The output from the **Run query and list results** action is formatted in JSON. You can parse this data and manipulate it as part of the preparation for the **Compose** action.
+
+You can provide a JSON schema that describes the payload you expect to receive. The designer parses JSON content by using this schema and generates user-friendly tokens that represent the properties in your JSON content. You can then easily reference and use those properties throughout your Logic App's workflow.
+
+You can use a sample output from the **Run query and list results** step.
+
+1. Select **Run Trigger** in the Logic Apps ribbon. Then select **Run** and download and save an output record. For the sample query in the previous step, you can use the following sample output:
```json
{
}
```
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **json** and then select **Parse JSON**.
- \
- [![Select Parse JSON operator](media/logs-export-logic-app/select-parse-json.png "Screenshot of Parse JSON operator.")](media/logs-export-logic-app/select-parse-json.png#lightbox)
+1. Select **+ New step** and then select **+ Add an action**. Under **Choose an operation**, enter **json** and then select **Parse JSON**.
+
+ [![Screenshot that shows selecting a Parse JSON operator.](media/logs-export-logic-app/select-parse-json.png "Screenshot that shows the Parse JSON operator.")](media/logs-export-logic-app/select-parse-json.png#lightbox)
+
+1. Select the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This output is from the log query.
+
+ [![Screenshot that shows selecting a Body.](media/logs-export-logic-app/select-body.png "Screenshot that shows a Parse JSON Content setting with the output Body from the previous step.")](media/logs-export-logic-app/select-body.png#lightbox)
+
+1. Copy the sample record saved earlier. Select **Use sample payload to generate schema** and paste.
+
+ [![Screenshot that shows parsing a JSON payload.](media/logs-export-logic-app/parse-json-payload.png "Screenshot that shows a Parse JSON schema.")](media/logs-export-logic-app/parse-json-payload.png#lightbox)
+
+### Add the Compose action
+
+The **Compose** action takes the parsed JSON output and creates the object that you need to store in the blob.
+
+1. Select **+ New step**, and then select **+ Add an action**. Under **Choose an operation**, enter **compose**. Then select the **Compose** action.
+
+ [![Screenshot that shows selecting a Compose action.](media/logs-export-logic-app/select-compose.png "Screenshot that shows a Compose action.")](media/logs-export-logic-app/select-compose.png#lightbox)
+
+1. Select the **Inputs** box to display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This parsed output is from the log query.
+
+ [![Screenshot that shows selecting a body for a Compose action.](media/logs-export-logic-app/select-body-compose.png "Screenshot that shows a body for Compose action.")](media/logs-export-logic-app/select-body-compose.png#lightbox)
+
+### Add the Create blob action
- 1. Click in the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This is the output from the log query.
- \
- [![Select Body](media/logs-export-logic-app/select-body.png "Screenshot of Par JSON Content setting with output Body from previous step.")](media/logs-export-logic-app/select-body.png#lightbox)
+The **Create blob** action writes the composed JSON to storage.
- 1. Copy the sample record saved earlier, click **Use sample payload to generate schema** and paste.
-\
- [![Parse JSON payload](media/logs-export-logic-app/parse-json-payload.png "Screenshot of Parse JSON schema.")](media/logs-export-logic-app/parse-json-payload.png#lightbox)
+1. Select **+ New step**, and then select **+ Add an action**. Under **Choose an operation**, enter **blob**. Then select the **Create blob** action.
-5. **Add the Compose action**
-
- The **Compose** action takes the parsed JSON output and creates the object that you need to store in the blob.
+ [![Screenshot that shows selecting the Create Blob action.](media/logs-export-logic-app/select-create-blob.png "Screenshot that shows creating a Blob storage action.")](media/logs-export-logic-app/select-create-blob.png#lightbox)
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **compose** and then select the **Compose** action.
- \
- [![Select Compose action](media/logs-export-logic-app/select-compose.png "Screenshot of Compose action.")](media/logs-export-logic-app/select-compose.png#lightbox)
+1. Enter a name for the connection to your storage account in **Connection Name**. Then select the folder icon in the **Folder path** box to select the container in your storage account. Select **Blob name** to see a list of values from previous activities. Select **Expression** and enter an expression that matches your time interval. For this query, which is run hourly, the following expression sets the blob name per previous hour:
- 1. Click the **Inputs** box display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This is the parsed output from the log query.
- \
- [![Select body for Compose action](media/logs-export-logic-app/select-body-compose.png "Screenshot of body for Compose action.")](media/logs-export-logic-app/select-body-compose.png#lightbox)
+ ```json
+ subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour')
+ ```
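As a worked example: if the run fires at 4:35 UTC, `utcNow()` formatted with `'yyyy-MM-ddTHH:00:00'` gives that day's `04:00:00`, and `subtractFromTime(..., 1, 'Hour')` yields `03:00:00`, so the blob is named for the 3:00 to 4:00 window the query exported.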
-6. **Add the Create Blob action**
-
- The Create Blob action writes the composed JSON to storage.
+ [![Screenshot that shows a blob expression.](media/logs-export-logic-app/blob-expression.png "Screenshot that shows a Blob action connection.")](media/logs-export-logic-app/blob-expression.png#lightbox)
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **blob** and then select the **Create Blob** action.
- \
- [![Select Create blob](media/logs-export-logic-app/select-create-blob.png "Screenshot of blob storage action create.")](media/logs-export-logic-app/select-create-blob.png#lightbox)
+1. Select the **Blob content** box to display a list of values from previous activities. Then select **Outputs** in the **Compose** section.
- 1. Type a name for the connection to your Storage Account in **Connection Name** and then click the folder icon in the **Folder path** box to select the container in your Storage Account. Click the **Blob name** to see a list of values from previous activities. Click **Expression** and enter an expression that matches your time interval. For this query which is run hourly, the following expression sets the blob name per previous hour:
+ [![Screenshot that shows creating a blob expression.](media/logs-export-logic-app/create-blob.png "Screenshot that shows a Blob action output configuration.")](media/logs-export-logic-app/create-blob.png#lightbox)
- ```json
- subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour')
- ```
- \
- [![Blob expression](media/logs-export-logic-app/blob-expression.png "Screenshot of blob action connection.")](media/logs-export-logic-app/blob-expression.png#lightbox)
+### Test the logic app
- 2. Click the **Blob content** box to display a list of values from previous activities and then select **Outputs** in the **Compose** section.
- \
- [![Create blob expression](media/logs-export-logic-app/create-blob.png "Screenshot of blob action output configuration.")](media/logs-export-logic-app/create-blob.png#lightbox)
+To test the workflow, select **Run**. If the workflow has errors, they're indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md), if necessary.
+[![Screenshot that shows Runs history.](media/logs-export-logic-app/runs-history.png "Screenshot that shows trigger run history.")](media/logs-export-logic-app/runs-history.png#lightbox)
-7. **Test the Logic App**
-
- Test the workflow by clicking **Run**. If the workflow has errors, it will be indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md) if necessary.
- \
- [![Runs history](media/logs-export-logic-app/runs-history.png "Screenshot of trigger run history.")](media/logs-export-logic-app/runs-history.png#lightbox)
+### View logs in storage
+Go to the **Storage accounts** menu in the Azure portal and select your storage account. Select the **Blobs** tile. Then select the container you specified in the **Create blob** action. Select one of the blobs and then select **Edit blob**.
-8. **View logs in Storage**
-
- Go to the **Storage accounts** menu in the Azure portal and select your Storage Account. Click the **Blobs** tile and select the container you specified in the Create blob action. Select one of the blobs and then **Edit blob**.
- \
- [![Blob data](media/logs-export-logic-app/blob-data.png "Screenshot of sample data exported to blob.")](media/logs-export-logic-app/blob-data.png#lightbox)
+[![Screenshot that shows blob data.](media/logs-export-logic-app/blob-data.png "Screenshot that shows sample data exported to a blob.")](media/logs-export-logic-app/blob-data.png#lightbox)
## Next steps
- Learn more about [log queries in Azure Monitor](./log-query-overview.md).
-- Learn more about [Logic Apps](../../logic-apps/index.yml)
+- Learn more about [Logic Apps](../../logic-apps/index.yml).
- Learn more about [Power Automate](https://flow.microsoft.com).
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
If your Private Link setup was created before April 19, 2021, it won't reach the
### Collecting custom logs and IIS logs over Private Link
Storage accounts are used in the ingestion process of custom logs. By default, service-managed storage accounts are used. However, to ingest custom logs on private links, you must use your own storage accounts and associate them with Log Analytics workspace(s).
-For more information on connecting your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) and specifically [Using Private Links](private-storage.md#using-private-links) and [Link storage accounts to your Log Analytics workspace](private-storage.md#link-storage-accounts-to-your-log-analytics-workspace).
+For more information on connecting your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) and specifically [Use Private Links](private-storage.md#use-private-links) and [Link storage accounts to your Log Analytics workspace](private-storage.md#link-storage-accounts-to-your-log-analytics-workspace).
### Automation
If you use Log Analytics solutions that require an Automation account (such as Update Management, Change Tracking, or Inventory), you should also create a Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
Title: Using customer-managed storage accounts in Azure Monitor Log Analytics
-description: Use your own storage account for Log Analytics scenarios
+ Title: Use customer-managed storage accounts in Azure Monitor Log Analytics
+description: Use your own Azure Storage account for Azure Monitor Log Analytics scenarios.
Last updated 04/04/2022
-# Using customer-managed storage accounts in Azure Monitor Log Analytics
+# Use customer-managed storage accounts in Azure Monitor Log Analytics
-Log Analytics relies on Azure Storage in various scenarios. This use is typically managed automatically. However, some cases require you to provide and manage your own storage account, also referred to as a customer-managed storage account. This document covers the use of customer-managed storage for WAD/LAD logs, Private Link, and customer-managed key (CMK) encryption.
+Log Analytics relies on Azure Storage in various scenarios. This use is typically managed automatically. But some cases require you to provide and manage your own storage account, which is also known as a customer-managed storage account. This article covers the use of customer-managed storage for WAD/LAD logs, Azure Private Link, and customer-managed key (CMK) encryption.
> [!NOTE]
-> We recommend that you don't take a dependency on the contents Log Analytics uploads to customer-managed storage, given that formatting and content may change.
+> We recommend that you don't take a dependency on the contents that Log Analytics uploads to customer-managed storage because formatting and content might change.
-## Ingesting Azure Diagnostics extension logs (WAD/LAD)
-The Azure Diagnostics extension agents (also called WAD and LAD for Windows and Linux agents respectively) collect various operating system logs and store them on a customer-managed storage account. You can then ingest these logs into Log Analytics to review and analyze them.
-### How to collect Azure Diagnostics extension logs from your storage account
-Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](../agents/diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/storage-insights/create-or-update).
+## Ingest Azure Diagnostics extension logs (WAD/LAD)
+The Azure Diagnostics extension agents (also called WAD and LAD for Windows and Linux agents, respectively) collect various operating system logs and store them on a customer-managed storage account. You can then ingest these logs into Log Analytics to review and analyze them.
+
+### Collect Azure Diagnostics extension logs from your storage account
+Connect the storage account to your Log Analytics workspace as a storage data source by using the [Azure portal](../agents/diagnostics-extension-logs.md#collect-logs-from-azure-storage). You can also call the [Storage Insights API](/rest/api/loganalytics/storage-insights/create-or-update).
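As an illustration, the same configuration can be created programmatically by sending the Storage Insights payload through `az rest`. The following sketch is hypothetical: the resource names, the key placeholder, and the container and table lists are assumptions, so verify the request body against the Storage Insights API reference before relying on it.

```azurecli
# Hypothetical sketch: register a storage account as a storage data source by
# calling the Storage Insights create-or-update API through az rest.
SUB="<subscription-id>"        # assumed placeholder
RG="my-resource-group"         # assumed placeholder
WS="my-workspace"              # assumed placeholder
SA_ID="/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Storage/storageAccounts/mystorageaccount"

az rest --method put \
  --url "https://management.azure.com/subscriptions/$SUB/resourcegroups/$RG/providers/Microsoft.OperationalInsights/workspaces/$WS/storageInsightConfigs/myStorageInsight?api-version=2020-08-01" \
  --body "{
    \"properties\": {
      \"storageAccount\": { \"id\": \"$SA_ID\", \"key\": \"<storage-account-key>\" },
      \"containers\": [ \"wad-iis-logfiles\" ],
      \"tables\": [ \"WADWindowsEventLogsTable\" ]
    }
  }"
```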
+
+Supported data types are:
-Supported data types:
* [Syslog](../agents/data-sources-syslog.md)
* [Windows events](../agents/data-sources-windows-events.md)
-* Service Fabric
-* [ETW Events](../agents/data-sources-event-tracing-windows.md)
-* [IIS Logs](../agents/data-sources-iis-logs.md)
+* Azure Service Fabric
+* [Event Tracing for Windows (ETW) events](../agents/data-sources-event-tracing-windows.md)
+* [IIS logs](../agents/data-sources-iis-logs.md)
-## Using Private links
-Customer-managed storage accounts are used to ingest Custom logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
+## Use private links
+Customer-managed storage accounts are used to ingest custom logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
> [!IMPORTANT]
-> Collection of IIS logs is not supported with private link.
+> Collection of IIS logs isn't supported with private links.
+
+### Use a customer-managed storage account over a private link
+
+Meet the following requirements.
-### Using a customer-managed storage account over a Private Link
#### Workspace requirements
-When connecting to Azure Monitor over a private link, Log Analytics agents are only able to send logs to workspaces accessible over a private link. This requirement means you should:
-* Configure an Azure Monitor Private Link Scope (AMPLS) object
-* Connect it to your workspaces
-* Connect the AMPLS to your network over a private link.
+When you connect to Azure Monitor over a private link, Log Analytics agents are only able to send logs to workspaces accessible over a private link. This requirement means you should:
-For more information on the AMPLS configuration procedure, see [Use Azure Private Link to securely connect networks to Azure Monitor](./private-link-security.md).
+* Configure an Azure Monitor Private Link Scope (AMPLS) object.
+* Connect it to your workspaces.
+* Connect the AMPLS to your network over a private link.
+
+For more information on the AMPLS configuration procedure, see [Use Azure Private Link to securely connect networks to Azure Monitor](./private-link-security.md).
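As a rough sketch of those steps in the Azure CLI (all resource names here are assumed placeholders, and the private endpoint itself is created separately against your network):

```azurecli
# Sketch: create an AMPLS object and connect a workspace to it.
az monitor private-link-scope create \
  --resource-group my-resource-group \
  --name my-ampls

# Connect the Log Analytics workspace to the AMPLS as a scoped resource.
az monitor private-link-scope scoped-resource create \
  --resource-group my-resource-group \
  --scope-name my-ampls \
  --name my-workspace-connection \
  --linked-resource "/subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
```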
#### Storage account requirements
For the storage account to successfully connect to your private link, it must:
-* Be located on your VNet or a peered network, and connected to your VNet over a private link.
-* Be located on the same region as the workspace it's linked to.
-* Allow Azure Monitor to access the storage account. If you chose to allow only select networks to access your storage account, you should select the exception: "Allow trusted Microsoft services to access this storage account".
-![Storage account trust MS services image](./media/private-storage/storage-trust.png)
-* If your workspace handles traffic from other networks as well, you should configure the storage account to allow incoming traffic coming from the relevant networks/internet.
-* Coordinate TLS version between the agents and the storage account - It's recommended that you send data to Log Analytics using TLS 1.2 or higher. Review [platform-specific guidance](./data-security.md#sending-data-securely-using-tls-12), and if required [configure your agents to use TLS 1.2](../agents/agent-windows.md#configure-agent-to-use-tls-12). If for some reason that's not possible, configure the storage account to accept TLS 1.0.
-
-### Using a customer-managed storage account for CMK data encryption
-Azure Storage encrypts all data at rest in a storage account. By default, it uses Microsoft-managed keys (MMK) to encrypt the data; However, Azure Storage also allows you to use CMK from Azure Key vault to encrypt your storage data. You can either import your own keys into Azure Key Vault, or you can use the Azure Key Vault APIs to generate keys.
+
+* Be located on your virtual network or a peered network and connected to your virtual network over a private link.
+* Be located on the same region as the workspace it's linked to.
+* Allow Azure Monitor to access the storage account. If you chose to allow only select networks to access your storage account, select the exception **Allow trusted Microsoft services to access this storage account**.
+
+ ![Screenshot that shows Storage account trust Microsoft services.](./media/private-storage/storage-trust.png)
+
+If your workspace handles traffic from other networks, configure the storage account to allow incoming traffic coming from the relevant networks/internet.
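For example, the trusted-services exception and a network rule can be configured from the Azure CLI. This is a sketch with assumed resource and network names; adapt the rules to your own networks.

```azurecli
# Sketch: allow only selected networks, but keep the "trusted Microsoft
# services" exception so Azure Monitor retains access to the account.
az storage account update \
  --resource-group my-resource-group \
  --name mystorageaccount \
  --default-action Deny \
  --bypass AzureServices

# Allow the network your agents send from (assumed VNet and subnet names).
az storage account network-rule add \
  --resource-group my-resource-group \
  --account-name mystorageaccount \
  --vnet-name my-vnet \
  --subnet my-subnet
```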
+
+Coordinate the TLS version between the agents and the storage account. We recommend that you send data to Log Analytics by using TLS 1.2 or higher. Review the [platform-specific guidance](./data-security.md#sending-data-securely-using-tls-12). If required, [configure your agents to use TLS 1.2](../agents/agent-windows.md#configure-agent-to-use-tls-12). If that's not possible, configure the storage account to accept TLS 1.0.
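To enforce the TLS floor on the storage side as well, one option is the `--min-tls-version` parameter, shown here as a sketch with assumed names:

```azurecli
# Sketch: require TLS 1.2 or higher for requests to the storage account.
az storage account update \
  --resource-group my-resource-group \
  --name mystorageaccount \
  --min-tls-version TLS1_2
```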
+
+### Use a customer-managed storage account for CMK data encryption
+Azure Storage encrypts all data at rest in a storage account. By default, it uses Microsoft-managed keys (MMKs) to encrypt the data. However, Azure Storage also allows you to use CMKs from Azure Key Vault to encrypt your storage data. You can either import your own keys into Key Vault or use the Key Vault APIs to generate keys.
+ #### CMK scenarios that require a customer-managed storage account
-* Encrypting log-alert queries with CMK
-* Encrypting saved queries with CMK
-#### How to apply CMK to customer-managed storage accounts
+A customer-managed storage account is required for:
+
+* Encrypting log-alert queries with CMKs.
+* Encrypting saved queries with CMKs.
+
+#### Apply CMKs to customer-managed storage accounts
+
+Follow this guidance to apply CMKs to customer-managed storage accounts.
+ ##### Storage account requirements
-The storage account and the key vault must be in the same region, but they can be in different subscriptions. For more information about Azure Storage encryption and key management, see [Azure Storage encryption for data at rest](../../storage/common/storage-service-encryption.md).
+The storage account and the key vault must be in the same region, but they also can be in different subscriptions. For more information about Azure Storage encryption and key management, see [Azure Storage encryption for data at rest](../../storage/common/storage-service-encryption.md).
-##### Apply CMK to your storage accounts
-To configure your Azure Storage account to use CMK with Azure Key Vault, use the [Azure portal](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), [PowerShell](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), or the [CLI](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json).
+##### Apply CMKs to your storage accounts
+To configure your Azure Storage account to use CMKs with Key Vault, use the [Azure portal](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), [PowerShell](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), or the [Azure CLI](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json).
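As a minimal CLI sketch, assuming the key vault and key already exist and the account's managed identity has already been granted access to the key (all names are placeholders):

```azurecli
# Sketch: enable a system-assigned identity, then point the account's
# encryption at a customer-managed key in Key Vault.
az storage account update \
  --resource-group my-resource-group \
  --name mystorageaccount \
  --assign-identity

az storage account update \
  --resource-group my-resource-group \
  --name mystorageaccount \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault "https://my-key-vault.vault.azure.net" \
  --encryption-key-name my-key \
  --encryption-key-version "<key-version>"
```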
## Link storage accounts to your Log Analytics workspace
> [!NOTE]
-> - Delending if you link storage account for queries, or for log alerts, existing queries will be removed from workspace. Copy saved searches and log alerts that you need before this configuration. You can find directions for moving saved queries and log alerts in [workspace move procedure](./move-workspace-region.md).
-> - You can connect up to five storage accounts for the ingestion of Custom logs & IIS logs, and one storage account for Saved queries and Saved log alert queries (each).
-
-### Using the Azure portal
-On the Azure portal, open your Workspace' menu and select *Linked storage accounts*. A blade will open, showing the linked storage accounts by the use cases mentioned above (Ingestion over Private Link, applying CMK to saved queries or to alerts).
-![Linked storage accounts blade image](./media/private-storage/all-linked-storage-accounts.png)
-Selecting an item on the table will open its storage account details, where you can set or update the linked storage account for this type.
-![Link a storage account blade image](./media/private-storage/link-a-storage-account-blade.png)
+> If you link a storage account for queries, or for log alerts, existing queries will be removed from the workspace. Copy saved searches and log alerts that you need before you undertake this configuration. For directions on moving saved queries and log alerts, see [Workspace move procedure](./move-workspace-region.md).
+>
+> You can connect up to:
+> - Five storage accounts for the ingestion of custom logs and IIS logs.
+> - One storage account for saved queries.
+> - One storage account for saved log alert queries.
+
+### Use the Azure portal
+On the Azure portal, open your workspace menu and select **Linked storage accounts**. A pane shows the linked storage accounts by the use cases previously mentioned (ingestion over Private Link, applying CMKs to saved queries or to alerts).
+
+![Screenshot that shows the Linked storage accounts pane.](./media/private-storage/all-linked-storage-accounts.png)
+
+Selecting an item on the table opens its storage account details, where you can set or update the linked storage account for this type.
+
+![Screenshot that shows the Link storage account pane.](./media/private-storage/link-a-storage-account-blade.png)
You can use the same account for different use cases if you prefer.
-### Using the Azure CLI or REST API
+### Use the Azure CLI or REST API
You can also link a storage account to your workspace via the [Azure CLI](/cli/azure/monitor/log-analytics/workspace/linked-storage) or [REST API](/rest/api/loganalytics/linkedstorageaccounts).
-The applicable dataSourceType values are:
-* CustomLogs ΓÇô to use the storage account for custom logs and IIS logs ingestion
-* Query - to use the storage account to store saved queries (required for CMK encryption)
-* Alerts - to use the storage account to store log-based alerts (required for CMK encryption)
+The applicable `dataSourceType` values are:
+
+* `CustomLogs`: To use the storage account for custom logs and IIS logs ingestion.
+* `Query`: To use the storage account to store saved queries (required for CMK encryption).
+* `Alerts`: To use the storage account to store log-based alerts (required for CMK encryption).
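For example, a custom-logs link might look like the following sketch. The resource names are placeholders; check `az monitor log-analytics workspace linked-storage create --help` for the exact parameter spelling in your CLI version.

```azurecli
# Sketch: link a storage account to a workspace for custom log ingestion.
az monitor log-analytics workspace linked-storage create \
  --resource-group my-resource-group \
  --workspace-name my-workspace \
  --type CustomLogs \
  --storage-accounts "/subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```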
+## Manage linked storage accounts
-## Managing linked storage accounts
+Follow this guidance to manage your linked storage accounts.
### Create or modify a link
-When you link a storage account to a workspace, Log Analytics will start using it instead of the storage account owned by the service. You can
-* Register multiple storage accounts to spread the load of logs between them
-* Reuse the same storage account for multiple workspaces
+When you link a storage account to a workspace, Log Analytics will start using it instead of the storage account owned by the service. You can:
+
+* Register multiple storage accounts to spread the load of logs between them.
+* Reuse the same storage account for multiple workspaces.
### Unlink a storage account
-To stop using a storage account, unlink the storage from the workspace.
-Unlinking all storage accounts from a workspace means Log Analytics will attempt to rely on service-managed storage accounts. If your network has limited access to the internet, these storages may not be available and any scenario that relies on storage will fail.
+To stop using a storage account, unlink the storage from the workspace. Unlinking all storage accounts from a workspace means Log Analytics will attempt to rely on service-managed storage accounts. If your network has limited access to the internet, these storage accounts might not be available and any scenario that relies on storage will fail.
### Replace a storage account
-To replace a storage account used for ingestion,
-1. **Create a link to a new storage account.** The logging agents will get the updated configuration and start sending data to the new storage as well. The process could take a few minutes.
-2. **Then unlink the old storage account so agents will stop writing to the removed account.** The ingestion process keeps reading data from this account until it's all ingested. Don't delete the storage account until you see all logs were ingested.
+To replace a storage account used for ingestion:
+
+1. **Create a link to a new storage account**. The logging agents will get the updated configuration and start sending data to the new storage. The process could take a few minutes.
+2. **Unlink the old storage account so agents will stop writing to the removed account**. The ingestion process keeps reading data from this account until it's all ingested. Don't delete the storage account until you see that all logs were ingested.
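Those two steps might look like the following CLI sketch. The account IDs are placeholders, and because the link for a data source type is replaced as a whole, confirm the behavior against your CLI version first.

```azurecli
# Sketch: register both accounts so agents start writing to the new one.
az monitor log-analytics workspace linked-storage create \
  --resource-group my-resource-group \
  --workspace-name my-workspace \
  --type CustomLogs \
  --storage-accounts "<old-account-resource-id>" "<new-account-resource-id>"

# Once traffic has moved, keep only the new account. Don't delete the old
# storage account until all of its pending logs have been ingested.
az monitor log-analytics workspace linked-storage create \
  --resource-group my-resource-group \
  --workspace-name my-workspace \
  --type CustomLogs \
  --storage-accounts "<new-account-resource-id>"
```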
+
+### Maintain storage accounts
+
+Follow this guidance to maintain your storage accounts.
-### Maintaining storage accounts
#### Manage log retention
-When using your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences.
+When you use your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences.
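For instance, an Azure Storage lifecycle management rule can age logs out automatically. The container prefix and the 30-day window below are assumptions to adapt:

```azurecli
# Sketch: delete block blobs more than 30 days old from matching containers.
az storage account management-policy create \
  --resource-group my-resource-group \
  --account-name mystorageaccount \
  --policy '{
    "rules": [{
      "enabled": true,
      "name": "expire-old-logs",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": ["blockBlob"], "prefixMatch": ["am-"] },
        "actions": { "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 30 } } }
      }
    }]
  }'
```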
#### Consider load
-Storage accounts can handle a certain load of read and write requests before they start throttling requests (For more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage account's capacity and performance review its [Insights in the Azure portal](../../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+Storage accounts can handle a certain load of read and write requests before they start throttling requests. For more information, see [Scalability and performance targets for Azure Blob Storage](../../storage/common/scalability-targets-standard-account.md).
-### Related charges
-Storage accounts are charged by the volume of stored data, the type of the storage, and the type of redundancy. For details see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
+Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register another storage account to spread the load between them. To monitor your storage account's capacity and performance, review its [Insights in the Azure portal](../../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+### Related charges
+Storage accounts are charged by the volume of stored data, the type of storage, and the type of redundancy. For more information, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Azure Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
## Next steps
-- Learn about [using Azure Private Link to securely connect networks to Azure Monitor](private-link-security.md)
-- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
+- Learn about [using Private Link to securely connect networks to Azure Monitor](private-link-security.md).
+- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md).
azure-monitor Tutorial Workspace Transformations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-portal.md
Title: Tutorial - Add workspace transformation to Azure Monitor Logs using Azure portal
-description: Describes how to add a custom transformation to data flowing through Azure Monitor Logs using the Azure portal.
+ Title: 'Tutorial: Add a workspace transformation to Azure Monitor Logs by using the Azure portal'
+description: Describes how to add a custom transformation to data flowing through Azure Monitor Logs by using the Azure portal.
Last updated 07/01/2022
-# Tutorial: Add transformation in workspace data collection rule using the Azure portal (preview)
-This tutorial walks you through configuration of a sample [transformation in a workspace data collection rule](../essentials/data-collection-transformations.md) using the Azure portal. [Transformations](../essentials/data-collection-transformations.md) in Azure Monitor allow you to filter or modify incoming data before it's sent to its destination. Workspace transformations provide support for [ingestion-time transformations](../essentials/data-collection-transformations.md) for workflows that don't yet use the [Azure Monitor data ingestion pipeline](../essentials/data-collection.md).
+# Tutorial: Add a transformation in a workspace data collection rule by using the Azure portal (preview)
+This tutorial walks you through configuration of a sample [transformation in a workspace data collection rule (DCR)](../essentials/data-collection-transformations.md) by using the Azure portal. [Transformations](../essentials/data-collection-transformations.md) in Azure Monitor allow you to filter or modify incoming data before it's sent to its destination. Workspace transformations provide support for [ingestion-time transformations](../essentials/data-collection-transformations.md) for workflows that don't yet use the [Azure Monitor data ingestion pipeline](../essentials/data-collection.md).
-Workspace transformations are stored together in a single [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) for the workspace, called the workspace DCR. Each transformation is associated with a particular table. The transformation will be applied to all data sent to this table from any workflow not using a DCR.
+Workspace transformations are stored together in a single [DCR](../essentials/data-collection-rule-overview.md) for the workspace, which is called the workspace DCR. Each transformation is associated with a particular table. The transformation will be applied to all data sent to this table from any workflow not using a DCR.
> [!NOTE]
-> This tutorial uses the Azure portal to configure a workspace transformation. See [Tutorial: Add transformation in workspace data collection rule to Azure Monitor using resource manager templates (preview)](tutorial-workspace-transformations-api.md) for the same tutorial using resource manager templates and REST API.
+> This tutorial uses the Azure portal to configure a workspace transformation. For the same tutorial using Azure Resource Manager templates and REST API, see [Tutorial: Add transformation in workspace data collection rule to Azure Monitor using resource manager templates (preview)](tutorial-workspace-transformations-api.md).
-In this tutorial, you learn to:
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Configure [workspace transformation](../essentials/data-collection-transformations.md#workspace-transformation-dcr) for a table in a Log Analytics workspace.
+> * Configure a [workspace transformation](../essentials/data-collection-transformations.md#workspace-transformation-dcr) for a table in a Log Analytics workspace.
> * Write a log query for a workspace transformation.
## Prerequisites
-To complete this tutorial, you need the following:
+To complete this tutorial, you need:
-- Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
-- [Permissions to create data collection rule (DCR) objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
-- The table must already have some data.
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
+- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- A table that already has some data.
- The table can't be linked to the [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr).
+## Overview of the tutorial
+In this tutorial, you'll reduce the storage requirement for the `LAQueryLogs` table by filtering out certain records. You'll also remove the contents of a column while parsing the column data to store a piece of data in a custom column. The [LAQueryLogs table](query-audit.md#audit-data) is created when you enable [log query auditing](query-audit.md) in a workspace. You can use this same basic process to create a transformation for any [supported table](tables-feature-support.md) in a Log Analytics workspace.
-## Overview of tutorial
-In this tutorial, you'll reduce the storage requirement for the `LAQueryLogs` table by filtering out certain records. You'll also remove the contents of a column while parsing the column data to store a piece of data in a custom column. The [LAQueryLogs table](query-audit.md#audit-data) is created when you enable [log query auditing](query-audit.md) in a workspace. You can use this same basic process to create a transformation for any [supported table](tables-feature-support.md) in a Log Analytics workspace.
-
-This tutorial will use the Azure portal which provides a wizard to walk you through the process of creating an ingestion-time transformation. The following actions are performed for you when you complete this wizard:
+This tutorial uses the Azure portal, which provides a wizard to walk you through the process of creating an ingestion-time transformation. After you finish the steps, you'll see that the wizard:
-- Updates the table schema with any additional columns from the query.
-- Creates a `WorkspaceTransforms` data collection rule (DCR) and links it to the workspace if a default DCR isn't already linked to the workspace.
+- Updates the table schema with any other columns from the query.
+- Creates a `WorkspaceTransforms` DCR and links it to the workspace if a default DCR isn't already linked to the workspace.
- Creates an ingestion-time transformation and adds it to the DCR.
## Enable query audit logs
-You need to enable [query auditing](query-audit.md) for your workspace to create the `LAQueryLogs` table that you'll be working with. This is not required for all ingestion time transformations. It's just to generate the sample data that we'll be working with.
+You need to enable [query auditing](query-audit.md) for your workspace to create the `LAQueryLogs` table that you'll be working with. This step isn't required for all ingestion time transformations. It's just to generate the sample data that we'll be working with.
-1. From the **Log Analytics workspaces** menu in the Azure portal, select **Diagnostic settings** and then **Add diagnostic setting**.
+1. On the **Log Analytics workspaces** menu in the Azure portal, select **Diagnostic settings** > **Add diagnostic setting**.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/diagnostic-settings.png" lightbox="media/tutorial-workspace-transformations-portal/diagnostic-settings.png" alt-text="Screenshot of diagnostic settings.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/diagnostic-settings.png" lightbox="media/tutorial-workspace-transformations-portal/diagnostic-settings.png" alt-text="Screenshot that shows diagnostic settings.":::
-2. Provide a name for the diagnostic setting and select the workspace so that the auditing data is stored in the same workspace. Select the **Audit** category and then click **Save** to save the diagnostic setting and close the diagnostic setting page.
+1. Enter a name for the diagnostic setting. Select the workspace so that the auditing data is stored in the same workspace. Select the **Audit** category and then select **Save** to save the diagnostic setting and close the **Diagnostic setting** page.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/new-diagnostic-setting.png" lightbox="media/tutorial-workspace-transformations-portal/new-diagnostic-setting.png" alt-text="Screenshot of new diagnostic setting.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/new-diagnostic-setting.png" lightbox="media/tutorial-workspace-transformations-portal/new-diagnostic-setting.png" alt-text="Screenshot that shows the new diagnostic setting.":::
-3. Select **Logs** and then run some queries to populate `LAQueryLogs` with some data. These queries don't need to return data to be added to the audit log.
+1. Select **Logs** and then run some queries to populate `LAQueryLogs` with some data. These queries don't need to return data to be added to the audit log.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/sample-queries.png" lightbox="media/tutorial-workspace-transformations-portal/sample-queries.png" alt-text="Screenshot of sample log queries.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/sample-queries.png" lightbox="media/tutorial-workspace-transformations-portal/sample-queries.png" alt-text="Screenshot that shows sample log queries.":::
-## Add transformation to the table
+## Add a transformation to the table
Now that the table's created, you can create the transformation for it.
-1. From the **Log Analytics workspaces** menu in the Azure portal, select **Tables (preview)**. Locate the `LAQueryLogs` table and select **Create transformation**.
+1. On the **Log Analytics workspaces** menu in the Azure portal, select **Tables (preview)**. Locate the `LAQueryLogs` table and select **Create transformation**.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/create-transformation.png" lightbox="media/tutorial-workspace-transformations-portal/create-transformation.png" alt-text="Screenshot of creating a new transformation.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/create-transformation.png" lightbox="media/tutorial-workspace-transformations-portal/create-transformation.png" alt-text="Screenshot that shows creating a new transformation.":::
+1. Because this transformation is the first one in the workspace, you must create a [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr). If you create transformations for other tables in the same workspace, they'll be stored in this same DCR. Select **Create a new data collection rule**. The **Subscription** and **Resource group** will already be populated for the workspace. Enter a name for the DCR and select **Done**.
-2. Since this is the first transformation in the workspace, you need to create a [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr). If you create transformations for other tables in the same workspace, they will be stored in this same DCR. Click **Create a new data collection rule**. The **Subscription** and **Resource group** will already be populated for the workspace. Provide a name for the DCR and click **Done**.
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/new-data-collection-rule.png" lightbox="media/tutorial-workspace-transformations-portal/new-data-collection-rule.png" alt-text="Screenshot that shows creating a new data collection rule.":::
- :::image type="content" source="media/tutorial-workspace-transformations-portal/new-data-collection-rule.png" lightbox="media/tutorial-workspace-transformations-portal/new-data-collection-rule.png" alt-text="Screenshot of creating a new data collection rule.":::
+1. Select **Next** to view sample data from the table. As you define the transformation, the result will be applied to the sample data. For this reason, you can evaluate the results before you apply it to actual data. Select **Transformation editor** to define the transformation.
-3. Click **Next** to view sample data from the table. As you define the transformation, the result will be applied to the sample data allowing you to evaluate the results before applying it to actual data. Click **Transformation editor** to define the transformation.
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/sample-data.png" lightbox="media/tutorial-workspace-transformations-portal/sample-data.png" alt-text="Screenshot that shows sample data from the log table.":::
- :::image type="content" source="media/tutorial-workspace-transformations-portal/sample-data.png" lightbox="media/tutorial-workspace-transformations-portal/sample-data.png" alt-text="Screenshot of sample data from the log table.":::
+1. In the transformation editor, you can see the transformation that will be applied to the data prior to its ingestion into the table. The incoming data is represented by a virtual table named `source`, which has the same set of columns as the destination table itself. The transformation initially contains a simple query that returns the `source` table with no changes.
-4. In the transformation editor, you can see the transformation that will be applied to the data prior to its ingestion into the table. The incoming data is represented by a virtual table named `source`, which has the same set of columns as the destination table itself. The transformation initially contains a simple query returning the `source` table with no changes.
-
-5. Modify the query to the following:
+1. Modify the query to the following example:
``` kusto
source
Now that the table's created, you can create the transformation for it.
| project-away RequestContext, Context
```
- This makes the following changes:
-
- - Drop rows related to querying the `LAQueryLogs` table itself to save space since these log entries aren't useful.
- - Add a column for the name of the workspace that was queried.
- - Remove data from the `RequestContext` column to save space.
-
+ The modification makes the following changes:
+ - Rows related to querying the `LAQueryLogs` table itself were dropped to save space because these log entries aren't useful.
+ - A column for the name of the workspace that was queried was added.
+ - Data from the `RequestContext` column was removed to save space.
> [!Note]
- > Using the Azure portal, the output of the transformation will initiate changes to the table schema if required. Columns will be added to match the transformation output if they don't already exist. Make sure that your output doesn't contain any additional columns that you don't want added to the table. If the output does not include columns that are already in the table, those columns will not be removed, but data will not be added.
+ > Using the Azure portal, the output of the transformation will initiate changes to the table schema if required. Columns will be added to match the transformation output if they don't already exist. Make sure that your output doesn't contain any columns that you don't want added to the table. If the output doesn't include columns that are already in the table, those columns won't be removed, but data won't be added.
>
- > Any custom columns added to a built-in table must end in *_CF*. Columns added to a custom table (a table with a name that ends in *_CL*) does not need to have this suffix.
+ > Any custom columns added to a built-in table must end in `_CF`. Columns added to a custom table don't need to have this suffix. A custom table has a name that ends in `_CL`.
-6. Copy the query into the transformation editor and click **Run** to view results from the sample data. You can verify that the new `Workspace_CF` column is in the query.
+1. Copy the query into the transformation editor and select **Run** to view results from the sample data. You can verify that the new `Workspace_CF` column is in the query.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/transformation-editor.png" lightbox="media/tutorial-workspace-transformations-portal/transformation-editor.png" alt-text="Screenshot of transformation editor.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/transformation-editor.png" lightbox="media/tutorial-workspace-transformations-portal/transformation-editor.png" alt-text="Screenshot that shows the transformation editor.":::
-7. Click **Apply** to save the transformation and then **Next** to review the configuration. Click **Create** to update the data collection rule with the new transformation.
+1. Select **Apply** to save the transformation and then select **Next** to review the configuration. Select **Create** to update the DCR with the new transformation.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/save-transformation.png" lightbox="media/tutorial-workspace-transformations-portal/save-transformation.png" alt-text="Screenshot of saving transformation.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/save-transformation.png" lightbox="media/tutorial-workspace-transformations-portal/save-transformation.png" alt-text="Screenshot that shows saving the transformation.":::
-## Test transformation
-Allow about 30 minutes for the transformation to take effect and then test it by running a query against the table. Only data sent to the table after the transformation was applied will be affected.
+## Test the transformation
+Allow about 30 minutes for the transformation to take effect and then test it by running a query against the table. Only data sent to the table after the transformation was applied will be affected.
-For this tutorial, run some sample queries to send data to the `LAQueryLogs` table. Include some queries against `LAQueryLogs` so you can verify that the transformation filters these records. Notice that the output has the new `Workspace_CF` column, and there are no records for `LAQueryLogs`.
+For this tutorial, run some sample queries to send data to the `LAQueryLogs` table. Include some queries against `LAQueryLogs` so that you can verify that the transformation filters these records. Now the output has the new `Workspace_CF` column, and there are no records for `LAQueryLogs`.
## Troubleshooting
-This section describes different error conditions you may receive and how to correct them.
+This section describes different error conditions you might receive and how to correct them.
### IntelliSense in Log Analytics not recognizing new columns in the table
-The cache that drives IntelliSense may take up to 24 hours to update.
+The cache that drives IntelliSense might take up to 24 hours to update.
### Transformation on a dynamic column isn't working
-There is currently a known issue affecting dynamic columns. A temporary workaround is to explicitly parse dynamic column data using `parse_json()` prior to performing any operations against them.
+A known issue currently affects dynamic columns. A temporary workaround is to explicitly parse dynamic column data by using `parse_json()` prior to performing any operations against them.
## Next steps
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
This article is a reference of the different applications and services that are
Azure Monitor data is collected and stored based on resource provider namespaces. Each resource in Azure has a unique ID. The resource provider namespace is part of all unique IDs. For example, a key vault resource ID would be similar to `/subscriptions/d03b04c7-d1d4-eeee-aaaa-87b6fcb38b38/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys`. *Microsoft.KeyVault* is the resource provider namespace. *Microsoft.KeyVault/vaults/* is the resource provider.
-For a list of Azure resource provider namespaces, see [Resource providers for Azure services](/azure/azure-resource-manager/management/azure-services-resource-providers).
+For a list of Azure resource provider namespaces, see [Resource providers for Azure services](../azure-resource-manager/management/azure-services-resource-providers.md).
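As a quick illustration of the namespace concept, the Azure CLI can show a resource provider and extract the namespace segment from a resource ID (the subscription ID below is a placeholder):

```azurecli
# Sketch: inspect a resource provider and pull its namespace out of an ID.
az provider show --namespace Microsoft.KeyVault --query "namespace" --output tsv

# The namespace is the segment that follows "providers" in any resource ID.
RES_ID="/subscriptions/<sub-id>/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys"
echo "$RES_ID" | cut -d'/' -f7   # prints Microsoft.KeyVault
```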
For a list of resource providers that support Azure Monitor
Azure Monitor can collect data from resources outside of Azure by using the meth
- Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md).
- Complete a [tutorial on monitoring an Azure resource](essentials/tutorial-resource-logs.md).
- Complete a [tutorial on writing a log query to analyze data in Azure Monitor Logs](essentials/tutorial-resource-logs.md).
-- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md).
- Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md). - Complete a [tutorial on monitoring an Azure resource](essentials/tutorial-resource-logs.md). - Complete a [tutorial on writing a log query to analyze data in Azure Monitor Logs](essentials/tutorial-resource-logs.md).-- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md).
+- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md).
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
|Sub-service| Article | Description |
||||
|General|Table of contents|We have updated the Azure Monitor Table of Contents. The new TOC structure better reflects the customer experience and makes it easier for users to navigate and discover our content.|
-Alerts|[Connect Azure to ITSM tools by using IT Service Management](https://docs.microsoft.com/azure/azure-monitor/alerts/itsmc-definition)|Deprecating support for sending ITSM actions and events to ServiceNow. Instead, use ITSM actions in action groups based on Azure alerts to create work items in your ITSM tool.|
-Alerts|[Create a new alert rule](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-create-new-alert-rule)|New PowerShell commands to create and manage log alerts.|
+Alerts|[Connect Azure to ITSM tools by using IT Service Management](./alerts/itsmc-definition.md)|Deprecating support for sending ITSM actions and events to ServiceNow. Instead, use ITSM actions in action groups based on Azure alerts to create work items in your ITSM tool.|
+Alerts|[Create a new alert rule](./alerts/alerts-create-new-alert-rule.md)|New PowerShell commands to create and manage log alerts.|
Alerts|[Types of Azure Monitor alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-types)|Updated to include Prometheus alerts.|
-Alerts|[Customize alert notifications using Logic Apps](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-logic-apps)|New: How to use alerts to send emails or Teams posts using logic apps|
-Application-insights|[Sampling in Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/sampling)|The "When to use sampling" and "How sampling works" sections have been prioritized as prerequisite information for the rest of the article.|
-Application-insights|[What is auto-instrumentation for Azure Monitor Application Insights?](https://docs.microsoft.com/azure/azure-monitor/app/codeless-overview)|The auto-instrumentation overview has been visually overhauled with links and footnotes.|
-Application-insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview)](https://docs.microsoft.com/azure/azure-monitor/app/opentelemetry-enable)|Open Telemetry Metrics are now available for .NET, Node.js and Python applications.|
-Application-insights|[Find and diagnose performance issues with Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-performance)|The URL Ping (Classic) Test has been replaced with the Standard Test step-by-step instructions.|
-Application-insights|[Application Insights API for custom events and metrics](https://docs.microsoft.com/azure/azure-monitor/app/api-custom-events-metrics)|Flushing information was added to the FAQ.|
-Application-insights|[Azure AD authentication for Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/azure-ad-authentication)|We updated the `TelemetryConfiguration` code sample using .NET.|
-Application-insights|[Using Azure Monitor Application Insights with Spring Boot](https://docs.microsoft.com/azure/azure-monitor/app/java-spring-boot)|Spring Boot information was updated to 3.4.2.|
-Application-insights|[Configuration options: Azure Monitor Application Insights for Java](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-config)|New features include Capture Log4j Markers and Logback Markers as custom properties on the corresponding trace (log message) telemetry.|
-Application-insights|[Create custom KPI dashboards using Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-app-dashboards)|This article has been refreshed with new screenshots and instructions.|
-Application-insights|[Share Azure dashboards by using Azure role-based access control](https://docs.microsoft.com/azure/azure-portal/azure-portal-dashboard-share-access)|This article has been refreshed with new screenshots and instructions.|
-Application-insights|[Application Monitoring for Azure App Service and ASP.NET](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps-net)|Important notes added regarding System.IO.FileNotFoundException after 2.8.44 auto-instrumentation upgrade.|
-Application-insights|[Geolocation and IP address handling](https://docs.microsoft.com/azure/azure-monitor/app/ip-collection)| Geolocation lookup information has been updated.|
-Containers|[Metric alert rules in Container insights (preview)](https://docs.microsoft.com/azure/azure-monitor/containers/container-insights-metric-alerts)|Container insights metric Alerts|
+Alerts|[Customize alert notifications using Logic Apps](./alerts/alerts-logic-apps.md)|New: How to use alerts to send emails or Teams posts using logic apps|
+Application-insights|[Sampling in Application Insights](./app/sampling.md)|The "When to use sampling" and "How sampling works" sections have been prioritized as prerequisite information for the rest of the article.|
+Application-insights|[What is auto-instrumentation for Azure Monitor Application Insights?](./app/codeless-overview.md)|The auto-instrumentation overview has been visually overhauled with links and footnotes.|
+Application-insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview)](./app/opentelemetry-enable.md)|Open Telemetry Metrics are now available for .NET, Node.js and Python applications.|
+Application-insights|[Find and diagnose performance issues with Application Insights](./app/tutorial-performance.md)|The URL Ping (Classic) Test has been replaced with the Standard Test step-by-step instructions.|
+Application-insights|[Application Insights API for custom events and metrics](./app/api-custom-events-metrics.md)|Flushing information was added to the FAQ.|
+Application-insights|[Azure AD authentication for Application Insights](./app/azure-ad-authentication.md)|We updated the `TelemetryConfiguration` code sample using .NET.|
+Application-insights|[Using Azure Monitor Application Insights with Spring Boot](./app/java-spring-boot.md)|Spring Boot information was updated to 3.4.2.|
+Application-insights|[Configuration options: Azure Monitor Application Insights for Java](./app/java-standalone-config.md)|New features include Capture Log4j Markers and Logback Markers as custom properties on the corresponding trace (log message) telemetry.|
+Application-insights|[Create custom KPI dashboards using Application Insights](./app/tutorial-app-dashboards.md)|This article has been refreshed with new screenshots and instructions.|
+Application-insights|[Share Azure dashboards by using Azure role-based access control](../azure-portal/azure-portal-dashboard-share-access.md)|This article has been refreshed with new screenshots and instructions.|
+Application-insights|[Application Monitoring for Azure App Service and ASP.NET](./app/azure-web-apps-net.md)|Important notes added regarding System.IO.FileNotFoundException after 2.8.44 auto-instrumentation upgrade.|
+Application-insights|[Geolocation and IP address handling](./app/ip-collection.md)| Geolocation lookup information has been updated.|
+Containers|[Metric alert rules in Container insights (preview)](./containers/container-insights-metric-alerts.md)|Container insights metric Alerts|
Containers|[Custom metrics collected by Container insights](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-custom-metrics?tabs=portal)|New article.|
Containers|[Overview of Container insights in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-overview)|Rewritten to simplify onboarding options.|
Containers|[Enable Container insights for Azure Kubernetes Service (AKS) cluster](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli)|Updated to combine new and existing clusters.|
Containers Prometheus|[Query logs from Container insights](https://learn.microso
Containers Prometheus|[Collect Prometheus metrics with Container insights](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-prometheus?tabs=cluster-wide)|Updated to include Azure Monitor managed service for Prometheus.|
Essentials Prometheus|[Metrics in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/essentials/data-platform-metrics)|Updated to include Azure Monitor managed service for Prometheus|
Essentials Prometheus|<ul> <li> [Azure Monitor workspace overview (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/azure-monitor-workspace-overview?tabs=azure-portal) </li><li> [Overview of Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-overview) </li><li>[Rule groups in Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-rule-groups)</li><li>[Remote-write in Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-remote-write-managed-identity) </li><li>[Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-grafana)</li><li>[Troubleshoot collection of Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-troubleshoot)</li><li>[Default Prometheus metrics configuration in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-default)</li><li>[Scrape Prometheus metrics at scale in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-scale)</li><li>[Customize scraping of Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration)</li><li>[Create, validate and troubleshoot custom configuration file for Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-validate)</li><li>[Minimal Prometheus ingestion profile in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration-minimal)</li><li>[Collect Prometheus metrics from AKS cluster (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-enable)</li><li>[Send Prometheus metrics to multiple Azure Monitor workspaces (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-multiple-workspaces) </li></ul> |New articles. Public preview of Azure Monitor managed service for Prometheus|
-Essentials Prometheus|[Azure Monitor managed service for Prometheus remote write - managed identity (preview)](https://docs.microsoft.com/azure/azure-monitor/essentials/prometheus-remote-write-managed-identity)|Addition: Verify Prometheus remote write is working correctly|
-Essentials|[Azure resource logs](https://docs.microsoft.com/azure/azure-monitor/essentials/resource-logs)|Clarification: Which blobs logs are written to, and when|
+Essentials Prometheus|[Azure Monitor managed service for Prometheus remote write - managed identity (preview)](./essentials/prometheus-remote-write-managed-identity.md)|Addition: Verify Prometheus remote write is working correctly|
+Essentials|[Azure resource logs](./essentials/resource-logs.md)|Clarification: Which blobs logs are written to, and when|
Essentials|[Resource Manager template samples for Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/resource-manager-samples?tabs=portal)|Added template deployment methods.|
Essentials|[Azure Monitor service limits](https://learn.microsoft.com/azure/azure-monitor/service-limits)|Added Azure Monitor managed service for Prometheus|
-Logs|[Manage access to Log Analytics workspaces](https://docs.microsoft.com/azure/azure-monitor/logs/manage-access)|Table-level role-based access control (RBAC) lets you give specific users or groups read access to particular tables.|
-Logs|[Configure Basic Logs in Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/logs/basic-logs-configure)|General availability of the Basic Logs data plan, retention and archiving, search job, and the table management user experience in the Azure portal.|
+Logs|[Manage access to Log Analytics workspaces](./logs/manage-access.md)|Table-level role-based access control (RBAC) lets you give specific users or groups read access to particular tables.|
+Logs|[Configure Basic Logs in Azure Monitor](./logs/basic-logs-configure.md)|General availability of the Basic Logs data plan, retention and archiving, search job, and the table management user experience in the Azure portal.|
Logs|[Guided project - Analyze logs in Azure Monitor with KQL - Training](https://learn.microsoft.com/training/modules/analyze-logs-with-kql/)|New Learn module. Learn to write KQL queries to retrieve and transform log data to answer common business and operational questions.|
Logs|[Detect and analyze anomalies with KQL in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/logs/kql-machine-learning-azure-monitor)|New tutorial. Walkthrough of how to use KQL for time series analysis and anomaly detection in Azure Monitor Log Analytics. |
-Virtual-machines|[Enable VM insights for a hybrid virtual machine](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-enable-hybrid)|Updated versions of standalone installers.|
-Visualizations|[Retrieve legacy Application Insights workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-retrieve-legacy-workbooks)|New article about how to access legacy workbooks in the Azure portal.|
-Visualizations|[Azure Workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-overview)|New video to see how you can use Azure Workbooks to get insights and visualize your data. |
+Virtual-machines|[Enable VM insights for a hybrid virtual machine](./vm/vminsights-enable-hybrid.md)|Updated versions of standalone installers.|
+Visualizations|[Retrieve legacy Application Insights workbooks](./visualize/workbooks-retrieve-legacy-workbooks.md)|New article about how to access legacy workbooks in the Azure portal.|
+Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to see how you can use Azure Workbooks to get insights and visualize your data. |
## September 2022
Visualizations|[Azure Workbooks](https://docs.microsoft.com/azure/azure-monitor/
|||
|[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](./app/java-in-process-agent.md)|New OpenTelemetry `@WithSpan` annotation guidance.|
|[Capture Application Insights custom metrics with .NET and .NET Core](./app/tutorial-asp-net-custom-metrics.md)|Tutorial steps and images have been updated.|
-|[Configuration options - Azure Monitor Application Insights for Java](/azure/azure-monitor/app/java-in-process-agent)|Connection string guidance updated.|
+|[Configuration options - Azure Monitor Application Insights for Java](./app/java-in-process-agent.md)|Connection string guidance updated.|
|[Enable Application Insights for ASP.NET Core applications](./app/tutorial-asp-net-core.md)|Tutorial steps and images have been updated.|
|[Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](./app/opentelemetry-enable.md)|Our product feedback link at the bottom of each document has been fixed.|
|[Filter and preprocess telemetry in the Application Insights SDK](./app/api-filtering-sampling.md)|Added sample initializer to control which client IP gets used as part of geo-location mapping.|
azure-portal Azure Portal Dashboard Share Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboard-share-access.md
To share access to a dashboard, you must first publish it. When you do so, other
By default, sharing publishes your dashboard to a resource group named **dashboards**. To select a different resource group, clear the checkbox.
-1. To [add optional tags](/azure/azure-resource-manager/management/tag-resources) to the dashboard, enter one or more name/value pairs.
+1. To [add optional tags](../azure-resource-manager/management/tag-resources.md) to the dashboard, enter one or more name/value pairs.
1. Select **Publish**.
For each dashboard that you have published, you can assign Azure RBAC built-in r
* View the list of [Azure built-in roles](../role-based-access-control/built-in-roles.md). * Learn about [managing groups in Azure AD](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). * Learn more about [managing Azure resources by using the Azure portal](../azure-resource-manager/management/manage-resources-portal.md).
-* [Create a dashboard](azure-portal-dashboards.md) in the Azure portal.
+* [Create a dashboard](azure-portal-dashboards.md) in the Azure portal.
azure-resource-manager Scenarios Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-rbac.md
The following example shows how to create a user-assigned managed identity and a
When you delete a user, group, service principal, or managed identity from Azure AD, it's a good practice to delete any role assignments. They aren't deleted automatically.
-Any role assignments that refer to a deleted principal ID become invalid. If you try to reuse a role assignment's name for another role assignment, the deployment will fail. To work around this behavior, you should either remove the old role assignment before you recreate it, or ensure that you use a unique name when you deploy a new role assignment. [This quickstart template illustrates how you can define a role assignment in a Bicep module and use a principal ID as a seed value for the role assignment name.](https://azure.microsoft.com/resources/templates/key-vault-managed-identity-role-assignment/)
+Any role assignments that refer to a deleted principal ID become invalid. If you try to reuse a role assignment's name for another role assignment, the deployment will fail. To work around this behavior, you should either remove the old role assignment before you recreate it, or ensure that you use a unique name when you deploy a new role assignment. This [quickstart template](/samples/azure/azure-quickstart-templates/key-vault-managed-identity-role-assignment) illustrates how you can define a role assignment in a Bicep module and use a principal ID as a seed value for the role assignment name.
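As a minimal sketch of the naming approach described above, shown in ARM template JSON rather than Bicep, the `guid()` function can seed the role assignment name with the principal ID so that a different principal yields a different name. The parameter and variable names here are illustrative, not from the quickstart template itself.

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2022-04-01",
  "name": "[guid(resourceGroup().id, parameters('principalId'), variables('roleDefinitionId'))]",
  "properties": {
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', variables('roleDefinitionId'))]",
    "principalId": "[parameters('principalId')]",
    "principalType": "ServicePrincipal"
  }
}
```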
## Custom role definitions
Role definition resource names must be unique within the Azure Active Directory
- [Assign a role at subscription scope](https://azure.microsoft.com/resources/templates/subscription-role-assignment/) - [Assign a role at tenant scope](https://azure.microsoft.com/resources/templates/tenant-role-assignment/) - [Create a resourceGroup, apply a lock and RBAC](https://azure.microsoft.com/resources/templates/create-rg-lock-role-assignment/)
- - [Create key vault, managed identity, and role assignment](https://azure.microsoft.com/resources/templates/key-vault-managed-identity-role-assignment/)
+ - [Create key vault, managed identity, and role assignment](/samples/azure/azure-quickstart-templates/key-vault-managed-identity-role-assignment)
- Community blog posts - [Create role assignments for different scopes with Bicep](https://4bes.nl/2022/04/24/create-role-assignments-for-different-scopes-with-bicep/), by Barbara Forbes
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
The following scenarios aren't yet supported:
## Azure disk encryption
-You can't move a virtual machine that is integrated with a key vault to implement [Azure Disk Encryption for Linux VMs](../../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../../virtual-machines/windows/disk-encryption-overview.md). To move the VM, you must disable encryption.
+A virtual machine that is integrated with a key vault to implement [Azure Disk Encryption for Linux VMs](../../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../../virtual-machines/windows/disk-encryption-overview.md) can be moved to another resource group while it's in a deallocated state.
+
+However, to move such a virtual machine to another subscription, you must disable encryption.
# [Azure CLI](#tab/azure-cli)
azure-resource-manager Resource Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-extensions.md
Azure Resource Manager template (ARM template) extensions are small applications
The existing extensions are: -- [Microsoft.Compute/virtualMachines/extensions](/azure/templates/microsoft.compute/2018-10-01/virtualmachines/extensions)-- [Microsoft.Compute virtualMachineScaleSets/extensions](/azure/templates/microsoft.compute/2018-10-01/virtualmachinescalesets/extensions)-- [Microsoft.HDInsight clusters/extensions](/azure/templates/microsoft.hdinsight/2018-06-01-preview/clusters)-- [Microsoft.Sql servers/databases/extensions](/azure/templates/microsoft.sql/2014-04-01/servers/databases/extensions)-- [Microsoft.Web/sites/siteextensions](/azure/templates/microsoft.web/2016-08-01/sites/siteextensions)
+- [Microsoft.Compute/virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions)
+- [Microsoft.Compute virtualMachineScaleSets/extensions](/azure/templates/microsoft.compute/virtualmachinescalesets/extensions)
+- [Microsoft.HDInsight clusters/extensions](/azure/templates/microsoft.hdinsight/clusters)
+- [Microsoft.Sql servers/databases/extensions](/azure/templates/microsoft.sql/servers/databases/extensions)
+- [Microsoft.Web/sites/siteextensions](/azure/templates/microsoft.web/sites/siteextensions)
To find out the available extensions, browse to the [template reference](/azure/templates/). In **Filter by title**, enter **extension**.
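As a hedged illustration of the extension resource shape, here's a minimal ARM template resource for the Windows Custom Script Extension; the VM name, API version, and command are placeholders, not values from this article.

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2022-08-01",
  "name": "myVM/CustomScriptExtension",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', 'myVM')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.10",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -Command \"Write-Output 'Hello from the extension'\""
    }
  }
}
```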
azure-resource-manager Template Tutorial Add Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-resource.md
Most resources also have a `location` property, which sets the region where you
The other properties vary by resource type and API version. It's important to understand the connection between the API version and the available properties, so let's jump into more detail.
-In this tutorial, you add a storage account to the template. You can see the storage account's API version at [storageAccounts 2021-04-01](/azure/templates/microsoft.storage/2021-04-01/storageaccounts). Notice that you don't add all the properties to your template. Many of the properties are optional. The `Microsoft.Storage` resource provider could release a new API version, but the version you're deploying doesn't have to change. You can continue using that version and know that the results of your deployment are consistent.
+In this tutorial, you add a storage account to the template. You can see the storage account's API version at [storageAccounts 2021-09-01](/azure/templates/microsoft.storage/2021-09-01/storageaccounts). Notice that you don't add all the properties to your template. Many of the properties are optional. The `Microsoft.Storage` resource provider could release a new API version, but the version you're deploying doesn't have to change. You can continue using that version and know that the results of your deployment are consistent.
-If you view an older API version, such as [storageAccounts 2016-05-01](/azure/templates/microsoft.storage/2016-05-01/storageaccounts), you see that a smaller set of properties is available.
+If you view an older [API version](/azure/templates/microsoft.storage/allversions), you might see that a smaller set of properties is available.
If you decide to change the API version for a resource, make sure you evaluate the properties for that version and adjust your template appropriately.
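For reference, a minimal storage account resource pinned to the 2021-09-01 API version might look like the following sketch; the parameter name and SKU are illustrative.

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2021-09-01",
  "name": "[parameters('storageAccountName')]",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Standard_LRS"
  },
  "kind": "StorageV2"
}
```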
azure-video-indexer Edit Speakers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-speakers.md
# Edit speakers with the Azure Video Indexer website
-Azure Video Indexer identifies speakers in your video but in some cases you may want to edit these names. You can perform the following editing actions, while in the edit mode. The following editing actions only apply to the currently selected video.
--- Add new speaker.-- Rename existing speaker.
-
- The update applies to all speakers identified by this name.
-- Assign a speaker for a transcript line.
+Azure Video Indexer identifies each speaker in a video and attributes each transcribed line to a speaker. The speakers are given a unique identity such as `Speaker #1` and `Speaker #2`. To provide clarity and enrich the transcript quality, you may want to replace the assigned identity with each speaker's actual name. To edit speakers' names, use the edit actions as described in this article.
The article demonstrates how to edit speakers with the [Azure Video Indexer website](https://www.videoindexer.ai/). The same editing operations are possible with an API. To use the API, call [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index).
-## Prerequisites
+> [!NOTE]
+> The addition or editing of a speaker name is applied throughout the transcript of the video but is not applied to other videos in your Azure Video Indexer account.
+
+## Start editing
1. Sign in to the [Azure Video Indexer website](https://www.videoindexer.ai/). 2. Select a video.
This action allows adding new speakers that were not identified by Azure Video I
## Rename an existing speaker
-This action allows renaming an existing speaker that was identified by Azure Video Indexer. To rename a speaker from the website for the selected video, do the following:
+This action allows renaming an existing speaker that was identified by Azure Video Indexer. The update applies to all speakers identified by this name.
+
+To rename a speaker from the website for the selected video, do the following:
1. Select the edit mode. 1. Go to the transcript line where the speaker you wish to rename appears.
azure-vmware Upgrade Hcx Azure Vmware Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/upgrade-hcx-azure-vmware-solutions.md
+
+ Title: Upgrade HCX on Azure VMware Solution
+description: This article explains how to upgrade HCX on Azure VMware Solution.
+ Last updated : 11/09/2022++
+# Upgrade HCX on Azure VMware Solution
+
+In this article, you'll learn how to upgrade HCX on Azure VMware Solution and apply HCX service updates, which may include new features, software fixes, or security patches.
+
+You can update HCX Connector and HCX Cloud systems during separate maintenance windows, but for optimal compatibility, it's recommended you update both systems together. Apply service updates during a maintenance window where no new HCX operations are queued up.
+
+>[!IMPORTANT]
+>Starting with HCX 4.4.0, HCX appliances install the VMware Photon Operating System. When upgrading to HCX 4.4.x or later from an HCX version prior to version 4.4.0, you must also upgrade all Service Mesh appliances.
+
+## System requirements
+
+- For system requirements, compatibility, and upgrade prerequisites, see the [VMware HCX release notes](https://docs.vmware.com/en/VMware-HCX/index.html).
+
+- For more information about the upgrade path, see the [Product Interoperability Matrix](https://interopmatrix.vmware.com/Upgrade?productId=660).
+
+- Ensure that the HCX Manager and site pair configurations are healthy.
+
+- As part of HCX update planning, and to ensure that HCX components are updated successfully, review the service update considerations and requirements. To plan your HCX upgrade, see [Planning for HCX Updates](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-61F5CED2-C347-4A31-8ACB-A4553BFC62E3.html).
+
+- Ensure that you have a backup and snapshot of the HCX Connector in the on-premises environment, if applicable.
+
+### Back up HCX
+- Azure VMware Solution backs up HCX Cloud Manager configuration daily.
++
+- Use the appliance management interface to create a backup of the on-premises HCX Manager. For more information, see [Backing Up HCX Manager](https://docs.vmware.com/en/VMware-HCX/4.4/hcx-user-guide/GUID-6A9D1451-3EF3-4E49-B23E-A9A781E5214A.html). You can use the configuration backup to restore the appliance to its state before the backup. The contents of the backup file supersede configuration changes made before restoring the appliance.
+ 
+- HCX Cloud Manager snapshots are taken automatically during upgrades to HCX 4.4 or later. HCX retains automatic snapshots for 24 hours before deleting them. To take a manual snapshot of HCX Cloud Manager, or to get help with reverting from a snapshot, [create a support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+## Upgrade HCX
+The upgrade process consists of two steps:
+1. Upgrade HCX Manager
+ 1. HCX Cloud Manager
+ 1. HCX Connector (you can update site-paired HCX Managers simultaneously)
+1. Upgrade HCX Service Mesh appliances
+
+### Upgrade HCX Manager
+The HCX update is first applied to the HCX Manager systems.
+
+**What to expect**
+- HCX Manager is rebooted as part of the upgrade process.
+- HCX vCenter plugins are updated.
+- There's no data-plane outage during this procedure.
+
+**Prerequisites**
+- Verify that the HCX Manager system reports healthy connections to the connected vCenter Server and, if applicable, NSX Manager.
+- Verify that the HCX Manager system reports healthy connections to the HCX Interconnect service components, and ensure HCX isn't in an out-of-sync state.
+- Verify that Site Pair configurations are healthy.
+- No VM migrations should be in progress during this upgrade.
+
+**Procedure**
+
+To follow the HCX Manager upgrade process, see [Upgrading the HCX Manager](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-02DB88E1-EC81-434B-9AE9-D100E427B31C.html).
+
+### Upgrade HCX Service Mesh appliances
+
+Service Mesh appliances are upgraded independently of the HCX Manager, but they must also be upgraded. These appliances are flagged for available updates whenever the HCX Manager has newer software available.
+
+**What to expect**
+
+- Service VMs will be rebooted as part of the upgrade.
+- There's a small data-plane outage during this procedure.
+- Consider an in-service upgrade of the Network Extension appliances to reduce downtime during HCX Network Extension upgrades.
+
+**Prerequisites**
+- All paired HCX Managers on both the source and the target site are updated and all services have returned to a fully converged state.
+- Service Mesh appliance upgrades must be initiated using the HCX plug-in of vCenter or the 443 console at the source site.
+- No VM migrations should be in progress during this upgrade.
+
+**Procedure**
+
+To follow the Service Mesh appliances upgrade process, see [Upgrading the HCX Service Mesh Appliances](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-EF89A098-D09B-4270-9F10-AEFA37CE5C93.html).
+
+## FAQ
+
+### What is the impact of an HCX upgrade?
+
+Apply service updates during a maintenance window where no new HCX operations or migrations are queued up. The upgrade window accounts for a brief disruption to the Network Extension service while the appliances are redeployed with the updated code.
+For individual HCX component upgrade impact, see [Planning for HCX Updates](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-61F5CED2-C347-4A31-8ACB-A4553BFC62E3.html).
+
+### Do I need to upgrade the service mesh appliances?
+
+The HCX Service Mesh can be upgraded once all paired HCX Manager systems are updated and all services have returned to a fully converged state. Check the HCX release notes for upgrade requirements. Starting with HCX 4.4.0, HCX appliances install the VMware Photon Operating System. When upgrading to HCX 4.4.x or later from an HCX version prior to version 4.4.0, you must upgrade all Service Mesh appliances.
+
+### How do I roll back HCX upgrade using a snapshot?
+
+See [Rolling Back an Upgrade Using Snapshots](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-B34728B9-B187-48E5-AE7B-74E92D09B98B.html). On the cloud side, open a [support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) to roll back the upgrade.
+
+## Next steps
+[Software Versioning, Skew and Legacy Support Policies](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-skew-policy/GUID-787FB2A1-52AF-483C-B595-CF382E728674.html)
+
+[Updating VMware HCX](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-508A94B2-19F6-47C7-9C0D-2C89A00316B9.html)
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 06/06/2022 Last updated : 11/15/2022
Azure Backup offers two ways to modify protection for a data-source:
In both scenarios, the new policy is applied to all older recovery points, which are in standard tier and archive tier. So, older recovery points might get deleted if there's a policy change.
-When you move recovery points to archive, they're subjected to an early deletion period of 180 days. The charges are prorated. If a recovery point that hasn't stayed in archive for 180 days is deleted, it incurs cost equivalent to 180 minus the number of days it has spent in standard tier.
-
-If you delete recovery points that haven't stayed in archive for a minimum of 180 days, they incur early deletion cost.
+When you move recovery points to archive, they're subject to an early deletion period of 180 days. The charges are prorated. If you delete a recovery point that hasn't stayed in vault-archive for 180 days, you're charged for the remaining retention period at the vault-archive tier price. For example, if you delete a recovery point after it has spent 45 days in the archive tier, you're charged for the remaining 135 days at the archive tier price.
## Stop protection and delete data
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
description: Learn how to create a shareable link to let a user connect to a tar
Previously updated : 09/13/2022 Last updated : 11/16/2022
By default, users in your org will have only read access to shared links. If a u
## Considerations
-* Shareable Links isn't currently supported on peered VNets.
+* Shareable Links isn't currently supported on peered VNets that aren't in the same subscription.
* Shareable Links is not supported for national clouds during preview. * The Standard SKU is required for this feature.
batch Batch Pool Cloud Service To Virtual Machine Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-cloud-service-to-virtual-machine-configuration.md
Last updated 09/03/2021
Currently, Batch pools can be created using either [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) or [cloudServiceConfiguration](/rest/api/batchservice/pool/add#cloudserviceconfiguration). We recommend using Virtual Machine Configuration only, as this configuration supports all Batch capabilities.
-Cloud Services Configuration pools don't support some of the current Batch features, and won't support any newly-added features. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
+Cloud Services Configuration pools don't support some of the current Batch features, and won't support any newly added features. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
If your Batch solutions currently use 'cloudServiceConfiguration' pools, we recommend changing to 'virtualMachineConfiguration' as soon as possible. This will enable you to benefit from all Batch capabilities, such as an expanded [selection of VM series](batch-pool-vm-sizes.md), Linux VMs, [containers](batch-docker-container-workloads.md), [Azure Resource Manager virtual networks](batch-virtual-network.md), and [node disk encryption](disk-encryption.md).
Some of the key differences between the two configurations include:
- 'virtualMachineConfiguration' pool nodes utilize managed OS disks. The [managed disk type](../virtual-machines/disks-types.md) that is used for each node depends on the VM size chosen for the pool. If a 's' VM size is specified for the pool, for example 'Standard_D2s_v3', then a premium SSD is used. If a 'non-s' VM size is specified, for example 'Standard_D2_v3', then a standard HDD is used. > [!IMPORTANT]
- > As with Virtual Machines and Virtual Machine Scale Sets, the OS managed disk used for each node incurs a cost, which is additional to the cost of the VMs. 'virtualMachineConfiguration' pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks.There is no OS disk cost for 'cloudServiceConfiguration' nodes, as the OS disk is created on the nodes local SSD.
+ > As with Virtual Machines and Virtual Machine Scale Sets, the OS managed disk used for each node incurs a cost, which is additional to the cost of the VMs. 'virtualMachineConfiguration' pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary disk, to avoid extra costs associated with managed disks. There's no OS disk cost for 'cloudServiceConfiguration' nodes, as the OS disk is created on the node's local disk.
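As a rough sketch of the target configuration, a pool add request body using `virtualMachineConfiguration` (Batch service REST API) might look like the following; the pool ID, image reference, and VM size are illustrative.

```json
{
  "id": "mypool",
  "vmSize": "Standard_D2s_v3",
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "canonical",
      "offer": "0001-com-ubuntu-server-focal",
      "sku": "20_04-lts",
      "version": "latest"
    },
    "nodeAgentSKUId": "batch.node.ubuntu 20.04"
  },
  "targetDedicatedNodes": 2
}
```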
## Azure Data Factory custom activity pools
Azure Batch pools can be used to run Data Factory custom activities. Any 'cloudS
When creating your new pools to run Data Factory custom activities, follow these practices: - Pause all pipelines before creating the new pools and deleting the old ones to ensure no executions will be interrupted.-- The same pool id can be used to avoid linked service configuration changes.
+- The same pool ID can be used to avoid linked service configuration changes.
- Resume pipelines when new pools have been created. For more information about using Azure Batch to run Data Factory custom activities, see [Azure Batch linked service](../data-factory/compute-linked-services.md#azure-batch-linked-service) and [Custom activities in a Data Factory pipeline](../data-factory/transform-data-using-dotnet-custom-activity.md)
For more information about using Azure Batch to run Data Factory custom activiti
- Learn more about [pool configurations](nodes-and-pools.md#configurations). - Learn more about [pool best practices](best-practices.md#pools).-- See the REST API reference for [pool addition](/rest/api/batchservice/pool/add) and [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration).
+- See the REST API reference for [pool addition](/rest/api/batchservice/pool/add) and [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration).
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-virtual-network.md
Title: Provision a pool in a virtual network description: How to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server. Previously updated : 11/14/2022 Last updated : 11/15/2022
or `CloudServiceConfiguration`. `VirtualMachineConfiguration` for Batch pools is
pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). > [!IMPORTANT]
-> Batch pools can be configured in one of two communication modes. `Classic` communication
-> mode is where the Batch service initiates communication to the compute nodes.
-> [`Simplified` communication mode](simplified-compute-node-communication.md)
+> Batch pools can be configured in one of two node communication modes. Classic node communication mode is
+> where the Batch service initiates communication to the compute nodes.
+> [Simplified](simplified-compute-node-communication.md) node communication mode
> is where the compute nodes initiate communication to the Batch Service. ## Pools in Virtual Machine Configuration
NSG with at least the inbound and outbound security rules that are shown in the
> [!WARNING] > Batch service IP addresses can change over time. Therefore, we highly recommend that you use the
-> BatchNodeManagement.*region* service tag (or a regional variant) for the NSG rules indicated in the
-> following tables. Avoid populating NSG rules with specific Batch service IP addresses.
+> BatchNodeManagement.*region* service tag for the NSG rules indicated in the following tables. Avoid
+> populating NSG rules with specific Batch service IP addresses.
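As a hedged sketch of what this looks like in practice, an outbound NSG security rule that uses the service tag instead of explicit IP addresses might resemble the following ARM JSON; the rule name, priority, and region suffix are illustrative.

```json
{
  "name": "AllowBatchNodeManagementOutbound",
  "properties": {
    "priority": 150,
    "direction": "Outbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "VirtualNetwork",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "BatchNodeManagement.eastus2",
    "destinationPortRange": "443"
  }
}
```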
#### Inbound security rules
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 11/14/2022 Last updated : 11/15/2022
This article discusses best practices and useful tips for using the Azure Batch
- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode). -- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':** While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools don't support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+- **`virtualMachineConfiguration` or `cloudServiceConfiguration`:** While you can currently create pools using either
+configuration, new pools should be configured using `virtualMachineConfiguration` and not `cloudServiceConfiguration`.
+All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Service Configuration
+pools don't support all features and no new capabilities are planned. You won't be able to create new
+`cloudServiceConfiguration` pools or add new nodes to existing pools
+[after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
+For more information, see
+[Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+
+- **`classic` or `simplified` node communication mode:** Pools can be configured in one of two node communication modes,
+classic or [simplified](simplified-compute-node-communication.md). In the classic node communication model, the Batch service
+initiates communication to the compute nodes, and compute nodes also need to communicate with Azure Storage. In the simplified
+node communication model, compute nodes initiate communication with the Batch service. Because the simplified model reduces
+the scope of required inbound/outbound connections and doesn't require Azure Storage outbound access for baseline operation,
+we recommend using the simplified node communication model (see the sketch after this list). Some future improvements to the
+Batch service will also require the simplified node communication model.
- **Job and task run time considerations:** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job isn't long, don't allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job. - **Multiple compute nodes:** Individual nodes aren't guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes. -- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with. An image without a specified `batchSupportEndOfLife` date indicates that such a date has not been determined yet by the Batch service. Absence of a date does not indicate that the respective image will be supported indefinitely. An EOL date may be added or updated in the future at anytime.
+- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support
+end of life (EOL) dates. These dates can be discovered via the
+[`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages),
+[PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or
+[Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL
+dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a
+specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is
+derived or aligned with. An image without a specified `batchSupportEndOfLife` date indicates that such a date hasn't been
+determined yet by the Batch service. Absence of a date doesn't indicate that the respective image will be supported
+indefinitely. An EOL date may be added or updated in the future at any time.
- **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can create uniqueness by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
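As referenced in the node communication mode item above, here's a hedged sketch of opting a pool into simplified mode at creation time via the `targetNodeCommunicationMode` property (available in recent Batch service REST API versions); the pool ID, image, and VM size are illustrative.

```json
{
  "id": "mypool-simplified",
  "vmSize": "Standard_D2s_v3",
  "targetNodeCommunicationMode": "simplified",
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "canonical",
      "offer": "0001-com-ubuntu-server-focal",
      "sku": "20_04-lts",
      "version": "latest"
    },
    "nodeAgentSKUId": "batch.node.ubuntu 20.04"
  },
  "targetDedicatedNodes": 1
}
```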
batch Security Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-best-practices.md
Title: Batch security and compliance best practices description: Learn best practices and useful tips for enhancing security with your Azure Batch solutions. Previously updated : 09/01/2021 Last updated : 11/15/2022
Many features are available to help you create a more secure Azure Batch deploym
### Pool configuration
-Many security features are only available for pools configured using [Virtual Machine Configuration](nodes-and-pools.md#configurations), and not for pools with Cloud Services Configuration. We recommend using Virtual Machine Configuration pools, which utilize [virtual machine scale sets](../virtual-machine-scale-sets/overview.md), whenever possible.
+Many security features are only available for pools configured using [Virtual Machine Configuration](nodes-and-pools.md#configurations), and not for pools with Cloud Services Configuration. We recommend using Virtual Machine Configuration pools, which utilize [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), whenever possible.
+
+Pools can also be configured in one of two node communication modes, classic or [simplified](simplified-compute-node-communication.md).
+In the classic node communication model, the Batch service initiates communication to the compute nodes, and compute nodes
+also need to communicate with Azure Storage. In the simplified node communication model, compute nodes initiate communication
+with the Batch service. Because the simplified model reduces the scope of required inbound/outbound connections and doesn't
+require Azure Storage outbound access for baseline operation, we recommend using the simplified node communication model.
### Batch account authentication
We strongly recommend using Azure AD for Batch account authentication. Some Batc
When creating a Batch account, you can choose between two [pool allocation modes](accounts.md#batch-accounts): -- **Batch service**: The default option, where the underlying Cloud Service or virtual machine scale set resources used to allocate and manage pool nodes are created in internal subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.-- **User subscription**: The underlying Cloud Service or virtual machine scale set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources.
+- **Batch service**: The default option, where the underlying Cloud Service or Virtual Machine Scale Set resources used to allocate and manage pool nodes are created in Batch-owned subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.
+- **User subscription**: The underlying Cloud Service or Virtual Machine Scale Set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources.
-With user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription mode is required if you want to create Batch pools using Azure Reserved VM Instances, use Azure Policy on virtual machine scale set resources, and/or manage the core quota on the subscription (shared across all Batch accounts in the subscription). To create a Batch account in user subscription mode, you must also register your subscription with Azure Batch, and associate the account with an Azure Key Vault.
+With user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription mode is required if you want to create Batch pools using Azure Reserved VM Instances, use Azure Policy on Virtual Machine Scale Set resources, and/or manage the core quota on the subscription (shared across all Batch accounts in the subscription). To create a Batch account in user subscription mode, you must also register your subscription with Azure Batch, and associate the account with an Azure Key Vault.
## Restrict network endpoint access ### Batch network endpoints
-Be aware that by default, endpoints with public IP addresses are used to communicate with Batch accounts, Batch pools, and pool nodes.
+By default, endpoints with public IP addresses are used to communicate with Batch accounts, Batch pools, and pool nodes.
### Batch account API
For more information, see [Create an Azure Batch pool in a virtual network](bat
#### Create pools with static public IP addresses
-By default, the public IP addresses associated with pools are dynamic; they are created when a pool is created and IP addresses can be added or removed when a pool is resized. When the task applications running on pool nodes need to access external services, access to those services may need to be restricted to specific IPs. In this case, having dynamic IP addresses will not be manageable.
+By default, the public IP addresses associated with pools are dynamic; they're created when a pool is created
+and IP addresses can be added or removed when a pool is resized. When the task applications running on pool
+nodes need to access external services, access to those services may need to be restricted to specific IPs.
+In this case, having dynamic IP addresses won't be manageable.
You can create static public IP address resources in the same subscription as the Batch account before pool creation. You can then specify these addresses when creating your pool.
For extra security, encrypt these disks using one of these Azure disk encryption
## Securely access services from compute nodes
-Batch nodes can securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md), which can be used by task applications to access other services. A certificate is used to grant the pool nodes access to Key Vault. By [enabling automatic certificate rotation in your Batch pool](automatic-certificate-rotation.md), the credentials will be automatically renewed. This is the recommended option for Batch nodes to access credentials stored in Azure Key Vault, although you can also [set up Batch nodes to securely access credentials and secrets with a certificate](credential-access-key-vault.md) without automatic certificate rotation.
+Use [Pool managed identities](managed-identity-pools.md) with the appropriate access permissions configured for the
+user-assigned managed identity to access Azure services that support managed identity, including Azure Key Vault. If
+you need to provision certificates on Batch nodes, utilize the available Azure Key Vault VM extension with pool
+Managed Identity to install and manage certificates on your Batch pool. For more information on deploying certificates
+from Azure Key Vault with Managed Identity on Batch pools, see
+[Enable automatic certificate rotation in a Batch pool](automatic-certificate-rotation.md).
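As a hedged sketch of this setup, assigning a user-assigned managed identity to a pool in an ARM template might look like the following; the account, pool, and identity names are placeholders, and the API version shown is indicative.

```json
{
  "type": "Microsoft.Batch/batchAccounts/pools",
  "apiVersion": "2022-10-01",
  "name": "mybatchaccount/mypool",
  "identity": {
    "type": "UserAssigned",
    "userAssignedIdentities": {
      "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', 'my-pool-identity')]": {}
    }
  },
  "properties": {
    "vmSize": "STANDARD_D2S_V3",
    "deploymentConfiguration": {
      "virtualMachineConfiguration": {
        "imageReference": {
          "publisher": "canonical",
          "offer": "0001-com-ubuntu-server-focal",
          "sku": "20_04-lts",
          "version": "latest"
        },
        "nodeAgentSkuId": "batch.node.ubuntu 20.04"
      }
    }
  }
}
```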
## Governance and compliance
These offerings are based on various types of assurances, including formal cert
Depending on your pool allocation mode and the resources to which a policy should apply, use Azure Policy with Batch in one of the following ways: - Directly, using the Microsoft.Batch/batchAccounts resource. A subset of the properties for a Batch account can be used. For example, your policy can include valid Batch account regions, allowed pool allocation mode, and whether a public network is enabled for accounts.-- Indirectly, using the Microsoft.Compute/virtualMachineScaleSets resource. Batch accounts with user subscription pool allocation mode can have policy set on the virtual machine scale set resources that are created in the Batch account subscription. For example, allowed VM sizes and ensure certain extensions are run on each pool node.
+- Indirectly, using the Microsoft.Compute/virtualMachineScaleSets resource. Batch accounts with user subscription pool allocation mode can have policy set on the Virtual Machine Scale Set resources that are created in the Batch account subscription. For example, allowed VM sizes and ensure certain extensions are run on each pool node.
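As a hedged sketch of the indirect approach, a policy rule that denies disallowed VM sizes on scale set resources might look like the following; the alias usage follows Azure Policy conventions, and the size list is illustrative.

```json
{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.Compute/virtualMachineScaleSets"
        },
        {
          "not": {
            "field": "Microsoft.Compute/virtualMachineScaleSets/sku.name",
            "in": [ "Standard_D2s_v3", "Standard_D4s_v3" ]
          }
        }
      ]
    },
    "then": {
      "effect": "deny"
    }
  }
}
```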
## Next steps
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
The following faults are available for use today. Visit the [Fault Providers](./
|-|-| | Fault Provider | N/A | | Supported OS Types | N/A |
-| Description | Adds a time delay before, between, or after other actions. Useful for waiting for the impact of a fault to appear in a service or for waiting for an activity outside of the experiment to complete (for example, waiting for autohealing to occur before injecting another fault). |
+| Description | Adds a time delay before, between, or after other actions. This fault is useful for waiting for the impact of a fault to appear in a service, or for waiting for an activity outside of the experiment to complete. For example, waiting for autohealing to occur before injecting another fault. |
| Prerequisites | N/A | | Urn | urn:csci:microsoft:chaosStudio:timedDelay/1.0 | | duration | The duration of the delay in ISO 8601 format (Example: PT10M) |
The following faults are available for use today. Visit the [Fault Providers](./
| Capability Name | CPUPressure-1.0 | | Target type | Microsoft-Agent | | Supported OS Types | Windows, Linux |
-| Description | Add CPU pressure up to the specified value on the VM where this fault is injected for the duration of the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the "% Processor Utility" performance counter is used at fault start to determine current CPU percentage and this is subtracted from the pressureLevel defined in the fault so that % Processor Utility will hit approximately the pressureLevel defined in the fault parameters. |
+| Description | Adds CPU pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the "% Processor Utility" performance counter is used at fault start to determine current CPU percentage, which is subtracted from the `pressureLevel` defined in the fault so that % Processor Utility will hit approximately the `pressureLevel` defined in the fault parameters. |
| Prerequisites | **Linux:** Running the fault on a Linux VM requires the **stress-ng** utility to be installed. You can install it using the package manager for your Linux distro, </br> APT Command to install stress-ng: *sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng* </br> YUM Command to install stress-ng: *sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng* | | | **Windows:** None. | | Urn | urn:csci:microsoft:agent:cpuPressure/1.0 | | Parameters (key, value) | | pressureLevel | An integer between 1 and 99 that indicates how much CPU pressure (%) will be applied to the VM. |
-| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |
+| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a Virtual Machine Scale Set. Required for Virtual Machine Scale Sets. |
### Sample JSON ```json
Known issues on Linux:
| Capability Name | PhysicalMemoryPressure-1.0 | | Target type | Microsoft-Agent | | Supported OS Types | Windows, Linux |
-| Description | Add physical memory pressure up to the specified value on the VM where this fault is injected for the duration of the fault action. The artificial physical memory pressure is removed at the end of the duration or if the experiment is canceled. |
+| Description | Adds physical memory pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial physical memory pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | **Linux:** Running the fault on a Linux VM requires the **stress-ng** utility to be installed. You can install it using the package manager for your Linux distro, </br> APT Command to install stress-ng: *sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng* </br> YUM Command to install stress-ng: *sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng* | | | **Windows:** None. | | Urn | urn:csci:microsoft:agent:physicalMemoryPressure/1.0 | | Parameters (key, value) | | | pressureLevel | An integer between 1 and 99 that indicates how much physical memory pressure (%) will be applied to the VM. |
-| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |
+| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a Virtual Machine Scale Set. Required for Virtual Machine Scale Sets. |
### Sample JSON
Known issues on Linux:
| Capability Name | VirtualMemoryPressure-1.0 | | Target type | Microsoft-Agent | | Supported OS Types | Windows |
-| Description | Add virtual memory pressure up to the specified value on the VM where this fault is injected for the duration of the fault action. The artificial virtual memory pressure is removed at the end of the duration or if the experiment is canceled. |
+| Description | Adds virtual memory pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial virtual memory pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:agent:virtualMemoryPressure/1.0 | | Parameters (key, value) | |
Known issues on Linux:
| Capability Name | DiskIOPressure-1.0 | | Target type | Microsoft-Agent | | Supported OS Types | Windows |
-| Description | Uses the [diskspd utility](https://github.com/Microsoft/diskspd/wiki) to add disk pressure to the primary storage of the VM where it is injected for the duration of the fault action. This fault has five different modes of execution. The artificial disk pressure is removed at the end of the duration or if the experiment is canceled. |
+| Description | Uses the [diskspd utility](https://github.com/Microsoft/diskspd/wiki) to add disk pressure to the primary storage of the VM where it's injected during the fault action. This fault has five different modes of execution. The artificial disk pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:agent:diskIOPressure/1.0 | | Parameters (key, value) | |
Known issues on Linux:
| Prerequisites | Running the fault on a Linux VM requires the **stress-ng** utility to be installed. You can install it using the package manager for your Linux distro, </br> APT Command to install stress-ng: *sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng* </br> YUM Command to install stress-ng: *sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng* | | Urn | urn:csci:microsoft:agent:linuxDiskIOPressure/1.0 | | Parameters (key, value) | |
-| workerCount | Number of worker processes to run. Setting this to 0 will generate as many worker processes as there are number of processors. |
+| workerCount | Number of worker processes to run. Setting `workerCount` to 0 will generate as many worker processes as there are processors. |
| fileSizePerWorker | Size of the temporary file a worker will perform I/O operations against. Integer plus a unit in bytes (b), kilobytes (k), megabytes (m), or gigabytes (g) (for example, 4m for 4 megabytes, 256g for 256 gigabytes) | | blockSize | Block size to be used for disk I/O operations, capped at 4 megabytes. Integer plus a unit in bytes (b), kilobytes (k), or megabytes (m) (for example, 512k for 512 kilobytes) | | virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |
Known issues on Linux:
| Capability Name | StopService-1.0 | | Target type | Microsoft-Agent | | Supported OS Types | Windows |
-| Description | Uses the Windows Service Controller APIs to stop a Windows service for the duration of the fault, restarting it at the end of the duration or if the experiment is canceled. |
+| Description | Uses the Windows Service Controller APIs to stop a Windows service during the fault, restarting it at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:agent:stopService/1.0 | | Parameters (key, value) | |
-| serviceName | The name of the Windows service you want to stop. You can run `sc.exe query` in command prompt to explore service names, Windows service friendly names are not supported. |
+| serviceName | The name of the Windows service you want to stop. You can run `sc.exe query` in command prompt to explore service names, Windows service friendly names aren't supported. |
| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. | ### Sample JSON
Known issues on Linux:
| Capability Name | TimeChange-1.0 | | Target type | Microsoft-Agent | | Supported OS Types | Windows |
-| Description | Changes the system time for the VM where it is injected and resets it at the end of the duration or if the experiment is canceled. |
+| Description | Changes the system time of the VM where it's injected, and resets the time at the end of the experiment or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:agent:timeChange/1.0 | | Parameters (key, value) | |
-| dateTime | A DateTime string in [ISO8601 format](https://www.cryptosys.net/pki/manpki/pki_iso8601datetime.html). If YYYY-MM-DD values are missing, they are defaulted to the current day when the experiment runs. If Thh:mm:ss values are missing, the default value is 12:00:00 AM. If a 2-digit year is provided (YY), it is converted to a 4-digit year (YYYY) based on the current century. If \<Z\> is missing, it is defaulted to the offset of the local timezone. \<Z\> must always include a sign symbol (negative or positive). |
+| dateTime | A DateTime string in [ISO8601 format](https://www.cryptosys.net/pki/manpki/pki_iso8601datetime.html). If YYYY-MM-DD values are missing, they're defaulted to the current day when the experiment runs. If Thh:mm:ss values are missing, the default value is 12:00:00 AM. If a 2-digit year is provided (YY), it's converted to a 4-digit year (YYYY) based on the current century. If \<Z\> is missing, it's defaulted to the offset of the local timezone. \<Z\> must always include a sign symbol (negative or positive). |
| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. | ### Sample JSON
Known issues on Linux:
| Capability Name | DnsFailure-1.0 | | Target type | Microsoft-Agent | | Supported OS Types | Windows |
-| Description | Substitutes the response of a DNS lookup request with a specified error code. |
+| Description | Substitutes DNS lookup request responses with a specified error code. DNS lookup requests that will be substituted must:<ul><li>Originate from the VM</li><li>Match the defined fault parameters</li></ul>**Note**: DNS lookups that aren't made by the Windows DNS client won't be affected by this fault. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:agent:dnsFailure/1.0 | | Parameters (key, value) | |
-| hosts | Delimited JSON array of host names to fail DNS lookup request for. |
+| hosts | Delimited JSON array of host names to fail DNS lookup request for.<br><br>This property accepts wildcards (`*`), but only for the first subdomain in an address and only applies to the subdomain for which they're specified. For example:<ul><li>\*.microsoft.com is supported</li><li>subdomain.\*.microsoft isn't supported</li><li>\*.microsoft.com won't account for multiple subdomains in an address such as subdomain1.subdomain2.microsoft.com.</li></ul> |
| dnsFailureReturnCode | DNS error code to be returned to the client for the lookup failure (FormErr, ServFail, NXDomain, NotImp, Refused, XDomain, YXRRSet, NXRRSet, NotAuth, NotZone). For more details on DNS return codes, visit [the IANA website](https://www.iana.org/assignments/dns-parameters/dns-parameters.xml#dns-parameters-6) | | virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |
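A hedged sketch of a fault action using these parameters, following the shape of the other sample JSON entries in this article; the branch name, selector ID, hosts, and duration are illustrative.

```json
{
  "name": "branchOne",
  "actions": [
    {
      "type": "continuous",
      "name": "urn:csci:microsoft:agent:dnsFailure/1.0",
      "selectorId": "Selector1",
      "duration": "PT10M",
      "parameters": [
        {
          "key": "hosts",
          "value": "[ \"www.bing.com\", \"*.microsoft.com\" ]"
        },
        {
          "key": "dnsFailureReturnCode",
          "value": "ServFail"
        }
      ]
    }
  ]
}
```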
Known issues on Linux:
| Target type | Microsoft-Agent | | Supported OS Types | Windows | | Description | Increases network latency for a specified port range and network block. |
-| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it is run as administrator by default. |
+| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkLatency/1.0 | | Parameters (key, value) | | | latencyInMilliseconds | Amount of latency to be applied in milliseconds. |
-| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 3. |
+| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 16. |
| address | IP address indicating the start of the IP range. | | subnetMask | Subnet mask for the IP address range. | | portLow | (Optional) Port number of the start of the port range. |
Known issues on Linux:
| Target type | Microsoft-Agent | | Supported OS Types | Windows | | Description | Blocks outbound network traffic for specified port range and network block. |
-| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it is run as administrator by default. |
+| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkDisconnect/1.0 | | Parameters (key, value) | |
-| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 3. |
+| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 16. |
| address | IP address indicating the start of the IP range. | | subnetMask | Subnet mask for the IP address range. | | portLow | (Optional) Port number of the start of the port range. |
Known issues on Linux:
| Target type | Microsoft-Agent | | Supported OS Types | Windows | | Description | Applies a Windows firewall rule to block outbound traffic for specified port range and network block. |
-| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it is run as administrator by default. |
+| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkDisconnectViaFirewall/1.0 | | Parameters (key, value) | | | destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 3. |
Known issues on Linux:
| Capability Name | Shutdown-1.0 | | Target type | Microsoft-VirtualMachine | | Supported OS Types | Windows, Linux |
-| Description | Shuts down a VM for the duration of the fault and restarts the VM at the end of the fault duration or if the experiment is canceled. Only Azure Resource Manager VMs are supported. |
+| Description | Shuts down a VM for the duration of the fault, and restarts it at the end of the experiment or if the experiment is canceled. Only Azure Resource Manager VMs are supported. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:virtualMachine:shutdown/1.0 | | Parameters (key, value) | |
Known issues on Linux:
| Capability Name | Shutdown-1.0 | | Target type | Microsoft-VirtualMachineScaleSet | | Supported OS Types | Windows, Linux |
-| Description | Shuts down or kill a virtual machine scale set instance for the duration of the fault and restarts the VM at the end of the fault duration or if the experiment is canceled. |
+| Description | Shuts down or kills a virtual machine scale set instance during the fault, and restarts the VM at the end of the fault duration or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:virtualMachineScaleSet:shutdown/1.0 | | Parameters (key, value) | |
Known issues on Linux:
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [NetworkChaos kind](https://chaos-mesh.org/docs/simulate-network-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [NetworkChaos kind](https://chaos-mesh.org/docs/simulate-network-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [PodChaos kind](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [PodChaos kind](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:stressChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [StressChaos kind](https://chaos-mesh.org/docs/simulate-heavy-stress-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [StressChaos kind](https://chaos-mesh.org/docs/simulate-heavy-stress-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:IOChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [IOChaos kind](https://chaos-mesh.org/docs/simulate-io-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [IOChaos kind](https://chaos-mesh.org/docs/simulate-io-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:timeChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [TimeChaos kind](https://chaos-mesh.org/docs/simulate-time-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [TimeChaos kind](https://chaos-mesh.org/docs/simulate-time-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:kernelChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [KernelChaos kind](https://chaos-mesh.org/docs/simulate-kernel-chaos-on-kubernetes/#configuration-file). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [KernelChaos kind](https://chaos-mesh.org/docs/simulate-kernel-chaos-on-kubernetes/#configuration-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:httpChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [HTTPChaos kind](https://chaos-mesh.org/docs/simulate-http-chaos-on-kubernetes/#create-experiments). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [HTTPChaos kind](https://chaos-mesh.org/docs/simulate-http-chaos-on-kubernetes/#create-experiments). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md) and the [DNS service must be installed](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#deploy-chaos-dns-service). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:dnsChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [DNSChaos kind](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [DNSChaos kind](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
|-|-| | Capability Name | SecurityRule-1.0 | | Target type | Microsoft-NetworkSecurityGroup |
-| Description | Enables manipulation or creation of a rule in an existing Azure Network Security Group or set of Azure Network Security Groups (assuming the rule definition is applicable cross security groups). Useful for simulating an outage of a downstream or cross-region dependency/non-dependency, simulating an event that is expected to trigger a logic to force a service failover, simulating an event that is expected to trigger an action from a monitoring or state management service, or as an alternative for blocking, or allowing, network traffic where Chaos Agent cannot be deployed. |
+| Description | Enables manipulation or rule creation in an existing Azure Network Security Group or set of Azure Network Security Groups, assuming the rule definition is applicable across security groups. Useful for simulating an outage of a downstream or cross-region dependency/non-dependency, simulating an event that's expected to trigger logic that forces a service failover, simulating an event that's expected to trigger an action from a monitoring or state management service, or as an alternative for blocking or allowing network traffic where Chaos Agent can't be deployed. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:networkSecurityGroup:securityRule/1.0 | | Parameters (key, value) | |
### Limitations * The fault can only be applied to an existing Network Security Group.
-* When an NSG rule that is intended to deny traffic is applied existing connections will not be broken until they have been **idle** for 4 minutes. One workaround is to add another branch in the same step that uses a fault that would cause existing connections to break when the NSG fault is applied. For example, killing the process, temporarily stopping the service, or restarting the VM would cause connections to reset.
+* When an NSG rule that's intended to deny traffic is applied, existing connections won't be broken until they've been **idle** for 4 minutes. One workaround is to add another branch in the same step that uses a fault that would cause existing connections to break when the NSG fault is applied. For example, killing the process, temporarily stopping the service, or restarting the VM would cause connections to reset.
* Rules are applied at the start of the action. Any external changes to the rule while the action is running cause the experiment to fail.
-* Creating or modifying Application Security Group rules is not supported.
+* Creating or modifying Application Security Group rules isn't supported.
* Priority values must be unique on each NSG targeted. Attempting to create a new rule that has the same priority value as another will cause the experiment to fail. ## Azure Cache for Redis reboot
| Capability Name | Reboot-1.0 | | Target type | Microsoft-AzureClusteredCacheForRedis | | Description | Causes a forced reboot operation to occur on the target to simulate a brief outage. |
-| Prerequisites | The target Azure Cache for Redis resource must be a Redis Cluster, which requires that the cache must be a Premium Tier cache. Standard and Basic Tiers are not supported. |
+| Prerequisites | The target Azure Cache for Redis resource must be a Redis Cluster, which requires that the cache must be a Premium Tier cache. Standard and Basic Tiers aren't supported. |
| Urn | urn:csci:microsoft:azureClusteredCacheForRedis:reboot/1.0 | | Fault type | Discrete | | Parameters (key, value) | |
### Limitations * The reboot fault causes a forced reboot to better simulate an outage event, which means there's the potential for data loss to occur.
-* The reboot fault is a **discrete** fault type. Unlike continuous faults, it is a one-time action and therefore has no duration.
+* The reboot fault is a **discrete** fault type. Unlike continuous faults, it's a one-time action and therefore has no duration.
## Cloud Services (Classic) shutdown
|-|-| | Capability Name | Shutdown-1.0 | | Target type | Microsoft-DomainName |
-| Description | Stops a deployment for the duration of the fault and restarts the deployment at the end of the fault duration or if the experiment is canceled. |
+| Description | Stops a deployment during the fault and restarts the deployment at the end of the fault duration or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:domainName:shutdown/1.0 | | Fault type | Continuous |
| Capability Name | DenyAccess-1.0 | | Target type | Microsoft-KeyVault | | Description | Blocks all network access to a Key Vault by temporarily modifying the Key Vault network rules, preventing an application dependent on the Key Vault from accessing secrets, keys, and/or certificates. If the Key Vault allows access to all networks, this is changed to only allow access from selected networks with no virtual networks in the allowed list at the start of the fault and returned to allowing access to all networks at the end of the fault duration. If the Key Vault is set to only allow access from selected networks, any virtual networks in the allowed list are removed at the start of the fault and restored at the end of the fault duration. |
-| Prerequisites | The target Key Vault cannot have any firewall rules and must not be set to allow Azure services to bypass the firewall. If the target Key Vault is set to only allow access from selected networks, there must be at least one virtual network rule. The Key Vault cannot be in recover mode. |
+| Prerequisites | The target Key Vault can't have any firewall rules and must not be set to allow Azure services to bypass the firewall. If the target Key Vault is set to only allow access from selected networks, there must be at least one virtual network rule. The Key Vault can't be in recover mode. |
| Urn | urn:csci:microsoft:keyVault:denyAccess/1.0 | | Fault type | Continuous | | Parameters (key, value) | None. |
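Before you run the fault, you can check that the target vault meets these prerequisites. A sketch with the Azure CLI; `$KV` is a placeholder for your Key Vault name:

```bash
# Inspect the vault's virtual network rules and its default network action.
az keyvault network-rule list --name $KV
az keyvault show --name $KV --query "properties.networkAcls.defaultAction"
```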
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
VNet is the fundamental building block for your private network in Azure. VNet e
## How VNet Injection works in Chaos Studio VNet injection allows the Chaos resource provider to inject containerized workloads into your VNet. This means that resources without public endpoints can be accessed via a private IP address on the VNet. Follow these steps for VNet injection:
-1. Register the Microsoft.ContainerInstance resource provider with your subscription (if applicable).
-2. Re-register the Microsoft.Chaos resource provider with your subscription.
-3. Create a subnet named ChaosStudioSubnet in the VNet you want to inject into.
-4. Set the properties.subnetId property when you create or update the Target resource. The value should be the resource ID of the subnet created in step 1.
+
+1. Register the `Microsoft.ContainerInstance` resource provider with your subscription (if applicable).
+
+ ```bash
+ az provider register --namespace 'Microsoft.ContainerInstance' --wait
+ ```
+
+ Verify the registration by running the following command:
+
+ ```bash
+ az provider show --namespace 'Microsoft.ContainerInstance' | grep registrationState
+ ```
+
+ You should see output similar to the following:
+
+ ```bash
+ "registrationState": "Registered",
+ ```
+
+2. Re-register the `Microsoft.Chaos` resource provider with your subscription.
+
+ ```bash
+ az provider register --namespace 'Microsoft.Chaos' --wait
+ ```
+
+ Verify the registration by running the following command:
+
+ ```bash
+ az provider show --namespace 'Microsoft.Chaos' | grep registrationState
+ ```
+
+ You should see output similar to the following:
+
+ ```bash
+ "registrationState": "Registered",
+ ```
+
+3. Create a subnet named `ChaosStudioSubnet` in the VNet you want to inject into, and delegate the subnet to the `Microsoft.ContainerInstance/containerGroups` service, as shown in the sketch below.
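+
+   A sketch using the Azure CLI; the address prefix below is an assumption, so pick a free range in your VNet:
+
+   ```bash
+   # Create the delegated subnet in the VNet you want to inject into.
+   az network vnet subnet create \
+     --resource-group $AKS_INFRA_RESOURCE_GROUP \
+     --vnet-name $AKS_VNET \
+     --name ChaosStudioSubnet \
+     --address-prefixes 10.0.4.0/24 \
+     --delegations Microsoft.ContainerInstance/containerGroups
+   ```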
+
+4. Set the `properties.subnetId` property when you create or update the Target resource. The value should be the resource ID of the subnet created in step 3.
+
+   Replace `$SUBSCRIPTION_ID` with your Azure subscription ID, and replace `$RESOURCE_GROUP` and `$AKS_CLUSTER` with the resource group name and your AKS cluster resource name. Also, replace `$AKS_INFRA_RESOURCE_GROUP` and `$AKS_VNET` with your AKS cluster's infrastructure resource group name and VNet name.
+
+ ```bash
+ URL=https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER/providers/Microsoft.Chaos/targets/microsoft-azurekubernetesservicechaosmesh?api-version=2022-10-01-preview
+ SUBNET_ID=/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$AKS_INFRA_RESOURCE_GROUP/providers/Microsoft.Network/virtualNetworks/$AKS_VNET/subnets/ChaosStudioSubnet
+ BODY="{ \"properties\": { \"subnetId\": \"$SUBNET_ID\" } }"
+ az rest --method put --url $URL --body "$BODY"
+ ```
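+
+   To confirm the subnet ID was set on the target, you can read the resource back (a sketch reusing the same variables):
+
+   ```bash
+   # Fetch the target resource and inspect properties.subnetId in the response.
+   az rest --method get --url $URL
+   ```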
+ 5. Start the experiment. ## Limitations
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Title: Create and run a chaos experiment using Azure Chaos Studio description: Understand the steps to create and run a Chaos Studio experiment in 10mins -+ Last updated 11/10/2021
If this is your first time using Chaos Studio, you must first register the Chaos
5. In the list of resource providers that appears, search for **Microsoft.Chaos**. 6. Click on the Microsoft.Chaos provider, and click the **Register** button.
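Alternatively, you can register the provider from the command line. A sketch using the Azure CLI:

```bash
az provider register --namespace 'Microsoft.Chaos' --wait
```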
+## Create an Azure resource supported by Chaos Studio
+
+Create an Azure resource and ensure that it's one of the supported [fault providers](chaos-studio-fault-providers.md). Also validate that the resource is created in a [region](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio) where Chaos Studio is available. In this experiment, we choose an Azure Virtual Machine, which is one of the supported fault providers for Chaos Studio.
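+
+For example, here's a minimal sketch that creates a VM with the Azure CLI. The resource group, VM name, and image alias are placeholders; adjust them for your environment:
+
+```bash
+# Create a resource group in a region where Chaos Studio is available.
+az group create --name myChaosRG --location eastus
+
+# Create the target VM. List valid image aliases with `az vm image list --output table`.
+az vm create --resource-group myChaosRG --name myChaosVM --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys
+```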
++ ## Enable Chaos Studio on the Virtual Machine you created 1. Open the [Azure portal](https://portal.azure.com). 2. Search for **Chaos Studio (preview)** in the search bar.
chaos-studio Sample Template Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-experiment.md
In this sample, we create a chaos experiment with a single target resource and a
"value": "eastus" }, "chaosTargetResourceId": {
- "value": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
+ "value": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>/providers/Microsoft.Chaos/targets/microsoft-cosmosdb"
} } }
cloud-shell Cloud Shell Windows Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-windows-users.md
- Title: Azure Cloud Shell for Windows users | Microsoft Docs
-description: Guide for users who are not familiar with Linux systems
---
-tags: azure-resource-manager
---- Previously updated : 08/16/2022---
-# PowerShell in Azure Cloud Shell for Windows users
-
-In May 2018, changes were [announced](https://azure.microsoft.com/blog/pscloudshellrefresh/) to PowerShell in Azure Cloud Shell.
-The PowerShell experience in Azure Cloud Shell now runs [PowerShell Core 6](https://github.com/powershell/powershell) in a Linux environment.
-With this change, there may be some differences in the PowerShell experience in Cloud Shell compared to what is expected in a Windows PowerShell experience.
-
-## File system case sensitivity
-
-The file system is case-insensitive in Windows, whereas on Linux, the file system is case-sensitive.
-Previously `file.txt` and `FILE.txt` were considered to be the same file, but now they are considered to be different files.
-Proper casing must be used while `tab-completing` in the file system.
-PowerShell specific experiences, such as `tab-completing` cmdlet names, parameters, and values, are not case-sensitive.
-
-## Windows PowerShell aliases vs Linux utilities
-
-Some existing PowerShell aliases have the same names as built-in Linux commands, such as `cat`,`ls`, `sort`, `sleep`, etc.
-In PowerShell Core 6, aliases that collide with built-in Linux commands have been removed.
-Below are the common aliases that have been removed as well as their equivalent commands:
-
-|Removed Alias |Equivalent Command |
-|||
-|`cat` | `Get-Content` |
-|`curl` | `Invoke-WebRequest` |
-|`diff` | `Compare-Object` |
-|`ls` | `dir` <br> `Get-ChildItem` |
-|`mv` | `Move-Item` |
-|`rm` | `Remove-Item` |
-|`sleep` | `Start-Sleep` |
-|`sort` | `Sort-Object` |
-|`wget` | `Invoke-WebRequest` |
-
-## Persisting $HOME
-
-Earlier users could only persist scripts and other files in their Cloud Drive.
-Now, the user's $HOME directory is also persisted across sessions.
-
-## PowerShell profile
-
-By default, a user's PowerShell profile is not created.
-To create your profile, create a `PowerShell` directory under `$HOME/.config`.
-
-```azurepowershell-interactive
-mkdir (Split-Path $profile.CurrentUserAllHosts)
-```
-
-Under `$HOME/.config/PowerShell`, you can create your profile files - `profile.ps1` and/or `Microsoft.PowerShell_profile.ps1`.
-
-## What's new in PowerShell
-
-For more information about what is new in PowerShell, reference the
-[PowerShell What's New](/powershell/scripting/whats-new/overview) and
-[Discover PowerShell](/powershell/scripting/discover-powershell).
cloud-shell Embed Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/embed-cloud-shell.md
Title: Embed Azure Cloud Shell | Microsoft Docs+ description: Learn to embed Azure Cloud Shell.---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 12/11/2017-++
+tags: azure-resource-manager
+ Title: Embed Azure Cloud Shell
# Embed Azure Cloud Shell
-Embedding Cloud Shell enables developers and content writers to directly open Cloud Shell from a dedicated URL, [shell.azure.com](https://shell.azure.com). This immediately brings the full power of Cloud Shell's authentication, tooling, and up-to-date Azure CLI/Azure PowerShell tools to your users.
+Embedding Cloud Shell enables developers and content writers to directly open Cloud Shell from a
+dedicated URL, `shell.azure.com`. This link brings the full power of Cloud Shell's authentication,
+tooling, and up-to-date Azure CLI and Azure PowerShell tools to your users.
+
+You can use the following images in your own webpages and app as buttons to start a Cloud Shell
+session.
Regular sized button
-[![Regular launch](https://shell.azure.com/images/launchcloudshell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
+![Regular launch](media/embed-cloud-shell/launch-cloud-shell-1.png "Launch Azure Cloud Shell")
Large sized button
-[![Large launch](https://shell.azure.com/images/launchcloudshell@2x.png "Launch Azure Cloud Shell")](https://shell.azure.com)
+![Large launch](media/embed-cloud-shell/launch-cloud-shell-2.png "Launch Azure Cloud Shell")
## How-to
-Integrate Cloud Shell's launch button into markdown files by copying the following:
+To integrate Cloud Shell's launch button into markdown files, copy the following code:
+
+Regular sized button
```markdown
-[![Launch Cloud Shell](https://shell.azure.com/images/launchcloudshell.png "Launch Cloud Shell")](https://shell.azure.com)
+[![Launch Cloud Shell](https://learn.microsoft.com/azure/cloud-shell/media/embed-cloud-shell/launch-cloud-shell-1.png)](https://shell.azure.com)
```
-The HTML to embed a pop-up Cloud Shell is below:
-```html
-<a style="cursor:pointer" onclick='javascript:window.open("https://shell.azure.com", "_blank", "toolbar=no,scrollbars=yes,resizable=yes,menubar=no,location=no,status=no")'><img alt="Launch Azure Cloud Shell" src="https://shell.azure.com/images/launchcloudshell.png"></a>
+Large sized button
+
+```markdown
+[![Launch Cloud Shell](https://learn.microsoft.com/azure/cloud-shell/media/embed-cloud-shell/launch-cloud-shell-2.png)](https://shell.azure.com)
```
+The location of these image files is subject to change. We recommend that you download the files for
+use in your applications.
+ ## Customize experience Set a specific shell experience by augmenting your URL.
-|Experience |URL |
-|||
-|Most recently used shell |[shell.azure.com](https://shell.azure.com) |
-|Bash |[shell.azure.com/bash](https://shell.azure.com/bash) |
-|PowerShell |[shell.azure.com/powershell](https://shell.azure.com/powershell) |
+| Experience | URL |
+| | |
+| Most recently used shell | `https://shell.azure.com` |
+| Bash | `https://shell.azure.com/bash` |
+| PowerShell | `https://shell.azure.com/powershell` |
## Next steps
-[Bash in Cloud Shell quickstart](quickstart.md)<br>
-[PowerShell in Cloud Shell quickstart](quickstart-powershell.md)
+
+- [Bash in Cloud Shell quickstart][07]
+- [PowerShell in Cloud Shell quickstart][06]
+
+<!-- updated link references -->
+[01]: https://shell.azure.com
+[06]: quickstart-powershell.md
+[07]: quickstart.md
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
Title: Azure Cloud Shell features | Microsoft Docs+ description: Overview of features in Azure Cloud Shell---
-tags: azure-resource-manager
-++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 09/20/2022-++
+tags: azure-resource-manager
+ Title: Azure Cloud Shell features
- # Features & tools for Azure Cloud Shell
+Azure Cloud Shell is a browser-based shell experience to manage and develop Azure resources.
+
+Cloud Shell offers a browser-accessible, pre-configured shell experience for managing Azure
+resources without the overhead of installing, versioning, and maintaining a machine yourself.
-Azure Cloud Shell runs on **Common Base Linux - Mariner** (CBL-Mariner),
-Microsoft's Linux distribution for cloud-infrastructure-edge products and services.
+Cloud Shell allocates machines on a per-request basis and as a result machine state doesn't
+persist across sessions. Since Cloud Shell is built for interactive sessions, shells automatically
+terminate after 20 minutes of shell inactivity.
+
+<!--
+TODO:
+- need to verify Distro - showing Ubuntu currently
+- need to verify all experiences described here eg. cd Azure: - I have different results
+-->
+Azure Cloud Shell runs on **Common Base Linux - Mariner** (CBL-Mariner), Microsoft's Linux
+distribution for cloud-infrastructure-edge products and services.
Microsoft internally compiles all the packages included in the **CBL-Mariner** repository to help guard against supply chain attacks. Tooling has been updated to reflect the new base image CBL-Mariner. You can get a full list of installed package versions using the following command:
-`tdnf list installed`. If these changes affected your Cloud Shell environment, please contact
-Azuresupport or create an issue in the
-[Cloud Shell repository](https://github.com/Azure/CloudShell/issues).
+`tdnf list installed`. If these changes affected your Cloud Shell environment, contact Azure Support
+or create an issue in the [Cloud Shell repository][12].
## Features ### Secure automatic authentication
-Cloud Shell securely and automatically authenticates account access for the Azure CLI and Azure PowerShell.
+Cloud Shell securely and automatically authenticates account access for the Azure CLI and Azure
+PowerShell.
### $HOME persistence across sessions
-To persist files across sessions, Cloud Shell walks you through attaching an Azure file share on first launch.
-Once completed, Cloud Shell will automatically attach your storage (mounted as `$HOME\clouddrive`) for all future sessions.
-Additionally, your `$HOME` directory is persisted as an .img in your Azure File share.
-Files outside of `$HOME` and machine state are not persisted across sessions. Use best practices when storing secrets such as SSH keys. Services like [Azure Key Vault have tutorials for setup](../key-vault/general/manage-with-cli2.md#prerequisites).
+To persist files across sessions, Cloud Shell walks you through attaching an Azure file share on
+first launch. Once completed, Cloud Shell will automatically attach your storage (mounted as
+`$HOME\clouddrive`) for all future sessions. Additionally, your `$HOME` directory is persisted as an
+.img in your Azure File share. Files outside of `$HOME` and machine state aren't persisted across
+sessions. Use best practices when storing secrets such as SSH keys. Services, like
+Azure Key Vault, have [tutorials for setup][02].
-[Learn more about persisting files in Cloud Shell.](persisting-shell-storage.md)
+[Learn more about persisting files in Cloud Shell.][29]
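+
+For example, to attach a different existing file share in a later session, you can use the `clouddrive` command from within Cloud Shell (a sketch; the resource names are placeholders):
+
+```bash
+# Mount a specific Azure file share as your clouddrive.
+clouddrive mount -s mySubscriptionId -g myResourceGroup -n mystorageaccount -f myfileshare
+```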
### Azure drive (Azure:)
-PowerShell in Cloud Shell provides the Azure drive (`Azure:`). You can switch to the Azure drive with `cd Azure:` and back to your home directory with `cd ~`.
-The Azure drive enables easy discovery and navigation of Azure resources such as Compute, Network, Storage etc. similar to filesystem navigation.
-You can continue to use the familiar [Azure PowerShell cmdlets](/powershell/azure) to manage these resources regardless of the drive you are in.
-Any changes made to the Azure resources, either made directly in Azure portal or through Azure PowerShell cmdlets, are reflected in the Azure drive. You can run `dir -Force` to refresh your resources.
+PowerShell in Cloud Shell provides the Azure drive (`Azure:`). You can switch to the Azure drive
+with `cd Azure:` and back to your home directory with `cd ~`. The Azure drive enables easy
+discovery and navigation of Azure resources such as Compute, Network, Storage etc. similar to
+filesystem navigation. You can continue to use the familiar [Azure PowerShell cmdlets][07] to manage
+these resources regardless of the drive you are in. Any changes made to the Azure resources, either
+made directly in Azure portal or through Azure PowerShell cmdlets, are reflected in the Azure drive.
+You can run `dir -Force` to refresh your resources.
-![Screenshot of an Azure Cloud Shell being initialized and a list of directory resources.](media/features-powershell/azure-drive.png)
+![Screenshot of an Azure Cloud Shell being initialized and a list of directory resources.][26]
### Manage Exchange Online
-PowerShell in Cloud Shell contains a private build of the Exchange Online module. Run `Connect-EXOPSSession` to get your Exchange cmdlets.
+PowerShell in Cloud Shell contains a private build of the Exchange Online module. Run
+`Connect-EXOPSSession` to get your Exchange cmdlets.
-![Screenshot of an Azure Cloud Shell running the commands Connect-EXOPSSession and Get-User.](media/features-powershell/exchangeonline.png)
+![Screenshot of an Azure Cloud Shell running the commands Connect-EXOPSSession and Get-User.][27]
Run `Get-Command -Module tmp_*`+ > [!NOTE]
-> The module name should begin with `tmp_`, if you have installed modules with the same prefix, their cmdlets will also be surfaced.
+> The module name should begin with `tmp_`. If you have installed modules with the same prefix,
+> their cmdlets will also be surfaced.
+
+![Screenshot of an Azure Cloud Shell running the command Get-Command -Module tmp_*.][28]
+
+### Deep integration with open source tooling
+
+Cloud Shell includes pre-configured authentication for open source tools such as Terraform, Ansible,
+and Chef InSpec. Try it out from the example walkthroughs.
-![Screenshot of an Azure Cloud Shell running the command Get-Command -Module tmp_*.](medilets.png)
+### Pre-installed tools
-### Deep integration with open-source tooling
+<!--
+TODO:
+- remove obsolete tools
+- separate by bash vs. pwsh
+- link to docs rather than github
+-->
-Cloud Shell includes pre-configured authentication for open-source tools such as Terraform, Ansible, and Chef InSpec. Try it out from the example walkthroughs.
+Linux tools
-## Tools
+- bash
+- zsh
+- sh
+- tmux
+- dig
-|Category |Name |
-|||
-|Linux tools |bash<br> zsh<br> sh<br> tmux<br> dig<br> |
-|Azure tools |[Azure CLI](https://github.com/Azure/azure-cli) and [Azure classic CLI](https://github.com/Azure/azure-xplat-cli)<br> [AzCopy](../storage/common/storage-use-azcopy-v10.md)<br> [Azure Functions CLI](https://github.com/Azure/azure-functions-core-tools)<br> [Service Fabric CLI](../service-fabric/service-fabric-cli.md)<br> [Batch Shipyard](https://github.com/Azure/batch-shipyard)<br> [blobxfer](https://github.com/Azure/blobxfer)|
-|Text editors |code (Cloud Shell editor)<br> vim<br> nano<br> emacs |
-|Source control |git |
-|Build tools |make<br> maven<br> npm<br> pip |
-|Containers |[Docker Machine](https://github.com/docker/machine)<br> [Kubectl](https://kubernetes.io/docs/user-guide/kubectl-overview/)<br> [Helm](https://github.com/kubernetes/helm)<br> [DC/OS CLI](https://github.com/dcos/dcos-cli) |
-|Databases |MySQL client<br> PostgreSql client<br> [sqlcmd Utility](/sql/tools/sqlcmd-utility)<br> [mssql-scripter](https://github.com/Microsoft/sql-xplat-cli) |
-|Other |iPython Client<br> [Cloud Foundry CLI](https://github.com/cloudfoundry/cli)<br> [Terraform](https://www.terraform.io/docs/providers/azurerm/)<br> [Ansible](https://www.ansible.com/microsoft-azure)<br> [Chef InSpec](https://www.chef.io/inspec/)<br> [Puppet Bolt](https://puppet.com/docs/bolt/latest/bolt.html)<br> [HashiCorp Packer](https://www.packer.io/)<br> [Office 365 CLI](https://pnp.github.io/office365-cli/)|
+Azure tools
-## Language support
+- [Azure CLI][06]
+- [AzCopy][04]
+- [Azure Functions CLI][05]
+- [Service Fabric CLI][03]
+- [Batch Shipyard][10]
+- [blobxfer][11]
-|Language |Version |
-|||
-|.NET Core |[3.1.302](https://github.com/dotnet/core/blob/master/release-notes/3.1/3.1.6/3.1.302-download.md) |
-|Go |1.9 |
-|Java |1.8 |
-|Node.js |8.16.0 |
-|PowerShell |[7.0.0](https://github.com/PowerShell/powershell/releases) |
-|Python |2.7 and 3.7 (default)|
+Text editors
+
+- code (Cloud Shell editor)
+- vim
+- nano
+- emacs
+
+Source control
+
+- git
+
+Build tools
+
+- make
+- maven
+- npm
+- pip
+
+Containers
+
+- [Docker Desktop][15]
+- [Kubectl][19]
+- [Helm][17]
+- [DC/OS CLI][14]
+
+Databases
+
+- MySQL client
+- PostgreSql client
+- [sqlcmd Utility][09]
+- [mssql-scripter][18]
+
+Other
+
+- iPython Client
+- [Cloud Foundry CLI][13]
+- [Terraform][25]
+- [Ansible][22]
+- [Chef InSpec][23]
+- [Puppet Bolt][21]
+- [HashiCorp Packer][24]
+- [Office 365 CLI][20]
+
+### Language support
+
+| Language | Version |
+| - | |
+| .NET Core | [6.0.402][16] |
+| Go | 1.9 |
+| Java | 1.8 |
+| Node.js | 8.16.0 |
+| PowerShell | [7.2][08] |
+| Python | 2.7 and 3.7 (default) |
## Next steps
-[Bash in Cloud Shell Quickstart](quickstart.md) <br>
-[PowerShell in Cloud Shell Quickstart](quickstart-powershell.md) <br>
-[Learn about Azure CLI](/cli/azure/) <br>
-[Learn about Azure PowerShell](/powershell/azure/) <br>
+
+- [Bash in Cloud Shell Quickstart][31]
+- [PowerShell in Cloud Shell Quickstart][30]
+- [Learn about Azure CLI][06]
+- [Learn about Azure PowerShell][07]
+
+<!-- link references -->
+[02]: ../key-vault/general/manage-with-cli2.md#prerequisites
+[03]: ../service-fabric/service-fabric-cli.md
+[04]: ../storage/common/storage-use-azcopy-v10.md
+[05]: /azure/azure-functions/functions-run-local
+[06]: /cli/azure/
+[07]: /powershell/azure
+[08]: /powershell/scripting/whats-new/what-s-new-in-powershell-72
+[09]: /sql/tools/sqlcmd-utility
+[10]: https://batch-shipyard.readthedocs.io/en/latest/
+[11]: https://blobxfer.readthedocs.io/en/latest/
+[12]: https://github.com/Azure/CloudShell/issues
+[13]: https://docs.cloudfoundry.org/cf-cli/
+[14]: https://docs.d2iq.com/dkp/2.3/azure-quick-start
+[15]: https://docs.docker.com/desktop/
+[16]: https://dotnet.microsoft.com/download/dotnet/6.0
+[17]: https://helm.sh/docs/
+[18]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md
+[19]: https://kubernetes.io/docs/user-guide/kubectl-overview/
+[20]: https://pnp.github.io/office365-cli/
+[21]: https://puppet.com/docs/bolt/latest/bolt.html
+[22]: https://www.ansible.com/microsoft-azure
+[23]: https://docs.chef.io/
+[24]: https://developer.hashicorp.com/packer/docs
+[25]: https://www.terraform.io/docs/providers/azurerm/
+[26]: media/features/azure-drive.png
+[27]: media/features/exchangeonline.png
+[28]: medilets.png
+[29]: persisting-shell-storage.md
+[30]: quickstart-powershell.md
+[31]: quickstart.md
cloud-shell Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/limitations.md
Title: Azure Cloud Shell limitations | Microsoft Docs+ description: Overview of limitations of Azure Cloud Shell---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 02/15/2018-++
+tags: azure-resource-manager
+ Title: Azure Cloud Shell limitations
- # Limitations of Azure Cloud Shell Azure Cloud Shell has the following known limitations:
Azure Cloud Shell has the following known limitations:
## General limitations ### System state and persistence-
-The machine that provides your Cloud Shell session is temporary, and it is recycled after your session is inactive for 20 minutes. Cloud Shell requires an Azure file share to be mounted. As a result, your subscription must be able to set up storage resources to access Cloud Shell. Other considerations include:
-
-* With mounted storage, only modifications within the `$Home` directory are persisted.
-* Azure file shares can be mounted only from within your [assigned region](persisting-shell-storage.md#mount-a-new-clouddrive).
- * In Bash, run `env` to find your region set as `ACC_LOCATION`.
+<!--
+TODO:
+- verify the regions
+-->
+The machine that provides your Cloud Shell session is temporary, and it's recycled after your
+session is inactive for 20 minutes. Cloud Shell requires an Azure file share to be mounted. As a
+result, your subscription must be able to set up storage resources to access Cloud Shell. Other
+considerations include:
+
+- With mounted storage, only modifications within the `$HOME` directory are persisted.
+- Azure file shares can be mounted only from within your [assigned region][02].
+ - In Bash, run `env` to find your region set as `ACC_LOCATION`.
### Browser support-
-Cloud Shell supports the latest versions of Microsoft Edge, Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, and Apple Safari. Safari in private mode is not supported.
+<!--
+TODO:
+- Do we still support Microsoft Internet Explorer?
+-->
+Cloud Shell supports the latest versions of Microsoft Edge, Microsoft Internet Explorer, Google
+Chrome, Mozilla Firefox, and Apple Safari. Safari in private mode isn't supported.
### Copy and paste
+- Windows: <kbd>Ctrl</kbd>-<kbd>C</kbd> to copy is supported but use
+ <kbd>Shift</kbd>-<kbd>Insert</kbd> to paste.
+  - Firefox/IE may not support clipboard permissions properly.
+- macOS: <kbd>Cmd</kbd>-<kbd>C</kbd> to copy and <kbd>Cmd</kbd>-<kbd>V</kbd> to paste.
-### For a given user, only one shell can be active
+### Only one shell can be active for a given user
-Users can only launch one type of shell at a time, either **Bash** or **PowerShell**. However, you may have multiple instances of Bash or PowerShell running at one time. Swapping between Bash or PowerShell by using the menu causes Cloud Shell to restart, which terminates existing sessions. Alternatively, you can run bash inside PowerShell by typing `bash`, and you can run PowerShell inside bash by typing `pwsh`.
+Users can only launch one Cloud Shell session at a time. However, you may have multiple instances of
+Bash or PowerShell running within that session. Switching between Bash and PowerShell using the menu
+restarts the Cloud Shell session and terminates the existing session. To avoid losing your current
+session, you can run `bash` inside PowerShell and `pwsh` inside Bash.
### Usage limits
-Cloud Shell is intended for interactive use cases. As a result, any long-running non-interactive sessions are ended without warning.
+Cloud Shell is intended for interactive use cases. As a result, any long-running non-interactive
+sessions are ended without warning.
## Bash limitations ### User permissions
-Permissions are set as regular users without sudo access. Any installation outside your `$Home` directory is not persisted.
-
-### Editing .bashrc or $PROFILE
-
-Take caution when editing .bashrc or PowerShell's $PROFILE file, doing so can cause unexpected errors in Cloud Shell.
+Permissions are set as regular users without sudo access. Any installation outside your `$HOME`
+directory isn't persisted.
## PowerShell limitations-
+<!--
+TODO:
+- outdated info about AzureAD and SQL
+- Not running on Windows so the GUI comment not valid
+-->
### `AzureAD` module name
-The `AzureAD` module name is currently `AzureAD.Standard.Preview`, the module provides the same functionality.
+The `AzureAD` module name is currently `AzureAD.Standard.Preview`. The module provides the same
+functionality.
### `SqlServer` module functionality
-The `SqlServer` module included in Cloud Shell has only prerelease support for PowerShell Core. In particular, `Invoke-SqlCmd` is not available yet.
+The `SqlServer` module included in Cloud Shell has only prerelease support for PowerShell Core. In
+particular, `Invoke-SqlCmd` isn't available yet.
+
+### Default file location when created from Azure drive
-### Default file location when created from Azure drive:
+You can't create files under the `Azure:` drive. When users create new files using other tools, such
+as vim or nano, the files are saved to `$HOME` by default.
-Using PowerShell cmdlets, users can not create files under the Azure: drive. When users create new files using other tools, such as vim or nano, the files are saved to the `$HOME` by default.
+### GUI applications aren't supported
-### GUI applications are not supported
+If the user runs a command that would create a dialog box, an error message such as the following
+appears:
+
+> Unable to load DLL 'IEFRAME.dll': The specified module couldn't be found.
-If the user runs a command that would create a Windows dialog box, one sees an error message such as: `Unable to load DLL 'IEFRAME.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)`.
### Large Gap after displaying progress bar
-If the user performs an action that displays a progress bar, such as a tab completing while in the `Azure:` drive, then it is possible that the cursor is not set properly and a gap appears where the progress bar was previously.
+When the user performs an action that displays a progress bar, such as tab completion in the
+`Azure:` drive, it's possible that the cursor isn't set properly and a gap appears where the
+progress bar was previously.
## Next steps
-[Troubleshooting Cloud Shell](troubleshooting.md) <br>
-[Quickstart for Bash](quickstart.md) <br>
-[Quickstart for PowerShell](quickstart-powershell.md)
+- [Troubleshooting Cloud Shell][05]
+- [Quickstart for Bash][04]
+- [Quickstart for PowerShell][03]
+
+<!-- link references -->
+[02]: persisting-shell-storage.md#mount-a-new-clouddrive
+[03]: quickstart-powershell.md
+[04]: quickstart.md
+[05]: troubleshooting.md
cloud-shell Msi Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/msi-authorization.md
Title: Acquiring a user token in Azure Cloud Shell
-description: How to acquire a token for the authenticated user in Azure Cloud Shell
---
-tags: azure-resource-manager
+
+description: How to acquire a token for the authenticated user in Azure Cloud Shell
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 09/29/2021++
+tags: azure-resource-manager
+ Title: Acquiring a user token in Azure Cloud Shell
- # Acquire a token in Azure Cloud Shell-
-Azure Cloud Shell provides an endpoint that will automatically authenticate the user logged into the Azure portal. Use this endpoint to acquire access tokens to interact with Azure services.
+<!--
+TODO:
+- MSI is never mentioned in this article - what is it?
+- Need powershell example - there are examples in other articles - be consistent
+-->
+Azure Cloud Shell provides an endpoint that automatically authenticates the user logged into the
+Azure portal. Use this endpoint to acquire access tokens to interact with Azure services.
## Authenticating in the Cloud Shell
-The Azure Cloud Shell has its own endpoint that interacts with your browser to automatically log you in. When this endpoint receives a request, it sends the request back to your browser, which forwards it to the parent Portal frame. The Portal window makes a request to Azure Active Directory, and the resulting token is returned.
-If you want to authenticate with different credentials, you can do so using `az login` or `Connect-AzAccount`
+The Azure Cloud Shell has its own endpoint that interacts with your browser to automatically log you
+in. When this endpoint receives a request, it sends the request back to your browser, which forwards
+it to the parent Portal frame. The Portal window makes a request to Azure Active Directory, and the
+resulting token is returned.
+
+If you want to authenticate with different credentials, you can do so using `az login` or
+`Connect-AzAccount`.
## Acquire and use access token in Cloud Shell ### Acquire token
-Execute the following commands to set your user access token as an environment variable, `access_token`.
-```
+Execute the following commands to set your user access token as an environment variable,
+`access_token`.
+
+```bash
response=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s)
access_token=$(echo $response | python -c 'import sys, json; print (json.load(sys.stdin)["access_token"])')
echo The access token is $access_token
```
### Use token
-Execute the following command to get a list of all Virtual Machines in your account, using the token you acquired in the previous step.
+Execute the following command to get a list of all Virtual Machines in your account, using the token
+you acquired in the previous step.
-```
+```bash
curl https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachines?api-version=2021-07-01 -H "Authorization: Bearer $access_token" -H "x-ms-version: 2019-02-02"
```

## Handling token expiration
-The local authentication endpoint caches tokens. You can call it as often as you like, and an authentication call to Azure Active Directory will only happen if there's no token stored in the cache, or the token is expired.
+The local authentication endpoint caches tokens. You can call it as often as you like. Cloud Shell
+only calls Azure Active Directory when there's no token stored in the cache or the token has
+expired.
## Limitations-- There's an allowlist of resources that Cloud Shell tokens can be provided for. If you run a command and receive a message similar to `"error":{"code":"AudienceNotSupported","message":"Audience https://newservice.azure.com/ is not a supported MSI token audience...."}`, you've come across this limitation. You can file an issue on [GitHub](https://github.com/Azure/CloudShell/issues) to request that this service is added to the allowlist.-- If you log in explicitly using the `az login` command, any Conditional Access rules your company may have in place will be evaluated based on the Cloud Shell container rather than the machine where your browser runs. The Cloud Shell container doesn't count as a managed device for these policies so rights may be limited by the policy.-- Azure Managed Identities aren't available in the Azure Cloud Shell. [Read more about Azure Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).+
+- There's an allowlist of resources that Cloud Shell tokens can be provided for. When you try to use
+  a token with a service that isn't listed, you may see the following error message:
+
+ ```
+ "error":{"code":"AudienceNotSupported","message":"Audience https://newservice.azure.com/
+  is not a supported MSI token audience...."}
+ ```
+
+  You can open an issue on [GitHub][02] to request that the service be added to the allowlist.
+
+- If you sign in explicitly using the `az login` command, any Conditional Access rules your company
+ may have in place are evaluated based on the Cloud Shell container rather than the machine where
+ your browser runs. The Cloud Shell container doesn't count as a managed device for these policies
+ so rights may be limited by the policy.
+
+- Azure Managed Identities aren't available in the Azure Cloud Shell. Read more about
+ [Azure Managed Identities][01].
+
+<!-- link references -->
+[01]: ../active-directory/managed-identities-azure-resources/overview.md
+[02]: https://github.com/Azure/CloudShell/issues
cloud-shell Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/overview.md
Title: Azure Cloud Shell overview | Microsoft Docs+ description: Overview of the Azure Cloud Shell.---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 06/4/2021-+
+tags: azure-resource-manager
+ Title: Azure Cloud Shell overview
- # Overview of Azure Cloud Shell
-Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work, either Bash or PowerShell.
+Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure
+resources. It provides the flexibility of choosing the shell experience that best suits the way you
+work, either Bash or PowerShell.
-You can access the Cloud Shell in three ways:
+You can access Cloud Shell in three ways:
-- **Direct link**: Open a browser to [https://shell.azure.com](https://shell.azure.com).
+- **Direct link**: Open a browser to [https://shell.azure.com][11].
-- **Azure portal**: Select the Cloud Shell icon on the [Azure portal](https://portal.azure.com):
+- **Azure portal**: Select the Cloud Shell icon on the [Azure portal][10]:
- ![Icon to launch the Cloud Shell from the Azure portal](media/overview/portal-launch-icon.png)
+ ![Icon to launch Cloud Shell from the Azure portal][14]
-- **Code snippets**: In Microsoft [technical documentation](/) and [training resources](/training), select the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:
+- **Code samples**: In Microsoft [technical documentation][02] and [training resources][05], select
+ the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:
```azurecli-interactive az account show
You can access the Cloud Shell in three ways:
Get-AzSubscription ```
- The **Try It** button opens the Cloud Shell directly alongside the documentation using Bash (for Azure CLI snippets) or PowerShell (for Azure PowerShell snippets).
+ The **Try It** button opens Cloud Shell directly alongside the documentation using Bash (for
+ Azure CLI snippets) or PowerShell (for Azure PowerShell snippets).
- To run the command, use **Copy** in the code snippet, use **Ctrl**+**Shift**+**V** (Windows/Linux) or **Cmd**+**Shift**+**V** (macOS) to paste the command, and then press **Enter**.
+ To run the command, use **Copy** in the code snippet, use
+ <kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>V</kbd> (Windows/Linux) or
+ <kbd>Cmd</kbd>+<kbd>Shift</kbd>+<kbd>V</kbd> (macOS) to paste the command, and then press
+ <kbd>Enter</kbd>.
## Features ### Browser-based shell experience
-Cloud Shell enables access to a browser-based command-line experience built with Azure management tasks in mind. Leverage Cloud Shell to work untethered from a local machine in a way only the cloud can provide.
+Cloud Shell enables access to a browser-based command-line experience built with Azure management
+tasks in mind. Use Cloud Shell to work untethered from a local machine in a way only the cloud
+can provide.
### Choice of preferred shell experience
Users can choose between Bash or PowerShell.
1. Select **Cloud Shell**.
- ![Cloud Shell icon](media/overview/overview-cloudshell-icon.png)
+ ![Cloud Shell icon][13]
-2. Select **Bash** or **PowerShell**.
+1. Select **Bash** or **PowerShell**.
- ![Choose either Bash or PowerShell](media/overview/overview-choices.png)
+ ![Choose either Bash or PowerShell][12]
- After first launch, you can use the shell type drop-down control to switch between Bash and PowerShell:
+ After first launch, you can use the shell type drop-down control to switch between Bash and
+ PowerShell:
- ![Drop-down control to select Bash or PowerShell](media/overview/select-shell-drop-down.png)
+ ![Drop-down control to select Bash or PowerShell][15]
### Authenticated and configured Azure workstation
-Cloud Shell is managed by Microsoft so it comes with popular command-line tools and language support. Cloud Shell also securely authenticates automatically for instant access to your resources through the Azure CLI or Azure PowerShell cmdlets.
+Cloud Shell is managed by Microsoft so it comes with popular command-line tools and language
+support. Cloud Shell also securely authenticates automatically for instant access to your resources
+through the Azure CLI or Azure PowerShell cmdlets.
-View the full [list of tools installed in Cloud Shell.](features.md#tools)
+View the full [list of tools installed in Cloud Shell.][07]
### Integrated Cloud Shell editor
-Cloud Shell offers an integrated graphical text editor based on the open-source Monaco Editor. Simply create and edit configuration files by running `code .` for seamless deployment through Azure CLI or Azure PowerShell.
+Cloud Shell offers an integrated graphical text editor based on the open source Monaco Editor.
+Create and edit configuration files by running `code .` for seamless deployment through Azure CLI or
+Azure PowerShell.
-[Learn more about the Cloud Shell editor](using-cloud-shell-editor.md).
+[Learn more about the Cloud Shell editor][20].
### Multiple access points

Cloud Shell is a flexible tool that can be used from:
-* [portal.azure.com](https://portal.azure.com)
-* [shell.azure.com](https://shell.azure.com)
-* [Azure CLI documentation](/cli/azure)
-* [Azure PowerShell documentation](/powershell/azure/)
-* [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/)
-* [Visual Studio Code Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account)
+- [portal.azure.com][10]
+- [shell.azure.com][11]
+- [Azure CLI documentation][03]
+- [Azure PowerShell documentation][04]
+- [Azure mobile app][08]
+- [Visual Studio Code Azure Account extension][09]
### Connect your Microsoft Azure Files storage
-Cloud Shell machines are temporary, but your files are persisted in two ways: through a disk image, and through a mounted file share named `clouddrive`. On first launch, Cloud Shell prompts to create a resource group, storage account, and Azure Files share on your behalf. This is a one-time step and will be automatically attached for all sessions. A single file share can be mapped and will be used by both Bash and PowerShell in Cloud Shell.
+Cloud Shell machines are temporary, but your files are persisted in two ways: through a disk image,
+and through a mounted file share named `clouddrive`. On first launch, Cloud Shell prompts to create
+a resource group, storage account, and Azure Files share on your behalf. This is a one-time step and
+the resources created are automatically attached for all future sessions. A single file share can be
+mapped and is used by both Bash and PowerShell in Cloud Shell.
-Read more to learn how to mount a [new or existing storage account](persisting-shell-storage.md) or to learn about the [persistence mechanisms used in Cloud Shell](persisting-shell-storage.md#how-cloud-shell-storage-works).
+Read more to learn how to mount a [new or existing storage account][16] or to learn about the
+[persistence mechanisms used in Cloud Shell][17].
> [!NOTE]
-> Azure storage firewall is not supported for cloud shell storage accounts.
+> Azure storage firewall isn't supported for cloud shell storage accounts.
## Concepts
-* Cloud Shell runs on a temporary host provided on a per-session, per-user basis
-* Cloud Shell times out after 20 minutes without interactive activity
-* Cloud Shell requires an Azure file share to be mounted
-* Cloud Shell uses the same Azure file share for both Bash and PowerShell
-* Cloud Shell is assigned one machine per user account
-* Cloud Shell persists $HOME using a 5-GB image held in your file share
-* Permissions are set as a regular Linux user in Bash
+- Cloud Shell runs on a temporary host provided on a per-session, per-user basis
+- Cloud Shell times out after 20 minutes without interactive activity
+- Cloud Shell requires an Azure file share to be mounted
+- Cloud Shell uses the same Azure file share for both Bash and PowerShell
+- Cloud Shell is assigned one machine per user account
+- Cloud Shell persists $HOME using a 5-GB image held in your file share
+- Permissions are set as a regular Linux user in Bash
-Learn more about features in [Bash in Cloud Shell](features.md) and [PowerShell in Cloud Shell](./features.md).
+Learn more about features in [Bash in Cloud Shell][06] and [PowerShell in Cloud Shell][01].
## Compliance

### Encryption at rest
-All Cloud Shell infrastructure is compliant with double encryption at rest by default. No action is required by users.
+
+All Cloud Shell infrastructure is compliant with double encryption at rest by default. No action is
+required by users.
## Pricing
-The machine hosting Cloud Shell is free, with a pre-requisite of a mounted Azure Files share. Regular storage costs apply.
+The machine hosting Cloud Shell is free, with a pre-requisite of a mounted Azure Files share.
+Regular storage costs apply.
## Next steps
-[Bash in Cloud Shell quickstart](quickstart.md) <br>
-[PowerShell in Cloud Shell quickstart](quickstart-powershell.md)
+- [Bash in Cloud Shell quickstart][19]
+- [PowerShell in Cloud Shell quickstart][18]
+
+<!-- link references -->
+[01]: ./features.md
+[02]: /samples/browse
+[03]: /cli/azure
+[04]: /powershell/azure
+[05]: /training
+[06]: features.md
+[07]: features.md#pre-installed-tools
+[08]: https://azure.microsoft.com/features/azure-portal/mobile-app/
+[09]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account
+[10]: https://portal.azure.com
+[11]: https://shell.azure.com
+[12]: media/overview/overview-choices.png
+[13]: media/overview/overview-cloudshell-icon.png
+[14]: media/overview/portal-launch-icon.png
+[15]: media/overview/select-shell-drop-down.png
+[16]: persisting-shell-storage.md
+[17]: persisting-shell-storage.md#how-cloud-shell-storage-works
+[18]: quickstart-powershell.md
+[19]: quickstart.md
+[20]: using-cloud-shell-editor.md
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
Title: Persist files in Azure Cloud Shell | Microsoft Docs
description: Walkthrough of how Azure Cloud Shell persists files.
-tags: azure-resource-manager
+ms.contributor: jahelmic
+Last updated : 11/14/2022
-Previously updated : 02/24/2020
+tags: azure-resource-manager
+Title: Persist files in Azure Cloud Shell
# Persist files in Azure Cloud Shell
-Cloud Shell utilizes Azure Files to persist files across sessions. On initial start, Cloud Shell prompts you to associate a new or existing file share to persist files across sessions.
-> [!NOTE]
-> Bash and PowerShell share the same file share. Only one file share can be associated with automatic mounting in Cloud Shell.
+Cloud Shell uses Azure Files to persist files across sessions. On initial start, Cloud Shell prompts
+you to associate a new or existing fileshare.
> [!NOTE]
-> Azure storage firewall is not supported for cloud shell storage accounts.
+> Bash and PowerShell share the same fileshare. Only one fileshare can be associated with
+> automatic mounting in Cloud Shell.
+>
+> Azure storage firewall isn't supported for cloud shell storage accounts.
## Create new storage
-When you use basic settings and select only a subscription, Cloud Shell creates three resources on your behalf in the supported region that's nearest to you:
-* Resource group: `cloud-shell-storage-<region>`
-* Storage account: `cs<uniqueGuid>`
-* File share: `cs-<user>-<domain>-com-<uniqueGuid>`
+When you use basic settings and select only a subscription, Cloud Shell creates three resources on
+your behalf in the supported region that's nearest to you:
+
+- Resource group: `cloud-shell-storage-<region>`
+- Storage account: `cs<uniqueGuid>`
+- Fileshare: `cs-<user>-<domain>-com-<uniqueGuid>`
-![The Subscription setting](media/persisting-shell-storage/basic-storage.png)
+![Screenshot of choosing the subscription for your storage account][09]
-The file share mounts as `clouddrive` in your `$Home` directory. This is a one-time action, and the file share mounts automatically in subsequent sessions.
+The fileshare mounts as `clouddrive` in your `$HOME` directory. This is a one-time action, and the
+fileshare mounts automatically in subsequent sessions.
-The file share also contains a 5-GB image that is created for you which automatically persists data in your `$Home` directory. This applies for both Bash and PowerShell.
+The fileshare also contains a 5-GB image that automatically persists data in your `$HOME` directory.
+This fileshare is used for both Bash and PowerShell.
## Use existing resources
-By using the advanced option, you can associate existing resources. When selecting a Cloud Shell region you must select a backing storage account co-located in the same region. For example, if your assigned region is West US then you must associate a file share that resides within West US as well.
+Using the advanced option, you can associate existing resources. When selecting a Cloud Shell region,
+you must select a backing storage account co-located in the same region. For example, if your
+assigned region is West US then you must associate a fileshare that resides within West US as well.
-When the storage setup prompt appears, select **Show advanced settings** to view additional options. The populated storage options filter for locally redundant storage (LRS), geo-redundant storage (GRS), and zone-redundant storage (ZRS) accounts.
+When the storage setup prompt appears, select **Show advanced settings** to view more options. The
+populated storage options filter for locally redundant storage (LRS), geo-redundant storage (GRS),
+and zone-redundant storage (ZRS) accounts.
> [!NOTE]
-> Using GRS or ZRS storage accounts are recommended for additional resiliency for your backing file share. Which type of redundancy depends on your goals and price preference. [Learn more about replication options for Azure Storage accounts](../storage/common/storage-redundancy.md).
+> Using GRS or ZRS storage accounts is recommended for additional resiliency for your backing
+> fileshare. The type of redundancy to choose depends on your goals and price preference.
+> [Learn more about replication options for Azure Storage accounts][04].
-![The Resource group setting](media/persisting-shell-storage/advanced-storage.png)
+![Screenshot of configuring your storage account][08]
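If you prefer to create a co-located storage account and fileshare ahead of time and select them through the advanced settings, the following Azure PowerShell sketch shows one way to do it. The resource names are hypothetical, and the region must match your assigned Cloud Shell region:

```powershell
# Hypothetical names; the region must match your assigned Cloud Shell region
$rg       = 'cloud-shell-rg'
$account  = 'mycloudshellsa'
$location = 'westus'

# Create the resource group and a geo-redundant storage account
New-AzResourceGroup -Name $rg -Location $location
New-AzStorageAccount -ResourceGroupName $rg -Name $account -Location $location -SkuName Standard_GRS

# Create the fileshare that Cloud Shell will mount; 6 GiB leaves room for the 5-GB image
New-AzRmStorageShare -ResourceGroupName $rg -StorageAccountName $account -Name 'cloudshellshare' -QuotaGiB 6
```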
## Securing storage access
-For security, each user should provision their own storage account. For Azure role-based access control (Azure RBAC), users must have contributor access or above at the storage account level.
-Cloud Shell uses an Azure File Share in a storage account, inside a specified subscription. Due to inherited permissions, users with sufficient access rights to the subscription will be able to access all the storage accounts, and file shares contained in the subscription.
+For security, each user should create their own storage account. For Azure role-based access control
+(Azure RBAC), users must have contributor access or above at the storage account level.
-Users should lock down access to their files by setting the permissions at the storage account or the subscription level.
+Cloud Shell uses an Azure fileshare in a storage account, inside a specified subscription. Due to
+inherited permissions, users with sufficient access rights to the subscription can access all the
+storage accounts, and file shares contained in the subscription.
-The Cloud Shell storage account will contain files created by the Cloud Shell user in their home directory, which may include sensitive information including access tokens or credentials.
+Users should lock down access to their files by setting the permissions at the storage account or
+the subscription level.
+
+The Cloud Shell storage account contains files created by the Cloud Shell user in their home
+directory, which may include sensitive information including access tokens or credentials.
## Supported storage regions
-To find your current region you may run `env` in Bash and locate the variable `ACC_LOCATION`, or from PowerShell run `$env:ACC_LOCATION`. File shares receive a 5-GB image created for you to persist your `$Home` directory.
+
+To find your current region, you may run `env` in Bash and locate the variable `ACC_LOCATION`, or
+from PowerShell run `$env:ACC_LOCATION`. File shares receive a 5-GB image created for you to persist
+your `$HOME` directory.
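For example, in a PowerShell session:

```powershell
# Show the region assigned to your Cloud Shell session
$env:ACC_LOCATION
```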
Cloud Shell machines exist in the following regions:
-|Area|Region|
-|||
-|Americas|East US, South Central US, West US|
-|Europe|North Europe, West Europe|
-|Asia Pacific|India Central, Southeast Asia|
+| Area | Region |
+| | - |
+| Americas | East US, South Central US, West US |
+| Europe | North Europe, West Europe |
+| Asia Pacific | India Central, Southeast Asia |
-Customers should choose a primary region, unless they have a requirement that their data at rest be stored in a particular region. If they have such a requirement, a secondary storage region should be used.
+Customers should choose a primary region, unless they have a requirement that their data at rest be
+stored in a particular region. If they have such a requirement, a secondary storage region should be
+used.
### Secondary storage regions
-If a secondary storage region is used, the associated Azure storage account resides in a different region as the Cloud Shell machine that you're mounting them to. For example, Jane can set her storage account to be located in Canada East, a secondary region, but the machine she is mounted to is still located in a primary region. Her data at rest is located in Canada, but it is processed in the United States.
+
+If a secondary storage region is used, the associated Azure storage account resides in a different
+region from the Cloud Shell machine that it's mounted to. For example, you can set your
+storage account to be located in Canada East, a secondary region, but your Cloud Shell machine is
+still located in a primary region. Your data at rest is located in Canada, but it's processed in the
+United States.
> [!NOTE]
> If a secondary region is used, file access and startup time for Cloud Shell may be slower.
-A user can run `(Get-CloudDrive | Get-AzStorageAccount).Location` in PowerShell to see the location of their File Share.
+A user can run `(Get-CloudDrive | Get-AzStorageAccount).Location` in PowerShell to see the location
+of their fileshare.
## Restrict resource creation with an Azure resource policy
-Storage accounts that you create in Cloud Shell are tagged with `ms-resource-usage:azure-cloud-shell`. If you want to disallow users from creating storage accounts in Cloud Shell, create an [Azure resource policy for tags](../governance/policy/samples/index.md) that are triggered by this specific tag.
-## How Cloud Shell storage works
-Cloud Shell persists files through both of the following methods:
-* Creating a disk image of your `$Home` directory to persist all contents within the directory. The disk image is saved in your specified file share as `acc_<User>.img` at `fileshare.storage.windows.net/fileshare/.cloudconsole/acc_<User>.img`, and it automatically syncs changes.
-* Mounting your specified file share as `clouddrive` in your `$Home` directory for direct file-share interaction. `/Home/<User>/clouddrive` is mapped to `fileshare.storage.windows.net/fileshare`.
-
+Storage accounts that you create in Cloud Shell are tagged with
+`ms-resource-usage:azure-cloud-shell`. If you want to disallow users from creating storage accounts
+in Cloud Shell, create an [Azure resource policy for tags][03] that is triggered by this specific
+tag.
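As a minimal sketch of such a policy, assuming the Az PowerShell module and hypothetical resource names, the following definition denies creation of storage accounts that carry the Cloud Shell tag:

```powershell
# Hypothetical policy rule: deny storage accounts carrying the Cloud Shell tag
$rule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      { "field": "tags['ms-resource-usage']", "equals": "azure-cloud-shell" }
    ]
  },
  "then": { "effect": "deny" }
}
'@

# Create the definition and assign it at the current subscription scope
$definition = New-AzPolicyDefinition -Name 'deny-cloud-shell-storage' -Policy $rule
New-AzPolicyAssignment -Name 'deny-cloud-shell-storage' -PolicyDefinition $definition `
    -Scope "/subscriptions/$((Get-AzContext).Subscription.Id)"
```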
+
+## How Cloud Shell storage works
+
+Cloud Shell persists files through both of the following methods:
+
+- Creating a disk image of your `$HOME` directory to persist all contents within the directory. The
+ disk image is saved in your specified fileshare as `acc_<User>.img` at
+ `fileshare.storage.windows.net/fileshare/.cloudconsole/acc_<User>.img`, and it automatically syncs
+ changes.
+- Mounting your specified fileshare as `clouddrive` in your `$HOME` directory for direct file-share
+ interaction. `/Home/<User>/clouddrive` is mapped to `fileshare.storage.windows.net/fileshare`.
+ > [!NOTE]
-> All files in your `$Home` directory, such as SSH keys, are persisted in your user disk image, which is stored in your mounted file share. Apply best practices when you persist information in your `$Home` directory and mounted file share.
+> All files in your `$HOME` directory, such as SSH keys, are persisted in your user disk image,
+> which is stored in your mounted fileshare. Apply best practices when you persist information in
+> your `$HOME` directory and mounted fileshare.
## clouddrive commands

### Use the `clouddrive` command
-In Cloud Shell, you can run a command called `clouddrive`, which enables you to manually update the file share that is mounted to Cloud Shell.
-![Running the "clouddrive" command](media/persisting-shell-storage/clouddrive-h.png)
+In Cloud Shell, you can run a command called `clouddrive`, which enables you to manually update the
+fileshare that's mounted to Cloud Shell.
+
+![Screenshot of running the clouddrive command in bash][10]
### List `clouddrive`
-To discover which file share is mounted as `clouddrive`, run the `df` command.
-The file path to clouddrive shows your storage account name and file share in the URL. For example, `//storageaccountname.file.core.windows.net/filesharename`
+To discover which fileshare is mounted as `clouddrive`, run the `df` command.
-```
+The file path to clouddrive shows your storage account name and fileshare in the URL. For example,
+`//storageaccountname.file.core.windows.net/filesharename`
+
+```bash
justin@Azure:~$ df
Filesystem                                          1K-blocks    Used  Available Use% Mounted on
overlay                                              29711408 5577940   24117084  19% /
tmpfs                                                  986716       0     986716   0% /dev
/dev/sda1                                            29711408 5577940   24117084  19% /etc/hosts
shm                                                     65536       0      65536   0% /dev/shm
//mystoragename.file.core.windows.net/fileshareName 5368709120     64 5368709056   1% /home/justin/clouddrive
-justin@Azure:~$
```

### Mount a new clouddrive

#### Prerequisites for manual mounting
-You can update the file share that's associated with Cloud Shell by using the `clouddrive mount` command.
-If you mount an existing file share, the storage accounts must be located in your select Cloud Shell region. Retrieve the location by running `env` and checking the `ACC_LOCATION`.
+You can update the fileshare that's associated with Cloud Shell using the `clouddrive mount`
+command.
+
+If you mount an existing fileshare, the storage accounts must be located in your select Cloud Shell
+region. Retrieve the location by running `env` and checking the `ACC_LOCATION`.
#### The `clouddrive mount` command

> [!NOTE]
-> If you're mounting a new file share, a new user image is created for your `$Home` directory. Your previous `$Home` image is kept in your previous file share.
+> If you're mounting a new fileshare, a new user image is created for your `$HOME` directory. Your
+> previous `$HOME` image is kept in your previous fileshare.
Run the `clouddrive mount` command with the following parameters:
-```
+```bash
clouddrive mount -s mySubscription -g myRG -n storageAccountName -f fileShareName
```

To view more details, run `clouddrive mount -h`, as shown here:
-![Running the `clouddrive mount`command](media/persisting-shell-storage/mount-h.png)
+![Screenshot of running the clouddrive mount command in bash][11]
### Unmount clouddrive
-You can unmount a file share that's mounted to Cloud Shell at any time. Since Cloud Shell requires a mounted file share to be used, you will be prompted to create and mount another file share on the next session.
+
+You can unmount a fileshare that's mounted to Cloud Shell at any time. Since Cloud Shell requires a
+mounted fileshare to be used, Cloud Shell prompts you to create and mount another fileshare on the
+next session.
1. Run `clouddrive unmount`.
-2. Acknowledge and confirm prompts.
+1. Acknowledge and confirm prompts.
-Your file share will continue to exist unless you delete it manually. Cloud Shell will no longer search for this file share on subsequent sessions. To view more details, run `clouddrive unmount -h`, as shown here:
+The unmounted fileshare continues to exist until you manually delete it. After unmounting, Cloud
+Shell no longer searches for this fileshare in subsequent sessions. To view more details, run
+`clouddrive unmount -h`, as shown here:
-![Running the `clouddrive unmount`command](media/persisting-shell-storage/unmount-h.png)
+![Screenshot of running the clouddrive unmount command in bash][12]
> [!WARNING]
-> Although running this command will not delete any resources, manually deleting a resource group, storage account, or file share that's mapped to Cloud Shell erases your `$Home` directory disk image and any files in your file share. This action cannot be undone.
+> Although running this command doesn't delete any resources, manually deleting a resource group,
+> storage account, or fileshare that's mapped to Cloud Shell erases your `$HOME` directory disk
+> image and any files in your fileshare. This action can't be undone.
## PowerShell-specific commands

### List `clouddrive` Azure file shares
-The `Get-CloudDrive` cmdlet retrieves the Azure file share information currently mounted by the `clouddrive` in the Cloud Shell. <br>
-![Running Get-CloudDrive](media/persisting-shell-storage-powershell/Get-Clouddrive.png)
+
+The `Get-CloudDrive` cmdlet retrieves the Azure fileshare information currently mounted by the
+`clouddrive` in Cloud Shell.
+
+![Screenshot of running the Get-CloudDrive command in PowerShell][07]
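For example, you can pair `Get-CloudDrive` with the storage cmdlets mentioned earlier in this article:

```powershell
# Inspect the fileshare currently mounted as clouddrive
Get-CloudDrive

# Resolve the region of the backing storage account
(Get-CloudDrive | Get-AzStorageAccount).Location
```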
### Unmount `clouddrive`
-You can unmount an Azure file share that's mounted to Cloud Shell at any time. If the Azure file share has been removed, you will be prompted to create and mount a new Azure file share at the next session.
-The `Dismount-CloudDrive` cmdlet unmounts an Azure file share from the current storage account. Dismounting the `clouddrive` terminates the current session. The user will be prompted to create and mount a new Azure file share during the next session.
-![Running Dismount-CloudDrive](media/persisting-shell-storage-powershell/Dismount-Clouddrive.png)
+You can unmount an Azure fileshare that's mounted to Cloud Shell at any time. The
+`Dismount-CloudDrive` cmdlet unmounts an Azure fileshare from the current storage account.
+Dismounting the `clouddrive` terminates the current session.
+
+If the Azure fileshare has been removed, you'll be prompted to create and mount a new Azure
+fileshare in the next session.
+
+![Screenshot of running the Dismount-CloudDrive command in PowerShell][06]
+## Transfer local files to Cloud Shell
-Note: If you need to define a function in a file and call it from the PowerShell cmdlets, then the dot operator must be included.
-For example: . .\MyFunctions.ps1
+The `clouddrive` directory syncs with the Azure portal storage blade. Use this blade to transfer
+local files to or from your file share. Updating files from within Cloud Shell is reflected in the
+file storage GUI when you refresh the blade.
+
+### Download files
+
+![Screenshot listing local files in the Azure portal][13]
1. In the Azure portal, go to the mounted file share.
1. Select the target file.
1. Select the **Download** button.
+
+### Upload files
+
+![Screenshot showing how to upload files in the Azure portal][14]
1. Go to your mounted file share.
1. Select the **Upload** button.
1. Select the file or files that you want to upload.
1. Confirm the upload.
+
+You should now see the files that are accessible in your `clouddrive` directory in Cloud Shell.
+
+> [!NOTE]
+> If you need to define a function in a file and call it from the PowerShell cmdlets, then the
+> dot operator must be included. For example: `. .\MyFunctions.ps1`
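A small illustration of the dot operator, using the `MyFunctions.ps1` file from the note above; the `Get-Greeting` function is a hypothetical example:

```powershell
# MyFunctions.ps1 defines a function but doesn't run anything itself
Set-Content -Path .\MyFunctions.ps1 -Value 'function Get-Greeting { "Hello from Cloud Shell" }'

# Dot-source the file so Get-Greeting is defined in the current session
. .\MyFunctions.ps1
Get-Greeting
```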
## Next steps
-[Cloud Shell Quickstart](quickstart.md) <br>
-[Learn about Microsoft Azure Files storage](../storage/files/storage-files-introduction.md) <br>
-[Learn about storage tags](../azure-resource-manager/management/tag-resources.md) <br>
+
+- [Cloud Shell Quickstart][15]
+- [Learn about Microsoft Azure Files storage][05]
+- [Learn about storage tags][02]
+
+<!-- link references -->
+[01]: includes/cloud-shell-persisting-shell-storage-endblock.md
+[02]: ../azure-resource-manager/management/tag-resources.md
+[03]: ../governance/policy/samples/index.md
+[04]: ../storage/common/storage-redundancy.md
+[05]: ../storage/files/storage-files-introduction.md
+[06]: media/persisting-shell-storage/dismount-clouddrive.png
+[07]: media/persisting-shell-storage/get-clouddrive.png
+[08]: media/persisting-shell-storage/advanced-storage.png
+[09]: media/persisting-shell-storage/basic-storage.png
+[10]: media/persisting-shell-storage/clouddrive-h.png
+[11]: media/persisting-shell-storage/mount-h.png
+[12]: media/persisting-shell-storage/unmount-h.png
+[13]: media/persisting-shell-storage/download.png
+[14]: media/persisting-shell-storage/upload.png
+[15]: quickstart.md
cloud-shell Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/pricing.md
Title: Azure Cloud Shell pricing | Microsoft Docs
description: Overview of pricing of Azure Cloud Shell
-tags: azure-resource-manager
+ms.contributor: jahelmic
+Last updated : 11/14/2022
-Previously updated : 09/25/2017
+tags: azure-resource-manager
+Title: Azure Cloud Shell pricing
# Pricing

Bash in Cloud Shell and PowerShell in Cloud Shell are subject to the information below.
-## Compute Cost
-Azure Cloud Shell runs on a machine provided for free by Azure, but requires an Azure file share to use.
+## Compute cost
+
+Azure Cloud Shell runs on a machine provided for free by Azure, but requires an Azure file share to
+use.
+
+## Storage cost
+
+Cloud Shell requires a new or existing Azure Files share to be mounted to persist files across
+sessions. Storage incurs regular costs.
-## Storage Cost
-Cloud Shell requires a new or existing Azure Files share to be mounted to persist files across sessions. Storage incurs regular costs.
+Check [here for details on Azure Files costs][01].
-Check [here for details on Azure Files costs](https://azure.microsoft.com/pricing/details/storage/files/).
+<!-- link references -->
+[01]: https://azure.microsoft.com/pricing/details/storage/files/
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md
Title: Cloud Shell in an Azure Virtual Network
description: Deploy Cloud Shell into an Azure virtual network
-tags: azure-resource-manager
+ms.contributor: jahelmic
+Last updated : 11/14/2022
-Previously updated : 04/27/2022
+tags: azure-resource-manager
+Title: Cloud Shell in an Azure virtual network
# Deploy Cloud Shell into an Azure virtual network
-A regular Cloud Shell session runs in a container in a Microsoft network separate from your resources. This means that commands running inside the container cannot access resources that can only be accessed from a specific virtual network. For example, you cannot use SSH to connect from Cloud Shell to a virtual machine that only has a private IP address, or use kubectl to connect to a Kubernetes cluster which has locked down access.
+A regular Cloud Shell session runs in a container in a Microsoft network separate from your
+resources. Commands running inside the container can't access resources that can only be accessed
+from a specific virtual network. For example, you can't use SSH to connect from Cloud Shell to a
+virtual machine that only has a private IP address, or use `kubectl` to connect to a Kubernetes
+cluster that has locked down access.
-This optional feature addresses these limitations and allows you to deploy Cloud Shell into an Azure virtual network that you control. From there, the container is able to interact with resources within the virtual network you select.
+This optional feature addresses these limitations and allows you to deploy Cloud Shell into an Azure
+virtual network that you control. From there, the container is able to interact with resources
+within the virtual network you select.
-Below you can see the resource architecture that will be deployed and used in this scenario.
+The following diagram shows the resource architecture that's deployed and used in this scenario.
-![Illustrates the Cloud Shell isolated VNET architecture.](media/private-vnet/data-diagram.png)
+![Illustrates the Cloud Shell isolated VNET architecture.][06]
-Before you can use Cloud Shell in your own Azure Virtual Network, you will need to create several resources to support this functionality. This article shows how to set up the required resources using an ARM template.
+Before you can use Cloud Shell in your own Azure Virtual Network, you need to create several
+resources. This article shows how to set up the required resources using an ARM template.
> [!NOTE]
-> These resources only need to be set up once for the virtual network. They can then be shared by all administrators with access to the virtual network.
+> These resources only need to be set up once for the virtual network. They can then be shared by
+> all administrators with access to the virtual network.
## Required network resources

### Virtual network

A virtual network defines the address space in which one or more subnets are created.
-The desired virtual network to be used for Cloud Shell needs to be identified. This will usually be an existing virtual network that contains resources you would like to manage or a network that peers with networks that contain your resources.
+You need to identify the virtual network to be used for Cloud Shell. Usually, you want to use an
+existing virtual network that contains resources you want to manage or a network that peers with
+networks that contain your resources.
### Subnet
-Within the selected virtual network, a dedicated subnet must be used for Cloud Shell containers. This subnet is delegated to the Azure Container Instances (ACI) service. When a user requests a Cloud Shell container in a virtual network, Cloud Shell uses ACI to create a container that is in this delegated subnet. No other resources can be created in this subnet.
+
+Within the selected virtual network, a dedicated subnet must be used for Cloud Shell containers.
+This subnet is delegated to the Azure Container Instances (ACI) service. When a user requests a
+Cloud Shell container in a virtual network, Cloud Shell uses ACI to create a container that's in
+this delegated subnet. No other resources can be created in this subnet.
### Network profile
-A network profile is a network configuration template for Azure resources that specifies certain network properties for the resource.
+
+A network profile is a network configuration template for Azure resources that specifies certain
+network properties for the resource.
### Azure Relay
-An [Azure Relay](../azure-relay/relay-what-is-it.md) allows two endpoints that are not directly reachable to communicate. In this case, it is used to allow the administrator's browser to communicate with the container in the private network.
-The Azure Relay instance used for Cloud Shell can be configured to control which networks can access container resources:
-- Accessible from the public internet: In this configuration, Cloud Shell provides a way to reach otherwise internal resources from outside.
-- Accessible from specified networks: In this configuration, administrators will have to access the Azure portal from a computer running in the appropriate network to be able to use Cloud Shell.
+An [Azure Relay][01] allows two endpoints that aren't directly reachable to communicate. In this
+case, it's used to allow the administrator's browser to communicate with the container in the
+private network.
+
+The Azure Relay instance used for Cloud Shell can be configured to control which networks can access
+container resources:
+
+- Accessible from the public internet: In this configuration, Cloud Shell provides a way to reach
+ the internal resources from outside.
+- Accessible from specified networks: In this configuration, administrators must access the Azure
+ portal from a computer running in the appropriate network to be able to use Cloud Shell.
## Storage requirements
-As in standard Cloud Shell, a storage account is required while using Cloud Shell in a virtual network. Each administrator needs a file share to store their files. The storage account needs to be accessible from the virtual network that is used by Cloud Shell.
+
+As in standard Cloud Shell, a storage account is required while using Cloud Shell in a virtual
+network. Each administrator needs a fileshare to store their files. The storage account needs to be
+accessible from the virtual network that's used by Cloud Shell.
> [!NOTE]
> Secondary storage regions are currently not supported in Cloud Shell VNET scenarios.

## Virtual network deployment limitations
-* Due to the additional networking resources involved, starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
-* All Cloud Shell primary regions apart from Central India are currently supported.
-
-* [Azure Relay](../azure-relay/relay-what-is-it.md) is not a free service, please view their [pricing](https://azure.microsoft.com/pricing/details/service-bus/). In the Cloud Shell scenario, one hybrid connection is used for each administrator while they are using Cloud Shell. The connection will automatically be shut down after the Cloud Shell session is complete.
+- Starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
+- All Cloud Shell primary regions, except Central India, are supported.
+- [Azure Relay][01] is a paid service. See the [pricing][04] guide. In the Cloud Shell scenario, one
+ hybrid connection is used for each administrator while they're using Cloud Shell. The connection
+ is automatically shut down after the Cloud Shell session is ended.
## Register the resource provider
-The Microsoft.ContainerInstances resource provider needs to be registered in the subscription that holds the virtual network you want to use. Select the appropriate subscription with `Set-AzContext -Subscription {subscriptionName}`, and then run:
+The Microsoft.ContainerInstance resource provider needs to be registered in the subscription that
+holds the virtual network you want to use. Select the appropriate subscription with
+`Set-AzContext -Subscription {subscriptionName}`, and then run:
```powershell
PS> Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance | select ResourceTypes,RegistrationState
ResourceTypes RegistrationState
------------- -----------------
...
```
-If **RegistrationState** is `Registered`, no action is required. If it is `NotRegistered`, run `Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance`.
+If **RegistrationState** is `Registered`, no action is required. If it's `NotRegistered`, run
+`Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance`.
## Deploy network resources
-
### Create a resource group and virtual network

If you already have a desired VNET that you would like to connect to, skip this section.
-In the Azure portal, or using Azure CLI, Azure PowerShell, etc. create a resource group and a virtual network in the new resource group, **the resource group and virtual network need to be in the same region**.
+Using the Azure portal, Azure CLI, or Azure PowerShell, create a resource group and a virtual
+network in the new resource group. **The resource group and virtual network must be in the same
+region.**
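A minimal Azure PowerShell sketch of that step follows; the names and address space are hypothetical, and only the same-region requirement is essential:

```powershell
# Hypothetical names; the resource group and virtual network must share a region
$location = 'westus'
New-AzResourceGroup -Name 'cloud-shell-vnet-rg' -Location $location
New-AzVirtualNetwork -Name 'cloud-shell-vnet' -ResourceGroupName 'cloud-shell-vnet-rg' `
    -Location $location -AddressPrefix '10.0.0.0/16'
```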
### ARM templates
-Utilize the [Azure Quickstart Template](https://aka.ms/cloudshell/docs/vnet/template) for creating Cloud Shell resources in a virtual network, and the [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/) for creating necessary storage. Take note of your resource names, primarily your file share name.
+
+Use the [Azure Quickstart Template][03] for creating Cloud Shell resources in a virtual network,
+and the [Azure Quickstart Template][05] for creating necessary storage. Take note of your resource
+names, primarily your fileshare name.
### Open relay firewall
-Navigate to the relay created using the above template, select "Networking" in settings, allow access from your browser network to the relay. By default the relay is only accessible from the virtual network it has been created in.
+
+By default the relay is only accessible from the virtual network where it was created. To open
+access, navigate to the relay created using the previous template, select "Networking" in settings,
+allow access from your browser network to the relay.
### Configuring Cloud Shell to use a virtual network

> [!NOTE]
-> This step must be completed for each administrator will use Cloud Shell.
+> This step must be completed for each administrator that uses Cloud Shell.
-After deploying completing the above steps, navigate to Cloud Shell in the Azure portal or on https://shell.azure.com. One of these experiences must be used each time you want to connect to an isolated Cloud Shell experience.
+After deploying and completing the previous steps, open Cloud Shell. One of these experiences must
+be used each time you want to connect to an isolated Cloud Shell experience.
> [!NOTE]
-> If Cloud Shell has been used in the past, the existing clouddrive must be unmounted. To do this run `clouddrive unmount` from an active Cloud Shell session, refresh your page.
+> If Cloud Shell has been used in the past, the existing clouddrive must be unmounted. To do this,
+> run `clouddrive unmount` from an active Cloud Shell session, and then refresh your page.
-Connect to Cloud Shell, you will be prompted with the first run experience. Select your preferred shell experience, select "Show advanced settings" and select the "Show VNET isolation settings" box. Fill in the fields in the pop-up. Most fields will autofill to the available resources that can be associated with Cloud Shell in a virtual network. The File Share name will have to be filled in by the user.
+Connect to Cloud Shell. You'll be prompted with the first run experience. Select your preferred
+shell experience, select **Show advanced settings** and select the **Show VNET isolation settings**
+box. Fill in the fields in the form. Most fields will be autofilled to the available resources that
+can be associated with Cloud Shell in a virtual network. You must provide a name for the fileshare.
-
-![Illustrates the Cloud Shell isolated VNET first experience settings.](media/private-vnet/vnet-settings.png)
+![Illustrates the Cloud Shell isolated VNET first experience settings.][07]
## Next steps
-[Learn about Azure Virtual Networks](../virtual-network/virtual-networks-overview.md)
+
+[Learn about Azure Virtual Networks][02]
+
+<!-- link references -->
+[01]: ../azure-relay/relay-what-is-it.md
+[02]: ../virtual-network/virtual-networks-overview.md
+[03]: https://aka.ms/cloudshell/docs/vnet/template
+[04]: https://azure.microsoft.com/pricing/details/service-bus/
+[05]: https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/
+[06]: media/private-vnet/data-diagram.png
+[07]: media/private-vnet/vnet-settings.png
cloud-shell Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-powershell.md
Title: Azure Cloud Shell Quickstart - PowerShell
description: Learn how to use the PowerShell in your browser with Azure Cloud Shell.
-tags: azure-resource-manager
+ms.contributor: jahelmic
+Last updated : 11/14/2022
-Previously updated : 10/18/2018
+tags: azure-resource-manager
+Title: Quickstart for PowerShell in Azure Cloud Shell
# Quickstart for PowerShell in Azure Cloud Shell
-This document details how to use the PowerShell in Cloud Shell in the [Azure portal](https://portal.azure.com/).
+This document details how to use the PowerShell in Cloud Shell in the [Azure portal][06].
-> [!NOTE]
-> A [Bash in Azure Cloud Shell](quickstart.md) Quickstart is also available.
+The PowerShell experience in Azure Cloud Shell now runs [PowerShell 7.2][02] in a Linux environment.
+There are differences in the PowerShell experience in Cloud Shell compared to Windows PowerShell.
+
+The filesystem in Linux is case-sensitive. Windows considers `file.txt` and `FILE.txt` to be the
+same file. In Linux, they're considered to be different files. Proper casing must be used while
+tab-completing in the filesystem. PowerShell specific experiences, such as tab-completing cmdlet
+names, parameters, and values, aren't case-sensitive.
+
+For a detailed list of differences, see [PowerShell differences on non-Windows platforms][01].
## Start Cloud Shell
-1. Click on **Cloud Shell** button from the top navigation bar of the Azure portal
+1. Select the **Cloud Shell** button from the top navigation bar of the Azure portal
- ![Screenshot showing how to start Azure Cloud Shell from the Azure portal.](media/quickstart-powershell/shell-icon.png)
+ ![Screenshot showing how to start Azure Cloud Shell from the Azure portal.][09]
-2. Select the PowerShell environment from the drop-down and you will be in Azure drive `(Azure:)`
+1. Select the PowerShell environment from the drop-down and you'll be in Azure drive `(Azure:)`
- ![Screenshot showing how to select the PowerShell environment for the Azure Cloud Shell.](media/quickstart-powershell/environment-ps.png)
+ ![Screenshot showing how to select the PowerShell environment for the Azure Cloud Shell.][08]
## Run PowerShell commands
MyResourceGroup MyVM2 eastus Standard_DS2_v2_Promo Windows S
### Interact with virtual machines
-You can find all your virtual machines under the current subscription via `VirtualMachines` directory.
+You can find all your virtual machines under the current subscription via `VirtualMachines`
+directory.
```azurepowershell-interactive PS Azure:\MySubscriptionName\VirtualMachines> dir
TestVm10 MyResourceGroup2 eastus Standard_DS1_v2 Windows mytest
#### Invoke PowerShell script across remote VMs

> [!WARNING]
- > Please refer to [Troubleshooting remote management of Azure VMs](troubleshooting.md#troubleshooting-remote-management-of-azure-vms).
+ > Please refer to [Troubleshooting remote management of Azure VMs][11].
- Assuming you have a VM, MyVM1, let's use `Invoke-AzVMCommand` to invoke a PowerShell script block on the remote machine.
+Assuming you have a VM, MyVM1, let's use `Invoke-AzVMCommand` to invoke a PowerShell script block on
+the remote machine.
- ```azurepowershell-interactive
- Enable-AzVMPSRemoting -Name MyVM1 -ResourceGroupname MyResourceGroup
- Invoke-AzVMCommand -Name MyVM1 -ResourceGroupName MyResourceGroup -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential)
- ```
+```azurepowershell-interactive
+Enable-AzVMPSRemoting -Name MyVM1 -ResourceGroupname MyResourceGroup
+Invoke-AzVMCommand -Name MyVM1 -ResourceGroupName MyResourceGroup -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential)
+```
- You can also navigate to the VirtualMachines directory first and run `Invoke-AzVMCommand` as follows.
+You can also navigate to the VirtualMachines directory first and run `Invoke-AzVMCommand` as follows.
- ```azurepowershell-interactive
- PS Azure:\> cd MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines
- PS Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines> Get-Item MyVM1 | Invoke-AzVMCommand -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential)
- ```
+```azurepowershell-interactive
+PS Azure:\> cd MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines
+PS Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines> Get-Item MyVM1 | Invoke-AzVMCommand -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential)
+```
- ```output
- # You will see output similar to the following:
+```output
+# You will see output similar to the following:
- PSComputerName : 65.52.28.207
- RunspaceId : 2c2b60da-f9b9-4f42-a282-93316cb06fe1
- WindowsBuildLabEx : 14393.1066.amd64fre.rs1_release_sec.170327-1835
- WindowsCurrentVersion : 6.3
- WindowsEditionId : ServerDatacenter
- WindowsInstallationType : Server
- WindowsInstallDateFromRegistry : 5/18/2017 11:26:08 PM
- WindowsProductId : 00376-40000-00000-AA947
- WindowsProductName : Windows Server 2016 Datacenter
- WindowsRegisteredOrganization :
- ...
- ```
+PSComputerName : 65.52.28.207
+RunspaceId : 2c2b60da-f9b9-4f42-a282-93316cb06fe1
+WindowsBuildLabEx : 14393.1066.amd64fre.rs1_release_sec.170327-1835
+WindowsCurrentVersion : 6.3
+WindowsEditionId : ServerDatacenter
+WindowsInstallationType : Server
+WindowsInstallDateFromRegistry : 5/18/2017 11:26:08 PM
+WindowsProductId : 00376-40000-00000-AA947
+WindowsProductName : Windows Server 2016 Datacenter
+WindowsRegisteredOrganization :
+...
+```
-#### Interactively log on to a remote VM
+#### Interactively sign in to a remote VM
You can use `Enter-AzVM` to interactively log into a VM running in Azure.
- ```azurepowershell-interactive
- PS Azure:\> Enter-AzVM -Name MyVM1 -ResourceGroupName MyResourceGroup -Credential (Get-Credential)
- ```
+```azurepowershell-interactive
+Enter-AzVM -Name MyVM1 -ResourceGroupName MyResourceGroup -Credential (Get-Credential)
+```
You can also navigate to the `VirtualMachines` directory first and run `Enter-AzVM` as follows:

```azurepowershell-interactive
-PS Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines> Get-Item MyVM1 | Enter-AzVM -Credential (Get-Credential)
+Get-Item MyVM1 | Enter-AzVM -Credential (Get-Credential)
```

### Discover WebApps
By entering into the `WebApps` directory, you can easily navigate your web apps resources.

```azurepowershell-interactive
-PS Azure:\MySubscriptionName> dir .\WebApps\
+dir .\WebApps\
```

```output
mywebapp3 Running MyResourceGroup3 {mywebapp3.azurewebsites.net... So
```

## SSH

To authenticate to servers or VMs using SSH, generate the public-private key pair in Cloud Shell and
-publish the public key to `authorized_keys` on the remote machine, such as `/home/user/.ssh/authorized_keys`.
+publish the public key to `authorized_keys` on the remote machine, such as
+`/home/user/.ssh/authorized_keys`.
> [!NOTE]
-> You can create SSH private-public keys using `ssh-keygen` and publish them to `$env:USERPROFILE\.ssh` in Cloud Shell.
+> You can create SSH private-public keys using `ssh-keygen` and publish them to
+> `$env:USERPROFILE\.ssh` in Cloud Shell.
### Using SSH
-Follow instructions [here](../virtual-machines/linux/quick-create-powershell.md) to create a new VM configuration using Azure PowerShell cmdlets.
-Before calling into `New-AzVM` to kick off the deployment, add SSH public key to the VM configuration.
-The newly created VM will contain the public key in the `~\.ssh\authorized_keys` location, thereby enabling credential-free SSH session to the VM.
+Follow instructions [here][03] to create a new VM configuration using Azure PowerShell cmdlets.
+Before calling into `New-AzVM` to kick off the deployment, add SSH public key to the VM
+configuration. The newly created VM will contain the public key in the `~\.ssh\authorized_keys`
+location, thereby enabling credential-free SSH session to the VM.
```azurepowershell-interactive
# Create VM config object - $vmConfig using instructions on linked page above

# Generate SSH keys in Cloud Shell
-ssh-keygen -t rsa -b 2048 -f $HOME\.ssh\id_rsa
+ssh-keygen -t rsa -b 2048 -f $HOME\.ssh\id_rsa
# Ensure VM config is updated with SSH keys
$sshPublicKey = Get-Content "$HOME\.ssh\id_rsa.pub"
ssh azureuser@MyVM.Domain.Com
Under `Azure` drive, type `Get-AzCommand` to get context-specific Azure commands.
-Alternatively, you can always use `Get-Command *az* -Module Az.*` to find out the available Azure commands.
+Alternatively, you can always use `Get-Command *az* -Module Az.*` to find out the available Azure
+commands.
## Install custom modules
-You can run `Install-Module` to install modules from the [PowerShell Gallery][gallery].
+You can run `Install-Module` to install modules from the [PowerShell Gallery][07].
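For example, to install a module from the gallery into your user scope (PSScriptAnalyzer is just an illustrative choice):

```powershell
# Install a module from the PowerShell Gallery into your user scope
Install-Module -Name PSScriptAnalyzer -Scope CurrentUser

# Verify what was installed
Get-InstalledModule -Name PSScriptAnalyzer
```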
## Get-Help
Get-Help Get-AzVM
## Use Azure Files to store your data
-You can create a script, say `helloworld.ps1`, and save it to your `clouddrive` to use it across shell sessions.
+You can create a script, say `helloworld.ps1`, and save it to your `clouddrive` to use it across
+shell sessions.
```azurepowershell-interactive
cd $HOME\clouddrive
code .\helloworld.ps1
Hello World!
```
-Next time when you use PowerShell in Cloud Shell, the `helloworld.ps1` file will exist under the `$HOME\clouddrive` directory that mounts your Azure Files share.
+Next time when you use PowerShell in Cloud Shell, the `helloworld.ps1` file will exist under the
+`$HOME\clouddrive` directory that mounts your Azure Files share.
## Use custom profile
-You can customize your PowerShell environment, by creating PowerShell profile(s) - `profile.ps1` (or `Microsoft.PowerShell_profile.ps1`).
-Save it under `$profile.CurrentUserAllHosts` (or `$profile.CurrentUserCurrentHost`), so that it can be loaded in every PowerShell in Cloud Shell session.
+You can customize your PowerShell environment, by creating PowerShell profiles - `profile.ps1` (or
+`Microsoft.PowerShell_profile.ps1`). Save it under `$profile.CurrentUserAllHosts` (or
+`$profile.CurrentUserCurrentHost`), so that it can be loaded in every PowerShell in Cloud Shell
+session.
-For how to create a profile, refer to [About Profiles][profile].
+For how to create a profile, refer to [About Profiles][04].
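As a minimal sketch, assuming you want a customization that loads in every session (the alias here is only an example):

```powershell
# Create the all-hosts profile if it doesn't exist yet
if (-not (Test-Path $profile.CurrentUserAllHosts)) {
    New-Item -Path $profile.CurrentUserAllHosts -ItemType File -Force | Out-Null
}

# Add a customization that loads in every PowerShell in Cloud Shell session
Add-Content -Path $profile.CurrentUserAllHosts -Value 'Set-Alias -Name k -Value kubectl'
```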
## Use Git
-To clone a Git repo in the Cloud Shell, you need to create a [personal access token][githubtoken] and use it as the username. Once you have your token, clone the repository as follows:
+To clone a Git repo in Cloud Shell, you need to create a [personal access token][05] and use it as
+the username. Once you have your token, clone the repository as follows:
```azurepowershell-interactive
- git clone https://<your-access-token>@github.com/username/repo.git
+git clone https://<your-access-token>@github.com/username/repo.git
```

## Exit the shell

Type `exit` to terminate the session.
-[bashqs]: quickstart.md
-[gallery]: https://www.powershellgallery.com/
-[customex]: ../virtual-machines/extensions/custom-script-windows.md
-[profile]: /powershell/module/microsoft.powershell.core/about/about_profiles
-[azmount]: ../storage/files/storage-how-to-use-files-windows.md
-[githubtoken]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
+<!-- link references -->
+[01]: /powershell/scripting/whats-new/unix-support
+[02]: /powershell/scripting/whats-new/what-s-new-in-powershell-72
+[03]: ../virtual-machines/linux/quick-create-powershell.md
+[04]: /powershell/module/microsoft.powershell.core/about/about_profiles
+[05]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
+[06]: https://portal.azure.com/
+[07]: https://www.powershellgallery.com/
+[08]: media/quickstart-powershell/environment-ps.png
+[09]: media/quickstart-powershell/shell-icon.png
+[11]: troubleshooting.md#troubleshooting-remote-management-of-azure-vms
cloud-shell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart.md
Title: Azure Cloud Shell Quickstart - Bash
description: Learn how to use the Bash command line in your browser with Azure Cloud Shell.
-tags: azure-resource-manager
+ms.contributor: jahelmic
+Last updated : 11/14/2022
-Previously updated : 03/12/2018
+tags: azure-resource-manager
+Title: Quickstart for Bash in Azure Cloud Shell
# Quickstart for Bash in Azure Cloud Shell
-This document details how to use Bash in Azure Cloud Shell in the [Azure portal](https://portal.azure.com/).
+This document details how to use Bash in Azure Cloud Shell in the [Azure portal][03].
> [!NOTE]
-> A [PowerShell in Azure Cloud Shell](quickstart-powershell.md) Quickstart is also available.
+> A [PowerShell in Azure Cloud Shell][09] Quickstart is also available.
## Start Cloud Shell
-1. Launch **Cloud Shell** from the top navigation of the Azure portal. <br>
-![Screenshot showing how to start Azure Cloud Shell in the Azure portal.](media/quickstart/shell-icon.png)
-2. Select a subscription to create a storage account and Microsoft Azure Files share.
-3. Select "Create storage"
+1. Launch **Cloud Shell** from the top navigation of the Azure portal.
+
+ ![Screenshot showing how to start Azure Cloud Shell in the Azure portal.][05]
+
+1. Select a subscription to create a storage account and Microsoft Azure Files share.
+1. Select "Create storage"
> [!TIP]
> You are automatically authenticated for Azure CLI in every session.

### Select the Bash environment
-Check that the environment drop-down from the left-hand side of shell window says `Bash`. <br>
-![Screenshot showing how to select the Bash environment for the Azure Cloud Shell.](media/quickstart/env-selector.png)
+
+Check that the environment drop-down from the left-hand side of shell window says `Bash`.
+
+![Screenshot showing how to select the Bash environment for the Azure Cloud Shell.][04]
### Set your subscription

1. List subscriptions you have access to.

   ```azurecli-interactive
   az account list
   ```
-2. Set your preferred subscription:
+1. Set your preferred subscription:
   ```azurecli-interactive
   az account set --subscription 'my-subscription-name'
   ```

> [!TIP]
-> Your subscription will be remembered for future sessions using `/home/<user>/.azure/azureProfile.json`.
+> Your subscription is remembered for future sessions using `/home/<user>/.azure/azureProfile.json`.
### Create a resource group

Create a new resource group in WestUS named "MyRG".

```azurecli-interactive
az group create --location westus --name MyRG
```

### Create a Linux VM
-Create an Ubuntu VM in your new resource group. The Azure CLI will create SSH keys and set up the VM with them. <br>
+
+Create an Ubuntu VM in your new resource group. The Azure CLI will create SSH keys and set up the VM
+with them.
```azurecli-interactive
az vm create -n myVM -g MyRG --image UbuntuLTS --generate-ssh-keys
```

> [!NOTE]
-> Using `--generate-ssh-keys` instructs Azure CLI to create and set up public and private keys in your VM and `$Home` directory. By default keys are placed in Cloud Shell at `/home/<user>/.ssh/id_rsa` and `/home/<user>/.ssh/id_rsa.pub`. Your `.ssh` folder is persisted in your attached file share's 5-GB image used to persist `$Home`.
+> Using `--generate-ssh-keys` instructs Azure CLI to create and set up public and private keys in
+> your VM and `$Home` directory. By default keys are placed in Cloud Shell at
+> `/home/<user>/.ssh/id_rsa` and `/home/<user>/.ssh/id_rsa.pub`. Your `.ssh` folder is persisted in
+> your attached file share's 5-GB image used to persist `$Home`.
Your username on this VM will be your username used in Cloud Shell ($User@Azure:).

### SSH into your Linux VM

1. Search for your VM name in the Azure portal search bar.
-2. Click "Connect" to get your VM name and public IP address. <br>
- ![Screenshot showing how to connect to a Linux V M using S S H.](medi-copy.png)
+1. Select **Connect** to get your VM name and public IP address.
-3. SSH into your VM with the `ssh` cmd.
- ```
+ ![Screenshot showing how to connect to a Linux VM using SSH.][06]
+
+1. SSH into your VM with the `ssh` cmd.
+
+ ```bash
   ssh username@ipaddress
   ```
-Upon establishing the SSH connection, you should see the Ubuntu welcome prompt. <br>
-![Screenshot showing the Ubuntu initialization and welcome prompt after you establish an S S H connection.](media/quickstart/ubuntu-welcome.png)
+Upon establishing the SSH connection, you should see the Ubuntu welcome prompt.
+
+![Screenshot showing the Ubuntu initialization and welcome prompt after you establish an SSH connection.][07]
+
+## Cleaning up
-## Cleaning up
1. Exit your ssh session.

   ```
   exit
   ```
-2. Delete your resource group and any resources within it.
+1. Delete your resource group and any resources within it.
   ```azurecli-interactive
   az group delete -n MyRG
   ```

## Next steps
-[Learn about persisting files for Bash in Cloud Shell](persisting-shell-storage.md) <br>
-[Learn about Azure CLI](/cli/azure/) <br>
-[Learn about Azure Files storage](../storage/files/storage-files-introduction.md) <br>
+
+- [Learn about persisting files for Bash in Cloud Shell][08]
+- [Learn about Azure CLI][02]
+- [Learn about Azure Files storage][01]
+
+<!-- link references -->
+[01]: ../storage/files/storage-files-introduction.md
+[02]: /cli/azure/
+[03]: https://portal.azure.com/
+[04]: media/quickstart/env-selector.png
+[05]: media/quickstart/shell-icon.png
+[06]: medi-copy.png
+[07]: media/quickstart/ubuntu-welcome.png
+[08]: persisting-shell-storage.md
+[09]: quickstart-powershell.md
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
Title: Azure Cloud Shell troubleshooting | Microsoft Docs
-description: Troubleshooting Azure Cloud Shell
+description: This article covers common Cloud Shell troubleshooting scenarios.
-tags: azure-resource-manager
+ms.contributor: jahelmic
+Last updated : 11/14/2022
-Previously updated : 01/28/2022
+tags: azure-resource-manager
+Title: Azure Cloud Shell troubleshooting
- # Troubleshooting & Limitations of Azure Cloud Shell
-Known resolutions for troubleshooting issues in Azure Cloud Shell include:
-
+This article covers troubleshooting common Cloud Shell scenarios.

## General troubleshooting

### Error running AzureAD cmdlets in PowerShell

-- **Details**: When you run AzureAD cmdlets like `Get-AzureADUser` in Cloud Shell, you might see an error: `You must call the Connect-AzureAD cmdlet before calling any other cmdlets`.
-- **Resolution**: Run the `Connect-AzureAD` cmdlet. Previously, Cloud Shell ran this cmdlet automatically during PowerShell startup. To speed up start time, the cmdlet no longer runs automatically. You can choose to restore the previous behavior by adding `Connect-AzureAD` to the $PROFILE file in PowerShell.
## General troubleshooting ### Error running AzureAD cmdlets in PowerShell -- **Details**: When you run AzureAD cmdlets like `Get-AzureADUser` in Cloud Shell, you might see an error: `You must call the Connect-AzureAD cmdlet before calling any other cmdlets`. -- **Resolution**: Run the `Connect-AzureAD` cmdlet. Previously, Cloud Shell ran this cmdlet automatically during PowerShell startup. To speed up start time, the cmdlet no longer runs automatically. You can choose to restore the previous behavior by adding `Connect-AzureAD` to the $PROFILE file in PowerShell.
+- **Details**: When you run AzureAD cmdlets like `Get-AzureADUser` in Cloud Shell, you might see an
+ error: `You must call the Connect-AzureAD cmdlet before calling any other cmdlets`.
+- **Resolution**: Run the `Connect-AzureAD` cmdlet. Previously, Cloud Shell ran this cmdlet
+ automatically during PowerShell startup. To speed up start time, the cmdlet no longer runs
+ automatically. You can choose to restore the previous behavior by adding `Connect-AzureAD` to the
+ $PROFILE file in PowerShell (see the sketch below).
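A minimal sketch of restoring that behavior from a Bash session in Cloud Shell. The profile path below is an assumption (the default CurrentUserCurrentHost location for PowerShell on Linux); confirm yours first by running `pwsh -NoProfile -Command '$PROFILE'`:

```bash
# Append Connect-AzureAD to the PowerShell profile so it runs at startup
# (assumed default profile path for PowerShell on Linux)
mkdir -p ~/.config/powershell
echo 'Connect-AzureAD' >> ~/.config/powershell/Microsoft.PowerShell_profile.ps1
```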
### Early timeouts in FireFox

-- **Details**: Cloud Shell utilizes an open websocket to pass input/output to your browser. FireFox has preset policies that can close the websocket prematurely causing early timeouts in Cloud Shell.
-- **Resolution**: Open FireFox and navigate to "about:config" in the URL box. Search for "network.websocket.timeout.ping.request" and change the value from 0 to 10.
+- **Details**: Cloud Shell uses an open websocket to pass input/output to your browser. FireFox has
+ preset policies that can close the websocket prematurely causing early timeouts in Cloud Shell.
+- **Resolution**: Open FireFox and navigate to "about:config" in the URL box. Search for
+ "network.websocket.timeout.ping.request" and change the value from 0 to 10.
### Disabling Cloud Shell in a locked down network environment

-- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell utilizes access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entrypoints including `portal.azure.com`, `shell.azure.com`, Visual Studio Code Azure Account extension, and `learn.microsoft.com`. In the US Government cloud, the entrypoint is `ux.console.azure.us`; there is no corresponding `shell.azure.us`.
-- **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` via network settings to your environment. The Cloud Shell icon will still exist in the Azure portal, but will not successfully connect to the service.
+- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell
+ depends on access to the `ux.console.azure.com` domain, which can be denied, stopping any access
+ to Cloud Shell's entry points including `portal.azure.com`, `shell.azure.com`, Visual Studio Code
+ Azure Account extension, and `learn.microsoft.com`. In the US Government cloud, the entry point is
+ `ux.console.azure.us`; there's no corresponding `shell.azure.us`.
+- **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` via network
+ settings to your environment. The Cloud Shell icon will still exist in the Azure portal, but can't
+ connect to the service.
### Storage Dialog - Error: 403 RequestDisallowedByPolicy

-- **Details**: When creating a storage account through Cloud Shell, it is unsuccessful due to an Azure Policy assignment placed by your admin. Error message will include: `The resource action 'Microsoft.Storage/storageAccounts/write' is disallowed by one or more policies.`
-- **Resolution**: Contact your Azure administrator to remove or update the Azure Policy assignment denying storage creation.
+- **Details**: When creating a storage account through Cloud Shell, it's unsuccessful due to an
+ Azure Policy assignment placed by your admin. The error message includes:
+
+ > The resource action 'Microsoft.Storage/storageAccounts/write' is disallowed by
+ > one or more policies.
+
+- **Resolution**: Contact your Azure administrator to remove or update the Azure Policy assignment
+ denying storage creation. The sketch below can help locate the offending assignment.
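To help your administrator find the assignment, you can list the policy assignments in effect at your scope. A sketch using the Azure CLI:

```azurecli-interactive
# List policy assignments for the current subscription, including
# assignments inherited from ancestor scopes
az policy assignment list --disable-scope-strict-match --output table
```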
### Storage Dialog - Error: 400 DisallowedOperation

-- **Details**: When using an Azure Active Directory subscription, you cannot create storage.
-- **Resolution**: Use an Azure subscription capable of creating storage resources. Azure AD subscriptions are not able to create Azure resources.
+- **Details**: When using an Azure Active Directory subscription, you can't create storage.
+- **Resolution**: Use an Azure subscription capable of creating storage resources. Azure AD
+ subscriptions aren't able to create Azure resources (see the sketch below for switching subscriptions).
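To see which subscriptions your account can use and switch to one that supports resource creation, a quick sketch:

```azurecli-interactive
# Show the subscriptions available to your signed-in account
az account list --output table

# Switch to a subscription that can host Azure resources
# (replace the placeholder with a real name or ID)
az account set --subscription "<subscription-name-or-id>"
```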
+
+### Terminal output - Error: Failed to connect terminal: websocket can't be established
-### Terminal output - Error: Failed to connect terminal: websocket cannot be established. Press `Enter` to reconnect.
-- **Details**: Cloud Shell requires the ability to establish a websocket connection to Cloud Shell infrastructure.
-- **Resolution**: Check you have configured your network settings to enable sending https requests and websocket requests to domains at *.console.azure.com.
+- **Details**: Cloud Shell requires the ability to establish a websocket connection to Cloud Shell
+ infrastructure.
+- **Resolution**: Confirm that your network settings allow sending HTTPS and websocket requests
+ to domains at `*.console.azure.com`. A quick reachability check is sketched below.
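As a quick first check from a machine on the affected network, you can test HTTPS reachability of the Cloud Shell endpoint. This sketch verifies HTTPS egress only, not websocket policy:

```bash
# Expect an HTTP status line if HTTPS egress to the domain is allowed
curl -sI https://ux.console.azure.com | head -n 1
```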
### Set your Cloud Shell connection to support using TLS 1.2
+
+ - **Details**: To define the version of TLS for your connection to Cloud Shell, you must set
+ browser-specific settings.
+ - **Resolution**: Navigate to the security settings of your browser and select the checkbox next to
+ **Use TLS 1.2**. A TLS 1.2 handshake test is sketched below.
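To verify from a command line that a TLS 1.2 connection to the Cloud Shell endpoint succeeds, one option is an OpenSSL handshake test (a sketch, assuming `openssl` is installed):

```bash
# Attempt a TLS 1.2-only handshake and print the negotiated
# protocol and cipher
openssl s_client -connect ux.console.azure.com:443 -tls1_2 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
```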
## Bash troubleshooting
-### Cannot run the docker daemon
+### You can't run the docker daemon
-- **Details**: Cloud Shell utilizes a container to host your shell environment, as a result running the daemon is disallowed.
-- **Resolution**: Utilize [docker-machine](https://docs.docker.com/machine/overview/), which is installed by default, to manage Docker containers from a remote Docker host.
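As an illustration of the docker-machine approach named in the resolution, the sketch below creates a remote Docker host in Azure and points the local `docker` CLI at it. The host name and subscription ID are placeholders to adapt:

```bash
# Create a remote Docker host in Azure (replace the subscription ID)
docker-machine create --driver azure \
  --azure-subscription-id <yourAzureSubscriptionID> myDockerHost

# Point the docker CLI at the remote host for this session
eval $(docker-machine env myDockerHost)

# Containers now run on the remote host, not inside Cloud Shell
docker run hello-world
```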