Updates from: 11/17/2022 02:15:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Previously updated : 04/04/2022 Last updated : 11/12/2022
# Troubleshoot on-premises application provisioning

## Troubleshoot test connection issues
-After you configure the provisioning agent and ECMA host, it's time to test connectivity from the Azure Active Directory (Azure AD) provisioning service to the provisioning agent, the ECMA host, and the application. To perform this end-to-end test, select **Test connection** in the application in the Azure portal. When the test connection fails, try the following troubleshooting steps:
+After you configure the provisioning agent and ECMA host, it's time to test connectivity from the Azure Active Directory (Azure AD) provisioning service to the provisioning agent, the ECMA host, and the application. To perform this end-to-end test, select **Test connection** in the application in the Azure portal. Be sure to wait 10 to 20 minutes after assigning an initial agent or changing the agent before testing the connection. If after this time the test connection fails, try the following troubleshooting steps:
1. Check that the agent and ECMA host are running:
    1. On the server with the agent installed, open **Services** by going to **Start** > **Run** > **Services.msc**.
After you configure the provisioning agent and ECMA host, it's time to test conn
6. After you assign an agent, you need to wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes.
7. Ensure that you're using a valid certificate. Go to the **Settings** tab of the ECMA host to generate a new certificate.
8. Restart the provisioning agent by going to the taskbar on your VM and searching for the Microsoft Azure AD Connect provisioning agent. Right-click the agent, select **Stop**, and then select **Start**.
- 9. When you provide the tenant URL in the Azure portal, ensure that it follows the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL does not follow the expected format.
+ 1. If you continue to see `The ECMA host is currently importing data from the target application` even after restarting the ECMA Connector Host and the provisioning agent, and waiting for the initial import to complete, then you may need to cancel and re-start configuring provisioning to the application in the Azure portal.
+ 1. When you provide the tenant URL in the Azure portal, ensure that it follows this pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL doesn't follow the expected format.
```
https://localhost:8585/ecma2host_connectorName/scim
```
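For example, with a hypothetical connector named `SQL1`, the tenant URL would look like this:

```
https://localhost:8585/ecma2host_SQL1/scim
```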
After the ECMA Connector Host schema mapping has been configured, start the serv
| Error | Resolution |
| -- | -- |
| Could not load file or assembly 'file:///C:\Program Files\Microsoft ECMA2Host\Service\ECMA\Cache\8b514472-c18a-4641-9a44-732c296534e8\Microsoft.IAM.Connector.GenericSql.dll' or one of its dependencies. Access is denied. | Ensure that the network service account has 'full control' permissions over the cache folder. |
-| Invalid LDAP style of object's DN. DN: username@domain.com" | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.|
+| Invalid LDAP style of object's DN. DN: username@domain.com or `Target Site: ValidByLdapStyle` | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.|
## Understand incoming SCIM requests
By using Azure AD, you can monitor the provisioning service in the cloud and col
### I am getting an Invalid LDAP style DN error when trying to configure the ECMA Connector Host with SQL
-By default, the genericSQL connector expects the DN to be populated using the LDAP style (when the 'DN is anchor' attribute is left unchecked in the first connectivity page). In the error message above, you can see that the DN is a UPN, rather than an LDAP style DN that the connector expects.
+By default, the generic SQL connector expects the DN to be populated using the LDAP style (when the 'DN is anchor' attribute is left unchecked in the first connectivity page). In the error message `Invalid LDAP style DN` or `Target Site: ValidByLdapStyle`, you may see that the DN field contains a user principal name (UPN), rather than an LDAP style DN that the connector expects.
To resolve this, ensure that **Autogenerated** is selected on the object types page when you configure the connector.
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Previously updated : 06/23/2022 Last updated : 11/16/2022
Users may have a combination of up to five OATH hardware tokens or authenticator
> > When two methods are required, users can reset using either a notification or verification code in addition to any other enabled methods.
+## FIPS 140 compliant for Azure AD authentication
+
+Beginning with version 6.6.8, Microsoft Authenticator for iOS is compliant with [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/3/final?azure-portal=true) for all Azure AD authentications that use push multi-factor authentication (MFA), passwordless Phone Sign-In (PSI), and time-based one-time passcodes (TOTP).
+
+Consistent with the guidelines outlined in [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html?azure-portal=true), authenticators are required to use FIPS 140 validated cryptography. This helps federal agencies meet the requirements of [Executive Order (EO) 14028](https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/?azure-portal=true) and helps healthcare organizations comply with [Electronic Prescriptions for Controlled Substances (EPCS)](/azure/compliance/offerings/offering-epcs-us).
+
+FIPS 140 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. Testing against the FIPS 140 standard is maintained by the [Cryptographic Module Validation Program (CMVP)](https://csrc.nist.gov/Projects/cryptographic-module-validation-program?azure-portal=true).
+
+No changes in configurations are required in Microsoft Authenticator or the Azure portal to enable FIPS 140 compliance. Beginning with Microsoft Authenticator for iOS version 6.6.8, Azure AD authentications will be FIPS 140 compliant by default.
+
+Authenticator leverages the native Apple cryptography to achieve FIPS 140, Security Level 1 compliance on Apple iOS devices beginning with Microsoft Authenticator version 6.6.8. For more information about the certifications being used, see the [Apple CoreCrypto module](https://support.apple.com/guide/sccc/security-certifications-for-ios-scccfa917cb49/web?azure-portal=true). 
+
+FIPS 140 compliance for Microsoft Authenticator on Android is in progress and will follow soon.
+## Next steps

- To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
- Learn more about configuring authentication methods using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview).
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 11/14/2022 Last updated : 11/16/2022
The MFA Server Migration utility targets a single Azure AD group for all migrati
To begin the migration process, enter the name or GUID of the Azure AD group you want to migrate. Once complete, press Tab or click outside the window, and the utility will begin searching for the appropriate group. The window will populate with all users in the group. Searching a large group can take several minutes to finish.
-To view user attribute data for a user, highlight the user, and select **View**:
+To view attribute data for a user, highlight the user, and select **View**:
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/view-user.png" alt-text="Screenshot of how to view user settings.":::
The settings option allows you to change the settings for the migration process:
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/settings.png" alt-text="Screenshot of settings.":::

- Migrate – This setting allows you to specify which method(s) should be migrated for the selection of users
-- User Match – Allows you to specify a different on-premises Active Directory attribute for matching Azure AD UPN instead of the default match to userPrincipalName
+- User Match – Allows you to specify a different on-premises Active Directory attribute for matching Azure AD UPN instead of the default match to userPrincipalName (the matching order is sketched after this list):
+ - The migration utility tries direct matching to UPN before using the on-premises Active Directory attribute.
+ - If no match is found, it calls a Windows API to find the Azure AD UPN and get the SID, which it uses to search the MFA Server user list.
+ - If the Windows API doesn't find the user or the SID isn't found in the MFA Server, then it will use the configured Active Directory attribute to find the user in the on-premises Active Directory, and then use the SID to search the MFA Server user list.
- Automatic synchronization – Starts a background service that continually monitors any authentication method changes to users in the on-premises MFA Server and writes them to Azure AD at the specified time interval.

The migration process can be automatic or manual.
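As a rough illustration of the User Match order above, here's a minimal sketch in TypeScript-style pseudocode. The helpers (`lookupSidForUpn`, `findAdUserByAttribute`, and the `mfaServerUsers` list) are hypothetical stand-ins for the utility's internal logic, not a real API:

```typescript
interface MfaServerUser { userPrincipalName: string; sid: string; }

// Hypothetical stand-ins for the utility's internals -- names are illustrative only.
declare const mfaServerUsers: MfaServerUser[];                      // the MFA Server user list
declare function lookupSidForUpn(upn: string): string | undefined;  // Windows API lookup
declare function findAdUserByAttribute(attr: string, value: string): { sid: string } | undefined;

function matchMfaServerUser(azureAdUpn: string, configuredAttribute: string): MfaServerUser | undefined {
  // 1. Try a direct match on UPN against the MFA Server user list.
  const direct = mfaServerUsers.find(u => u.userPrincipalName === azureAdUpn);
  if (direct) return direct;

  // 2. Resolve the Azure AD UPN to a SID via the Windows API, then search by SID.
  const sid = lookupSidForUpn(azureAdUpn);
  const bySid = sid ? mfaServerUsers.find(u => u.sid === sid) : undefined;
  if (bySid) return bySid;

  // 3. Fall back to the configured on-premises AD attribute, then search by that user's SID.
  const adUser = findAdUserByAttribute(configuredAttribute, azureAdUpn);
  return adUser ? mfaServerUsers.find(u => u.sid === adUser.sid) : undefined;
}
```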
```
Content-Type: application/json
}
```
-Set the **Staged Rollout for Azure MFA** to **Off**. Users will once again be redirected to your on-premises federation server for MFA.
+Users will no longer be redirected to your on-premises federation server for MFA, whether they're targeted by the Staged Rollout tool or not. Note that this can take up to 24 hours to take effect.
>[!NOTE] >The update of the domain federation setting can take up to 24 hours to take effect.
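For reference, the domain federation update discussed here is made through the Microsoft Graph [internalDomainFederation](/graph/api/resources/internaldomainfederation) resource. A minimal sketch, assuming you already hold a Graph access token with `Domain.ReadWrite.All`; `<GRAPH_TOKEN>` and `<FEDERATION_CONFIG_ID>` are placeholders:

```typescript
// Sketch only: update federatedIdpMfaBehavior so Azure AD no longer accepts
// MFA performed by the federated IdP.
const domainId = "contoso.com";                      // placeholder domain
const federationConfigId = "<FEDERATION_CONFIG_ID>"; // placeholder object ID

const res = await fetch(
  `https://graph.microsoft.com/v1.0/domains/${domainId}/federationConfiguration/${federationConfigId}`,
  {
    method: "PATCH",
    headers: {
      Authorization: "Bearer <GRAPH_TOKEN>",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ federatedIdpMfaBehavior: "rejectMfaByFederatedIdp" }),
  }
);
console.log(res.status); // 204 No Content on success
```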
If the upgrade had issues, follow these steps to roll back:
```
}
```
-Users will no longer be redirected to your on-premises federation server for MFA, whether they're targeted by the Staged Rollout tool or not. Note this can take up to 24 hours to take effect.
+
+Set the **Staged Rollout for Azure MFA** to **Off**. Users will once again be redirected to your on-premises federation server for MFA.
## Next steps
active-directory Overview Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/overview-authentication.md
Persistent session tokens are stored as persistent cookies on the web browser's
| ESTSAUTHPERSISTENT | Common | Contains user's session information to facilitate SSO. Persistent. |
| ESTSAUTHLIGHT | Common | Contains Session GUID Information. Lite session state cookie used exclusively by client-side JavaScript in order to facilitate OIDC sign-out. Security feature. |
| SignInStateCookie | Common | Contains list of services accessed to facilitate sign-out. No user information. Security feature. |
-| CCState | Common | Contains session information state to be used between Azure AD and the [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). |
+| CCState | Common | Contains session information state to be used between Azure AD and the [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md). |
| buid | Common | Tracks browser related information. Used for service telemetry and protection mechanisms. |
| fpc | Common | Tracks browser related information. Used for tracking requests and throttling. |
| esctx | Common | Session context cookie information. For CSRF protection. Binds a request to a specific browser instance so the request can't be replayed outside the browser. No user information. |
Persistent session tokens are stored as persistent cookies on the web browser's
| wlidperf | Common | Client-side cookie (set by JavaScript) that tracks local time for performance purposes. |
| x-ms-gateway-slice | Common | Azure AD Gateway cookie used for tracking and load balance purposes. |
| stsservicecookie | Common | Azure AD Gateway cookie also used for tracking purposes. |
-| x-ms-refreshtokencredential | Specific | Available when [Primary Refresh Token (PRT)](/azure/active-directory/devices/concept-primary-refresh-token) is in use. |
+| x-ms-refreshtokencredential | Specific | Available when [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md) is in use. |
| estsStateTransient | Specific | Applicable to new session information model only. Transient. |
| estsStatePersistent | Specific | Same as estsStateTransient, but persistent. |
| ESTSNCLOGIN | Specific | National Cloud Login related Cookie. |
| UsGovTraffic | Specific | US Gov Cloud Traffic Cookie. |
| ESTSWCTXFLOWTOKEN | Specific | Saves flowToken information when redirecting to ADFS. |
-| CcsNtv | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). Native flows. |
-| CcsWeb | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). Web flows. |
-| Ccs* | Specific | Cookies with prefix Ccs*, have the same purpose as the ones without prefix, but only apply when [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults) is in use. |
+| CcsNtv | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md). Native flows. |
+| CcsWeb | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md). Web flows. |
+| Ccs* | Specific | Cookies with prefix Ccs*, have the same purpose as the ones without prefix, but only apply when [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md) is in use. |
| threxp | Specific | Used for throttling control. |
| rrc | Specific | Cookie used to identify a recent B2B invitation redemption. |
| debug | Specific | Cookie used to track if user's browser session is enabled for DebugMode. |
To learn more about multi-factor authentication concepts, see [How Azure AD Mult
[tutorial-sspr]: tutorial-enable-sspr.md
[tutorial-azure-mfa]: tutorial-enable-azure-mfa.md
[concept-sspr]: concept-sspr-howitworks.md
-[concept-mfa]: concept-mfa-howitworks.md
+[concept-mfa]: concept-mfa-howitworks.md
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Applications must have the Intune SDK with policy assurance implemented and must
The following client apps are confirmed to support this setting. This list isn't exhaustive and is subject to change:
+- iAnnotate for Office 365
- Microsoft Cortana
- Microsoft Edge
- Microsoft Excel
The following client apps are confirmed to support this setting, this list isn't
- Microsoft PowerApps
- Microsoft PowerPoint
- Microsoft SharePoint
+- Microsoft Stream Mobile Native 2.0
- Microsoft Teams
- Microsoft To Do
- Microsoft Word
+- Microsoft Whiteboard Services
- Microsoft Field Service (Dynamics 365)
- MultiLine for Intune
- Nine Mail - Email and Calendar
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
Previously updated : 05/18/2021 Last updated : 11/15/2022

# Developer guide to Conditional Access authentication context
-[Conditional Access](../conditional-access/overview.md) is the Zero Trust control plane that allows you to target policies for access to all your apps – old or new, private, or public, on-premises, or multi-cloud. With [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context), you can apply different policies within those apps.
+[Conditional Access](../conditional-access/overview.md) is the Zero Trust control plane that allows you to target policies for access to all your apps – old or new, private or public, on-premises or multicloud. With [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context), you can apply different policies within those apps.
Conditional Access authentication context (auth context) allows you to apply granular policies to sensitive data and actions instead of just at the app level. You can refine your Zero Trust policies for least privileged access while minimizing user friction and keeping users more productive and your resources more secure. Today, it can be used by applications developed by your company that use [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html) for authentication to protect sensitive resources, like high-value transactions or viewing employee personal data.
Use the Azure AD Conditional Access engine's new auth context feature to trigger
## Problem statement
-The IT administrators and regulators often struggle between balancing prompting their users with additional factors of authentication too frequently and achieving adequate security and policy adherence for applications and services where parts of them contain sensitive data and operations. It can be a choice between a strong policy that impacts users' productivity when they access most data and actions or a policy that is not strong enough for sensitive resources.
+IT administrators and regulators often struggle to balance prompting users for additional authentication factors too frequently against achieving adequate security and policy adherence for applications and services where parts of them contain sensitive data and operations. It can be a choice between a strong policy that impacts users' productivity when they access most data and actions, or a policy that isn't strong enough for sensitive resources.
So, what if apps could mix both, functioning with relatively lower security and less frequent prompts for most users and operations, yet conditionally stepping up the security requirement when users access more sensitive parts?
For example, while users may sign in to SharePoint using multi-factor authentica
**Second**, [Conditional Access](../conditional-access/overview.md) requires Azure AD Premium P1 licensing. More information about licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
-**Third**, today it is only available to applications that sign-in users. Applications that authenticate as themselves are not supported. Use the [Authentication flows and application scenarios guide](authentication-flows-app-scenarios.md) to learn about the supported authentication app types and flows in the Microsoft Identity Platform.
+**Third**, today it's only available to applications that sign in users. Applications that authenticate as themselves aren't supported. Use the [Authentication flows and application scenarios guide](authentication-flows-app-scenarios.md) to learn about the supported authentication app types and flows in the Microsoft Identity Platform.
## Integration steps
Create or modify your Conditional Access policies to use the Conditional Access
1. Identify actions in the code that can be made available to map against auth context IDs.
1. Build a screen in the admin portal of the app (or an equivalent functionality) that IT admins can use to map sensitive actions against an available auth context ID.
-1. See the code sample, [Use the Conditional Access Auth Context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md) for an example on how it is done.
+1. See the code sample, [Use the Conditional Access Auth Context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md) for an example on how it's done.
These steps are the changes that you need to make in your code base. The steps broadly consist of
These steps are the changes that you need to make in your code base. The steps
- Checks if the application's action being called requires step-up authentication. It does so by checking its database for a saved mapping for this method.
- If this action indeed requires an elevated auth context, it checks the **acrs** claim for an existing, matching Auth Context ID.
- - If a matching Auth Context ID is not found, it raises a [claims challenge](claims-challenge.md#claims-challenge-header-format).
+ - If a matching Auth Context ID isn't found, it raises a [claims challenge](claims-challenge.md#claims-challenge-header-format).
```csharp
public void CheckForRequiredAuthContext(string method)
```
These steps are the changes that you need to make in your code base. The steps
## Caveats and recommendations
-Do not hard-code Auth Context values in your app. Apps should read and apply auth context [using MS Graph calls](/graph/api/resources/authenticationcontextclassreference). This practice is critical for [multi-tenant applications](howto-convert-app-to-be-multi-tenant.md). The Auth Context values will vary between Azure AD tenants will not available in Azure AD free edition. For more information on how an app should query, set, and use auth context in their code, see the code sample, [Use the Conditional Access auth context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md) as how an app should query, set and use auth context in their code.
+Don't hard-code Auth Context values in your app. Apps should read and apply auth context [using MS Graph calls](/graph/api/resources/authenticationcontextclassreference). This practice is critical for [multi-tenant applications](howto-convert-app-to-be-multi-tenant.md). Auth Context values will vary between Azure AD tenants and won't be available in the Azure AD free edition. For more information on how an app should query, set, and use auth context in its code, see the code sample [Use the Conditional Access auth context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md).
-Do not use auth context where the app itself is going to be a target of Conditional Access policies. The feature works best when parts of the application require the user to meet a higher bar of authentication.
+Don't use auth context where the app itself is going to be a target of Conditional Access policies. The feature works best when parts of the application require the user to meet a higher bar of authentication.
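For instance, here's a minimal sketch of reading the tenant's auth context values from Microsoft Graph rather than hard-coding them. It assumes you already have a Graph access token with the `Policy.Read.ConditionalAccess` permission; `<GRAPH_TOKEN>` is a placeholder:

```typescript
// Sketch: list the auth context class references published in the tenant.
const res = await fetch(
  "https://graph.microsoft.com/v1.0/identity/conditionalAccess/authenticationContextClassReferences",
  { headers: { Authorization: "Bearer <GRAPH_TOKEN>" } }
);
const { value } = await res.json();
for (const acr of value) {
  console.log(acr.id, acr.displayName); // e.g. "c1", "Sensitive operations"
}
```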
## Code samples

- [Use the Conditional Access auth context to perform step-up authentication for high-privilege operations in a web app](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md)
- [Use the Conditional Access auth context to perform step-up authentication for high-privilege operations in a web API](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md)
+## Authentication context (ACRS) expected behavior in Conditional Access
+
+## Explicit auth context satisfaction in requests
+
+A client can explicitly ask for a token with an Auth Context (ACRS) through the claims in the request's body. If an ACRS was requested, Conditional Access will allow issuing the token with the requested ACRS if all challenges were completed.
+
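For example, with MSAL Node, a confidential client can send that claims request when redeeming an authorization code. This is a minimal sketch with placeholder values; `"c1"` stands in for whatever auth context ID your tenant publishes:

```typescript
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "<CLIENT_ID>",
    authority: "https://login.microsoftonline.com/<TENANT_ID>",
    clientSecret: "<CLIENT_SECRET>",
  },
});

const result = await cca.acquireTokenByCode({
  code: "<AUTH_CODE_FROM_REDIRECT>",
  redirectUri: "http://localhost:3000/redirect",
  scopes: ["User.Read"],
  // Explicitly ask for the "c1" auth context; the token is issued with that
  // ACRS only if its Conditional Access challenges were completed.
  claims: JSON.stringify({ access_token: { acrs: { essential: true, value: "c1" } } }),
});
```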
+## Expected behavior when an auth context isn't protected by Conditional Access in the tenant
+
+Conditional Access may issue an ACRS in a token's claims when all Conditional Access policies assigned to the ACRS value have been satisfied. If no Conditional Access policy is assigned to an ACRS value, the claim may still be issued, because there are no policy requirements left to satisfy.
+
+## Summary table for expected behavior when ACRS are explicitly requested
+
+| ACRS requested | Policy applied | Control satisfied | ACRS added to claims |
|--|--|--|--|
+|Yes | No | Yes | Yes |
+|Yes | Yes | No | No |
+|Yes | Yes | Yes | Yes |
+|Yes | No policies configured with ACRS | Yes | Yes |
+
+## Implicit auth context satisfaction by opportunistic evaluation
+
+A resource provider may opt in to the optional 'acrs' claim. Conditional Access will try to add ACRS to the token claims opportunistically to avoid round trips to Azure AD for acquiring new tokens. In that evaluation, Conditional Access checks whether the policies protecting auth context challenges are already satisfied, and adds the ACRS to the token claims if so.
+
+> [!NOTE]
+> Each token type will need to be individually opted-in (ID token, Access token).
+>
> If a resource provider doesn't opt in to the optional 'acrs' claim, the only way to get an ACRS in the token is to explicitly ask for it in a token request. It won't get the benefits of the opportunistic evaluation; therefore, every time the required ACRS is missing from the token claims, the resource provider will challenge the client to acquire a new token that contains it in the claims.
+
+## Expected behavior with auth context and session controls for implicit ACRS opportunistic evaluation
+
+### Sign-in frequency by interval
+
+Conditional Access will consider "sign-in frequency by interval" as satisfied for opportunistic ACRS evaluation when all the present authentication factors auth instants are within the sign-in frequency interval. In case that the first factor auth instant is stale, or if the second factor (MFA) is present and its auth instant is stale, the sign-in frequency by interval won't be satisfied and the ACRS won't be issued in the token opportunistically.
+
+### Cloud App Security (CAS)
+
+Conditional Access will consider the CAS session control as satisfied for opportunistic ACRS evaluation when a CAS session was established during the request. For example, if a request triggers any Conditional Access policy that enforces a CAS session, and another Conditional Access policy also requires a CAS session, the enforced CAS session satisfies that control for the opportunistic evaluation.
+
+## Expected behavior when a tenant contains Conditional Access policies protecting auth context
+
+The table below shows all corner cases where ACRS is added to the token's claims by opportunistic evaluation.
+
+**Policy A**: Require MFA from all users, excluding the user "Ariel", when asking for "c1" acrs.
+**Policy B**: Block all users, excluding user "Jay", when asking for "c2", or "c3" acrs.
+
+| Flow | ACRS requested | Policy applied | Control satisfied | ACRS added to claims |
+|--|--|--|--|--|
+| Ariel requests an access token | "c1" | None | Yes for "c1". No for "c2" and "c3" | "c1" (requested) |
+| Ariel requests an access token | "c2" | Policy B | Blocked by policy B | None |
+| Ariel requests an access token | None | None | Yes for "c1". No for "c2" and "c3" | "c1" (opportunistically added from policy A) |
+| Jay requests an access token (without MFA) | "c1" | Policy A | No | None |
+| Jay requests an access token (with MFA) | "c1" | Policy A | Yes | "c1" (requested), "c2" (opportunistically added from policy B), "c3" (opportunistically added from policy B)|
+| Jay requests an access token (without MFA) | "c2" | None | Yes for "c2" and "c3". No for "c1" | "c2" (requested), "c3" (opportunistically added from policy B) |
+| Jay requests an access token (with MFA) | "c2" | None | Yes for "c1", "c2" and "c3" | "c1" (best effort from A), "c2" (requested), "c3" (opportunistically added from policy B) |
+| Jay requests an access token (with MFA) | None | None | Yes for "c1", "c2" and "c3" | "c1", "c2", "c3" all opportunistically added |
+| Jay requests an access token (without MFA) | None | None | Yes for "c2" and "c3". No for "c1"| "c2", "c3" all opportunistically added |
+## Next steps

- [Granular Conditional Access for sensitive data and actions (Blog)](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/granular-conditional-access-for-sensitive-data-and-actions/ba-p/1751775)
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md
## Update app registration settings
-When working with ADAL Node, you were likely using the **Azure AD v1.0 endpoint**. Apps migrating from ADAL to MSAL should also consider switching to **Azure AD v2.0 endpoint**.
+When working with ADAL Node, you were likely using the **Azure AD v1.0 endpoint**. Apps migrating from ADAL to MSAL should switch to **Azure AD v2.0 endpoint**.
1. Review the [differences between v1 and v2 endpoints](../azuread-dev/azure-ad-endpoint-comparison.md)
1. Update, if necessary, your existing app registrations accordingly.
-> [!NOTE]
-> In order to ensure backward compatibility, MSAL Node supports both v1.0 end v2.0 endpoints.
-
## Install and import MSAL

1. Install the MSAL Node package via npm:
```javascript
authenticationContext.acquireTokenWithAuthorizationCode(
);
```
-MSAL Node supports both **v1.0** and **v2.0** endpoints. The v2.0 endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource:
+The v2.0 endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource:
```javascript
const tokenRequest = {
```
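A completed request object might look like the following sketch; the values, including the Microsoft Graph `User.Read` scope, are placeholders for illustration:

```typescript
// v2.0 is scope-centric: the resource is named inside the scope itself.
const tokenRequest = {
  code: "<AUTH_CODE_FROM_REDIRECT>",
  redirectUri: "http://localhost:3000/redirect",
  scopes: ["https://graph.microsoft.com/User.Read"], // resource + scope, v2.0 style
};
```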
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
The error message displayed will be: "Due to a configuration change made by your
When a request to add a verified publisher is made, many signals are used to make a security risk assessment. If the user risk state is determined to be 'AtRisk', an error, "You're unable to add a verified publisher to this application. Contact your administrator for assistance" will be returned. Please investigate the user risk and take the appropriate steps to remediate the risk (guidance below):
-> [Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk#risky-users)
+> [Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users)
-> [Remediate risk/unblock users](/azure/active-directory/identity-protection/howto-identity-protection-remediate-unblock)
+> [Remediate risk/unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
-> [Self-remediation guidance](/azure/active-directory/identity-protection/howto-identity-protection-remediate-unblock)
+> [Self-remediation guidance](../identity-protection/howto-identity-protection-remediate-unblock.md)
> Self-service password reset (SSPR): If the organization allows SSPR, use aka.ms/sspr to reset the password for remediation. Please choose a strong password; choosing a weak password may not reset the risk state.
If you've reviewed all of the previous information and are still receiving an er
- TenantId where app is registered
- MPN ID
- REST request being made
-- Error code and message being returned
+- Error code and message being returned
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md
Title: Microsoft identity platform overview
description: Learn about the components of the Microsoft identity platform and how they can help you build identity and access management (IAM) support into your applications.
Previously updated : 10/18/2022 Last updated : 11/16/2022

# Customer intent: As an application developer, I want a quick introduction to the Microsoft identity platform so I can decide if this platform meets my application development requirements.
Learn how core authentication and Azure AD concepts apply to the Microsoft ident
[Azure AD B2B](../external-identities/what-is-b2b.md) - Invite external users into your Azure AD tenant as "guest" users, and assign permissions for authorization while they use their existing credentials for authentication.
-[Azure Active Directory for developers (v1.0)](../azuread-dev/v1-overview.md) - Exclusively for developers with existing apps that use the older v1.0 endpoint. **Do not** use v1.0 for new projects.
- ## Next steps If you have an Azure account, then you have access to an Azure Active Directory tenant. However, most Microsoft identity platform developers need their own Azure AD tenant for use while developing applications, known as a *dev tenant*.
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
The value of `{tenant}` varies based on the application's sign-in audience as sh
| `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` | Only users from a specific Azure AD tenant (directory members with a work or school account or directory guests with a personal Microsoft account) can sign in to the application. <br/><br/>The value can be the domain name of the Azure AD tenant or the tenant ID in GUID format. You can also use the consumer tenant GUID, `9188040d-6c67-4c5b-b112-36a304b66dad`, in place of `consumers`. |

> [!TIP]
-> Note that when using the `common` or `consumers` authority for personal Microsoft accounts, the consuming resource application must be configured to support such type of accounts in accordance with [signInAudience](/azure/active-directory/develop/supported-accounts-validation).
+> Note that when using the `common` or `consumers` authority for personal Microsoft accounts, the consuming resource application must be configured to support such account types in accordance with [signInAudience](./supported-accounts-validation.md).
You can also find your app's OpenID configuration document URI in its app registration in the Azure portal.
When you redirect the user to the `end_session_endpoint`, the Microsoft identity
* Review the [UserInfo endpoint documentation](userinfo.md).
* [Populate claim values in a token](active-directory-claims-mapping.md) with data from on-premises systems.
-* [Include your own claims in tokens](active-directory-optional-claims.md).
+* [Include your own claims in tokens](active-directory-optional-claims.md).
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
az identity federated-credential delete --name $ficId --identity-name $uaId --re
::: zone pivot="identity-wif-mi-methods-powershell"

## Prerequisites

-- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](/azure/active-directory/managed-identities-azure-resources/overview). Be sure to review the [difference between a system-assigned and user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types).
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
- Get the information for your external IdP and software workload, which you need in the following steps.
-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
- To run the example scripts, you have two options:
  - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks.
  - Run scripts locally with Azure PowerShell, as described in the next section.
-- [Create a user-assigned manged identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2)
+- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2)
- Find the object ID of the user-assigned managed identity, which you need in the following steps.

### Configure Azure PowerShell locally
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
The following scenarios are supported for accessing Azure AD protected resources
Create a trust relationship between the external IdP and an app registration or user-assigned managed identity in Azure AD. The federated identity credential is used to indicate which token from the external IdP should be trusted by your application or managed identity. You configure a federated identity either:

-- On an Azure AD [App registration](/azure/active-directory/develop/quickstart-register-app) in the Azure portal or through Microsoft Graph. This configuration allows you to get an access token for your application without needing to manage secrets outside Azure. For more information, learn how to [configure an app to trust an external identity provider](workload-identity-federation-create-trust.md).
+- On an Azure AD [App registration](./quickstart-register-app.md) in the Azure portal or through Microsoft Graph. This configuration allows you to get an access token for your application without needing to manage secrets outside Azure. For more information, learn how to [configure an app to trust an external identity provider](workload-identity-federation-create-trust.md).
- On a user-assigned managed identity through the Azure portal, Azure CLI, Azure PowerShell, Azure SDK, and Azure Resource Manager (ARM) templates.

The external workload uses the access token to access Azure AD protected resources without needing to manage secrets (in supported scenarios). The [steps for configuring the trust relationship](workload-identity-federation-create-trust-user-assigned-managed-identity.md) differ depending on the scenario and external IdP. The workflow for exchanging an external token for an access token is, however, the same for all scenarios. The following diagram shows the general workflow of a workload exchanging an external token for an access token and then accessing Azure AD protected resources.
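That exchange is a standard client credentials request in which the external token is presented as a client assertion. A minimal sketch, assuming the external IdP has already issued a token and a federated identity credential trusts it; all IDs are placeholders:

```typescript
// Sketch: exchange an external IdP token for an Azure AD access token.
const externalToken = "<JWT_FROM_EXTERNAL_IDP>"; // placeholder

const body = new URLSearchParams({
  grant_type: "client_credentials",
  client_id: "<APP_CLIENT_ID>",
  scope: "https://graph.microsoft.com/.default",
  client_assertion_type: "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
  client_assertion: externalToken, // the JWT from the external IdP
});

const res = await fetch(
  "https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token",
  { method: "POST", body }
);
const { access_token } = await res.json(); // Azure AD token for the protected resource
```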
Learn more about how workload identity federation works:
- How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust.md) on an app registration.
- How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust-user-assigned-managed-identity.md) on a user-assigned managed identity.
- Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Clean Up Stale Guest Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md
As users collaborate with external partners, it's possible that many guest accounts get created in Azure Active Directory (Azure AD) tenants over time. When collaboration ends and the users no longer access your tenant, the guest accounts may become stale. Admins can use Access Reviews to automatically review inactive guest users and block them from signing in, and later, delete them from the directory.
-Learn more about [how to manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
+Learn more about [how to manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md).
There are a few recommended patterns that are effective at cleaning up stale guest accounts:

1. Create a multi-stage review whereby guests self-attest whether they still need access. A second-stage reviewer assesses results and makes a final decision. Guests with denied access are disabled and later deleted.
-2. Create a review to remove inactive external guests. Admins define inactive as period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts).
+2. Create a review to remove inactive external guests. Admins define inactive as a period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](../reports-monitoring/howto-manage-inactive-user-accounts.md#how-to-detect-inactive-user-accounts).
Use the following instructions to learn how to create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment.

## Create a multi-stage review for guests to self-attest continued access
-1. Create a [dynamic group](/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](./groups-create-rule.md) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an Access Review](/azure/active-directory/governance/create-access-review)
+2. To [create an Access Review](../governance/create-access-review.md)
for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
3. Select **New access review**.
Use the following instructions to learn how to create Access Reviews that follow
## Create a review to remove inactive external guests
-1. Create a [dynamic group](/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](./groups-create-rule.md) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an access review](/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
+2. To [create an access review](../governance/create-access-review.md) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
3. Select **New access review**.
Use the following instructions to learn how to create Access Reviews that follow
Guest users who don't sign in to the tenant for the number of days you configured are disabled for 30 days, then deleted. After deletion, you can restore guests for up to 30 days, after which a new invitation is
-needed.
+needed.
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
B2B collaboration is enabled by default, but comprehensive admin settings let yo
- Use [external collaboration settings](external-collaboration-settings-configure.md) to define who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.

-- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](/azure/azure-government) or [Microsoft Azure China 21Vianet](/azure/china).
+- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](../../azure-government/index.yml) or [Microsoft Azure China 21Vianet](/azure/china).
## Easily invite guest users from the Azure AD portal
You can [enable integration with SharePoint and OneDrive](/sharepoint/sharepoint
- [External Identities pricing](external-identities-pricing.md)
- [Add B2B collaboration guest users in the portal](add-users-administrator.md)
-- [Understand the invitation redemption process](redemption-experience.md)
+- [Understand the invitation redemption process](redemption-experience.md)
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
Azure Active Directory B2B (Azure AD B2B) allows external users to collaborate using their own identities. However, it isn't uncommon for organizations to issue local usernames and passwords to external users. This approach isn't recommended, because the bring-your-own-identity (BYOI) capabilities of Azure AD B2B provide better security, lower cost, and reduced complexity when compared to local account creation. Learn more
-[here.](/azure/active-directory/fundamentals/secure-external-access-resources)
+[here.](./secure-external-access-resources.md)
If your organization currently issues local credentials that external users have to manage and would like to migrate to using Azure AD B2B instead, this document provides a guide to make the transition as seamlessly as possible.
If your organization currently issues local credentials that external users have
Before migrating local accounts to Azure AD B2B, admins should understand what applications and workloads these external users need to access. For example, if external users need access to an application that is hosted on-premises, admins will need to validate that the application is integrated with Azure AD and that a provisioning process is implemented to provision the user from Azure AD to the application. The existence and use of on-premises applications could be a reason why local accounts are created in the first place. Learn more about [provisioning B2B guests to on-premises
-applications.](/azure/active-directory/external-identities/hybrid-cloud-to-on-premises)
+applications.](../external-identities/hybrid-cloud-to-on-premises.md)
All external-facing applications should have single-sign on (SSO) and provisioning integrated with Azure AD for the best end user experience.
External users should be notified that the migration will be taking place and wh
## Migrate local guest accounts to Azure AD B2B
-Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](/azure/active-directory/external-identities/invite-internal-users)
+Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](../external-identities/invite-internal-users.md)
This can be done in the UX or programmatically via PowerShell or the Microsoft Graph API. Once complete, the users will no longer authenticate with their local password, but will instead authenticate with their home identity/email that was populated in the user.mail attribute. You've successfully migrated to Azure AD B2B.
See the following articles on securing external access to resources. We recommen
1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
-1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here)
+1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here)
active-directory Active Directory Users Assign Role Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md
# Assign user roles with Azure Active Directory
-The ability to manage Azure resources is granted by assigning roles that provide the required permissions. Roles can be assigned to individual users or groups. To align with the [Zero Trust guiding principles](/azure/security/fundamentals/zero-trust), use Just-In-Time and Just-Enough-Access policies when assigning roles.
+The ability to manage Azure resources is granted by assigning roles that provide the required permissions. Roles can be assigned to individual users or groups. To align with the [Zero Trust guiding principles](../../security/fundamentals/zero-trust.md), use Just-In-Time and Just-Enough-Access policies when assigning roles.
Before assigning roles to users, review the following Microsoft Learn articles:
You can remove role assignments from the **Administrative roles** page for a sel
- [Add guest users from another directory](../external-identities/what-is-b2b.md)

-- [Explore other user management tasks](../enterprise-users/index.yml)
+- [Explore other user management tasks](../enterprise-users/index.yml)
active-directory Automate Provisioning To Applications Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-solutions.md
Use the numbered sections in the next two sections to cross reference the followi
As customers transition identity management to the cloud, more users and groups are created directly in Azure AD. However, they still need a presence on-premises in AD DS to access various resources.
-3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](/azure/active-directory/external-identities/hybrid-cloud-to-on-premises). Alternatively, customers can user [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
+3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](../external-identities/hybrid-cloud-to-on-premises.md). Alternatively, customers can use [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
1. When a group is created in Azure AD, it can be automatically synchronized to AD DS using [Azure AD Connect sync](../hybrid/how-to-connect-group-writeback-v2.md).
As customers transition identity management to the cloud, more users and groups
|No.| What | From | To | Technology |
| - | - | - | - | - |
-| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](/azure/active-directory/cloud-sync/what-is-cloud-sync) |
-| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](/azure/active-directory/hybrid/whatis-azure-ad-connect) |
+| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](../cloud-sync/what-is-cloud-sync.md) |
+| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](../hybrid/whatis-azure-ad-connect.md) |
| 3 |Groups| Azure AD| AD DS| [Azure AD Connect Sync](../hybrid/how-to-connect-group-writeback-v2.md) |
| 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario), [PowerShell](https://github.com/Azure-Samples/B2B-to-AD-Sync)|
| 5 |Users, groups| Azure AD| Managed AD| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) |
After users are provisioned into Azure AD, use Lifecycle Workflows (LCW) to auto
* **Leaver**: When users leave the company for various reasons (termination, separation, leave of absence, or retirement), their access is revoked in a timely manner.
-[Learn more about Azure AD Lifecycle Workflows](/azure/active-directory/governance/what-are-lifecycle-workflows)
+[Learn more about Azure AD Lifecycle Workflows](../governance/what-are-lifecycle-workflows.md)
> [!Note] > For scenarios not covered by LCW, customers can leverage the extensibility of [Logic Applications](../..//logic-apps/logic-apps-overview.md).
Organizations often need a complete audit trail of what users have access to app
1. Automate provisioning with any of your applications that are in the [Azure AD app gallery](../saas-apps/tutorial-list.md), or that support [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md), [SQL](../app-provisioning/on-premises-sql-connector-configure.md), or [LDAP](../app-provisioning/on-premises-ldap-connector-configure.md).
2. Evaluate [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md) for synchronization between AD DS and Azure AD
-3. Use the [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for complex provisioning scenarios
+3. Use the [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for complex provisioning scenarios
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
When a requirement exists to deploy IaaS workloads to Azure that require identit
![Diagram that shows Azure AD authentication to Azure VMs.](media/secure-with-azure-ad-resource-management/sign-into-vm.png)
-**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux).
+**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../devices/howto-vm-sign-in-azure-ad-linux.md).
**Credentials**: One of the key benefits of signing into virtual machines in Azure using Azure AD authentication is the ability to use the same federated or managed Azure AD credentials that you normally use for access to Azure AD services for sign-in to the virtual machine.
>[!NOTE]
>The Azure AD tenant that is used for sign-in in this scenario is the Azure AD tenant that is associated with the subscription that the virtual machine has been provisioned into. This Azure AD tenant can be one that has identities synchronized from on-premises AD DS. Organizations should make an informed choice that aligns with their isolation principles when choosing which subscription and Azure AD tenant they wish to use for sign-in to these servers.
-**Network Requirements**: These virtual machines will need to access Azure AD for authentication, so you must ensure that the virtual machines' network configuration permits outbound access to Azure AD endpoints on port 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux) for more information.
+**Network Requirements**: These virtual machines will need to access Azure AD for authentication, so you must ensure that the virtual machines' network configuration permits outbound access to Azure AD endpoints on port 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../devices/howto-vm-sign-in-azure-ad-linux.md) for more information.
**Role-based Access Control (RBAC)**: Two RBAC roles are available to provide the appropriate level of access to these virtual machines. These RBAC roles can be configured via the Azure AD Portal or via the Azure Cloud Shell Experience. For more information, see [Configure role assignments for the VM](../devices/howto-vm-sign-in-azure-ad-windows.md).
For this isolated model, it's assumed that there's no connectivity to the VNet t
* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Group writeback allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can use this feature to manage groups in the cloud, while controlling access to on-premises applications and resources.
> [!NOTE]
-> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
+> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](#understand-limitations-of-public-preview) before you enable this functionality.
There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities:
active-directory How To Upgrade Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-upgrade-previous-version.md
This method is preferred when you have a single server and less than about 100,0
![In-place upgrade](./media/how-to-upgrade-previous-version/inplaceupgrade.png)
-If you've made changes to the out-of-box synchronization rules, then these rules are set back to the default configuration on upgrade. To make sure that your configuration is kept between upgrades, make sure that you make changes as they're described in [Best practices for changing the default configuration](how-to-connect-sync-best-practices-changing-default-configuration.md). If you already changed the default sync rules, please see how to [Fix modified default rules in Azure AD Connect](/azure/active-directory/hybrid/how-to-connect-sync-best-practices-changing-default-configuration), before starting the upgrade process.
+If you've made changes to the out-of-box synchronization rules, then these rules are set back to the default configuration on upgrade. To make sure that your configuration is kept between upgrades, make changes as described in [Best practices for changing the default configuration](how-to-connect-sync-best-practices-changing-default-configuration.md). If you already changed the default sync rules, please see how to [Fix modified default rules in Azure AD Connect](./how-to-connect-sync-best-practices-changing-default-configuration.md) before starting the upgrade process.
During in-place upgrade, there may be changes introduced that require specific synchronization activities (including Full Import step and Full Synchronization step) to be executed after upgrade completes. To defer such activities, refer to section [How to defer full synchronization after upgrade](#how-to-defer-full-synchronization-after-upgrade).
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t
### Functional changes
+ - We added a new attribute 'employeeLeaveDateTime' for syncing to Azure AD. To learn more about how to use this attribute to manage your users' life cycles, please refer to [this article](../governance/how-to-lifecycle-workflow-sync-attributes.md).
### Bug fixes
You can use these cmdlets to retrieve the TLS 1.2 enablement status or set it as
- We added the following new user properties to sync from on-premises Active Directory to Azure AD:
  - employeeType
  - employeeHireDate
+ >[!NOTE]
+ > There's no corresponding EmployeeHireDate or EmployeeLeaveDateTime attribute in Active Directory. If you're importing from on-premises AD, you'll need to identify an attribute in AD that can be used. This attribute must be a string. For more information, see [Synchronizing lifecycle workflow attributes](../governance/how-to-lifecycle-workflow-sync-attributes.md).
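As a hedged illustration of where the mapped value lands, the Microsoft Graph `user` resource exposes `employeeHireDate` directly; the token and user identifier below are placeholders, and writing the property assumes an appropriately consented Graph permission such as `User.ReadWrite.All`:

```python
import requests

TOKEN = "<Graph access token - placeholder>"
USER = "b.simon@contoso.com"  # object ID or userPrincipalName

# Set employeeHireDate on the user object (ISO 8601 UTC timestamp).
resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/users/{USER}",
    json={"employeeHireDate": "2022-12-01T08:00:00Z"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()  # returns 204 No Content on success
```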
+
- This release requires PowerShell version 5.0 or newer to be installed on the Windows server. This version is part of Windows Server 2016 and newer.
- We increased the group sync membership limits to 250,000 with the new V2 endpoint.
- We updated the Generic LDAP Connector and the Generic SQL Connector to the latest versions. To learn more about these connectors, see the reference documentation for:
This is a bug fix release. There are no functional changes in this release.
## Next steps
-Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory Concept Identity Protection Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-policies.md
Previously updated : 10/04/2022 Last updated : 11/11/2022
If risks are detected on a sign-in, users can perform the required access contro
Identity Protection analyzes signals about user accounts and calculates a risk score based on the probability that the user has been compromised. If a user has risky sign-in behavior, or their credentials have been leaked, Identity Protection will use these signals to calculate the user risk level. Administrators can configure user risk-based Conditional Access policies to enforce access controls based on user risk, including requirements such as:
- Block access
-- Allow access but require a secure password change using [Azure AD self-service password reset](../authentication/howto-sspr-deployment.md).
+- Allow access but require a secure password change.
A secure password change will remediate the user risk and close the risky user event to prevent unnecessary noise for administrators.
-> [!NOTE]
-> Users must have previously registered for self-service password reset before triggering the user risk policy.
-
## Identity Protection policies
While Identity Protection also offers a user interface for creating user risk policy and sign-in risk policy, we highly recommend that you [use Azure AD Conditional Access to create risk-based policies](howto-identity-protection-configure-risk-policies.md) for the following benefits:
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Previously updated : 08/16/2022 Last updated : 11/10/2022
Premium detections are visible only to Azure AD Premium P2 customers. Customers
| Risk detection | Detection type | Description |
| --- | --- | --- |
| Possible attempt to access Primary Refresh Token (PRT) | Offline | This risk detection type is detected by Microsoft Defender for Endpoint (MDE). A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10, Windows Server 2016, and later versions, iOS, and Android devices. A PRT is a JSON Web Token (JWT) that's specially issued to Microsoft first-party token brokers to enable single sign-on (SSO) across the applications used on those devices. Attackers can attempt to access this resource to move laterally into an organization or perform credential theft. This detection will move users to high risk and will only fire in organizations that have deployed MDE. This detection is low-volume and will be seen infrequently by most organizations. However, when it does occur it's high risk and users should be remediated. |
-| Anomalous user activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated user. The post-authentication behavior of users is assessed for anomalies. This behavior is based on actions occurring for the account, along with any sign-in risk detected. |
+| Anomalous user activity | Offline | This risk detection baselines normal administrative user behavior in Azure AD, and spots anomalous patterns of behavior like suspicious changes to the directory. The detection is triggered against the administrator making the change or the object that was changed. |
+
#### Nonpremium user risk detections
active-directory Concept Identity Protection User Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-user-experience.md
Previously updated : 01/21/2022 Last updated : 11/11/2022
When an administrator has configured a policy for sign-in risks, affected users
### Risky sign-in self-remediation
-1. The user is informed that something unusual was detected about their sign-in. This could be something like signing in from a new location, device, or app.
+1. The user is informed that something unusual was detected about their sign-in. This behavior could be something like signing in from a new location, device, or app.
![Something unusual prompt](./media/concept-identity-protection-user-experience/120.png)
If your organization has users who are delegated access to another tenant and th
1. An organization has a managed service provider (MSP) or cloud solution provider (CSP) who takes care of configuring their cloud environment.
1. The credentials of one of the MSP's technicians are leaked, which triggers high risk. That technician is blocked from signing in to other tenants.
1. The technician can self-remediate and sign in if the home tenant has enabled the appropriate policies [requiring password change for high risk users](../conditional-access/howto-conditional-access-policy-risk-user.md) or [MFA for risky users](../conditional-access/howto-conditional-access-policy-risk.md).
- 1. If the home tenant hasn't enabled self-remediation policies, an administrator in the technician's home tenant will have to [remediate the risk](howto-identity-protection-remediate-unblock.md#remediation).
+ 1. If the home tenant hasn't enabled self-remediation policies, an administrator in the technician's home tenant will have to [remediate the risk](howto-identity-protection-remediate-unblock.md#risk-remediation).
## See also
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
Previously updated : 02/07/2022 Last updated : 11/10/2022
We detect risk on workload identities across sign-in behavior and offline indica
| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the account's risk history (via UI or API). |
| Leaked Credentials | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks in the credentials in public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
| Malicious application | Offline | This detection indicates that Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
-| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that may be violating our terms of service, but has not disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
-| Anomalous service principal activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated service principal. The post-authentication behavior of service principals is assessed for anomalies. This behavior is based on actions occurring for the account, along with any sign-in risk detected. |
+| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that may be violating our terms of service, but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
+| Anomalous service principal activity | Offline | This risk detection baselines normal administrative service principal behavior in Azure AD, and spots anomalous patterns of behavior like suspicious changes to the directory. The detection is triggered against the administrative service principal making the change or the object that was changed. |
## Identify risky workload identities
active-directory Howto Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md
Previously updated : 01/24/2022 Last updated : 11/11/2022
Administrators can then choose to return to the user's risk or sign-ins report t
Organizations may use the following frameworks to begin their investigation into any suspicious activity. Investigations may require having a conversation with the user in question, review of the [sign-in logs](../reports-monitoring/concept-sign-ins.md), or review of the [audit logs](../reports-monitoring/concept-audit-logs.md) to name a few.
1. Check the logs and validate whether the suspicious activity is normal for the given user (a sketch for pulling these logs follows this list).
- 1. Look at the user's past activities including at least the following properties to see if they are normal for the given user.
+ 1. Look at the user's past activities including at least the following properties to see if they're normal for the given user.
1. Application
1. Device - Is the device registered or compliant?
1. Location - Is the user traveling to a different location or accessing devices from multiple locations?
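As a minimal sketch of pulling those sign-in properties from Microsoft Graph (the token and user principal name are placeholders, and a token with the `AuditLog.Read.All` permission is assumed):

```python
import requests

TOKEN = "<Graph access token - placeholder>"
UPN = "b.simon@contoso.com"

# Fetch recent sign-ins for one user to compare app, device, and location.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    params={"$filter": f"userPrincipalName eq '{UPN}'", "$top": "50"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for s in resp.json()["value"]:
    print(s["createdDateTime"], s["appDisplayName"],
          s["location"].get("city"), s["status"]["errorCode"])
```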
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Previously updated : 02/17/2022 Last updated : 11/11/2022
# Remediate risks and unblock users
-After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risk or unblock users. Organizations can enable automated remediation using their [risk policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to close all risk detections that they're presented in a time period your organization is comfortable with. Microsoft recommends closing events quickly, because time matters when working with risk.
+After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risky users or unblock them. Organizations can enable automated remediation by setting up [risk-based policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to investigate and remediate all risky users in a time period that your organization is comfortable with. Microsoft recommends acting quickly, because time matters when working with risks.
-## Remediation
+## Risk remediation
-All active risk detections contribute to the calculation of a value called user risk level. The user risk level is an indicator (low, medium, high) for the probability that an account has been compromised. As an administrator, you want to get all risk detections closed, so that the affected users are no longer at risk.
+All active risk detections contribute to the calculation of the user's risk level. The user risk level is an indicator (low, medium, high) of the probability that the user's account has been compromised. As an administrator, after thorough investigation on the risky users and the corresponding risky sign-ins and detections, you want to remediate the risky users so that they're no longer at risk and won't be blocked.
-Some risks detections may be marked by Identity Protection as "Closed (system)" because the events were no longer determined to be risky.
+Some risk detections and the corresponding risky sign-ins may be marked by Identity Protection as dismissed with risk state "Dismissed" and risk detail "Azure AD Identity Protection assessed sign-in safe" because those events were no longer determined to be risky.
Administrators have the following options to remediate:
-
-- Self-remediation with risk policy
+- Set up [risk-based policies](howto-identity-protection-configure-risk-policies.md) to allow users to self-remediate their risks
- Manual password reset
- Dismiss user risk
-- Close individual risk detections manually
-### Remediation framework
+### Self-remediation with risk-based policy
-1. If the account is confirmed compromised:
- 1. Select the event or user in the **Risky sign-ins** or **Risky users** reports and choose "Confirm compromised".
- 1. If a risk policy or a Conditional Access policy wasn't triggered at part of the risk detection, and the risk wasn't [self-remediated](#self-remediation-with-risk-policy), then:
- 1. [Request a password reset](#manual-password-reset).
- 1. Block the user if you suspect the attacker can reset the password or do multi-factor authentication for the user.
- 1. Revoke refresh tokens.
- 1. [Disable any devices](../devices/device-management-azure-portal.md) considered compromised.
- 1. If using [continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md), revoke all access tokens.
+You can allow users to self-remediate their sign-in risks and user risks by setting up [risk-based policies](howto-identity-protection-configure-risk-policies.md). If users pass the required access control, such as Azure AD multifactor authentication (MFA) or secure password change, then their risks are automatically remediated. The corresponding risk detections, risky sign-ins, and risky users will be reported with the risk state "Remediated" instead of "At risk".
-For more information about what happens when confirming compromise, see the section [How should I give risk feedback and what happens under the hood?](howto-identity-protection-risk-feedback.md#how-should-i-give-risk-feedback-and-what-happens-under-the-hood).
+Here are the prerequisites on users before risk-based policies can be applied to them to allow self-remediation of risks:
+- To perform MFA to self-remediate a sign-in risk:
+ - The user must have registered for Azure AD MFA.
+- To perform secure password change to self-remediate a user risk:
+ - The user must have registered for Azure AD MFA.
+ - For hybrid users that are synced from on-premises to cloud, password writeback must have been enabled on them.
+
+If a risk-based policy is applied to a user during sign-in before the above prerequisites are met, then the user will be blocked because they aren't able to perform the required access control, and admin intervention will be required to unblock the user.
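For illustration, a risk-based policy like the ones described above can also be created through the Microsoft Graph conditional access API. This is a hedged sketch, not the only supported shape: the token is a placeholder, the `Policy.ReadWrite.ConditionalAccess` permission is assumed, and the policy is deliberately created in report-only mode.

```python
import requests

TOKEN = "<Graph access token - placeholder>"

# Report-only policy: require MFA when sign-in risk is medium or high.
policy = {
    "displayName": "Require MFA for medium+ sign-in risk (sketch)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "signInRiskLevels": ["medium", "high"],
        "clientAppTypes": ["all"],
        "applications": {"includeApplications": ["All"]},
        "users": {"includeUsers": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    json=policy,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created policy
```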
-### Self-remediation with risk policy
+Risk-based policies are configured based on risk levels and will only apply if the risk level of the sign-in or user matches the configured level. Some detections may not raise risk to the level where the policy will apply, and administrators will need to handle those risky users manually. Administrators may determine that extra measures are necessary like [blocking access from locations](../conditional-access/howto-conditional-access-policy-location.md) or lowering the acceptable risk in their policies.
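To find the risky users that fall below your policy thresholds, Identity Protection data can be queried from Microsoft Graph. A minimal sketch, assuming the `IdentityRiskyUser.Read.All` permission (the token is a placeholder):

```python
import requests

TOKEN = "<Graph access token - placeholder>"

# List users whose risk state is still "at risk" for manual review.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers",
    params={"$filter": "riskState eq 'atRisk'"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for u in resp.json()["value"]:
    print(u["userPrincipalName"], u["riskLevel"], u["riskLastUpdatedDateTime"])
```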
-If you allow users to self-remediate, with Azure AD multifactor authentication (MFA) and self-service password reset (SSPR) in your risk policies, they can unblock themselves when risk is detected. These detections are then considered closed. Users must have previously registered for Azure AD MFA and SSPR for use when risk is detected.
+### Self-remediation with self-service password reset
-Some detections may not raise risk to the level where a user self-remediation would be required but administrators should still evaluate these detections. Administrators may determine that extra measures are necessary like [blocking access from locations](../conditional-access/howto-conditional-access-policy-location.md) or lowering the acceptable risk in their policies.
+If a user has registered for self-service password reset (SSPR), then they can also remediate their own user risk by performing a self-service password reset.
### Manual password reset
-If requiring a password reset using a user risk policy isn't an option, administrators can close all risk detections for a user with a manual password reset.
+If requiring a password reset using a user risk policy isn't an option, administrators can remediate a risky user by requiring a password reset.
Administrators are given two options when resetting a password for their users:
### Dismiss user risk
-If a password reset isn't an option for you, you can choose to dismiss user risk detections.
+If, after investigation, you confirm that the user account isn't at risk of being compromised, then you can choose to dismiss the risky user.
-When you select **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method doesn't have an impact on the existing password, it doesn't bring the related identity back into a safe state.
+To **Dismiss user risk**, search for and select **Azure AD Risky users** in the Azure portal or the Entra portal, select the affected user, and select **Dismiss user(s) risk**.
-To **Dismiss user risk**, search for and select **Azure AD Risky users**, select the affected user, and select **Dismiss user(s) risk**.
+When you select **Dismiss user risk**, the user will no longer be at risk, and all the risky sign-ins of this user and corresponding risk detections will be dismissed as well.
-### Close individual risk detections manually
+Because this method doesn't have an impact on the user's existing password, it doesn't bring their identity back into a safe state.
-You can close individual risk detections manually. By closing risk detections manually, you can lower the user risk level. Typically, risk detections are closed manually in response to a related investigation. For example, when talking to a user reveals that an active risk detection isn't required anymore.
-
-When closing risk detections manually, you can choose to take any of the following actions to change the status of a risk detection:
+#### Risk state and detail based on dismissal of risk
-- Confirm user compromised
-- Dismiss user risk
-- Confirm sign-in safe
-- Confirm sign-in compromised
+- Risky user:
+ - Risk state: "At risk" -> "Dismissed"
+ - Risk detail (the risk remediation detail): "-" -> "Admin dismissed all risk for user"
+- All the risky sign-ins of this user and the corresponding risk detections:
+ - Risk state: "At risk" -> "Dismissed"
+ - Risk detail (the risk remediation detail): "-" -> "Admin dismissed all risk for user"
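The same dismissal, with the state transitions listed above, can be scripted against the Graph `riskyUsers: dismiss` action. A hedged sketch assuming the `IdentityRiskyUser.ReadWrite.All` permission (the token and object ID are placeholders):

```python
import requests

TOKEN = "<Graph access token - placeholder>"

# Dismiss risk for one or more risky users by Azure AD object ID.
resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss",
    json={"userIds": ["<user-object-id>"]},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()  # returns 204 No Content on success
```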
-#### Deleted users
+### Confirm a user to be compromised
+
+If, after investigation, an account is confirmed compromised:
+ 1. Select the event or user in the **Risky sign-ins** or **Risky users** reports and choose "Confirm compromised".
+ 2. If a risk-based policy wasn't triggered, and the risk wasn't [self-remediated](#self-remediation-with-risk-based-policy), then do one or more of the following:
+ 1. [Request a password reset](#manual-password-reset).
+ 1. Block the user if you suspect the attacker can reset the password or do multi-factor authentication for the user.
+ 1. Revoke refresh tokens.
+ 1. [Disable any devices](../devices/device-management-azure-portal.md) that are considered compromised.
+ 1. If using [continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md), revoke all access tokens.
+
+For more information about what happens when confirming compromise, see the section [How should I give risk feedback and what happens under the hood?](howto-identity-protection-risk-feedback.md#how-should-i-give-risk-feedback-and-what-happens-under-the-hood).
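Confirming compromise and revoking refresh tokens can likewise be driven from Microsoft Graph, using the `confirmCompromised` action and the per-user `revokeSignInSessions` action. A hedged sketch with placeholder values, assuming the `IdentityRiskyUser.ReadWrite.All` and `User.ReadWrite.All` permissions:

```python
import requests

TOKEN = "<Graph access token - placeholder>"
USER_ID = "<user-object-id>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Mark the account as confirmed compromised (Identity Protection
# raises the user's risk level to high).
requests.post(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/confirmCompromised",
    json={"userIds": [USER_ID]},
    headers=HEADERS,
).raise_for_status()

# Revoke refresh tokens so existing sessions must reauthenticate.
requests.post(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}/revokeSignInSessions",
    headers=HEADERS,
).raise_for_status()
```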
+
+### Deleted users
It isn't possible for administrators to dismiss risk for users who have been deleted from the directory. To remove deleted users, open a Microsoft support case.
An administrator may choose to block a sign-in based on their risk policy or inv
To unblock an account blocked because of user risk, administrators have the following options:
-1. **Reset password** - You can reset the user's password.
-1. **Dismiss user risk** - The user risk policy blocks a user if the configured user risk level for blocking access has been reached. You can reduce a user's risk level by dismissing user risk or manually closing reported risk detections.
-1. **Exclude the user from policy** - If you think that the current configuration of your sign-in policy is causing issues for specific users, you can exclude the users from it. For more information, see the section Exclusions in the article [How To: Configure and enable risk policies](howto-identity-protection-configure-risk-policies.md#exclusions).
+1. **Reset password** - You can reset the user's password. If a user has been compromised or is at risk of being compromised, the user's password should be reset to protect their account and your organization.
+1. **Dismiss user risk** - The user risk policy blocks a user if the configured user risk level for blocking access has been reached. If after investigation you're confident that the user isn't at risk of being compromised, and it's safe to allow their access, then you can reduce a user's risk level by dismissing their user risk.
+1. **Exclude the user from policy** - If you think that the current configuration of your sign-in policy is causing issues for specific users, and it's safe to grant access to these users without applying this policy to them, then you can exclude them from this policy. For more information, see the section Exclusions in the article [How To: Configure and enable risk policies](howto-identity-protection-configure-risk-policies.md#exclusions).
1. **Disable policy** - If you think that your policy configuration is causing issues for all your users, you can disable the policy. For more information, see the article [How To: Configure and enable risk policies](howto-identity-protection-configure-risk-policies.md).
### Unblocking based on sign-in risk
active-directory Datawiza Azure Ad Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md
The scenario solution has the following components:
- **Oracle PeopleSoft application**: Legacy application going to be protected by Azure AD and DAB.
-Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
+Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](./datawiza-with-azure-ad.md#datawiza-with-azure-ad-authentication-architecture).
## Prerequisites
Ensure the following prerequisites are met.
- An Azure AD tenant linked to the Azure subscription.
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
+ - See [Quickstart: Create a new tenant in Azure Active Directory](../fundamentals/active-directory-access-create-new-tenant.md).
- Docker and Docker Compose
Ensure the following prerequisites are met.
- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.
- - See, [Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis).
+ - See [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md).
- An account with Azure AD and the Application administrator role
- - See, [Azure AD built-in roles, all roles](/azure/active-directory/roles/permissions-reference#all-roles).
+ - See [Azure AD built-in roles, all roles](../roles/permissions-reference.md#all-roles).
- An Oracle PeopleSoft environment
For the Oracle PeopleSoft application to recognize the user correctly, there's a
## Enable Azure AD Multi-Factor Authentication
To provide an extra level of security for sign-ins, enforce multi-factor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure
-portal](/azure/active-directory/authentication/tutorial-enable-azure-mfa).
+portal](../authentication/tutorial-enable-azure-mfa.md).
1. Sign in to the Azure portal as a **Global Administrator**.
To confirm Oracle PeopleSoft application access occurs correctly, a prompt appea
- [Watch the video - Enable SSO/MFA for Oracle PeopleSoft with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90).
-- [Configure Datawiza and Azure AD for secure hybrid access](/azure/active-directory/manage-apps/datawiza-with-azure-ad)
+- [Configure Datawiza and Azure AD for secure hybrid access](./datawiza-with-azure-ad.md)
-- [Configure Datawiza with Azure AD B2C](/azure/active-directory-b2c/partner-datawiza)
+- [Configure Datawiza with Azure AD B2C](../../active-directory-b2c/partner-datawiza.md)
-- [Datawiza documentation](https://docs.datawiza.com/)
+- [Datawiza documentation](https://docs.datawiza.com/)
active-directory How Managed Identities Work Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md
The following table shows the differences between the system-assigned and user-a
2. Azure Resource Manager creates a service principal in Azure AD for the identity of the VM. The service principal is created in the Azure AD tenant that's trusted by the subscription.
-3. Azure Resource Manager updates the VM identity using the Azure Instance Metadata Service identity endpoint (for [Windows](/azure/virtual-machines/windows/instance-metadata-service) and [Linux](/azure/virtual-machines/linux/instance-metadata-service)), providing the endpoint with the service principal client ID and certificate.
+3. Azure Resource Manager updates the VM identity using the Azure Instance Metadata Service identity endpoint (for [Windows](../../virtual-machines/windows/instance-metadata-service.md) and [Linux](../../virtual-machines/linux/instance-metadata-service.md)), providing the endpoint with the service principal client ID and certificate.
4. After the VM has an identity, use the service principal information to grant the VM access to Azure resources. To call Azure Resource Manager, use Azure Role-Based Access Control (Azure RBAC) to assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the specific secret or key in Key Vault.
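For illustration, code on the VM acquires these tokens from the Azure Instance Metadata Service without any stored credentials. A minimal sketch targeting Azure Resource Manager (IMDS is a link-local endpoint reachable only from inside the VM, so no secrets appear in the code):

```python
import requests

# IMDS is link-local and only reachable from the VM itself;
# the Metadata header is mandatory.
resp = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={
        "api-version": "2018-02-01",
        "resource": "https://management.azure.com/",
    },
    headers={"Metadata": "true"},
)
resp.raise_for_status()
token = resp.json()["access_token"]  # use as a Bearer token against ARM
```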
The following table shows the differences between the system-assigned and user-a
Get started with the managed identities for Azure resources feature with the following quickstarts: * [Use a Windows VM system-assigned managed identity to access Resource Manager](tutorial-windows-vm-access-arm.md)
-* [Use a Linux VM system-assigned managed identity to access Resource Manager](tutorial-linux-vm-access-arm.md)
+* [Use a Linux VM system-assigned managed identity to access Resource Manager](tutorial-linux-vm-access-arm.md)
active-directory Adstream Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adstream-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Adstream
+description: Learn how to configure single sign-on between Azure Active Directory and Adstream.
++++++++ Last updated : 11/16/2022++++
+# Azure Active Directory SSO integration with Adstream
+
+In this article, you'll learn how to integrate Adstream with Azure Active Directory (Azure AD). Adstream provides the safest and easiest-to-use business solution for sending and receiving files. When you integrate Adstream with Azure AD, you can:
+
+* Control in Azure AD who has access to Adstream.
+* Enable your users to be automatically signed-in to Adstream with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Adstream in a test environment. Adstream supports **SP** initiated single sign-on.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Adstream, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Adstream single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Adstream application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Adstream from the Azure AD gallery
+
+Add Adstream from the Azure AD application gallery to configure single sign-on with Adstream. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Adstream** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Reply URL** textbox, type the URL:
+ `https://msft.adstream.com/saml/assert`
+
+ b. In the **Sign on URL** textbox, type the URL:
+ `https://msft.adstream.com`
+
+ c. In the **Relay State** textbox, type the URL:
+ `https://a5.adstream.com/projects#/projects/projects`
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Adstream** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Adstream SSO
+
+To configure single sign-on on the **Adstream** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to the [Adstream support team](mailto:support@adstream.com). They configure this setting to have the SAML SSO connection set properly on both sides.
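As a hedged convenience sketch (the metadata URL below is a placeholder pattern, not a value from this article), the federation metadata you hand to the support team can be fetched and its signing certificate extracted programmatically:

```python
import requests
import xml.etree.ElementTree as ET

# Placeholder: the App Federation Metadata URL copied from the SAML pane.
METADATA_URL = (
    "https://login.microsoftonline.com/<tenant-id>/federationmetadata/"
    "2007-06/federationmetadata.xml?appid=<app-id>"
)

resp = requests.get(METADATA_URL)
resp.raise_for_status()
root = ET.fromstring(resp.text)

# Pull the base64 signing certificate out of the metadata document.
ns = {"ds": "http://www.w3.org/2000/09/xmldsig#"}
cert = root.find(".//ds:X509Certificate", ns).text.strip()
print(cert[:60], "...")
```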
+
+### Create Adstream test user
+
+In this section, you create a user called Britta Simon in Adstream. Work with the [Adstream support team](mailto:support@adstream.com) to add the users in the Adstream platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Adstream Sign-on URL where you can initiate the login flow.
+
+* Go to Adstream Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Adstream tile in the My Apps, this will redirect to Adstream Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Adstream you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Boomi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/boomi-tutorial.md
Previously updated : 02/25/2021 Last updated : 11/14/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to your Boomi company site as an administrator.
-1. Navigate to **Company Name** and go to **Set up**.
-
-1. Click the **SSO Options** tab and perform below steps.
+1. Go to **Settings**, click **SSO Options** in the security options, and perform the below steps.
![Configure Single Sign-On On App Side](./media/boomi-tutorial/import.png)
- a. Check **Enable SAML Single Sign-On** checkbox.
+ a. Select **Enabled** in **Enable SAML Single Sign-On**.
b. Click **Import** to upload the downloaded certificate from Azure AD to **Identity Provider Certificate**.
- c. In the **Identity Provider Login URL** textbox, put the value of **Login URL** from Azure AD application configuration window.
+ c. In the **Identity Provider Sign In URL** textbox, paste the value of **Login URL** from Azure AD application configuration window.
d. For **Federation Id Location**, select the **Federation Id is in FEDERATION_ID Attribute element** radio button.
- e. Copy the **AtomSphere MetaData URL**, go to the **MetaData URL** via the browser of your choice, and save the output to a file. Upload the **MetaData URL** in the **Basic SAML Configuration** section in the Azure portal.
+ e. For **SAML Authentication Context**, select the **Password Protected Transport** radio button.
+
+ f. Copy the **AtomSphere Sign In URL**, paste this value into the **Sign on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ g. Copy the **AtomSphere MetaData URL**, go to the **MetaData URL** via the browser of your choice, and save the output to a file. Upload the **MetaData URL** in the **Basic SAML Configuration** section in the Azure portal.
- f. Click **Save** button.
+ h. Click the **Save** button.
### Create Boomi test user
In order to enable Azure AD users to sign in to Boomi, they must be provisioned
1. Sign in to your Boomi company site as an administrator.
-1. After logging in, navigate to **User Management** and go to **Users**.
-
- ![Screenshot shows the User Management page with Users selected.](./media/boomi-tutorial/user.png "Users")
+1. After logging in, navigate to **User Management** -> **Users**.
1. Click the **+** icon and the **Add/Maintain User Roles** dialog opens.

   ![Screenshot shows the + icon selected.](./media/boomi-tutorial/add.png "Users")
- ![Screenshot shows the Add / Maintain User Roles where you configure a user.](./media/boomi-tutorial/roles.png "Users")
-
a. In the **User e-mail address** textbox, type the email of the user, like B.Simon@contoso.com.

b. In the **First name** textbox, type the first name of the user, like B.
active-directory Dx Netops Portal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dx-netops-portal-tutorial.md
+
+ Title: Azure Active Directory SSO integration with DX NetOps Portal
+description: Learn how to configure single sign-on between Azure Active Directory and DX NetOps Portal.
++++++++ Last updated : 11/07/2022++++
+# Azure Active Directory SSO integration with DX NetOps Portal
+
+In this article, you'll learn how to integrate DX NetOps Portal with Azure Active Directory (Azure AD). DX NetOps Portal provides network observability, topology with fault correlation and root-cause analysis at telecom carrier level scale, over traditional and software defined networks, internal and external. When you integrate DX NetOps Portal with Azure AD, you can:
+
+* Control in Azure AD who has access to DX NetOps Portal.
+* Enable your users to be automatically signed-in to DX NetOps Portal with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for DX NetOps Portal in a test environment. DX NetOps Portal supports **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with DX NetOps Portal, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* DX NetOps Portal single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the DX NetOps Portal application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add DX NetOps Portal from the Azure AD gallery
+
+Add DX NetOps Portal from the Azure AD application gallery to configure single sign-on with DX NetOps Portal. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **DX NetOps Portal** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `<DX NetOps Portal hostname>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<DX NetOps Portal FQDN>:<SSO port>/sso/saml2/UserAssertionService`
+
+ c. In the **Relay State** textbox, type a URL using the following pattern:
+ `SsoProductCode=pc&SsoRedirectUrl=https://<DX NetOps Portal FQDN>:<https port>/pc/desktop/page`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Relay State URL. Contact [DX NetOps Portal Client support team](https://support.broadcom.com/web/ecx/contact-support) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your DX NetOps Portal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but DX NetOps Portal expects this to be mapped with the user's email address. For that you can use the **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Attributes")
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up DX NetOps Portal** section, copy the appropriate URL(s) as per your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure DX NetOps Portal SSO
+
+To configure single sign-on on **DX NetOps Portal** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [DX NetOps Portal support team](https://support.broadcom.com/web/ecx/contact-support). The support team will use the copied URLs to configure the single sign-on on the application.
+
+### Create DX NetOps Portal test user
+
+To be able to test and use single sign-on, you have to create and activate users in the DX NetOps Portal application.
+
+In this section, you create a user called Britta Simon in DX NetOps Portal that corresponds with the Azure AD user you already created in the previous section. Work with [DX NetOps Portal support team](https://support.broadcom.com/web/ecx/contact-support) to add the user in the DX NetOps Portal platform.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the DX NetOps Portal for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the DX NetOps Portal tile in the My Apps, you should be automatically signed in to the DX NetOps Portal for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure DX NetOps Portal you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Factset Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/factset-tutorial.md
Previously updated : 10/10/2022 Last updated : 11/15/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure FactSet SSO
-To configure single sign-on on **FactSet** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to FactSet account support representatives or to [FactSet Support Team](https://www.factset.com/contact-us). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the FactSet side, you need to visit FactSet's [Control Center](https://controlcenter.factset.com) and configure the **Federation Metadata XML** and appropriate copied URLs from the Azure portal under **Control Center's Security > Single Sign-On (SSO)** page. If you require access to this page, contact the [FactSet Support Team](https://www.factset.com/contact-us) and request FactSet product 8514 (Control Center - Source IPs, Security + Authentication).
### Create FactSet test user
-In this section, you create a user called Britta Simon in FactSet. Work with your FactSet account support representatives or contact [FactSet Support Team](https://www.factset.com/contact-us) to add the users in the FactSet platform. Users must be created and activated before you use single sign-on.
+Work with your FactSet account support representatives or contact [FactSet Support Team](https://www.factset.com/contact-us) to add the users in the FactSet platform. Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with following option.
-* Click on Test this application in Azure portal and you should be automatically signed in to the FactSet for which you set up the SSO.
-
-* You can use Microsoft My Apps. When you click the FactSet tile in the My Apps, you should be automatically signed in to the FactSet for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* FactSet only supports SP-initiated SAML. You may test SSO by visiting any authenticated FactSet URL such as [Issue Tracker](https://issuetracker.factset.com) or [FactSet-Web](https://my.factset.com), clicking **Single Sign-On (SSO)** on the logon portal, and supplying your email address on the subsequent page. See the supplied [documentation](https://download.factset.com/documents/web/FactSet_Single_Sign-On.pdf) for additional information and usage.
## Next steps
active-directory Icertisicm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/icertisicm-tutorial.md
- Title: 'Tutorial: Azure Active Directory integration with Icertis Contract Management Platform | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Icertis Contract Management Platform.
-------- Previously updated : 04/22/2021--
-# Tutorial: Azure Active Directory integration with Icertis Contract Management Platform
-
-In this tutorial, you'll learn how to integrate Icertis Contract Management Platform with Azure Active Directory (Azure AD). When you integrate Icertis Contract Management Platform with Azure AD, you can:
-
-* Control in Azure AD who has access to Icertis Contract Management Platform.
-* Enable your users to be automatically signed-in to Icertis Contract Management Platform with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Icertis Contract Management Platform single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-
-* Icertis Contract Management Platform supports **SP** initiated SSO.
-
-## Add Icertis Contract Management Platform from the gallery
-
-To configure the integration of Icertis Contract Management Platform into Azure AD, you need to add Icertis Contract Management Platform from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Icertis Contract Management Platform** in the search box.
-1. Select **Icertis Contract Management Platform** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-
-## Configure and test Azure AD SSO for Icertis Contract Management Platform
-
-Configure and test Azure AD SSO with Icertis Contract Management Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Icertis Contract Management Platform.
-
-To configure and test Azure AD SSO with Icertis Contract Management Platform, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Icertis Contract Management Platform SSO](#configure-icertis-contract-management-platform-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Icertis Contract Management Platform test user](#create-icertis-contract-management-platform-test-user)** - to have a counterpart of B.Simon in Icertis Contract Management Platform that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **Icertis Contract Management Platform** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<company name>.icertis.com`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<company name>.icertis.com`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Icertis Contract Management Platform Client support team](https://www.icertis.com/company/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-6. On the **Set up Icertis Contract Management Platform** section, copy the appropriate URL(s) as per your requirement. For **Login URL**, use the value with the following pattern: `https://login.microsoftonline.com/_my_directory_id_/wsfed`
-
- > [!Note]
 - > _my_directory_id_ is the tenant ID of the Azure AD subscription.
-
- ![Copy configuration URLs](media/icertisicm-tutorial/configurls.png)
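 If you need to look up your tenant ID for the **Login URL**, one quick way (a sketch assuming the Azure CLI is installed and you're signed in) is:

 ```azurecli
 # Print the tenant (directory) ID of the signed-in account
 az account show --query tenantId --output tsv
 ```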
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Icertis Contract Management Platform.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Icertis Contract Management Platform**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Icertis Contract Management Platform SSO
-
-To configure single sign-on on the **Icertis Contract Management Platform** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Icertis Contract Management Platform support team](https://www.icertis.com/company/contact/). They configure this setting so that the SAML SSO connection is set properly on both sides.
-
-### Create Icertis Contract Management Platform test user
-
-In this section, you create a user called Britta Simon in Icertis Contract Management Platform. Work with the [Icertis Contract Management Platform support team](https://www.icertis.com/company/contact/) to add the users in the Icertis Contract Management Platform. Users must be created and activated before you use single sign-on.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with the following options.
-
-* Click on **Test this application** in Azure portal. This will redirect to Icertis Contract Management Platform Sign-on URL where you can initiate the login flow.
-
-* Go to Icertis Contract Management Platform Sign-on URL directly and initiate the login flow from there.
-
-* You can use Microsoft My Apps. When you click the Icertis Contract Management Platform tile in the My Apps, this will redirect to Icertis Contract Management Platform Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Next steps
-
-Once you configure Icertis Contract Management Platform, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory New Relic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/new-relic-tutorial.md
Previously updated : 02/02/2021 Last updated : 11/14/2022
In this tutorial, you'll learn how to integrate New Relic by Account with Azure
* Enable your users to be automatically signed-in to New Relic by Account with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
+> [!NOTE]
+> This document is only relevant if you're using the [Original User Model](https://docs.newrelic.com/docs/accounts/original-accounts-billing/original-users-roles/overview-user-models/) in New Relic. Please refer to [New Relic (By Organization)](new-relic-limited-release-tutorial.md) if you're using New Relic's newer user model.
+ ## Prerequisites To get started, you need the following items:
active-directory Tableauserver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableauserver-tutorial.md
Previously updated : 01/25/2021 Last updated : 11/14/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
> [!NOTE] > Customers have to upload a PEM-encoded x509 certificate file with a .crt extension and an RSA or DSA private key file with a .key extension as the certificate key file. For more information on the certificate file and certificate key file, refer to [this](https://help.tableau.com/current/server/en-us/saml_requ.htm) document. If you need help configuring SAML on Tableau Server, refer to the article [Configure Server Wide SAML](https://help.tableau.com/current/server/en-us/config_saml.htm).
+ > [!NOTE]
+ > The SAML certificate and SAML key files are generated separately and uploaded to Tableau Services Manager (TSM). For example, in a Linux shell, use openssl to generate the cert and key like so: `openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out saml.crt`, then upload the `saml.crt` and `private.key` files via the TSM Configuration GUI (as shown in the screenshot at the start of this step) or via the [command line according to the Tableau docs](https://help.tableau.com/current/server-linux/en-us/config_saml.htm). If you are in a production environment, you may want to find a more secure way to handle SAML certs and keys.
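 As a rough command-line sketch based on the Tableau Server documentation (the entity ID and return URL below are placeholders, and flag names may vary by TSM version):

 ```bash
 # Register the IdP metadata plus the separately generated cert and key with TSM
 tsm authentication saml configure \
   --idp-entity-id https://tableau.example.com \
   --idp-return-url https://tableau.example.com \
   --idp-metadata /path/to/federation-metadata.xml \
   --cert-file saml.crt \
   --key-file private.key

 # Enable SAML and apply the pending configuration
 tsm authentication saml enable
 tsm pending-changes apply
 ```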
+ ### Create Tableau Server test user The objective of this section is to create a user called B.Simon in Tableau Server. You need to provision all the users in the Tableau server.
active-directory Timetabling Solutions Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timetabling-solutions-tutorial.md
Previously updated : 06/04/2022 Last updated : 11/16/2022
In this section, you create a user called Britta Simon in the Timetabling Soluti
> [!NOTE]
-> Work with [Timetabling Solutions support team](https://www.timetabling.com.au/contact-us/) to add the users in the Timetabling Solutions platform. Users must be created and activated before you use single sign-on.
+> To add users in the Timetabling Solutions platform, work with the Timetabling Solutions support team. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Tranxfer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tranxfer-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Tranxfer
+description: Learn how to configure single sign-on between Azure Active Directory and Tranxfer.
++++++++ Last updated : 11/07/2022++++
+# Azure Active Directory SSO integration with Tranxfer
+
+In this article, you'll learn how to integrate Tranxfer with Azure Active Directory (Azure AD). Tranxfer provides the safest and easiest to use business solution for sending and receiving files. When you integrate Tranxfer with Azure AD, you can:
+
+* Control in Azure AD who has access to Tranxfer.
+* Enable your users to be automatically signed-in to Tranxfer with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Tranxfer in a test environment. Tranxfer supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Tranxfer, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Tranxfer single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Tranxfer application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Tranxfer from the Azure AD gallery
+
+Add Tranxfer from the Azure AD application gallery to configure single sign-on with Tranxfer. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Tranxfer** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.tranxfer.com`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.tranxfer.com/SAMLResponse`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.tranxfer.com/saml/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tranxfer Client support team](mailto:soporte@tranxfer.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Tranxfer application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Tranxfer application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | groups | user.groups [All] |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
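 If you want to sanity-check the metadata URL before sending it on, you can fetch it from a shell; it should return federation metadata XML. The URL below is a hypothetical placeholder for the value you copied:

 ```bash
 curl -s "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"
 ```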
+
+## Configure Tranxfer SSO
+
+To configure single sign-on on the **Tranxfer** side, you need to send the **App Federation Metadata Url** to the [Tranxfer support team](mailto:soporte@tranxfer.com). The support team uses it to configure single sign-on on the application.
+
+### Create Tranxfer test user
+
+In this section, a user called B.Simon is created in Tranxfer. Tranxfer supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Tranxfer, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Tranxfer Sign-on URL where you can initiate the login flow.
+
+* Go to Tranxfer Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Tranxfer tile in the My Apps, this will redirect to Tranxfer Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Tranxfer, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
Before you can continue with the steps below you need to meet the following requ
## Scenario description
-When onboarding users you can remove the need for error prone manual onboarding steps by using Verified ID with A10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](./plan-verification-solution.md#account-onboarding).
User flow is specific to your application or website. However if you are using o
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
You can use Entra Verified ID with LexisNexis Risk Solutions to enable faster on
## Scenario description
-Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](./plan-verification-solution.md#account-onboarding).
:::image type="content" source="media/verified-id-partner-au10tix/vc-solution-architecture-diagram.png" alt-text="Diagram of the verifiable credential solution.":::
User flow is specific to your application or website. However if you are using [
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
VU Identity Card works as a link between users who need to access an application
Verifiable credentials can be used to enable faster and easier user onboarding by replacing some human interactions. For example, a user or employee who wants to create or remotely access an account can use a Verified ID through VU Identity Card to verify their identity without using vulnerable or overly complex passwords or the requirement to be on-site.
-Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Learn more about [account onboarding](./plan-verification-solution.md#account-onboarding).
In this account onboarding scenario, Vu plays the Trusted ID proofing issuer role.
User flow is specific to your application or website. However if you are using o
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Now that you have a new credential, you're going to gather some information abou
1. Copy your **Tenant ID**, and record it for later. The Tenant ID is the guid in the manifest URL highlighted in red above.
- >[!NOTE]
- > When setting up access policies for Azure Key Vault, you must add the access policies for both **Verifiable Credentials Service Request** and **Verifiable Credentials Service**.
- ## Download the sample code The sample application is available in .NET, and the code is maintained in a GitHub repository. Download the sample code from [GitHub](https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet), or clone the repository to your local machine:
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
To add the required permissions, follow these steps:
1. Select **APIs my organization uses**.
-1. Search for the **Verifiable Credentials Service Request** and **Verifiable Credentials Service** service principals, and select them.
+1. Search for the **Verifiable Credentials Service Request** service principal and select it.
:::image type="content" source="media/verifiable-credentials-configure-tenant/add-app-api-permissions-select-service-principal.png" alt-text="Screenshot that shows how to select the service principal.":::
You can choose to grant issuance and presentation permissions separately if you
1. Navigate to the Verified ID service in the Azure portal. 1. Select **Registration**. 1. Notice that there are two sections:
- 1. Website ID registration
- 1. Domain verification.
+ 1. DID registration
+ 1. Domain ownership verification.
1. Select each section and download the JSON file under each. 1. Create a website that you can use to distribute the files. If you specified **https://contoso.com** as your domain, the URLs for each of the files would look as shown below: - `https://contoso.com/.well-known/did.json` - `https://contoso.com/.well-known/did-configuration.json` Once you have successfully completed the verification steps, you are ready to continue to the next tutorial.
+If you selected ION as the trust system, you won't see the DID registration section because it isn't applicable to ION; you only need to distribute the did-configuration.json file.
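Once the files are published, a quick reachability check from a shell can confirm they're being served from the expected well-known paths (a sketch using the example domain from above):

```bash
# Both requests should return JSON over HTTPS
curl -s https://contoso.com/.well-known/did.json
curl -s https://contoso.com/.well-known/did-configuration.json
```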
## Next steps
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
You can get these recommendations on the **Operational Excellence** tab of the A
1. On the **Advisor** dashboard, select the **Operational Excellence** tab.
-## Spring Cloud
+## Azure Spring Apps
### Update your outdated Azure Spring Apps SDK to the latest version We have identified API calls from an outdated Azure Spring Apps SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-Learn more about [Spring Cloud Service - SpringCloudUpgradeOutdatedSDK (Update your outdated Azure Spring Apps SDK to the latest version)](../spring-apps/index.yml).
+Learn more about the [Azure Spring Apps service](../spring-apps/index.yml).
### Update Azure Spring Apps API Version
-We have identified API calls from outdated Azure Spring Apps API for resources under this subscription. We recommend switching to the latest Spring Cloud API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements.
+We have identified API calls from an outdated Azure Spring Apps API for resources under this subscription. We recommend switching to the latest Azure Spring Apps API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements.
-Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azure Spring Apps API Version)](../spring-apps/index.yml).
+Learn more about the [Azure Spring Apps service](../spring-apps/index.yml).
## Automation
Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number
### Add Azure Monitor to your virtual machine (VM) labeled as production
-Azure Monitor for VMs monitors your Azure virtual machines (VM) and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider.
+Azure Monitor for VMs monitors your Azure virtual machines (VM) and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider.
Learn more about [Virtual machine - AddMonitorProdVM (Add Azure Monitor to your virtual machine (VM) labeled as production)](/azure/azure-monitor/insights/vminsights-overview).
aks Aks Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-diagnostics.md
+
+ Title: Azure Kubernetes Service (AKS) Diagnostics Overview
+description: Learn about self-diagnosing clusters in Azure Kubernetes Service.
++ Last updated : 11/15/2022++
+# Azure Kubernetes Service Diagnostics (preview) overview
+
+Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics (preview) is an intelligent, self-diagnostic experience that:
+
+* Helps you identify and resolve problems in your cluster.
+* Is cloud-native.
+* Requires no extra configuration or billing cost.
++
+## Open AKS Diagnostics
+
+To access AKS Diagnostics:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. From **All services** in the Azure portal, select **Kubernetes Service**.
+1. Select **Diagnose and solve problems** in the left navigation, which opens AKS Diagnostics.
+1. Choose a category that best describes the issue with your cluster, like _Cluster Node Issues_, by:
+
+ * Using the keywords in the homepage tile.
+ * Typing a keyword that best describes your issue in the search bar.
+
+![Homepage](./media/concepts-diagnostics/aks-diagnostics-homepage.png)
+
+## View a diagnostic report
+
+After you click on a category, you can view a diagnostic report specific to your cluster. Diagnostic reports intelligently call out any issues in your cluster with status icons. You can drill down on each topic by clicking **More Info** to see a detailed description of:
+
+* Issues
+* Recommended actions
+* Links to helpful docs
+* Related metrics
+* Logging data
+
+Diagnostic reports are generated based on the current state of your cluster after running various checks. They can be useful for pinpointing the problem with your cluster and understanding the next steps to resolve the issue.
+
+![Diagnostic Report](./media/concepts-diagnostics/diagnostic-report.png)
+
+![Expanded Diagnostic Report](./media/concepts-diagnostics/node-issues.png)
+
+## Cluster insights
+
+The following diagnostic checks are available in **Cluster Insights**.
+
+### Cluster Node Issues
+
+Cluster Node Issues checks for node-related issues that cause your cluster to behave unexpectedly. Specifically:
+
+- Node readiness issues
+- Node failures
+- Insufficient resources
+- Node missing IP configuration
+- Node CNI failures
+- Node not found
+- Node power off
+- Node authentication failure
+- Node kube-proxy stale
+
+### Create, read, update & delete (CRUD) operations
+
+CRUD Operations checks for any CRUD operations that cause issues in your cluster. Specifically:
+
+- In-use subnet delete operation error
+- Network security group delete operation error
+- In-use route table delete operation error
+- Referenced resource provisioning error
+- Public IP address delete operation error
+- Deployment failure due to deployment quota
+- Operation error due to organization policy
+- Missing subscription registration
+- VM extension provisioning error
+- Subnet capacity
+- Quota exceeded error
+
+### Identity and security management
+
+Identity and Security Management detects authentication and authorization errors that prevent communication with your cluster. Specifically:
+
+- Node authorization failures
+- 401 errors
+- 403 errors
+
+## Next steps
+
+* Collect logs to help you further troubleshoot your cluster issues by using [AKS Periscope](https://aka.ms/aksperiscope).
+
+* Read the [triage practices section](/azure/architecture/operator-guides/aks/aks-triage-practices) of the AKS day-2 operations guide.
+
+* Post your questions or feedback at [UserVoice](https://feedback.azure.com/d365community/forum/aabe212a-f724-ec11-b6e6-000d3a4f0da0) by adding "[Diag]" in the title.
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Azure Kubernetes Service (AKS)
-description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage (preview) in an Azure Kubernetes Service (AKS) cluster.
+description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster.
Previously updated : 08/10/2022 Last updated : 11/16/2022
-# Use Azure Blob storage Container Storage Interface (CSI) driver (preview)
+# Use Azure Blob storage Container Storage Interface (CSI) driver
-The Azure Blob storage Container Storage Interface (CSI) driver (preview) is a [CSI specification][csi-specification]-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Blob storage. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes.
+The Azure Blob storage Container Storage Interface (CSI) driver is a [CSI specification][csi-specification]-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Blob storage. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes.
By adopting and using CSI, AKS now can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes. Using CSI drivers in AKS avoids having to touch the core Kubernetes code and wait for its release cycles.
Mounting Azure Blob storage as a file system into a container or pod, enables yo
* Images, documents, and streaming video or audio * Disaster recovery data
-The data on the object storage can be accessed by applications using BlobFuse or Network File System (NFS) 3.0 protocol. Before the introduction of the Azure Blob storage CSI driver (preview), the only option was to manually install an unsupported driver to access Blob storage from your application running on AKS. When the Azure Blob storage CSI driver (preview) is enabled on AKS, there are two built-in storage classes: *azureblob-fuse-premium* and *azureblob-nfs-premium*.
+The data on the object storage can be accessed by applications using BlobFuse or Network File System (NFS) 3.0 protocol. Before the introduction of the Azure Blob storage CSI driver, the only option was to manually install an unsupported driver to access Blob storage from your application running on AKS. When the Azure Blob storage CSI driver is enabled on AKS, there are two built-in storage classes: *azureblob-fuse-premium* and *azureblob-nfs-premium*.
+
+> [!NOTE]
+> The Azure Blob CSI driver only supports the NFS 3.0 protocol on Kubernetes version 1.25 (preview) on AKS.
To create an AKS cluster with CSI drivers support, see [CSI drivers on AKS][csi-drivers-aks]. To learn more about the differences in access between each of the Azure storage types using the NFS protocol, see [Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS][compare-access-with-nfs].
-## Azure Blob storage CSI driver (preview) features
+## Azure Blob storage CSI driver features
-Azure Blob storage CSI driver (preview) supports the following features:
+Azure Blob storage CSI driver supports the following features:
- BlobFuse and Network File System (NFS) version 3.0 protocol ## Before you begin -- The Azure CLI version 2.37.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].--- Install the aks-preview Azure CLI extension version 0.5.85 or later.--- If the open-source CSI Blob storage driver is installed on your cluster, uninstall it before enabling the preview driver.
+Review the prerequisites listed in the [CSI storage drivers overview][csi-storage-driver-overview] article to verify the requirements before using this feature.
### Uninstall open-source driver Perform the steps in this [link][csi-blob-storage-open-source-driver-uninstall-steps] if you previously installed the [CSI Blob Storage open-source driver][csi-blob-storage-open-source-driver] to access Azure Blob storage from your cluster.
-## Install the Azure CLI aks-preview extension
-
-The following steps are required to install and register the Azure CLI aks-preview extension and driver in your subscription.
-
-1. To use the Azure CLI aks-preview extension for enabling the Blob storage CSI driver (preview) on your AKS cluster, run the following command to install it:
-
- ```azurecli
- az extension add --name aks-preview
- ```
-
-2. Run the following command to register the CSI driver (preview):
-
- ```azurecli
- az feature register --name EnableBlobCSIDriver --namespace Microsoft.ContainerService
- ```
-
-3. To register the provider, run the following command:
-
- ```azurecli
- az provider register -n Microsoft.ContainerService
- ```
-
-When newer versions of the extension are released, run the following command to upgrade the extension to the latest release:
-
-```azurecli
-az extension update --name aks-preview
-```
-
-## Enable CSI driver on a new or existing AKS cluster
-
-Using the Azure CLI, you can enable the Blob storage CSI driver (preview) on a new or existing AKS cluster before you configure a persistent volume for use by pods in the cluster.
-
-To enable the driver on a new cluster, include the `--enable-blob-driver` parameter with the `az aks create` command as shown in the following example:
-
-```azurecli
-az aks create --enable-blob-driver -n myAKSCluster -g myResourceGroup
-```
-
-To enable the driver on an existing cluster, include the `--enable-blob-driver` parameter with the `az aks update` command as shown in the following example:
-
-```azurecli
-az aks update --enable-blob-driver -n myAKSCluster -g myResourceGroup
-```
-
-You're prompted to confirm there isn't an open-source Blob CSI driver installed. After confirming, it may take several minutes to complete this action. Once it's complete, you should see in the output the status of enabling the driver on your cluster. The following example resembles the section indicating the results of the previous command:
-
-```output
-"storageProfile": {
- "blobCsiDriver": {
- "enabled": true
- },
-```
-
-## Disable CSI driver on an existing AKS cluster
-
-Using the Azure CLI, you can disable the Blob storage CSI driver on an existing AKS cluster after you remove the persistent volume from the cluster.
-
-To disable the driver on an existing cluster, include the `--disable-blob-driver` parameter with the `az aks update` command as shown in the following example:
-
-```azurecli
-az aks update --disable-blob-driver -n myAKSCluster -g myResourceGroup
-```
- ## Use a persistent volume with Azure Blob storage A [persistent volume][persistent-volume] (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Blob storage to connect by using the Network File System (NFS) or blobfuse. This article shows you how to dynamically create an Azure Blob storage container for use by multiple pods in an AKS cluster.
A storage class is used to define how an Azure Blob storage container is created
* **Standard_GRS**: Standard geo-redundant storage * **Standard_RAGRS**: Standard read-access geo-redundant storage
-When you use storage CSI drivers on AKS, there are two additional built-in StorageClasses that use the Azure Blob CSI storage driver (preview).
+When you use storage CSI drivers on AKS, there are two additional built-in StorageClasses that use the Azure Blob CSI storage driver.
The reclaim policy on both storage classes ensures that the underlying Azure Blob storage is deleted when the respective PV is deleted. The storage classes also configure the container to be expandable by default, as the `set allowVolumeExpansion` parameter is set to **true**.
To have a storage volume persist for your workload, you can use a StatefulSet. T
- To learn how to manually set up a static persistent volume, see [Create and use a volume with Azure Blob storage][azure-csi-blob-storage-static]. - To learn how to dynamically set up a persistent volume, see [Create and use a dynamic persistent volume with Azure Blob storage][azure-csi-blob-storage-dynamic].-- To learn how to use CSI driver for Azure Disks, see [Use Azure Disks with CSI driver](azure-disk-csi.md).-- To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver](azure-files-csi.md).
+- To learn how to use CSI driver for Azure Disks, see [Use Azure Disks with CSI driver][azure-disk-csi-driver]
+- To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver][azure-files-csi-driver]
- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. <!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
[csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md [csi-blob-storage-open-source-driver]: https://github.com/kubernetes-sigs/blob-csi-driver [csi-blob-storage-open-source-driver-uninstall-steps]: https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/install-csi-driver-master.md#clean-up-blob-csi-driver <!-- LINKS - internal -->
-[install-azure-cli]: /cli/azure/install-azure-cli
-[azure-disk-volume]: azure-disk-volume.md
-[azure-files-pvc]: azure-files-dynamic-pv.md
-[premium-storage]: ../virtual-machines/disks-types.md
[compare-access-with-nfs]: ../storage/common/nfs-comparison.md
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
[operator-best-practices-storage]: operator-best-practices-storage.md [concepts-storage]: concepts-storage.md [persistent-volume]: concepts-storage.md#persistent-volumes [csi-drivers-aks]: csi-storage-drivers.md
-[storage-class-concepts]: concepts-storage.md#storage-classes
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
-[storage-skus]: ../storage/common/storage-redundancy.md
-[use-tags]: use-tags.md
-[az-tags]: ../azure-resource-manager/management/tag-resources.md
[azure-csi-blob-storage-dynamic]: azure-csi-blob-storage-dynamic.md [azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md
+[csi-storage-driver-overview]: csi-storage-drivers.md
+[azure-disk-csi-driver]: azure-disk-csi.md
+[azure-files-csi-driver]: azure-files-csi.md
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Scaling your workload based on relevant business metrics such as HTTP requests,
Removing state from your design reduces the in-memory or on-disk data required by the workload to function.
-* Consider [stateless design](/azure/aks/operator-best-practices-multi-region#remove-service-state-from-inside-containers) to reduce unnecessary network load, data processing, and compute resources.
+* Consider [stateless design](./operator-best-practices-multi-region.md#remove-service-state-from-inside-containers) to reduce unnecessary network load, data processing, and compute resources.
## Application platform
Explore this section to learn how to make better informed platform-related decis
An up-to-date cluster avoids unnecessary performance issues and ensures you benefit from the latest performance improvements and compute optimizations.
-* Enable [cluster auto-upgrade](/azure/aks/auto-upgrade-cluster) and [apply security updates to nodes automatically using GitHub Actions](/azure/aks/node-upgrade-github-actions), to ensure your cluster has the latest improvements.
+* Enable [cluster auto-upgrade](./auto-upgrade-cluster.md) and [apply security updates to nodes automatically using GitHub Actions](./node-upgrade-github-actions.md), to ensure your cluster has the latest improvements.
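As a minimal sketch, enabling an auto-upgrade channel with the Azure CLI (cluster and resource group names are placeholders) could look like:

```azurecli
# Opt the cluster into the stable auto-upgrade channel
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
```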
### Install supported add-ons and extensions
-Add-ons and extensions covered by the [AKS support policy](/azure/aks/support-policies) provide additional and supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
+Add-ons and extensions covered by the [AKS support policy](./support-policies.md) provide additional and supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
-* Ensure you install [KEDA](/azure/aks/integrations#available-add-ons) as an add-on and [GitOps & Dapr](/azure/aks/cluster-extensions?tabs=azure-cli#currently-available-extensions) as extensions.
+* Ensure you install [KEDA](./integrations.md#available-add-ons) as an add-on and [GitOps & Dapr](./cluster-extensions.md?tabs=azure-cli#currently-available-extensions) as extensions.
### Containerize your workload where applicable Containers allow for reducing unnecessary resource allocation and making better use of the resources deployed as they allow for bin packing and require less compute resources than virtual machines.
-* Use [Draft](/azure/aks/draft) to simplify application containerization by generating Dockerfiles and Kubernetes manifests.
+* Use [Draft](./draft.md) to simplify application containerization by generating Dockerfiles and Kubernetes manifests.
### Use energy efficient hardware
Ampere's Cloud Native Processors are uniquely designed to meet both the high per
An oversized cluster does not maximize utilization of compute resources and can lead to a waste of energy. Separate your applications into different node pools to allow for cluster right sizing and independent scaling according to the application requirements. As you run out of capacity in your AKS cluster, grow from AKS to ACI to scale out additional pods to serverless nodes and ensure your workload uses all the allocated resources efficiently.
-* Size your cluster to match the scalability needs of your application and [use cluster autoscaler](/azure/aks/cluster-autoscaler) in combination with [virtual nodes](/azure/aks/virtual-nodes) to rapidly scale and maximize compute resource utilization. Additionally, [enforce resource quotas](/azure/aks/operator-best-practices-scheduler#enforce-resource-quotas) at the namespace level and [scale user node pools to 0](/azure/aks/scale-cluster?tabs=azure-cli#scale-user-node-pools-to-0) when there is no demand.
+* Size your cluster to match the scalability needs of your application and [use cluster autoscaler](./cluster-autoscaler.md) in combination with [virtual nodes](./virtual-nodes.md) to rapidly scale and maximize compute resource utilization. Additionally, [enforce resource quotas](./operator-best-practices-scheduler.md#enforce-resource-quotas) at the namespace level and [scale user node pools to 0](./scale-cluster.md?tabs=azure-cli#scale-user-node-pools-to-0) when there is no demand.
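For example, a sketch of scaling a user node pool down to zero with the Azure CLI (names are placeholders):

```azurecli
# Scale a user node pool to 0 nodes when there is no demand
az aks nodepool scale --resource-group myResourceGroup --cluster-name myAKSCluster --name userpool --node-count 0
```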
### Turn off workloads and node pools outside of business hours Workloads may not need to run continuously and could be turned off to reduce energy waste, hence carbon emissions. You can completely turn off (stop) your node pools in your AKS cluster, allowing you to also save on compute costs.
-* Use the [node pool stop / start](/azure/aks/start-stop-nodepools) to turn off your node pools outside of business hours, and [KEDA CRON scaler](https://keda.sh/docs/2.7/scalers/cron/) to scale down your workloads (pods) based on time.
+* Use the [node pool stop / start](./start-stop-nodepools.md) to turn off your node pools outside of business hours, and [KEDA CRON scaler](https://keda.sh/docs/2.7/scalers/cron/) to scale down your workloads (pods) based on time.
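A sketch of stopping a node pool outside business hours and starting it again with the Azure CLI (placeholder names):

```azurecli
# Stop the user node pool (its nodes are deallocated)
az aks nodepool stop --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name userpool

# Start it again when demand returns
az aks nodepool start --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name userpool
```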
## Operational procedures
Explore this section to set up your environment for measuring and continuously i
Unused resources such as unreferenced images and storage resources should be identified and deleted as they have a direct impact on hardware and energy efficiency. Identifying and deleting unused resources must be treated as a process, rather than a point-in-time activity to ensure continuous energy optimization.
-* Use [Azure Advisor](/azure/advisor/advisor-cost-recommendations) to identify unused resources and [ImageCleaner](/azure/aks/image-cleaner?tabs=azure-cli) to clean up stale images and remove an area of risk in your cluster.
+* Use [Azure Advisor](../advisor/advisor-cost-recommendations.md) to identify unused resources and [ImageCleaner](./image-cleaner.md?tabs=azure-cli) to clean up stale images and remove an area of risk in your cluster.
### Tag your resources Getting the right information and insights at the right time is important for producing reports about performance and resource utilization.
-* Set [Azure tags on your cluster](/azure/aks/use-tags) to enable monitoring of your workloads.
+* Set [Azure tags on your cluster](./use-tags.md) to enable monitoring of your workloads.
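For example, a sketch of tagging an existing cluster with the Azure CLI (the tag names are illustrative):

```azurecli
# Apply tags to the managed cluster for monitoring and cost reporting
az aks update --resource-group myResourceGroup --name myAKSCluster --tags team=dev costcenter=3333
```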
## Storage
Explore this section to learn how to design a more sustainable data storage arch
The data retrieval and data storage operations can have a significant impact on both energy and hardware efficiency. Designing solutions with the correct data access pattern can reduce energy consumption and embodied carbon.
-* Understand the needs of your application to [choose the appropriate storage](/azure/aks/operator-best-practices-storage#choose-the-appropriate-storage-type) and define it using [storage classes](/azure/aks/operator-best-practices-storage#create-and-use-storage-classes-to-define-application-needs) to avoid storage underutilization. Additionally, consider [provisioning volumes dynamically](/azure/aks/operator-best-practices-storage#dynamically-provision-volumes) to automatically scale the number of storage resources.
+* Understand the needs of your application to [choose the appropriate storage](./operator-best-practices-storage.md#choose-the-appropriate-storage-type) and define it using [storage classes](./operator-best-practices-storage.md#create-and-use-storage-classes-to-define-application-needs) to avoid storage underutilization. Additionally, consider [provisioning volumes dynamically](./operator-best-practices-storage.md#dynamically-provision-volumes) to automatically scale the number of storage resources.
## Network and connectivity
The distance from a data center to the users has a significant impact on energy
Placing nodes in a single region or a single availability zone reduces the physical distance between the instances. However, for business critical workloads, you need to ensure your cluster is spread across multiple availability-zones, which may result in more network traversal and increase in your carbon footprint.
-* Consider deploying your nodes within a [proximity placement group](/azure/virtual-machines/co-location) to reduce the network traversal by ensuring your compute resources are physically located close to each other. For critical workloads configure [proximity placement groups with availability zones](/azure/aks/reduce-latency-ppg#configure-proximity-placement-groups-with-availability-zones).
+* Consider deploying your nodes within a [proximity placement group](../virtual-machines/co-location.md) to reduce the network traversal by ensuring your compute resources are physically located close to each other. For critical workloads configure [proximity placement groups with availability zones](./reduce-latency-ppg.md#configure-proximity-placement-groups-with-availability-zones).
### Evaluate using a service mesh A service mesh deploys additional containers for communication, typically in a [sidecar pattern](/azure/architecture/patterns/sidecar), to provide more operational capabilities leading to an increase in CPU usage and network traffic. Nevertheless, it allows you to decouple your application from these capabilities as it moves them out from the application layer, and down to the infrastructure layer.
-* Carefully consider the increase in CPU usage and network traffic generated by [service mesh](/azure/aks/servicemesh-about) communication components before making the decision to use one.
+* Carefully consider the increase in CPU usage and network traffic generated by [service mesh](./servicemesh-about.md) communication components before making the decision to use one.
### Optimize log collection Sending and storing all logs from all possible sources (workloads, services, diagnostics and platform activity) can considerably increase storage and network traffic, which would impact higher costs and carbon emissions.
-* Make sure you are collecting and retaining only the log data necessary to support your requirements. [Configure data collection rules for your AKS workloads](/azure/azure-monitor/containers/container-insights-agent-config#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
+* Make sure you are collecting and retaining only the log data necessary to support your requirements. [Configure data collection rules for your AKS workloads](../azure-monitor/containers/container-insights-agent-config.md#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
### Cache static data Using Content Delivery Network (CDN) is a sustainable approach to optimizing network traffic because it reduces the data movement across a network. It minimizes latency through storing frequently read static data closer to users, and helps reduce network traffic and server load.
-* Ensure you [follow best practices](/azure/architecture/best-practices/cdn) for CDN and consider using [Azure CDN](/azure/cdn/cdn-how-caching-works?toc=%2Fazure%2Ffrontdoor%2FTOC.json) to lower the consumed bandwidth and keep costs down.
+* Ensure you [follow best practices](/azure/architecture/best-practices/cdn) for CDN and consider using [Azure CDN](../cdn/cdn-how-caching-works.md?toc=%2fazure%2ffrontdoor%2fTOC.json) to lower the consumed bandwidth and keep costs down.
## Security
Explore this section to learn more about the recommendations leading to a sustai
Transport Layer Security (TLS) ensures that all data passed between the web server and web browsers remain private and encrypted. However, terminating and re-establishing TLS increases CPU utilization and might be unnecessary in certain architectures. A balanced level of security can offer a more sustainable and energy efficient workload, while a higher level of security may increase the compute resource requirements.
-* Review the information on TLS termination when using [Application Gateway](/azure/application-gateway/ssl-overview) or [Azure Front Door](/azure/application-gateway/ssl-overview). Consider if you can terminate TLS at your border gateway and continue with non-TLS to your workload load balancer and onwards to your workload.
+* Review the information on TLS termination when using [Application Gateway](../application-gateway/ssl-overview.md) or [Azure Front Door](../frontdoor/end-to-end-tls.md). Consider whether you can terminate TLS at your border gateway and continue with non-TLS to your workload load balancer and onwards to your workload.
### Use cloud native network security tools and controls
Azure Front Door and Application Gateway help manage traffic from web application
Many attacks on cloud infrastructure seek to misuse deployed resources for the attacker's direct gain, leading to an unnecessary spike in usage and cost. Vulnerability scanning tools help minimize the window of opportunity for attackers and mitigate any potential malicious usage of resources.
-* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management) and run automated vulnerability scanning tools such as [Defender for Containers](/azure/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure) to avoid unnecessary resource usage by identifying vulnerabilities in your images and minimizing the window of opportunity for attackers.
+* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management) and run automated vulnerability scanning tools such as [Defender for Containers](../defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md) to avoid unnecessary resource usage by identifying vulnerabilities in your images and minimizing the window of opportunity for attackers.
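For instance, Defender for Containers can be turned on for an existing cluster with a single update; a minimal sketch, assuming the cluster and resource group names used elsewhere in this digest:

```azurecli
# Enable Microsoft Defender for Containers on an existing AKS cluster
az aks update --name myAKSCluster --resource-group myResourceGroup --enable-defender
```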
## Next steps > [!div class="nextstepaction"]
-> [Azure Well-Architected Framework review of AKS](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service)
+> [Azure Well-Architected Framework review of AKS](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service)
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AK
description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster Previously updated : 09/18/2022- Last updated : 11/16/2022
The CSI storage driver support on AKS allows you to natively use:
## Prerequisites
-You need the Azure CLI version 2.40 installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+- You need the Azure CLI version 2.42 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+- If the open-source CSI Blob storage driver is installed on your cluster, uninstall it before enabling the Azure Blob storage driver.
-## Disable CSI storage drivers on a new cluster
+## Disable CSI storage drivers on a new or existing cluster
-`--disable-disk-driver` allows you to disable the [Azure Disks CSI driver][azure-disk-csi]. `--disable-file-driver` allows you to disable the [Azure Files CSI driver][azure-files-csi]. `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller ].
+To disable CSI storage drivers on a new cluster, include one or more of the following parameters, depending on the storage system:
-To disable CSI storage drivers on a new cluster, use `--disable-disk-driver`, `--disable-file-driver`, and `--disable-snapshot-controller`.
+* `--disable-disk-driver` allows you to disable the [Azure Disks CSI driver][azure-disk-csi].
+* `--disable-file-driver` allows you to disable the [Azure Files CSI driver][azure-files-csi].
+* `--disable-blob-driver` allows you to disable the [Azure Blob storage CSI driver][azure-blob-csi].
+* `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller].
```azurecli
-az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-snapshot-controller
+az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
```
-## Disable CSI storage drivers on an existing cluster
-
-To disable CSI storage drivers on an existing cluster, use `--disable-disk-driver`, `--disable-file-driver`, and `--disable-snapshot-controller`.
+To disable CSI storage drivers on an existing cluster, use one or more of the parameters listed earlier, depending on the storage system:
```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-snapshot-controller
+az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
``` ## Enable CSI storage drivers on an existing cluster
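Before enabling drivers, you can check which ones are currently enabled or disabled on the cluster. A hedged sketch, assuming your CLI and API version expose the cluster's `storageProfile` property:

```azurecli
# Show the current state of the CSI drivers and snapshot controller
az aks show --name myAKSCluster --resource-group myResourceGroup --query storageProfile
```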
-`--enable-disk-driver` allows you enable the [Azure Disks CSI driver][azure-disk-csi]. `--enable-file-driver` allows you to enable the [Azure Files CSI driver][azure-files-csi]. `--enable-snapshot-controller` allows you to enable the [snapshot controller][snapshot-controller].
+To enable CSI storage drivers on an existing cluster, include one or more of the following parameters, depending on the storage system:
-To enable CSI storage drivers on an existing cluster with CSI storage drivers disabled, use `--enable-disk-driver`, `--enable-file-driver`, and `--enable-snapshot-controller`.
+* `--enable-disk-driver` allows you to enable the [Azure Disks CSI driver][azure-disk-csi].
+* `--enable-file-driver` allows you to enable the [Azure Files CSI driver][azure-files-csi].
+* `--enable-blob-driver` allows you to enable the [Azure Blob storage CSI driver][azure-blob-csi].
+* `--enable-snapshot-controller` allows you to enable the [snapshot controller][snapshot-controller].
```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-snapshot-controller
+az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-blob-driver --enable-snapshot-controller
``` ## Migrate custom in-tree storage classes to CSI
If you have in-tree Azure File persistent volumes, get `secretName`, `shareName`
## Next steps -- To use the CSI driver for Azure Disks, see [Use Azure Disks with CSI drivers](azure-disk-csi.md).-- To use the CSI driver for Azure Files, see [Use Azure Files with CSI drivers](azure-files-csi.md).-- To use the CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI drivers](azure-blob-csi.md)
+- To use the CSI driver for Azure Disks, see [Use Azure Disks with CSI drivers][azure-disk-csi].
+- To use the CSI driver for Azure Files, see [Use Azure Files with CSI drivers][azure-files-csi].
+- To use the CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI drivers][azure-blob-csi].
- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. - For more information on CSI migration, see [Kubernetes In-Tree to CSI Volume Migration][csi-migration-community]. <!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
[csi-migration-community]: https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
-[azure-disk-csi]: https://github.com/kubernetes-sigs/azuredisk-csi-driver
-[azure-files-csi]: https://github.com/kubernetes-sigs/azurefile-csi-driver
[snapshot-controller]: https://kubernetes-csi.github.io/docs/snapshot-controller.html <!-- LINKS - internal -->
-[azure-disk-volume]: azure-disk-volume.md
[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-a-volume [azure-file-static-mount]: azure-files-volume.md#mount-file-share-as-a-persistent-volume
-[azure-files-pvc]: azure-files-dynamic-pv.md
-[premium-storage]: ../virtual-machines/disks-types.md
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[storage-class-concepts]: concepts-storage.md#storage-classes
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[install-azure-cli]: ../cli/azure/install-azure-cli
+[azure-blob-csi]: azure-blob-csi.md
+[azure-disk-csi]: azure-disk-csi.md
+[azure-files-csi]: azure-files-csi.md
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md
This feature can only be set at cluster creation or node pool creation time.
### Prerequisites - Ensure you have the CLI extension v2.23 or higher version installed.-- Ensure you have the `EncryptionAtHost` feature flag under `Microsoft.Compute` enabled.-
-### Register `EncryptionAtHost` feature
-
-To create an AKS cluster that uses host-based encryption, you must enable the `EncryptionAtHost` feature flags on your subscription.
-
-Register the `EncryptionAtHost` feature flag using the [az feature register][az-feature-register] command as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost"
-```
-
-It takes a few minutes for the status to show *Registered*. You can check on the registration status using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.Compute/EncryptionAtHost')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the `Microsoft.Compute` resource providers using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.Compute
-```
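With the `EncryptionAtHost` feature now generally available, no feature registration is needed; host-based encryption is enabled with a single flag at cluster or node pool creation time. A minimal sketch, assuming a VM size that supports encryption at host:

```azurecli
# Create a cluster with host-based encryption enabled on the default node pool
az aks create --name myAKSCluster --resource-group myResourceGroup \
    --node-vm-size Standard_DS2_v2 --enable-encryption-at-host
```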
### Limitations
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
Title: Tutorial - Use a workload identity with an application on Azure Kubernete
description: In this Azure Kubernetes Service (AKS) tutorial, you deploy an Azure Kubernetes Service cluster and configure an application to use a workload identity. Previously updated : 09/29/2022 Last updated : 11/16/2022 # Tutorial: Use a workload identity with an application on Azure Kubernetes Service (AKS)
spec:
- image: ghcr.io/azure/azure-workload-identity/msal-go name: oidc env:
- - name: KEYVAULT_NAME
- value: ${KEYVAULT_NAME}
+ - name: KEYVAULT_URL
+ value: ${KEYVAULT_URL}
- name: SECRET_NAME value: ${KEYVAULT_SECRET_NAME} nodeSelector:
This tutorial is for introductory purposes. For guidance on a creating full solu
[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create [aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
To upgrade from *1.12.x* -> *1.14.x*:
Skipping multiple versions can only be done when upgrading from an unsupported version back to the minimum supported version. For example, you can upgrade from an unsupported *1.10.x* to a supported *1.15.x* if *1.15* is the minimum supported minor version.
+ When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
+ **Can I create a new 1.xx.x cluster during its 30-day support window?** No. Once a version is deprecated or removed, you cannot create a cluster with that version. As the change rolls out, you will start to see the old version removed from your version list. This process may take up to two weeks from the announcement, rolling out progressively by region.
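To see which Kubernetes versions are currently available in a region before creating or upgrading a cluster, you can list them with the Azure CLI; a quick sketch (the region is an example):

```azurecli
# List the Kubernetes versions available in a given region
az aks get-versions --location eastus --output table
```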
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Title: Kubernetes on Azure tutorial - Upgrade a cluster
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Previously updated : 05/24/2021 Last updated : 11/15/2022 #Customer intent: As a developer or IT pro, I want to learn how to upgrade an Azure Kubernetes Service (AKS) cluster so that I can use the latest version of Kubernetes and features. # Tutorial: Upgrade Kubernetes in Azure Kubernetes Service (AKS)
-As part of the application and cluster lifecycle, you may wish to upgrade to the latest available version of Kubernetes and use new features. An Azure Kubernetes Service (AKS) cluster can be upgraded using the Azure CLI.
+As part of the application and cluster lifecycle, you may want to upgrade to the latest available version of Kubernetes. You can upgrade your Azure Kubernetes Service (AKS) cluster by using the Azure CLI, Azure PowerShell, or the Azure portal.
-In this tutorial, part seven of seven, a Kubernetes cluster is upgraded. You learn how to:
+In this tutorial, part seven of seven, you learn how to:
> [!div class="checklist"]
-> * Identify current and available Kubernetes versions
-> * Upgrade the Kubernetes nodes
-> * Validate a successful upgrade
+> * Identify current and available Kubernetes versions.
+> * Upgrade your Kubernetes nodes.
+> * Validate a successful upgrade.
## Before you begin
-In previous tutorials, an application was packaged into a container image. This image was uploaded to Azure Container Registry, and you created an AKS cluster. The application was then deployed to the AKS cluster. If you have not done these steps, and would like to follow along, start with [Tutorial 1 ΓÇô Create container images][aks-tutorial-prepare-app].
+In previous tutorials, an application was packaged into a container image, and this container image was uploaded to Azure Container Registry (ACR). You also created an AKS cluster. The application was then deployed to the AKS cluster. If you have not done these steps and would like to follow along, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
-### [Azure CLI](#tab/azure-cli)
-
-This tutorial requires that you are running the Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-This tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
--
+* If you're using Azure CLI, this tutorial requires that you're running Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Get available cluster versions ### [Azure CLI](#tab/azure-cli)
-Before you upgrade a cluster, use the [az aks get-upgrades][] command to check which Kubernetes releases are available for upgrade:
+Before you upgrade a cluster, use the [az aks get-upgrades][] command to check which Kubernetes releases are available.
```azurecli az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster ```
-In the following example, the current version is *1.18.10*, and the available versions are shown under *upgrades*.
+In the following example output, the current version is *1.18.10*, and the available versions are shown under *upgrades*.
-```json
+```output
{ "agentPoolProfiles": null, "controlPlaneProfile": {
In the following example, the current version is *1.18.10*, and the available ve
### [Azure PowerShell](#tab/azure-powershell)
-Before you upgrade a cluster, use the [Get-AzAksCluster][get-azakscluster] cmdlet to determine which Kubernetes version you're running and what region it resides in:
+Before you upgrade a cluster, use the [Get-AzAksCluster][get-azakscluster] cmdlet to check which Kubernetes version you're running and the region in which it resides.
```azurepowershell Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster | Select-Object -Property Name, KubernetesVersion, Location ```
-In the following example, the current version is *1.19.9*:
+In the following example output, the current version is *1.19.9*.
```output
-Name KubernetesVersion Location
-- -- --
-myAKSCluster 1.19.9 eastus
+Name KubernetesVersion Location
+- -- --
+myAKSCluster 1.19.9 eastus
```
-Use the [Get-AzAksVersion][get-azaksversion] cmdlet to determine which Kubernetes upgrade releases are available in the region where your AKS cluster resides:
+Use the [Get-AzAksVersion][get-azaksversion] cmdlet to check which Kubernetes upgrade releases are available in the region where your AKS cluster resides.
```azurepowershell Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion -gt 1.19.9
Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion -gt 1.19.9
The available versions are shown under *OrchestratorVersion*. ```output
-OrchestratorType : Kubernetes
-OrchestratorVersion : 1.20.2
-DefaultProperty :
-IsPreview :
-Upgrades : {Microsoft.Azure.Commands.Aks.Models.PSOrchestratorProfile}
-
-OrchestratorType : Kubernetes
-OrchestratorVersion : 1.20.5
-DefaultProperty :
-IsPreview :
-Upgrades : {}
+Default IsPreview OrchestratorType OrchestratorVersion
+- - -
+ Kubernetes 1.20.2
+ Kubernetes 1.20.5
```
+### [Azure portal](#tab/azure-portal)
+
+To check which Kubernetes releases are available for your cluster:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to your AKS cluster.
+3. Under **Settings**, select **Cluster configuration**.
+4. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+5. In **Kubernetes version**, select the version to check for available upgrades.
+
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. Upgrading a cluster when no upgrades are available isn't supported.
+ ## Upgrade a cluster
-To minimize disruption to running applications, AKS nodes are carefully cordoned and drained. In this process, the following steps are performed:
+AKS nodes are carefully cordoned and drained to minimize any potential disruptions to running applications. The following steps are performed during this process:
-1. The Kubernetes scheduler prevents additional pods being scheduled on a node that is to be upgraded.
+1. The Kubernetes scheduler prevents additional pods from being scheduled on a node that is to be upgraded.
1. Running pods on the node are scheduled on other nodes in the cluster.
-1. A node is created that runs the latest Kubernetes components.
-1. When the new node is ready and joined to the cluster, the Kubernetes scheduler begins to run pods on it.
+1. A new node is created that runs the latest Kubernetes components.
+1. When the new node is ready and joined to the cluster, the Kubernetes scheduler begins to run pods on the new node.
1. The old node is deleted, and the next node in the cluster begins the cordon and drain process. [!INCLUDE [alias minor version callout](./includes/aliasminorversion/alias-minor-version-upgrade.md)] ### [Azure CLI](#tab/azure-cli)
-Use the [az aks upgrade][] command to upgrade the AKS cluster.
+Use the [az aks upgrade][] command to upgrade your AKS cluster.
```azurecli az aks upgrade \
az aks upgrade \
``` > [!NOTE]
-> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
+> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, you must first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
-The following condensed example output shows the result of upgrading to *1.19.1*. Notice the *kubernetesVersion* now reports *1.19.1*:
+The following example output shows the result of upgrading to *1.19.1*. Notice the *kubernetesVersion* now reports *1.19.1*.
-```json
+```output
{ "agentPoolProfiles": [ {
The following condensed example output shows the result of upgrading to *1.19.1*
### [Azure PowerShell](#tab/azure-powershell)
-Use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade the AKS cluster.
+Use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade your AKS cluster.
```azurepowershell Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION> ``` > [!NOTE]
-> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
+> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
-The following condensed example output shows the result of upgrading to *1.19.9*. Notice the *kubernetesVersion* now reports *1.20.2*:
+The following example output shows the result of upgrading to *1.20.2*. Notice the *kubernetesVersion* now reports *1.20.2*.
```output ProvisioningState : Succeeded
Location : eastus
Tags : {} ```
+### [Azure portal](#tab/azure-portal)
+
+To upgrade your AKS cluster:
+
+1. In the Azure portal, navigate to your AKS cluster.
+2. Under **Settings**, select **Cluster configuration**.
+3. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+4. In **Kubernetes version**, select your desired version and then select **Save**.
+
+It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
+ ## View the upgrade events
-When you upgrade your cluster, the following Kubenetes events may occur on each node:
+When you upgrade your cluster, the following Kubernetes events may occur on the nodes:
-* Surge ΓÇô Create surge node.
-* Drain ΓÇô Pods are being evicted from the node. Each pod has a 5 minute timeout to complete the eviction.
-* Update ΓÇô Update of a node has succeeded or failed.
-* Delete ΓÇô Deleted a surge node.
+* **Surge**: Create surge node.
+* **Drain**: Pods are being evicted from the node. Each pod has a *5 minute timeout* to complete the eviction.
+* **Update**: Update of a node has succeeded or failed.
+* **Delete**: Delete a surge node.
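The *Surge* and *Delete* events above correspond to temporary surge nodes; how many are created at once is governed by the node pool's max surge setting, and raising it speeds up upgrades at the cost of more disruption. A hedged sketch (the node pool name is an assumption):

```azurecli
# Allow up to a third of the node pool to surge during upgrades
az aks nodepool update --cluster-name myAKSCluster --resource-group myResourceGroup \
    --name nodepool1 --max-surge 33%
```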
-Use `kubectl get events` to show events in the default namespaces while running an upgrade. For example:
+Use `kubectl get events` to show events in the default namespace while running an upgrade.
```azurecli-interactive kubectl get events ```
-The following example output shows some of the above events listed during an upgrade.
+The following example output shows some of the above events listed during an upgrade.
```output ...
default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surg
... ``` ++ ## Validate an upgrade ### [Azure CLI](#tab/azure-cli)
-Confirm that the upgrade was successful using the [az aks show][] command as follows:
+Confirm that the upgrade was successful using the [az aks show][] command.
```azurecli az aks show --resource-group myResourceGroup --name myAKSCluster --output table
az aks show --resource-group myResourceGroup --name myAKSCluster --output table
The following example output shows the AKS cluster runs *KubernetesVersion 1.19.1*: ```output
-Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
- - - - -
-myAKSCluster eastus myResourceGroup 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
+Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn
+ - - - -
+myAKSCluster eastus myResourceGroup 1.19.1 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
``` ### [Azure PowerShell](#tab/azure-powershell)
-Confirm that the upgrade was successful using the [Get-AzAksCluster][get-azakscluster] cmdlet as follows:
+Confirm that the upgrade was successful using the [Get-AzAksCluster][get-azakscluster] cmdlet.
```azurepowershell Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
The following example output shows the AKS cluster runs *KubernetesVersion 1.20.2*: ```output
-Name Location KubernetesVersion ProvisioningState
-- -- -- --
-myAKSCluster eastus 1.20.2 Succeeded
+Name Location KubernetesVersion ProvisioningState
+- -- -- --
+myAKSCluster eastus 1.20.2 Succeeded
```
+### [Azure portal](#tab/azure-portal)
+
+To confirm that the upgrade was successful, navigate to your AKS cluster in the Azure portal. On the **Overview** page, select the **Kubernetes version** and ensure it's the latest version you installed in the previous step.
+ ## Delete the cluster
+As this tutorial is the last part of the series, you may want to delete your AKS cluster. The Kubernetes nodes run on Azure virtual machines and continue incurring charges even if you don't use the cluster.
+ ### [Azure CLI](#tab/azure-cli)
-As this tutorial is the last part of the series, you may want to delete the AKS cluster. As the Kubernetes nodes run on Azure virtual machines (VMs), they continue to incur charges even if you don't use the cluster. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
```azurecli-interactive az group delete --name myResourceGroup --yes --no-wait ```+ ### [Azure PowerShell](#tab/azure-powershell)
-As this tutorial is the last part of the series, you may want to delete the AKS cluster. As the Kubernetes nodes run on Azure virtual machines (VMs), they continue to incur charges even if you don't use the cluster. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
```azurepowershell-interactive Remove-AzResourceGroup -Name myResourceGroup ```
+### [Azure portal](#tab/azure-portal)
+
+To delete your AKS cluster:
+
+1. In the Azure portal, navigate to your AKS cluster.
+2. On the **Overview** page, select **Delete**.
+3. A popup will appear that asks you to confirm the deletion of the cluster. Select **Yes**.
+ > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and does not require you to provision or rotate any secrets.
+> When you delete the cluster, the Azure Active Directory (Azure AD) service principal used by the AKS cluster isn't removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and doesn't require you to provision or rotate any secrets.
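If your cluster was created with a service principal, a hedged sketch of that cleanup follows. Note that the lookup must run *before* the cluster is deleted, and a `clientId` of `msi` indicates a managed identity, which needs no cleanup:

```azurecli
# Before deleting the cluster, record the service principal's client ID
SP_ID=$(az aks show --name myAKSCluster --resource-group myResourceGroup \
    --query servicePrincipalProfile.clientId --output tsv)

# After the cluster is deleted, remove the service principal (skip if SP_ID is "msi")
az ad sp delete --id $SP_ID
```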
## Next steps In this tutorial, you upgraded Kubernetes in an AKS cluster. You learned how to: > [!div class="checklist"]
-> * Identify current and available Kubernetes versions
-> * Upgrade the Kubernetes nodes
-> * Validate a successful upgrade
+> * Identify current and available Kubernetes versions.
+> * Upgrade your Kubernetes nodes.
+> * Validate a successful upgrade.
-For more information on AKS, see [AKS overview][aks-intro]. For guidance on a creating full solutions with AKS, see [AKS solution guidance][aks-solution-guidance].
+For more information on AKS, see [AKS overview][aks-intro]. For guidance on how to create full solutions with AKS, see [AKS solution guidance][aks-solution-guidance].
<!-- LINKS - external --> [kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --outpu
> [!NOTE] > When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed. >
-> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available.
+> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
To check which Kubernetes releases are available for your cluster, use the [Get-
> [!NOTE] > When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed. >
-> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available.
+> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
The following policy is the minimal form of the `validate-azure-ad-token` policy
The following policy checks that the audience is the hostname of the API Management instance and that the `ctry` claim is `US`. The hostname is provided using a policy expression, and the Azure AD tenant ID and client application ID are provided using named values. The decoded JWT is provided in the `jwt` variable after validation.
-For more details on optional claims, read [Provide optional claims to your app](/azure/active-directory/develop/active-directory-optional-claims).
+For more details on optional claims, read [Provide optional claims to your app](../active-directory/develop/active-directory-optional-claims.md).
```xml <validate-azure-ad-token tenant-id="{{aad-tenant-id}}" output-token-variable-name="jwt">
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** inbound - **Policy scopes:** all scopes
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
To follow the steps in this article, you must have:
+ An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). ## What is capacity
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
With a private endpoint and Private Link, you can:
- Limit incoming traffic only to private endpoints, preventing data exfiltration.
-> [!IMPORTANT]
-> * API Management support for private endpoints is currently in **preview**.
-> * To enable private endpoints, the API Management instance can't already be configured with an external or internal [virtual network](virtual-network-concepts.md).
-> * A private endpoint connection supports only incoming traffic to the API Management instance.
+ [!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
With a private endpoint and Private Link, you can:
* Use policy to distinguish traffic that comes from the private endpoint. * Limit incoming traffic only to private endpoints, preventing data exfiltration.
-> [!IMPORTANT]
-> * API Management support for private endpoints is currently in preview.
-> * During the preview period, a private endpoint connection supports only incoming traffic to the API Management managed gateway.
For more information, see [Connect privately to API Management using a private endpoint](private-endpoint.md).
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
When no longer needed, you can delete the resource group, App service, and all r
## Manage the MySQL flexible server, username, or password -- The MySQL Flexible Server is created behind a private [Virtual Network](/azure/virtual-network/virtual-networks-overview) and can't be accessed directly. To access or manage the database, use phpMyAdmin that's deployed with the WordPress site. You can access phpMyAdmin by following these steps:
+- The MySQL Flexible Server is created behind a private [Virtual Network](../virtual-network/virtual-networks-overview.md) and can't be accessed directly. To access or manage the database, use phpMyAdmin that's deployed with the WordPress site. You can access phpMyAdmin by following these steps:
- Navigate to the URL: https://`<sitename>`.azurewebsites.net/phpmyadmin - Log in with the flexible server's username and password
Congratulations, you've successfully completed this quickstart!
> [Tutorial: PHP app with MySQL](tutorial-php-mysql-app.md) > [!div class="nextstepaction"]
-> [Configure PHP app](configure-language-php.md)
+> [Configure PHP app](configure-language-php.md)
app-service Reference Dangling Subdomain Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-dangling-subdomain-prevention.md
The risks of subdomain takeover include:
- Phishing campaigns - Further risks of classic attacks such as XSS, CSRF, CORS bypass
-Learn more about Subdomain Takeover at [Dangling DNS and subdomain takeover](/azure/security/fundamentals/subdomain-takeover).
+Learn more about Subdomain Takeover at [Dangling DNS and subdomain takeover](../security/fundamentals/subdomain-takeover.md).
Azure App Service provides [Name Reservation Service](#how-app-service-prevents-subdomain-takeovers) and [domain verification tokens](#how-you-can-prevent-subdomain-takeovers) to prevent subdomain takeovers. ## How App Service prevents subdomain takeovers
These records prevent the creation of another App Service app using the same nam
DNS records should be updated before the site deletion to ensure bad actors can't take over the domain between the period of deletion and re-creation.
-To get a domain verification ID, see the [Map a custom domain tutorial](app-service-web-tutorial-custom-domain.md#2-get-a-domain-verification-id)
+To get a domain verification ID, see the [Map a custom domain tutorial](app-service-web-tutorial-custom-domain.md#2-get-a-domain-verification-id)
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
See how data, including name, job title, address, email, and company name, is ex
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice. -
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
The following resources are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
| _**Composed model**_ |<ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>JavaScript SDK</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>| ::: moniker-end
Learn to create and compose custom models:
> [!div class="nextstepaction"] > [**Build a custom model**](how-to-guides/build-a-custom-model.md)
-> [**Compose custom models**](how-to-guides/compose-custom-models.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | Model ID| |||:|
-|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
+|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
### Try building a custom model
Explore Form Recognizer quickstarts and REST APIs:
| Quickstart | REST API| |--|--| |[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-08-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|
-| [v2.1 quickstart](quickstarts/get-started-v2-1-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
-
+| [v2.1 quickstart](quickstarts/get-started-v2-1-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end ## Input requirements
Below are the fields extracted per document type. The Azure Form Recognizer ID m
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
The JSON output has three parts:
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
Layout API also extracts selection marks from documents. Extracted selection mar
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Receipt digitization is the process of converting scanned receipts into digital
::: moniker range="form-recog-2.1.0"
-**Sample invoice processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/connection)**:
+**Sample receipt processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/connection)**:
::: moniker-end
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
See how data, including time and date of transactions, merchant information, and
1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
- :::image type="content" source="media/invoice-example-new.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
+ :::image type="content" source="media/receipts-example.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
The receipt model supports all English receipts and the following locales:
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 10/20/2022 Last updated : 11/16/2022 monikerRange: '>=form-recog-2.1.0' recommendations: false
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
>[!NOTE] > With the release of the 2022-08-31 GA API, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview or the 2022-01-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved, for more information, _see_ the [migration guide](v3-migration-guide.md). ## October 2022
+### Form Recognizer versioned content
+
+Form Recognizer documentation has been updated to present a versioned experience. Now, you can choose to view content targeting the v3.0 GA experience or the v2.1 GA experience. The v3.0 experience is the default.
++ ### Form Recognizer Studio Sample Code
-Sample code the Form Recgonizer Studio labeling experience is now available on github - https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX. Customers can develop and integrate Form Recognizer into their own UX or build their own new UX using the Form Recognizer Studio sample code.
+Sample code for the [Form Recognizer Studio labeling experience](https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX) is now available on GitHub. Customers can develop and integrate Form Recognizer into their own UX or build their own new UX using the Form Recognizer Studio sample code.
### Language expansion
Use the REST API parameter `api-version=2022-06-30-preview` when using the API o
### New Prebuilt Contract model
-A new prebuilt that extracts information from contracts such as parties, title, contract ID, execution date and more. Contracts is currently in preview, please request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
+A new prebuilt model extracts information from contracts, such as parties, title, contract ID, execution date, and more. The contracts model is currently in preview; request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
### Region expansion for training custom neural models
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
-## November 8, 2022
-
-### Image tag
-
-`v1.13.0_2022-11-08`
-
-For complete release version information, see [Version log](version-log.md#november-8-2022).
-
-New for this release:
--- Azure Arc data controller
- - Support database as resource in Azure Arc data resource provider
--- Arc-enabled PostgreSQL server
- - Add support for automated backups
--- `arcdata` Azure CLI extension
- - CLI support for automated backups: Setting the `--storage-class-backups` parameter for the create command will enable automated backups
- ## October 11, 2022 ### Image tag
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
} ```
-1. Update the application's group membership claims. Run the commands in the same directory as `oauth2-permissions.json` file. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](/azure/active-directory/develop/supported-accounts-validation):
+1. Update the application's group membership claims. Run the commands in the same directory as the `oauth2-permissions.json` file. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](../../active-directory/develop/supported-accounts-validation.md):
```azurecli az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
az ad app show --id "${SERVER_APP_ID}" --query "api.oauth2PermissionScopes[0].id" -o tsv ```
-4. Grant the required permissions for the client application. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](/azure/active-directory/develop/supported-accounts-validation):
+4. Grant the required permissions for the client application. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](../../active-directory/develop/supported-accounts-validation.md):
```azurecli az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions <oAuthPermissionId>=Scope
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
## Next steps > [!div class="nextstepaction"]
-> Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).
+> Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
If everything is working correctly, your pods should all be in the `Running` sta
### Still having problems?
-The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) so we can investigate the problem further.
+The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) so we can investigate the problem further.
To generate the troubleshooting log file, run the following command:
To generate the troubleshooting log file, run the following command:
az connectedk8s troubleshoot -g <myResourceGroup> -n <myK8sCluster> ```
-When you [create your support request](/azure/azure-portal/supportability/how-to-create-azure-support-request), in the **Additional details** section, use the **File upload** option to upload the generated log file.
+When you [create your support request](../../azure-portal/supportability/how-to-create-azure-support-request.md), in the **Additional details** section, use the **File upload** option to upload the generated log file.
## Connections with a proxy server If you are using a proxy server on at least one machine, complete the first five steps of the non-proxy flowchart (through resource provider registration) for basic troubleshooting steps. Then, if you are still encountering issues, review the next flowchart for additional troubleshooting steps. More details about each step are provided below. ### Is the machine executing commands behind a proxy server?
+If the machine is executing commands behind a proxy server, you'll need to set any necessary environment variables, as [explained below](#set-environment-variables).
+
+### Set environment variables
+ Be sure you have set all of the necessary environment variables. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
+For example:
+
+```bash
+export HTTP_PROXY="http://<proxyIP>:<proxyPort>"
+export HTTPS_PROXY="https://<proxyIP>:<proxyPort>"
+export NO_PROXY="<service CIDR>,kubernetes.default.svc,.svc.cluster.local,.svc"
+```
+ ### Does the proxy server only accept trusted certificates? Be sure to include the certificate file path by including `--proxy-cert <path-to-cert-file>` when running the `az connectedk8s connect` command.
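+
+For example, the connect command might look like the following sketch (replace the placeholders with your cluster name, resource group, and certificate path):
+
+```azurecli
+az connectedk8s connect --name <clusterName> --resource-group <resourceGroupName> --proxy-cert <path-to-cert-file>
+```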
If everything is working correctly, your pods should all be in the `Running` sta
### Still having problems?
-The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) so we can investigate the problem further.
+The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) so we can investigate the problem further.
To generate the troubleshooting log file, run the following command:
To generate the troubleshooting log file, run the following command:
az connectedk8s troubleshoot -g <myResourceGroup> -n <myK8sCluster> ```
-When you [create your support request](/azure/azure-portal/supportability/how-to-create-azure-support-request), in the **Additional details** section, use the **File upload** option to upload the generated log file.
-
+When you [create your support request](../../azure-portal/supportability/how-to-create-azure-support-request.md), in the **Additional details** section, use the **File upload** option to upload the generated log file.
## Next steps
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
To resolve the error, one or more network misconfigurations may need to be addre
If a request times out, the deployment machine is not able to communicate with the IP(s). This could be caused by a closed port, network misconfiguration or a firewall block. Work with your network administrator to allow communication between the deployment machine to the Control Plane IP and Appliance VM IP.
-1. Appliance VM IP and Control Plane IP must be able to communicate with the deployment machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script) for Arc resource bridge.
+1. Appliance VM IP and Control Plane IP must be able to communicate with the deployment machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) for Arc resource bridge.
1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#restricted-outbound-connectivity). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs.
If you don't see your problem here or you can't resolve your issue, try one of t
- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. -- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-fluid-relay Use Audience In Fluid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/use-audience-in-fluid.md
+description: Learn how to use audience features in the Fluid Framework
+ Title: 'How to: Use audience features in the Fluid Framework'
+ Last updated : 11/04/2022
+# How to: Use audience features in the Fluid Framework
+
+In this tutorial, you'll learn about using the Fluid Framework [Audience](https://fluidframework.com/docs/build/audience/) with [React](https://reactjs.org/) to create a visual demonstration of users connecting to a container. The audience object holds information related to all users connected to the container. In this example, the Azure Client library will be used to create the container and audience.
+
+To jump ahead into the finished demo, check out the [Audience demo in our FluidExamples repo](https://github.com/microsoft/FluidExamples/tree/main/audience-demo).
+
+The following image shows ID buttons and a container ID input field. Leaving the container ID field blank and clicking a user ID button will create a new container and join as the selected user. Alternatively, the end-user can input a container ID and choose a user ID to join an existing container as the selected user.
++
+The next image shows multiple users connected to a container, represented by boxes. The box outlined in blue represents the user who is viewing the client, while the boxes outlined in black represent the other connected users. As new users attach to the container with unique IDs, the number of boxes will increase.
++
+> [!NOTE]
+> This tutorial assumes that you are familiar with the [Fluid Framework Overview](https://fluidframework.com/docs/) and that you have completed the [QuickStart](https://fluidframework.com/docs/start/quick-start/). You should also be familiar with the basics of [React](https://reactjs.org/), [creating React projects](https://reactjs.org/docs/create-a-new-react-app.html#create-react-app), and [React Hooks](https://reactjs.org/docs/hooks-intro.html).
+
+## Create the project
+
+1. Open a Command Prompt and navigate to the parent folder where you want to create the project; e.g., `C:\My Fluid Projects`.
+1. Run the following command at the prompt. (Note that the CLI is np**x**, not npm. It was installed when you installed Node.js.)
+
+ ```bash
+ npx create-react-app fluid-audience-tutorial
+ ```
+
+1. The project is created in a subfolder named `fluid-audience-tutorial`. Navigate to it with the command `cd fluid-audience-tutorial`.
+
+1. The project uses the following Fluid libraries:
+
+ | Library | Description |
+ |-|-|
+ | `fluid-framework` | Contains the SharedMap [distributed data structure](../concepts/data-structures.md) that synchronizes data across clients. |
+ | `@fluidframework/azure-client` | Defines the connection to a Fluid service server and defines the starting schema for the [Fluid container](../concepts/architecture.md#container). |
+ | `@fluidframework/test-client-utils` | Defines the [InsecureTokenProvider](../concepts/authentication-authorization.md#the-token-provider) needed to create the connection to a Fluid Service. |
+
+ Run the following command to install the libraries.
+
+ ```bash
+ npm install @fluidframework/azure-client @fluidframework/test-client-utils fluid-framework
+ ```
+
+## Code the project
+
+### Set up state variables and component view
+
+1. Open the file `\src\App.js` in the code editor. Delete all the default `import` statements and all the markup from the `return` statement. Then add import statements for components and React hooks. Note that we'll implement the imported **AudienceDisplay** and **UserIdSelection** components in later steps. The file should look like the following:
+
+ ```js
+ import { useState, useCallback } from "react";
+ import { AudienceDisplay } from "./AudienceDisplay";
+ import { UserIdSelection } from "./UserIdSelection";
+
+ export const App = () => {
+ // TODO 1: Define state variables to handle view changes and user input
+ return (
+ // TODO 2: Return view components
+ );
+ }
+ ```
+
+1. Replace `TODO 1` with the following code. This code initializes local state variables that will be used within the application. The `displayAudience` value determines whether we render the **AudienceDisplay** component or the **UserIdSelection** component (see `TODO 2`). The `userId` value is the user identifier to connect to the container with, and the `containerId` value identifies the container to load. The `handleSelectUser` and `handleContainerNotFound` functions are passed in as callbacks to the two views and manage state transitions. `handleSelectUser` is called when attempting to create or load a container. `handleContainerNotFound` is called when creating or loading a container fails.
+
+ Note that the `userId` and `containerId` values will come from the **UserIdSelection** component through the `handleSelectUser` function.
+
+ ```js
+ const [displayAudience, setDisplayAudience] = useState(false);
+ const [userId, setUserId] = useState();
+ const [containerId, setContainerId] = useState();
+
+ const handleSelectUser = useCallback((userId, containerId) => {
+ setDisplayAudience(true)
+ setUserId(userId);
+ setContainerId(containerId);
+ }, [displayAudience, userId, containerId]);
+
+ const handleContainerNotFound = useCallback(() => {
+ setDisplayAudience(false)
+ }, [setDisplayAudience]);
+ ```
+
+1. Replace `TODO 2` with the following code. As stated above, the `displayAudience` variable will determine if we render the **AudienceDisplay** component or the **UserIdSelection** component. Also, functions to update the state variables are passed into components as properties.
+
+ ```js
+ (displayAudience) ?
+ <AudienceDisplay userId={userId} containerId={containerId} onContainerNotFound={handleContainerNotFound}/> :
+ <UserIdSelection onSelectUser={handleSelectUser}/>
+ ```
+
+### Set up AudienceDisplay component
+
+1. Create and open a file `\src\AudienceDisplay.js` in the code editor. Add the following `import` statements:
+
+ ```js
+ import { useEffect, useState } from "react";
+ import { SharedMap } from "fluid-framework";
+ import { AzureClient } from "@fluidframework/azure-client";
+ import { InsecureTokenProvider } from "@fluidframework/test-client-utils";
+ ```
+
+ Note that the objects imported from the Fluid Framework library are required for defining users and containers. In the following steps, **AzureClient** and **InsecureTokenProvider** will be used to configure the client service (see `TODO 1`) while the **SharedMap** will be used to configure a `containerSchema` needed to create a container (see `TODO 2`).
+
+1. Add the following functional components and helper functions:
+
+ ```js
+ const tryGetAudienceObject = async (userId, userName, containerId) => {
+ // TODO 1: Create container and return audience object
+ }
+
+ export const AudienceDisplay = (props) => {
+ //TODO 2: Configure user ID, user name, and state variables
+ //TODO 3: Set state variables and set event listener on component mount
+ //TODO 4: Return list view
+ }
+
+ const AudienceList = (data) => {
+ //TODO 5: Append view elements to list array for each member
+ //TODO 6: Return list of member elements
+ }
+ ```
+
+ Note that the **AudienceDisplay** and **AudienceList** are functional components which handle getting and rendering audience data while the `tryGetAudienceObject` method handles the creation of container and audience services.
+
+### Getting container and audience
+
+You can use a helper function to get the Fluid data from the Audience object into the view layer (the React state). The `tryGetAudienceObject` method is called when the view component loads after a user ID is selected. The returned value is assigned to a React state property.
+
+1. Replace `TODO 1` with the following code. Note that the values for `userId`, `userName`, and `containerId` will be passed in from the **App** component. If there is no `containerId`, a new container is created. Also, note that the `containerId` is stored on the URL hash. A user entering a session from a new browser may copy the URL from an existing session browser, or navigate to `localhost:3000` and manually input the container ID. With this implementation, we want to wrap the `getContainer` call in a try/catch in case the user inputs a container ID that doesn't exist. Visit the [React demo](https://fluidframework.com/docs/recipes/react/) and [Containers](../concepts/architecture.md#container) documentation for more information.
+
+ ```js
+ const userConfig = {
+ id: userId,
+ name: userName,
+ additionalDetails: {
+ email: userName.replace(/\s/g, "") + "@example.com",
+ date: new Date().toLocaleDateString("en-US"),
+ },
+ };
+
+ const serviceConfig = {
+ connection: {
+ type: "local",
+ tokenProvider: new InsecureTokenProvider("", userConfig),
+ endpoint: "http://localhost:7070",
+ },
+ };
+
+ const client = new AzureClient(serviceConfig);
+
+ const containerSchema = {
+ initialObjects: { myMap: SharedMap },
+ };
+
+ let container;
+ let services;
+ if (!containerId) {
+ ({ container, services } = await client.createContainer(containerSchema));
+ const id = await container.attach();
+ location.hash = id;
+ } else {
+ try {
+ ({ container, services } = await client.getContainer(containerId, containerSchema));
+ } catch (e) {
+ return;
+ }
+ }
+ return services.audience;
+ ```
+
+### Getting the audience on component mount
+
+Now that we've defined how to get the Fluid audience, we need to tell React to call `tryGetAudienceObject` when the Audience Display component is mounted.
+
+1. Replace `TODO 2` with the following code. Note that the user ID will come from the parent component as either `user1`, `user2`, or `random`. If the ID is `random`, we use `Math.random()` to generate a random number as the ID. Additionally, a name will be mapped to the user based on their ID, as specified in `userNameList`. Lastly, we define the state variables that will store the connected members as well as the current user. `fluidMembers` will store a list of all members connected to the container, whereas `currentMember` will contain the member object representing the current user viewing the browser context.
+
+ ```js
+ const userId = props.userId == "random" ? Math.random() : props.userId;
+ const userNameList = {
+ "user1" : "User One",
+ "user2" : "User Two",
+ "random" : "Random User"
+ };
+ const userName = userNameList[props.userId];
+
+ const [fluidMembers, setFluidMembers] = useState();
+ const [currentMember, setCurrentMember] = useState();
+ ```
+
+1. Replace `TODO 3` with the following code. This calls `tryGetAudienceObject` when the component is mounted and sets the returned audience members to `fluidMembers` and `currentMember`. Note that we check whether an audience object is returned, in case a user inputs a container ID that doesn't exist and we need to return them to the **UserIdSelection** view (`props.onContainerNotFound()` will handle switching the view). Also, it's good practice to deregister event handlers when the React component unmounts by returning a cleanup function that calls `audience.off`.
+
+ ```js
+    useEffect(() => {
+        let audience;
+
+        const updateMembers = () => {
+            setFluidMembers(audience.getMembers());
+            setCurrentMember(audience.getMyself());
+        }
+
+        tryGetAudienceObject(userId, userName, props.containerId).then((audienceObject) => {
+            if(!audienceObject) {
+                props.onContainerNotFound();
+                alert("error: container id not found.");
+                return;
+            }
+
+            audience = audienceObject;
+            updateMembers();
+
+            audience.on("membersChanged", updateMembers);
+        });
+
+        // Return the cleanup function directly from the effect so React runs it
+        // when the component unmounts.
+        return () => {
+            if (audience) audience.off("membersChanged", updateMembers);
+        };
+    }, []);
+ ```
+
+1. Replace `TODO 4` with the following code. Note that if `fluidMembers` or `currentMember` hasn't been initialized, a blank screen is rendered. The **AudienceList** component will render the member data with styling (to be implemented in the next section).
+
+ ```js
+ if (!fluidMembers || !currentMember) return (<div/>);
+
+ return (
+ <AudienceList fluidMembers={fluidMembers} currentMember={currentMember}/>
+ )
+ ```
+
+ > [!NOTE]
+ > Connection transitions can result in short timing windows where `getMyself` returns `undefined`. This is because the current client connection will not have been added to the audience yet, so a matching connection ID cannot be found. To prevent React from rendering a page with no audience members, we add a listener to call `updateMembers` on `membersChanged`. This works since the service audience emits a `membersChanged` event when the container is connected.
+
+### Create the view
+
+1. Replace `TODO 5` with the following code. Note that we render a list component for each member passed from the **AudienceDisplay** component. For each member, we first compare `member.userId` to `currentMember.userId` to check whether that member `isSelf`. This way, we can differentiate the client user from the other users and display the component with a different color. We then push the list component to a `list` array. Each component will display member data such as `userId`, `userName`, and `additionalDetails`.
+
+ ```js
+ const currentMember = data.currentMember;
+ const fluidMembers = data.fluidMembers;
+
+ const list = [];
+ fluidMembers.forEach((member, key) => {
+ const isSelf = (member.userId === currentMember.userId);
+ const outlineColor = isSelf ? 'blue' : 'black';
+
+ list.push(
+ <div style={{
+ padding: '1rem',
+ margin: '1rem',
+ display: 'flex',
+ outline: 'solid',
+ flexDirection: 'column',
+ maxWidth: '25%',
+ outlineColor
+ }} key={key}>
+ <div style={{fontWeight: 'bold'}}>Name</div>
+ <div>
+ {member.userName}
+ </div>
+ <div style={{fontWeight: 'bold'}}>ID</div>
+ <div>
+ {member.userId}
+ </div>
+ <div style={{fontWeight: 'bold'}}>Connections</div>
+ {
+ member.connections.map((data, key) => {
+ return (<div key={key}>{data.id}</div>);
+ })
+ }
+ <div style={{fontWeight: 'bold'}}>Additional Details</div>
+ { JSON.stringify(member.additionalDetails, null, '\t') }
+ </div>
+ );
+ });
+ ```
+
+1. Replace `TODO 6` with the following code. This renders each of the member elements we pushed into the `list` array.
+
+ ```js
+ return (
+ <div>
+ {list}
+ </div>
+ );
+ ```
+
### Set up UserIdSelection component
+
+1. Create and open a file `\src\UserIdSelection.js` in the code editor. This component will include user ID buttons and container ID input fields which allow end-users to choose their user ID and collaborative session. Add the following `import` statements and functional components:
+
+ ```js
+ import { useState } from 'react';
+
+ export const UserIdSelection = (props) => {
+ // TODO 1: Define styles and handle user inputs
+ return (
+ // TODO 2: Return view components
+ );
+ }
+ ```
+
+1. Replace `TODO 1` with the following code. Note that the `onSelectUser` function will update the state variables in the parent **App** component and prompt a view change. The `handleSubmit` method is triggered by button elements that will be implemented in `TODO 2`. The `handleChange` method updates the `containerId` state variable and is called from an input element event listener, also implemented in `TODO 2`. Note that we update `containerId` by getting the value from an HTML element with the ID `containerIdInput` (defined in `TODO 2`).
+
+ ```js
+ const selectionStyle = {
+ marginTop: '2rem',
+ marginRight: '2rem',
+ width: '150px',
+ height: '30px',
+ };
+
+    const [containerId, setContainerId] = useState(location.hash.substring(1));
+
+ const handleSubmit = (userId) => {
+ props.onSelectUser(userId, containerId);
+ }
+
+ const handleChange = () => {
+ setContainerId(document.getElementById("containerIdInput").value);
+ };
+ ```
+
+1. Replace `TODO 2` with the following code. This will render the user ID buttons as well as the container ID input field.
+
+ ```js
+ <div style={{display: 'flex', flexDirection:'column'}}>
+ <div style={{marginBottom: '2rem'}}>
+ Enter Container Id:
+ <input type="text" id="containerIdInput" value={containerId} onChange={() => handleChange()} style={{marginLeft: '2rem'}}></input>
+ </div>
+ {
+ (containerId) ?
+ (<div style={{}}>Select a User to join container ID: {containerId} as the user</div>)
+ : (<div style={{}}>Select a User to create a new container and join as the selected user</div>)
+ }
+ <nav>
+ <button type="submit" style={selectionStyle} onClick={() => handleSubmit("user1")}>User 1</button>
+ <button type="submit" style={selectionStyle} onClick={() => handleSubmit("user2")}>User 2</button>
+ <button type="submit" style={selectionStyle} onClick={() => handleSubmit("random")}>Random User</button>
+ </nav>
+ </div>
+ ```
+
+## Start the Fluid server and run the application
+
+> [!NOTE]
+> To match the rest of this how-to, this section uses `npx` and `npm` commands to start a Fluid server. However, the code in this article can also run against an Azure Fluid Relay server. For more information, see [How to: Provision an Azure Fluid Relay service](provision-fluid-azure-portal.md) and [How to: Connect to an Azure Fluid Relay service](connect-fluid-azure-service.md)
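+
+As a rough sketch of the Azure Fluid Relay alternative, the local `serviceConfig` shown earlier could be replaced with a remote configuration like the following. The tenant ID, key, and endpoint values are placeholders, not values from this tutorial:
+
+```js
+// Hypothetical remote configuration for Azure Fluid Relay (placeholder values).
+// InsecureTokenProvider is for development only; use a secure token provider in production.
+const serviceConfig = {
+    connection: {
+        type: "remote",
+        tenantId: "<your-tenant-id>",
+        tokenProvider: new InsecureTokenProvider("<your-tenant-key>", userConfig),
+        endpoint: "https://<your-service>.fluidrelay.azure.com",
+    },
+};
+```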
+
+In the Command Prompt, run the following command to start the Fluid service.
+
+```bash
+npx @fluidframework/azure-local-service@latest
+```
+
+Open a new Command Prompt and navigate to the root of the project; for example, `C:/My Fluid Projects/fluid-audience-tutorial`. Start the application server with the following command. The application opens in the browser. This may take a few minutes.
+
+```bash
+npm run start
+```
+
+Navigate to `localhost:3000` on a browser tab to view the running application. To create a new container, select a user ID button while leaving the container ID input blank. To simulate a new user joining the container session, open a new browser tab and navigate to `localhost:3000`. This time, input the container ID value, which can be found in the first browser tab's URL following `http://localhost:3000/#`.
+
+> [!NOTE]
+> You may need to install an additional dependency to make this demo compatible with Webpack 5. If you receive a compilation error related to a "buffer" or "url" package, please run `npm install -D buffer url` and try again. This will be resolved in a future release of Fluid Framework.
+
+## Next steps
+
+- Try extending the demo with more key/value pairs in the `additionalDetails` field in `userConfig`; see the sketch after this list.
+- Consider integrating audience into a collaborative application which utilizes distributed data structures such as SharedMap or SharedString.
+- Learn more about [Audience](https://fluidframework.com/docs/build/audience/).
+
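+For instance, extending the demo's `additionalDetails` might look like the following sketch (the extra `team` and `role` fields are arbitrary examples, not part of the tutorial):
+
+```js
+additionalDetails: {
+    email: userName.replace(/\s/g, "") + "@example.com",
+    date: new Date().toLocaleDateString("en-US"),
+    team: "Contoso Design", // hypothetical extra key/value pair
+    role: "editor",         // hypothetical extra key/value pair
+},
+```
+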
+> [!TIP]
+> When you make changes to the code the project will automatically rebuild and the application server will reload. However, if you make changes to the container schema, they will only take effect if you close and restart the application server. To do this, give focus to the Command Prompt and press Ctrl-C twice. Then run `npm run start` again.
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
You can exclude certain types of telemetry from sampling. In this example, data
For more information, see [Sampling in Application Insights](../azure-monitor/app/sampling.md).
+## Enable SQL query collection
+
+Application Insights automatically collects data on dependencies for HTTP requests, database calls, and several bindings. For more information, see [Dependencies](./functions-monitoring.md#dependencies). For SQL calls, the names of the server and database are always collected and stored, but SQL query text isn't collected by default. You can use `dependencyTrackingOptions.enableSqlCommandTextInstrumentation` to enable SQL query text logging by setting (at minimum) the following in your [host.json file](./functions-host-json.md#applicationinsightsdependencytrackingoptions):
+
+```json
+"logging": {
+ "applicationInsights": {
+ "enableDependencyTracking": true,
+ "dependencyTrackingOptions": {
+ "enableSqlCommandTextInstrumentation": true
+ }
+ }
+}
+```
+
+For more information, see [Advanced SQL tracking to get full SQL query](../azure-monitor/app/asp-net-dependencies.md#advanced-sql-tracking-to-get-full-sql-query).
+ ## Configure scale controller logs _This feature is in preview._
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
There are other function app configuration options in the [host.json](functions-
Example connection string values are truncated for readability. > [!NOTE]
-> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
-
-> [!IMPORTANT]
-> Do not use an [instrumentation key](../azure-monitor/app/separate-resources.md#about-resources-and-instrumentation-keys) and a [connection string](../azure-monitor/app/sdk-connection-string.md#overview) simultaneously. Whichever was set last will take precedence.
+> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
## APPINSIGHTS_INSTRUMENTATIONKEY
-The instrumentation key for Application Insights. Only use one of `APPINSIGHTS_INSTRUMENTATIONKEY` or `APPLICATIONINSIGHTS_CONNECTION_STRING`. When Application Insights runs in a sovereign cloud, use `APPLICATIONINSIGHTS_CONNECTION_STRING`. For more information, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
+The instrumentation key for Application Insights. Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. When possible, use `APPLICATIONINSIGHTS_CONNECTION_STRING`. When Application Insights runs in a sovereign cloud, you must use `APPLICATIONINSIGHTS_CONNECTION_STRING`. For more information, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
|Key|Sample value| |||
The instrumentation key for Application Insights. Only use one of `APPINSIGHTS_I
## APPLICATIONINSIGHTS_CONNECTION_STRING
-The connection string for Application Insights. Use `APPLICATIONINSIGHTS_CONNECTION_STRING` instead of `APPINSIGHTS_INSTRUMENTATIONKEY` in the following cases:
+The connection string for Application Insights. When possible, use `APPLICATIONINSIGHTS_CONNECTION_STRING` instead of `APPINSIGHTS_INSTRUMENTATIONKEY`. Using `APPLICATIONINSIGHTS_CONNECTION_STRING` is required in the following cases:
+ When your function app requires the added customizations supported by using the connection string. + When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint.
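+
+A connection string value generally has the following shape (a sketch with placeholder values; your actual string comes from your Application Insights resource):
+
+```
+InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/
+```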
azure-functions Functions Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-concurrency.md
Using dynamic concurrency provides the following benefits:
### Dynamic concurrency configuration
-Dynamic concurrency can be enabled at the host level in the host.json file. When, enabled any binding extensions used by your function app that support dynamic concurrency will adjust concurrency dynamically as needed. Dynamic concurrency settings override any manually configured concurrency settings for triggers that support dynamic concurrency.
+Dynamic concurrency can be enabled at the host level in the host.json file. When enabled, any binding extensions used by your function app that support dynamic concurrency adjust concurrency dynamically as needed. Dynamic concurrency settings override any manually configured concurrency settings for triggers that support dynamic concurrency.
By default, dynamic concurrency is disabled. With dynamic concurrency enabled, concurrency starts at 1 for each function, and is adjusted up to an optimal value, which is determined by the host.
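
For example, enabling dynamic concurrency at the host level is a small host.json change (a minimal sketch; the property names match the concurrency settings in the host.json reference):

```json
{
    "version": "2.0",
    "concurrency": {
        "dynamicConcurrencyEnabled": true,
        "snapshotPersistenceEnabled": true
    }
}
```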
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
Title: host.json reference for Azure Functions 2.x
description: Reference documentation for the Azure Functions host.json file with the v2 runtime. Previously updated : 04/28/2020 Last updated : 11/16/2022 # host.json reference for Azure Functions 2.x and later
The following sample *host.json* file for version 2.x+ has all possible options
"batchSize": 1000, "flushTimeout": "00:00:30" },
+ "concurrency": {
+ "dynamicConcurrencyEnabled": true,
+ "snapshotPersistenceEnabled": true
+ },
"extensions": { "blobs": {}, "cosmosDb": {},
The following sample *host.json* file for version 2.x+ has all possible options
"excludedTypes" : "Dependency;Event", "includedTypes" : "PageView;Trace" },
+ "dependencyTrackingOptions": {
+ "enableSqlCommandTextInstrumentation": true
+ },
"enableLiveMetrics": true, "enableDependencyTracking": true, "enablePerformanceCountersCollection": true,
For the complete JSON structure, see the earlier [example host.json file](#sampl
| Property | Default | Description | | | | | | samplingSettings | n/a | See [applicationInsights.samplingSettings](#applicationinsightssamplingsettings). |
+| dependencyTrackingOptions | n/a | See [applicationInsights.dependencyTrackingOptions](#applicationinsightsdependencytrackingoptions). |
| enableLiveMetrics | true | Enables live metrics collection. | | enableDependencyTracking | true | Enables dependency tracking. | | enablePerformanceCountersCollection | true | Enables Kudu performance counters collection. |
For more information about these settings, see [Sampling in Application Insights
| enableW3CDistributedTracing | true | Enables or disables support of W3C distributed tracing protocol (and turns on legacy correlation schema). Enabled by default if `enableHttpTriggerExtendedInfoCollection` is true. If `enableHttpTriggerExtendedInfoCollection` is false, this flag applies to outgoing requests only, not incoming requests. | | enableResponseHeaderInjection | true | Enables or disables injection of multi-component correlation headers into responses. Enabling injection allows Application Insights to construct an Application Map to when several instrumentation keys are used. Enabled by default if `enableHttpTriggerExtendedInfoCollection` is true. This setting doesn't apply if `enableHttpTriggerExtendedInfoCollection` is false. |
+### applicationInsights.dependencyTrackingOptions
+
+|Property | Default | Description |
+| | | |
+| enableSqlCommandTextInstrumentation | false | Enables collection of the full text of SQL queries, which is disabled by default. For more information on collecting SQL query text, see [Advanced SQL tracking to get full SQL query](../azure-monitor/app/asp-net-dependencies.md#advanced-sql-tracking-to-get-full-sql-query). |
+ ### applicationInsights.snapshotConfiguration For more information on snapshots, see [Debug snapshots on exceptions in .NET apps](../azure-monitor/app/snapshot-debugger.md) and [Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](../azure-monitor/app/snapshot-debugger-troubleshoot.md).
Configuration settings for a custom handler. For more information, see [Azure Fu
Configuration setting can be found in [bindings for Durable Functions](durable/durable-functions-bindings.md#host-json).
+## concurrency
+
+Enables dynamic concurrency for specific bindings in your function app. For more information, see [Dynamic concurrency](./functions-concurrency.md#dynamic-concurrency).
+
+```json
+ {
+ "concurrency": {
+ "dynamicConcurrencyEnabled": true,
+ "snapshotPersistenceEnabled": true
+ }
+ }
+```
+
+|Property | Default | Description |
+| | | |
+| dynamicConcurrencyEnabled | false | Enables dynamic concurrency behaviors for all triggers supported by this feature, which is off by default. |
+| snapshotPersistenceEnabled | true | Learned concurrency values are periodically persisted to storage so new instances start from those values instead of starting from 1 and having to redo the learning. |
+ ## eventHub Configuration settings can be found in [Event Hub triggers and bindings](functions-bindings-event-hubs.md#host-json).
An array of one or more names of files that are monitored for changes that requi
## Override host.json values
-There may be instances where you wish to configure or modify specific settings in a host.json file for a specific environment, without changing the host.json file itself. You can override specific host.json values by creating an equivalent value as an application setting. When the runtime finds an application setting in the format `AzureFunctionsJobHost__path__to__setting`, it overrides the equivalent host.json setting located at `path.to.setting` in the JSON. When expressed as an application setting, the dot (`.`) used to indicate JSON hierarchy is replaced by a double underscore (`__`).
+There may be instances where you wish to configure or modify specific settings in a host.json file for a specific environment, without changing the host.json file itself. You can override specific host.json values by creating an equivalent value as an application setting. When the runtime finds an application setting in the format `AzureFunctionsJobHost__path__to__setting`, it overrides the equivalent host.json setting located at `path.to.setting` in the JSON. When expressed as an application setting, the dot (`.`) used to indicate JSON hierarchy is replaced by a double underscore (`__`).
For example, say that you wanted to disable Application Insights sampling when running locally. If you changed the local host.json file to disable Application Insights, this change might get pushed to your production app during deployment. The safer way to do this is to instead create an application setting as `"AzureFunctionsJobHost__logging__applicationInsights__samplingSettings__isEnabled":"false"` in the `local.settings.json` file. You can see this in the following `local.settings.json` file, which doesn't get published:
For example, say that you wanted to disable Application Insight sampling when ru
} ```
+Overriding host.json settings using environment variables follows the ASP.NET Core naming conventions. When the element structure includes an array, the numeric array index should be treated as an additional element name in this path. For more information, see [Naming of environment variables](/aspnet/core/fundamentals/configuration/#naming-of-environment-variables).
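+
+For example, to override the first element of the `watchDirectories` array in host.json, you might use an application setting like the following sketch (the `Shared` value is only an illustration):
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureFunctionsJobHost__watchDirectories__0": "Shared"
+  }
+}
+```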
+ ## Next steps > [!div class="nextstepaction"]
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
To download the publishing profile of your function app:
1. In [GitHub](https://github.com/), go to your repository.
-1. Select **Security > Secrets and variables > Actions**.
+1. Select **Settings > Secrets > Actions**.
1. Select **New repository secret**.
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
The [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation.
[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account [authentication]: azure-maps-authentication.md
-[Host daemon]: /azure/azure-maps/how-to-secure-daemon-app#host-a-daemon-on-non-azure-resources
+[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
[.NET standard]: /dotnet/standard/net-standard?tabs=net-standard-2-0 [Rest API]: /rest/api/maps/ [.NET Standard versions]: https://dotnet.microsoft.com/platform/dotnet-standard#versions
The [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation.
[search-api]: /dotnet/api/azure.maps.search [Identity library .NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet [defaultazurecredential.NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet#defaultazurecredential
-[NuGet]: https://www.nuget.org/
+[NuGet]: https://www.nuget.org/
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
description: How to develop applications that incorporate Azure Maps using the JavaScript SDK Developers Guide. Previously updated : 11/07/2021 Last updated : 11/15/2021
The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searchin
> az maps account create --kind "Gen2" --account-name "myMapAccountName" --resource-group "<resource group>" --sku "G2" > ```
+## Create a Node.js project
+
+The following example creates a new directory named _mapsDemo_ and initializes a Node.js project in it using npm:
+
+```powershell
+mkdir mapsDemo
+cd mapsDemo
+npm init
+```
+ ## Install the search package To use Azure Maps JavaScript SDK, you'll need to install the search package. Each of the Azure Maps services including search, routing, rendering and geolocation are each in their own package.
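
For example, you can install the search package with npm (the package name matches the services table below):

```powershell
npm install @azure/maps-search
```
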
mapsDemo
+-- search.js ```
-### Azure Maps search service
+### Azure Maps services
-| Service Name  | NPM package  | Samples  |
+| Service Name  | npm packages | Samples  |
||-|--|
-| [Search][search readme] | [Azure.Maps.Search][search package] | [search samples][search sample] |
+| [Search][search readme] | [@azure/maps-search][search package] | [search samples][search sample] |
| [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
-## Create a Node.js project
-
-The example below creates a new directory then a Node.js program named _mapsDemo_ using NPM:
-
-```powershell
-mkdir mapsDemo
-cd mapsDemo
-npm init
-```
- ## Create and authenticate a MapsSearchClient You'll need a `credential` object for authentication when creating the `MapsSearchClient` object used to access the Azure Maps search APIs. You can use either an Azure Active Directory (Azure AD) credential or an Azure subscription key to authenticate. For more information on authentication, see [Authentication with Azure Maps][authentication].
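
As a rough sketch of the subscription key approach (assuming the key is stored in a `MAPS_SUBSCRIPTION_KEY` environment variable):

```js
const { AzureKeyCredential } = require("@azure/core-auth");
const { MapsSearchClient } = require("@azure/maps-search");

// Authenticate with an Azure Maps subscription key. For Azure AD, pass a
// TokenCredential plus your Azure Maps client ID instead.
const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);
const client = new MapsSearchClient(credential);
```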
main().catch((err) => {
[authentication]: azure-maps-authentication.md [Identity library]: /javascript/api/overview/azure/identity-readme
-[Host daemon]: /azure/azure-maps/how-to-secure-daemon-app#host-a-daemon-on-non-azure-resources
+[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
[dotenv]: https://github.com/motdotla/dotenv#readme [search package]: https://www.npmjs.com/package/@azure/maps-search
main().catch((err) => {
[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md [js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
-[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
+[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
To learn more, please see:
> [Show route on the map](./map-route.md) > [!div class="nextstepaction"]
-> [Azure Maps NPM Package](https://www.npmjs.com/package/azure-maps-rest )
+> [Azure Maps npm Package](https://www.npmjs.com/package/azure-maps-rest )
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
See also the [Azure Maps Glossary](./glossary.md) for an in-depth list of termin
## Web SDK side-by-side examples
-The following is a collection of code samples for each platform that cover common use cases to help you migrate your web application from Bing Maps V8 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript; however, Azure Maps also provides TypeScript definitions as an additional option through an [NPM module](./how-to-use-map-control.md).
+The following is a collection of code samples for each platform that cover common use cases to help you migrate your web application from Bing Maps V8 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript; however, Azure Maps also provides TypeScript definitions as an additional option through an [npm module](./how-to-use-map-control.md).
**Topics**
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
In addition to this API, Azure Maps provides many time zone APIs. These APIs con
Azure Maps provides client libraries for the following programming languages:
-* JavaScript, TypeScript, Node.js ΓÇô [documentation](how-to-use-services-module.md) \| [NPM package](https://www.npmjs.com/package/azure-maps-rest)
+* JavaScript, TypeScript, Node.js ΓÇô [documentation](how-to-use-services-module.md) \| [npm package](https://www.npmjs.com/package/azure-maps-rest)
These Open-source client libraries are for other programming languages:
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
Title: Add a bubble layer to an Azure Maps Power BI visual
-description: In this article, you will learn how to use the bubble layer in an Azure Maps Power BI visual.
+description: In this article, you'll learn how to use the bubble layer in an Azure Maps Power BI visual.
Previously updated : 11/29/2021 Last updated : 11/14/2022
Initially all bubbles have the same fill color. If a field is passed into the **
| Setting | Description | |--|-|
-| Size | The size of each bubble. This option is hidden when a field is passed into the **Size** bucket of the **Fields** pane. Additional options will appear as outlined in the [Bubble size scaling](#bubble-size-scaling) topic further down in this article. |
+| Size | The size of each bubble. This option is hidden when a field is passed into the **Size** bucket of the **Fields** pane. More options will appear as outlined in the [Bubble size scaling](#bubble-size-scaling) section further down in this article. |
| Fill color | Color of each bubble. This option is hidden when a field is passed into the **Legend** bucket of the **Fields** pane and a separate **Data colors** section will appear in the **Format** pane. | | Fill transparency | Transparency of each bubble. | | High-contrast outline | Makes the outline color contrast with the fill color for better accessibility by using a high-contrast variant of the fill color. | | Outline color | Color that outlines the bubble. This option is hidden when the **High-contrast outline** option is enabled. | | Outline transparency | Transparency of the outline. | | Outline width | Width of the outline in pixels. |
-| Blur | Amount of blur applied to the outline. A value of 1 blurs the bubbles such that only the center point has no transparency. A value of 0 apply any blur effect. |
+| Blur | Amount of blur applied to the outline. A value of one blurs the bubbles such that only the center point has no transparency. A value of 0 doesn't apply any blur effect. |
| Pitch alignment | Specifies how the bubbles look when the map is pitched. <br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;ΓÇó Viewport - Bubbles appear on their edge on the map relative to viewport. (default)<br/>&nbsp;&nbsp;&nbsp;&nbsp;ΓÇó Map - Bubbles are rendered flat on the surface of the map. |
-| Zoom scale | Amount the bubbles should scale relative to the zoom level. A zoom scale of one means no scaling. Large values will make bubbles smaller when zoomed out and larger when zoomed in. This helps to reduce the clutter on the map when zoomed out, yet ensures points stand out more when zoomed in. A value of 1 does not apply any scaling. |
+| Zoom scale | Amount the bubbles should scale relative to the zoom level. A zoom scale of one means no scaling. Large values will make bubbles smaller when zoomed out and larger when zoomed in. This helps to reduce the clutter on the map when zoomed out, yet ensures points stand out more when zoomed in. A value of 1 doesn't apply any scaling. |
| Min zoom | Minimum zoom level tiles are available. | | Max zoom | Maximum zoom level tiles are available. | | Layer position | Specifies the position of the layer relative to other map layers. |
If a field is passed into the **Size** bucket of the **Fields** pane, the bubble
||--| | Min size | Minimum bubble size when scaling the data.| | Max size | Maximum bubble size when scaling the data.|
-| Size scaling method | Scaling algorithm used to determine relative bubble size.<br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;ΓÇó Linear - Range of input data linearly mapped to the min and max size. (default)<br/>&nbsp;&nbsp;&nbsp;&nbsp;ΓÇó Log - Range of input data logarithmically mapped to the min and max size.<br/>&nbsp;&nbsp;&nbsp;&nbsp;ΓÇó Cubic-Bezier - Specify X1, Y1, X2, Y2 values of a Cubic-Bezier curve to create a custom scaling method. |
+| Size scaling method | Scaling algorithm used to determine relative bubble size.<br/><br/>&nbsp;ΓÇó Linear: Range of input data linearly mapped to the min and max size. (default)<br/>&nbsp;ΓÇó Log: Range of input data logarithmically mapped to the min and max size.<br/>&nbsp;ΓÇó Cubic-Bezier: Specify X1, Y1, X2, Y2 values of a Cubic-Bezier curve to create a custom scaling method. |
When the **Size scaling method** is set to **Log**, the following options will be made available.
When the **Size scaling method** is set to **Cubic-Bezier**, the following optio
> [!TIP] > [https://cubic-bezier.com/](https://cubic-bezier.com/) has a handy tool for creating the parameters for Cubic-Bezier curves.
+## Category labels
+
+When displaying a **bubble layer** map, the **Category labels** settings will become active in the **Format visual** pane.
++
+The **Category labels** settings enable you to customize font settings such as font type, size, and color, as well as the category labels' background color and transparency.
++ ## Next steps Change how your data is displayed on the map:
azure-maps Power Bi Visual Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-get-started.md
Title: Get started with Azure Maps Power BI visual
-description: In this article, you will learn how to use Azure Maps Power BI visual.
+description: In this article, you'll learn how to use Azure Maps Power BI visual.
Last updated 11/29/2021
This article shows how to use the Microsoft Azure Maps Power BI visual.
> [!NOTE] > This visual can be created and viewed in both Power BI Desktop and the Power BI service. The steps and illustrations in this article are from Power BI Desktop.
-The Azure Maps Power BI visual provides a rich set of data visualizations for spatial data on top of a map. It is estimated that over 80% of business data has a location context. The Azure Maps Power BI visual can be used to gain insights into how this location context relates to and influences your business data.
+The Azure Maps Power BI visual provides a rich set of data visualizations for spatial data on top of a map. It's estimated that over 80% of business data has a location context. The Azure Maps Power BI visual can be used to gain insights into how this location context relates to and influences your business data.
-![Power BI desktop with the Azure Maps Power BI visual displaying business data](media/power-bi-visual/azure-maps-visual-hero.png)
## What is sent to Azure?
To learn more, about privacy and terms of use related to the Azure Maps Power BI
There are a few considerations and requirements for the Azure Maps Power BI visual: -- The Azure Maps Power BI visual must be enabled in Power BI Desktop. To enable Azure Maps Power BI visual, select **File** &gt; **Options and Settings** &gt; **Options** &gt; **Preview features**, then select the **Azure Maps Visual** checkbox. If the Azure Maps visual is not available after enabling this setting, it's likely that a tenant admin switch in the Admin Portal needs to be enabled.
+- The Azure Maps Power BI visual must be enabled in Power BI Desktop. To enable Azure Maps Power BI visual, select **File** &gt; **Options and Settings** &gt; **Options** &gt; **Preview features**, then select the **Azure Maps Visual** checkbox. If the Azure Maps visual isn't available after enabling this setting, it's likely that a tenant admin switch in the Admin Portal needs to be enabled.
- The data set must have fields that contain **latitude** and **longitude** information. ## Use the Azure Maps Power BI visual Once the Azure Maps Power BI visual is enabled, select the **Azure Maps** icon from the **Visualizations** pane. Power BI creates an empty Azure Maps visual design canvas. While in preview, another disclaimer is displayed. Take the following steps to load the Azure Maps visual: 1. In the **Fields** pane, drag data fields that contain latitude and longitude coordinate information into the **Latitude** and/or **Longitude** buckets. This is the minimal data needed to load the Azure Maps visual.
- :::image type="content" source="media/power-bi-visual/bubble-layer.png" alt-text="Azure Maps visual displaying points as bubbles on the map after latitude and longitude fields provided.":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer.png" alt-text="A screenshot of the Azure Maps visual displaying points as bubbles on the map after latitude and longitude fields are provided." lightbox="media/power-bi-visual/bubble-layer.png":::
2. To color the data based on categorization, drag a categorical field into the **Legend** bucket of the **Fields** pane. In this example, we're using the **AdminDistrict** column (also known as state or province).
- :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color.png" alt-text="Azure Maps visual displaying points as colored bubbles on the map after legend field provided.":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color.png" alt-text="A screenshot of the Azure Maps visual displaying points as colored bubbles on the map after legend field is provided." lightbox="media/power-bi-visual/bubble-layer-with-legend-color.png":::
> [!NOTE] > The built-in legend control for Power BI does not currently appear in this preview. 3. To scale the data relatively, drag a measure into the **Size** bucket of the **Fields** pane. In this example, we're using **Sales** column.
- :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png" alt-text="Azure Maps visual displaying points as colored and scaled bubbles on the map after size field provided.":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png" alt-text="A screenshot of the Azure Maps visual displaying points as colored and scaled bubbles on the map demonstrating the size field." lightbox="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png":::
4. Use the options in the **Format** pane to customize how data is rendered. The following image is the same map as above, but with the bubble layers fill transparency option set to 50% and the high-contrast outline option enabled.
- :::image type="content" source="media/power-bi-visual/bubble-layer-styled.png" alt-text="Azure Maps visual displaying points as bubbles on the map with a custom style.":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer-styled.png" alt-text="A screenshot of the Azure Maps visual displaying points as bubbles on the map with a custom style." lightbox="media/power-bi-visual/bubble-layer-styled.png":::
+
+5. You can also show or hide labels in the **Format** pane. The following two images show maps with the **Show labels** setting turned on and off:
+
+ :::image type="content" source="media/power-bi-visual/show-labels-on.png" alt-text="A screenshot of the Azure Maps visual displaying a map with the show labels setting turned on in the style section of the format pane in Power BI." lightbox="media/power-bi-visual/show-labels-on.png":::
+
+ :::image type="content" source="media/power-bi-visual/show-labels-off.png" alt-text="A screenshot of the Azure Maps visual displaying a map with the show labels setting turned off in the style section of the format pane in Power BI." lightbox="media/power-bi-visual/show-labels-off.png":::
## Fields pane buckets
The following data buckets are available in the **Fields** pane of the Azure Map
## Map settings
-The **Map settings** section of the Format pane provide options for customizing how the map is displayed and reacts to updates.
+The **Map settings** section of the **Format** pane provides options for customizing how the map is displayed and reacts to updates.
+
+The **Map settings** section is divided into three subsections: [style](#style), [view](#view), and [controls](#controls).
+
+### Style
-| Setting | Description |
-||--|
-| Auto zoom | Automatically zooms the map into the data loaded through the **Fields** pane of the visual. As the data changes, the map will update its position accordingly. When the slider is in the **Off** position, more map view settings are displayed for the default map view. |
-| World wrap | Allows the user to pan the map horizontally infinitely. |
-| Style picker | Adds a button to the map that allows the report readers to change the style of the map. |
-| Navigation controls | Adds buttons to the map as another method to allow the report readers to zoom, rotate, and change the pitch of the map. See this document on [Navigating the map](map-accessibility.md#navigating-the-map) for details on all the different ways users can navigate the map. |
-| Map style | The style of the map. See the [supported map styles](supported-map-styles.md) document for more information. |
-| Selection control | Adds a button that allows the user to choose between different modes to select data on the map; circle, rectangle, polygon (lasso), or travel time or distance. When drawing a polygon, to complete the drawing; click on the first point, or double-click the map on the last point, or press the `c` key. |
+The following settings are available in the **Style** section:
-### Map view settings
+| Setting | Description |
+|-|--|
+| Style | The style of the map. The dropdown list contains [greyscale light][gs-light], [greyscale dark][gs-dark], [night][night], [road shaded relief][RSR], [satellite][satellite] and [satellite road labels][satellite RL]. |
+| Show labels | A toggle switch that enables you to either show or hide map labels. For more information, see step 5 in the previous section. |
-If the **Auto zoom** slider is in the **Off** position, the following settings are displayed and allow the user to specify the default map view information.
+### View
+
+The following settings in the **View** section enable the user to specify the default map view information when the **Auto zoom** setting is set to **Off**.
| Setting | Description | |||
+| Auto zoom | Automatically zooms the map into the data loaded through the **Fields** pane of the visual. As the data changes, the map updates its position accordingly. When **Auto zoom** is set to **Off**, the remaining settings in this section become active, enabling the user to define the default map view. |
| Zoom | The default zoom level of the map. Can be a number between 0 and 22. |
-| Center latitude | The default latitude at the center of the map. |
-| Center longitude | The default longitude at the center of the map. |
+| Center latitude | The default latitude of the center of the map. |
+| Center longitude | The default longitude of the center of the map. |
| Heading | The default orientation of the map in degrees, where 0 is north, 90 is east, 180 is south, and 270 is west. Can be any number between 0 and 360. | | Pitch | The default tilt of the map in degrees between 0 and 60, where 0 is looking straight down at the map. |
+### Controls
+
+The following settings are available in the **Controls** section:
+
+| Setting | Description |
+|--|--|
+| World wrap | Allows the user to pan the map horizontally infinitely. |
+| Style picker | Adds a button to the map that allows the report readers to change the style of the map. |
+| Navigation | Adds buttons to the map as another method to allow the report readers to zoom, rotate, and change the pitch of the map. See this document on [Navigating the map](map-accessibility.md#navigating-the-map) for details on all the different ways users can navigate the map. |
+| Selection | Adds a button that allows the user to choose between different modes to select data on the map: circle, rectangle, polygon (lasso), or travel time or distance. To complete drawing a polygon, select the first point, double-click on the last point on the map, or press the `c` key. |
+| Geocoding culture | The default, **Auto**, refers to the Western address system. The only other option, **JA**, refers to the Japanese address system. In the Western address system, you begin with the address details and then proceed to the larger categories such as city, state, and postal code. In the Japanese address system, the larger categories are listed first and finish with the address details. |
+ ## Considerations and Limitations The Azure Maps Power BI visual is available in the following services and applications:
The Azure Maps Power BI visual is available in the following services and applic
**Where is Azure Maps available?**
-At this time, Azure Maps is currently available in all countries and regions except the following:
+At this time, Azure Maps is available in all countries and regions except:
- China - South Korea
Customize the visual:
> [!div class="nextstepaction"] > [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)+
+[gs-light]: supported-map-styles.md#grayscale_light
+[gs-dark]: supported-map-styles.md#grayscale_dark
+[night]: supported-map-styles.md#night
+[RSR]: supported-map-styles.md#road_shaded_relief
+[satellite]: supported-map-styles.md#satellite
+[satellite RL]: supported-map-styles.md#satellite_road_labels
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Azure Maps Python SDK supports Python version 3.7 or later. Check the [Azure S
Azure Maps JavaScript/TypeScript SDK supports LTS versions of [Node.js][Node.js] including versions in Active status and Maintenance status.
-| Service Name  | npm package  | Samples  |
+| Service Name  | npm packages | Samples  |
||-|--| | [Search][js search readme] | [@azure/maps-search][js search package] | [search samples][js search sample] | | [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
For security best practices, see [Authentication and authorization best practice
The Azure Maps SDKs go through regular security testing along with any external dependency libraries that may be used by the SDKs. Any known security issue is fixed in a timely manner and released to production. If your application points to the latest major version of the hosted version of the Azure Maps Web SDK, it will automatically receive all minor version updates that will include security related fixes.
-If self-hosting the Azure Maps Web SDK via the NPM module, be sure to use the caret (^) symbol to in combination with the Azure Maps NPM package version number in your `package.json` file so that it will always point to the latest minor version.
+If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the caret (^) symbol in combination with the Azure Maps npm package version number in your `package.json` file so that it always points to the latest minor version.
```json "dependencies": {
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
The OMS Agent has limited customization and hardening support for Linux.
The following are currently supported: - SELinux (Marketplace images for CentOS and RHEL with their default settings)
+- FIPS (Marketplace images for CentOS and RHEL 6/7 with their default settings)
The following aren't supported: - CIS
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
To install DCR Config Generator, you need:
1. PowerShell version 5.1 or higher. We recommend using PowerShell version 7.1.3 or higher. 1. Read access for the specified workspace resources. 1. The `Az Powershell` module to pull workspace agent configuration information.
-1. The Azure credentials for running `Connect-AzAccount` and `Select-AzSubscription`, which set the context for the script to run.
+1. The Azure credentials for running `Connect-AzAccount` and `Select-AzContext`, which set the context for the script to run.
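For illustration, setting the context before running the script might look like the following sketch; the context name shown is a placeholder (list available contexts with `Get-AzContext -ListAvailable`):

```powershell
# Sign in to Azure (opens an interactive prompt).
Connect-AzAccount

# Select the context that contains the target Log Analytics workspace.
# The context name below is a placeholder.
Select-AzContext -Name "MySubscription (00000000-0000-0000-0000-000000000000)"
```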
To install DCR Config Generator:
azure-monitor Diagnostics Extension Windows Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-windows-install.md
The protected settings are defined in the [PrivateConfig element](diagnostics-ex
{ "storageAccountName": "mystorageaccount", "storageAccountKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
- "storageAccountEndPoint": "https://mystorageaccount.blob.core.windows.net"
+ "storageAccountEndPoint": "https://core.windows.net"
} ```
The following minimal example of a configuration file enables collection of diag
"PrivateConfig": { "storageAccountName": "mystorageaccount", "storageAccountKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
- "storageAccountEndPoint": "https://mystorageaccount.blob.core.windows.net"
+ "storageAccountEndPoint": "https://core.windows.net"
} } ```
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
And then defining these elements for the resulting alert actions using:
1. In the **Conditions** pane, select the **Chart period**. 1. The **Preview** chart shows you the results of your selection.
- 1. In the **Alert logic** section:
+ 1. Select values for each of these fields in the **Alert logic** section:
|Field |Description | |||
- |Event level| Select the level of the events that this alert rule monitors. Values are: **Critical**, **Error**, **Warning**, **Informational**, **Verbose** and **All**.|
- |Status|Select the status levels for which the alert is evaluated.|
+ |Event level| Select the level of the events for this alert rule. Values are: **Critical**, **Error**, **Warning**, **Informational**, **Verbose** and **All**.|
+ |Status|Select the status levels for the alert.|
|Event initiated by|Select the user or service principal that initiated the event.|
+ ### [Resource Health alert](#tab/resource-health)
+
+ 1. In the **Conditions** pane, select values for each of these fields:
+
+ |Field |Description |
+ |||
+ |Event status| Select the statuses of Resource Health events. Values are: **Active**, **In Progress**, **Resolved**, and **Updated**.|
+ |Current resource status|Select the current resource status. Values are: **Available**, **Degraded**, and **Unavailable**.|
+ |Previous resource status|Select the previous resource status. Values are: **Available**, **Degraded**, **Unavailable**, and **Unknown**.|
+ |Reason type|Select the cause(s) of the Resource Health events. Values are: **Platform Initiated**, **Unknown**, and **User Initiated**.|
+ ### [Service Health alert](#tab/service-health)
+
+ 1. In the **Conditions** pane, select values for each of these fields:
+
+ |Field |Description |
+ |||
+ |Services| Select the Azure services.|
+ |Regions|Select the Azure regions.|
+ |Event types|Select the type(s) of Service Health events. Values are: **Service issue**, **Planned maintenance**, **Health advisories**, and **Security advisories**.|
+ From this point on, you can select the **Review + create** button at any time.
And then defining these elements for the resulting alert actions using:
1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to the alert payload to include additional information. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot of the actions tab when creating a new activity log alert rule.":::
+ ### [Resource Health alert](#tab/resource-health)
+
+ 1. Enter values for the **Alert rule name** and the **Alert rule description**.
+ 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
+ ### [Service Health alert](#tab/service-health)
+
+ 1. Enter values for the **Alert rule name** and the **Alert rule description**.
+ 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
### [Activity log alert](#tab/activity-log)
- To create an activity log alert rule, use the **az monitor activity-log alert create** command. You can see detailed documentation on the metric alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
- To create a new activity log alert rule, use the following commands: - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource. - [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule. - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
-
+ You can find detailed documentation on the activity log alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+ ### [Resource Health alert](#tab/resource-health)
+
+ To create a new activity log alert rule using the `Resource Health` category, use the following commands:
+ - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
+ - [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
+ - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
+
+ You can find detailed documentation on the alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+
+ ### [Service Health alert](#tab/service-health)
+
+ To create a new activity log alert rule using the `Service Health` category, use the following commands:
+ - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
+ - [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
+ - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
+
+ You can find detailed documentation on the alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+
+
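For illustration only, a Service Health alert rule might be created in a single call by passing the category as a condition. The names, scope, and action group ID below are placeholders, not values from this article:

```azurecli
az monitor activity-log alert create \
  --name "service-health-alert" \
  --resource-group "myResourceGroup" \
  --scope "/subscriptions/<subscription-id>" \
  --condition category=ServiceHealth \
  --action-group "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup" \
  --description "Notify on Azure service health events"
```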
+ ## Create a new alert rule using PowerShell - To create a metric alert rule using PowerShell, use this cmdlet: [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2)
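As a sketch only (the resource IDs, names, and threshold below are illustrative), a CPU-based metric alert rule might look like this:

```powershell
# Define the condition: average CPU above 80 percent.
$condition = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80

# Create the alert rule; the target resource ID is a placeholder.
Add-AzMetricAlertRuleV2 -Name "high-cpu-alert" -ResourceGroupName "myResourceGroup" `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" `
    -Condition $condition -Severity 3 `
    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 5)
```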
azure-monitor Alerts Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-logic-apps.md
This article shows you how to create a Logic App and integrate it with an Azure Monitor Alert.
-[Azure Logic Apps](https://docs.microsoft.com/azure/logic-apps/logic-apps-overview) allows you to build and customize workflows for integration. Use Logic Apps to customize your alert notifications.
+[Azure Logic Apps](../../logic-apps/logic-apps-overview.md) allows you to build and customize workflows for integration. Use Logic Apps to customize your alert notifications.
+ Customize the alerts email, using your own email subject and body format. + Customize the alert metadata by looking up tags for affected resources or fetching a log query search result.
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
You can see all alert instances in all your Azure resources generated in the las
## Types of alerts
-There are four types of alerts. This table provides a brief description of each alert type.
+This table provides a brief description of each alert type.
See [this article](alerts-types.md) for detailed information about each alert type and how to choose which alert type best suits your needs. |Alert type|Description| |:|:| |[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics or Application Insights metrics. Metric alerts have several additional features, such as the ability to apply multiple conditions and dynamic thresholds.| |[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.|
-|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches the defined conditions.|
+|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches defined conditions. **Resource Health** alerts and **Service Health** alerts are activity log alerts that report on your service and resource health.|
|[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.|
+|[Prometheus alerts (preview)](alerts-types.md#prometheus-alerts-preview)|Prometheus alerts are used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language.|
## Out-of-the-box alert rules (preview)
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
This article describes the kinds of Azure Monitor alerts you can create, and helps you understand when to use each type of alert.
-There are five types of alerts:
+There are four types of alerts:
- [Metric alerts](#metric-alerts)-
+- [Log alerts](#log-alerts)
- [Activity log alerts](#activity-log-alerts)
+ - [Service Health alerts](#service-health-alerts)
+ - [Resource Health alerts](#resource-health-alerts)
- [Smart detection alerts](#smart-detection-alerts) - [Prometheus alerts](#prometheus-alerts-preview) (preview)+ ## Choosing the right alert type This table can help you decide when to use what type of alert. For more detailed information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
This table can help you decide when to use what type of alert. For more detailed
|||| |Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. We recommend using metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time-series that are monitored. | |Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for log alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
-|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
+|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource. Service Health alerts and Resource Health alerts can let you know when there is an issue with one of your services or resources.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
|Prometheus alerts (preview)| Prometheus alerts are primarily used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language. | There is no charge for Prometheus alerts during the preview period. | ## Metric alerts
Activity log alert rules are Azure resources, so they can be created by using an
An activity log alert only monitors events in the subscription in which the alert is created.
+### Service Health alerts
+
+Service Health alerts are a type of activity log alert. [Service Health](../../service-health/overview.md) lets you know about outages, planned maintenance activities, and other health advisories because the authenticated Service Health experience knows which services and resources you currently use.
+
+The best way to use Service Health is to set up Service Health alerts to notify you using your preferred communication channels when service issues, planned maintenance, or other changes may affect the Azure services and regions you use.
+
+### Resource Health alerts
+
+Resource Health alerts are a type of activity log alert. [Resource Health overview](../../service-health/resource-health-overview.md) helps you diagnose and get support for service problems that affect your Azure resources. It reports on the current and past health of your resources. Resource Health relies on signals from different Azure services to assess whether a resource is healthy. If a resource is unhealthy, Resource Health analyzes additional information to determine the source of the problem. It also reports on actions that Microsoft is taking to fix the problem and identifies things that you can do to address it.
+ ## Smart Detection alerts After setting up Application Insights for your project, when your app generates a certain minimum amount of data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and in particular the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 10/31/2022 Last updated : 11/15/2022 ms.devlang: csharp, java, javascript, vb
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
Title: Filtering and preprocessing in the Application Insights SDK | Microsoft Docs description: Write telemetry processors and telemetry initializers for the SDK to filter or add properties to the data before the telemetry is sent to the Application Insights portal. Previously updated : 11/23/2016 Last updated : 11/14/2022 ms.devlang: csharp, javascript, python
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 09/20/2022 Last updated : 11/14/2022 # Application Insights overview
Application Insights provides other features including, but not limited to:
- [Live Metrics](live-stream.md) – observe activity from your deployed application in real time with no effect on the host environment - [Availability](availability-overview.md) – also known as "Synthetic Transaction Monitoring", probe your application's external endpoint(s) to test the overall availability and responsiveness over time-- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](https://learn.microsoft.com/training/paths/github-administration-products/) or [Azure DevOps](https://learn.microsoft.com/azure/devops/?view=azure-devops) work items in context of Application Insights data
+- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/?view=azure-devops) work items in context of Application Insights data
- [Usage](usage-overview.md) – understand which features are popular with users and how users interact and use your application - [Smart Detection](proactive-diagnostics.md) – automatic failure and anomaly detection through proactive telemetry analysis
-In addition, Application Insights supports [Distributed Tracing](distributed-tracing.md), also known as "distributed component correlation". This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a given execution or transaction. The ability to trace activity end-to-end is increasingly important for applications that have been built as distributed components or [microservices](https://learn.microsoft.com/azure/architecture/guide/architecture-styles/microservices).
+In addition, Application Insights supports [Distributed Tracing](distributed-tracing.md), also known as "distributed component correlation". This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a given execution or transaction. The ability to trace activity end-to-end is increasingly important for applications that have been built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices).
The [Application Map](app-map.md) allows a high-level, top-down view of the application architecture and at-a-glance visual references to component health and responsiveness.
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Title: Application Map in Azure Application Insights | Microsoft Docs description: Monitor complex application topologies with Application Map and Intelligent view. Previously updated : 05/16/2022 Last updated : 11/15/2022 ms.devlang: csharp, java, javascript, python
# Application Map: Triage distributed applications
+Application maps represent the logical structure of a distributed application. Individual components of the application are determined by their "roleName" or "name" property in recorded telemetry. These components are represented as circles on the map and are referred to as "nodes." HTTP calls between nodes are represented as arrows connecting these nodes, referred to as "connectors" or "edges." The node that makes the call is the "source" of the call, and the receiving node is the "target" of the call.
+ Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies and has health KPI and alerts status. You can select any component to get more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also select Azure diagnostics, such as SQL Database Advisor recommendations. Application Map also features [Intelligent view](#application-map-intelligent-view-public-preview) to assist with fast service health investigations.
For the [official definitions](https://github.com/Microsoft/ApplicationInsights-
``` Alternatively, *cloud role instance* can be helpful for scenarios where a cloud role name tells you the problem is somewhere in your web front end. But you might be running multiple load-balanced servers across your web front end. Being able to drill in a layer deeper via Kusto queries and knowing if the issue is affecting all web front-end servers or instances or just one can be important.-
A scenario when you might want to override the value for cloud role instance could be if your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a specific issue. For more information about how to override the cloud role name property with telemetry initializers, see [Add properties: ITelemetryInitializer](api-filtering-sampling.md#addmodify-properties-itelemetryinitializer). +
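To make this concrete, a minimal telemetry initializer sketch is shown below. The class name and role values are illustrative, not from this article:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Stamps every telemetry item with a custom cloud role name and instance,
// so the component appears as a distinct, clearly named node on the map.
public class MapNodeNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Placeholder values; use names meaningful to your application.
        telemetry.Context.Cloud.RoleName = "frontend-west";
        telemetry.Context.Cloud.RoleInstance = "frontend-west-001";
    }
}
```

In ASP.NET Core, an initializer like this is typically registered through dependency injection, for example `services.AddSingleton<ITelemetryInitializer, MapNodeNameInitializer>();`.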
+## Application Map Filters
+
+Application Map filters reduce the number of nodes and edges shown on the map. Applying one or more filters narrows the scope of the map, producing a smaller, more focused view.
+
+### Creating Application Map filters
+
+To create a filter, select the "Add filter" button in the application map's toolbar.
++
+This pops up a dialog with three sections: 1) Select filter type, 2) Choose filter parameters, and 3) Review.
+++
+The first section has two options:
+
+1. Node filter
+1. Connector (edge) filter
+
+The contents in the other sections change based on the option selected.
+
+#### Node filters
+
+Node filters allow the user to leave only selected nodes on the map and hide the rest. A node filter checks whether each node contains a property (its name, for example) with a value that matches a search value through a given operator. If a node is removed by a node filter, all of its connectors (edges) are also removed.
+
+There are three parameters available for nodes:
+
+- "**Nodes included**" allows the user to select only nodes with matching properties or to also include source nodes, target nodes, or both in the resulting map.
+
+  - "Nodes and sources, targets": Nodes that match the search parameters are included in the resulting map, and nodes that are sources or targets for a matching node are also included, even if they don't have property values that match the search. Source and target nodes are collectively referred to as "Connected" nodes.
+
+  - "Nodes and sources": Same as above, but target nodes aren't automatically included in the results.
+
+  - "Nodes and targets": Same as above, but source nodes aren't automatically included.
+
+  - "Nodes only": All nodes in the resulting map must have a property value that matches.
+
+- "**Operator**" is the type of check that will be performed on each node's property values:
+
+  - contains
+  - !contains (not contains)
+  - == (equals)
+  - != (not equals)
+
+- "**Search value**" is the text that has to be contained, not contained, equal, or not equal to a node property value. Some of the values found in nodes that are on the map are shown in a drop-down. Any arbitrary value can be entered by clicking "Create option ..." in the drop-down.
+
+For example, in the screenshot below, the filter is configured to select **Node(s)** that **contain(s)** the text **"-west"**. **Source** and **target** nodes will also be included in the resulting map. In the same screenshot, the user can select one of the values found in the map or create an option that isn't an exact match to one found in the map.
++
+#### Connector (edge) filters
+
+Connector filters examine the properties of a connector to match a value. Connectors that don't match the filter are removed from the map. The same happens to nodes with no connectors left.
+
+Connector filters require three parameters:
+
+- "**Filter connectors by**" allows the user to choose which property of a connector to use:
+
+  - "**Error connector (highlighted red)**" selects connectors based on their color (red or not). A value can't be entered for this type of filter, only an operator that is "==" or "!=", meaning "connector with errors" and "connector without errors."
+
+  - "**Error rate**" uses the average error rate for the connector (the number of failed calls divided by the number of all calls), expressed as a percentage. For example, a value of "1" would refer to 1% failed calls.
+
+  - "**Average call duration (ms)**" uses just that: the average duration of all calls represented by the connector, in milliseconds. For example, a value of "1000" would refer to calls that averaged 1 second.
+
+  - "**Calls count**" uses the total number of calls represented by the connector.
+
+- **"Operator"** is the comparison that will be applied between the connector property and the value entered below. The options change: "Error connector" has equals/not equals options; all others have greater/less than.
+
+- **"Value"** is the comparison value for the filter. There's only one option for the "Error connector" filter: "Errors." Other filter types require a numeric value and offer a drop-down with some pre-populated entries relevant to the map.
+
+  - Some of these entries have a "(Pxx)" designation, which indicates a percentile level. For example, the "Average call duration" filter may have the value "200 (P90)", which indicates that 90% of all connectors (regardless of the number of calls they represent) have a call duration of less than 200 ms.
+
+  - When a specific number isn't shown in the drop-down, it can be typed and created by clicking "Create option." Typing "P" shows all the percentile values in the drop-down.
+
+### Review section
+
+The Review section contains textual and visual descriptions of what the filter will do, which should be helpful when learning how filters work:
+++
+### Using filters in Application Map
+
+#### Filter interactivity
+
+After configuring a filter in the "Add filter" pop-up, select "Apply" to create the filter. Several filters can be applied, and they work sequentially, from left to right. Each filter can remove further nodes and connectors, but can't add them back to the map.
+
+The filters show up as rounded buttons above the application map:
++
+Clicking the :::image type="content" source="media/app-map/image-8.png" alt-text="A screenshot of a rounded X button."::: on a filter will remove that filter. Clicking elsewhere on the button allows the user to edit the filter's values. As the user changes values in the filter, the new values are applied so that the map is a preview of the change. Clicking "Cancel" restores the filter as it was before editing.
++
+### Reusing filters
+
+Filters can be reused in two ways:
+
+- The "Copy link" button on the toolbar above the map encodes the filter information in the copied URL. This link can be saved in the browser's bookmarks or shared with others. "Copy link" preserves the duration value, but not the absolute time, so the map shown at a later time may differ from the one observed when the link was created.
+
+- The dashboard pin :::image type="content" source="media/app-map/image-10.png" alt-text="A screenshot displaying the dashboard pin button."::: is located next to the title bar of the Application Map blade. This button pins the map to a dashboard, along with the filters applied to it. This action can be useful for filters that are frequently interesting. As an example, the user can pin a map with "Error connector" filter applied to it, and the dashboard view will only show nodes that have errors in their HTTP calls.
+
+#### Filter usage scenarios
+
+There are many filter combinations. Here are some suggestions that apply to most maps and may be useful to pin on a dashboard:
+
+- Show only errors that appear significant by using the "Error connector" filter along with "Intelligent view":\
+ :::image type="content" source="media/app-map/image-11.png" alt-text="A screenshot displaying the Last 24 hours and Highlighted Errors filters.":::
+ :::image type="content" source="media/app-map/image-12.png" alt-text="A screenshot displaying the Intelligent Overview toggle.":::
+
+- Hide low-traffic connectors with no errors to quickly focus on issues that have higher impact:
+ :::image type="content" source="media/app-map/image-13.png" alt-text="A screenshot displaying the Last 24 hours, calls greater than 876, and highlighted errors filters.":::
+
+- Show high-traffic connectors with high average duration to focus on potential performance issues:
+ :::image type="content" source="media/app-map/image-14.png" alt-text="A screenshot displaying the Last 24 hours, calls greater than 3057, and average time greater than 467 filters.":::
+
+- Show a specific portion of a distributed application (requires suitable roleName naming convention):
+ :::image type="content" source="media/app-map/image-15.png" alt-text="A screenshot displaying the Last 24 hours and Connected Contains West filters.":::
+
+- Hide a dependency type that is too noisy:
+ :::image type="content" source="media/app-map/image-16.png" alt-text="A screenshot displaying the Last 24 hours and Nodes Contains Storage Accounts filters.":::
+
+- Show only connectors that have higher error rates than a specific value:
+ :::image type="content" source="media/app-map/image-17.png" alt-text="A screenshot displaying the Last 24 hours and Errors greater than 0.01 filters.":::
+++ ## Application Map Intelligent view (public preview) The following sections discuss Intelligent view.
Intelligent view has some limitations:
To provide feedback, see [Portal feedback](#portal-feedback). ++ ## Troubleshooting If you're having trouble getting Application Map to work as expected, try these steps.
Common troubleshooting questions about Intelligent view.
A dependency might appear to be failing but the model doesn't indicate it's a potential incident:
-* If this dependency has been failing for a while, the model might believe it's a regular state and not highlight the edge for you. It focuses on problem-solving in RT.
+* If this dependency has been failing for a while, the model might believe it's a regular state, and not highlight the edge for you. It focuses on problem-solving in real time.
* If this dependency has a minimal effect on the overall performance of the app, that can also make the model ignore it. * If none of the above is correct, use the **Feedback** option and describe your experience. You can help us improve future model versions.
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Title: Dependency tracking in Application Insights | Microsoft Docs description: Monitor dependency calls from your on-premises or Azure web application with Application Insights. Previously updated : 08/26/2020 Last updated : 11/15/2022 ms.devlang: csharp
For webpages, the Application Insights JavaScript SDK automatically collects AJA
## Advanced SQL tracking to get full SQL query > [!NOTE]
-> Azure Functions requires separate settings to enable SQL text collection. Within [host.json](../../azure-functions/functions-host-json.md#applicationinsights), set `"EnableDependencyTracking": true,` and `"DependencyTrackingOptions": { "enableSqlCommandTextInstrumentation": true }` in `applicationInsights`.
+> Azure Functions requires separate settings to enable SQL text collection. For more information, see [Enable SQL query collection](../../azure-functions/configure-monitoring.md#enable-sql-query-collection).
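For reference, a minimal `host.json` sketch with these settings enabled might look like the following (other settings omitted; key casing follows the published host.json reference):

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "enableDependencyTracking": true,
      "dependencyTrackingOptions": {
        "enableSqlCommandTextInstrumentation": true
      }
    }
  }
}
```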
For SQL calls, the name of the server and database is always collected and stored as the name of the collected `DependencyTelemetry`. Another field, called data, can contain the full SQL query text.
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
description: Capture exceptions from ASP.NET apps along with request telemetry.
ms.devlang: csharp Previously updated : 08/19/2022 Last updated : 11/15/2022
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
description: Search logs generated by Trace, NLog, or Log4Net.
ms.devlang: csharp Previously updated : 05/08/2019 Last updated : 11/15/2022
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
Title: Configure monitoring for ASP.NET with Azure Application Insights | Microsoft Docs description: Configure performance, availability, and user behavior analytics tools for your ASP.NET website hosted on-premises or in Azure. Previously updated : 10/12/2021 Last updated : 11/15/2022 ms.devlang: csharp
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
description: Application Insights automatically collect and visualize dependenci
ms.devlang: csharp, java, javascript Previously updated : 08/22/2022 Last updated : 11/15/2022
azure-monitor Availability Multistep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-multistep.md
Title: Monitor with multi-step web tests - Azure Application Insights
-description: Set up multi-step web tests to monitor your web applications with Azure Application Insights
+ Title: Monitor with multistep web tests - Application Insights
+description: Set up multistep web tests to monitor your web applications with Application Insights.
Last updated 07/21/2021
-# Multi-step web tests
+# Multistep web tests
-You can monitor a recorded sequence of URLs and interactions with a website via multi-step web tests. This article will walk you through the process of creating a multi-step web test with Visual Studio Enterprise.
+You can monitor a recorded sequence of URLs and interactions with a website via multistep web tests. This article walks you through the process of creating a multistep web test with Visual Studio Enterprise.
> [!IMPORTANT]
-> [Multi-step web tests have been deprecated](https://azure.microsoft.com/updates/retirement-notice-transition-to-custom-availability-tests-in-application-insights/). We recommend using [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](availability-azure-functions.md) instead of multi-step web tests. With TrackAvailability() and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
+> [Multistep web tests have been deprecated](https://azure.microsoft.com/updates/retirement-notice-transition-to-custom-availability-tests-in-application-insights/). We recommend using [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](availability-azure-functions.md) instead of multistep web tests. With `TrackAvailability()` and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
-> [!NOTE]
-> Multi-step web tests **are not supported** in the [Azure Government](../../azure-government/index.yml) cloud.
+Multistep web tests are categorized as classic tests and can be found under **Add Classic Test** on the **Availability** pane.
+> [!NOTE]
+> Multistep web tests *aren't supported* in the [Azure Government](../../azure-government/index.yml) cloud.
-Multi-step web tests are categorized as classic tests and can be found under **Add Classic Test** in the Availability pane.
+## Multistep web test alternative
-## Multi-step webtest alternative
+Multistep web tests depend on Visual Studio web test files. It was [announced](https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/) that Visual Studio 2019 will be the last version with web test functionality. Although no new features will be added, web test functionality in Visual Studio 2019 is still currently supported and will continue to be supported during the support lifecycle of the product.
-Multi-step web tests depend on Visual Studio webtest files. It was [announced](https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/) that Visual Studio 2019 will be the last version with webtest functionality. It's important to understand that while no new features will be added, webtest functionality in Visual Studio 2019 is still currently supported and will continue to be supported during the support lifecycle of the product.
+We recommend using [TrackAvailability](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](./availability-azure-functions.md) instead of multistep web tests. This option is the long-term supported solution for multi-request or authentication test scenarios. With `TrackAvailability()` and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
-We recommend using the [TrackAvailability](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](./availability-azure-functions.md) instead of Multi-step web tests. This is the long term supported solution for multi request or authentication test scenarios. With TrackAvailability() and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
+## Prerequisites
-## Pre-requisites
+You need:
* Visual Studio 2017 Enterprise or greater. * Visual Studio web performance and load testing tools.
-To locate the testing tools pre-requisite. Launch the **Visual Studio Installer** > **Individual components** > **Debugging and testing** > **Web performance and load testing tools**.
+To locate the testing tools prerequisite, select **Visual Studio Installer** > **Individual components** > **Debugging and testing** > **Web performance and load testing tools**.
-![Screenshot of the Visual Studio installer UI with Individual components selected with a checkbox next to the item for Web performance and load testing tools](./media/availability-multistep/web-performance-load-testing.png)
+![Screenshot that shows the Visual Studio installer UI with individual components selected with a checkbox next to the item for web performance and load testing tools.](./media/availability-multistep/web-performance-load-testing.png)
> [!NOTE]
-> Multi-step web tests have additional costs associated with them. To learn more consult the [official pricing guide](https://azure.microsoft.com/pricing/details/application-insights/).
+> Multistep web tests have extra costs associated with them. To learn more, see the [official pricing guide](https://azure.microsoft.com/pricing/details/application-insights/).
-## Record a multi-step web test
+## Record a multistep web test
> [!WARNING]
-> We no longer recommend using the multi-step recorder. The recorder was developed for static HTML pages with basic interactions, and does not provide a functional experience for modern web pages.
+> We no longer recommend using the multistep recorder. The recorder was developed for static HTML pages with basic interactions. It doesn't provide a functional experience for modern webpages.
-For guidance on creating Visual Studio web tests consult the [official Visual Studio 2019 documentation](/visualstudio/test/how-to-create-a-web-service-test).
+For guidance on how to create Visual Studio web tests, see the [official Visual Studio 2019 documentation](/visualstudio/test/how-to-create-a-web-service-test).
## Upload the web test
-1. In the Application Insights portal on the Availability pane select **Add Classic test**, then select **Multi-step** as the *SKU*.
-2. Upload your multi-step web test.
-3. Set the test locations, frequency, and alert parameters.
-4. Select **Create**.
+1. In the Application Insights portal on the **Availability** pane, select **Add Classic test**. Then select **Multi-step** as the **SKU**.
+1. Upload your multistep web test.
+1. Set the test locations, frequency, and alert parameters.
+1. Select **Create**.
-### Frequency & location
+### Frequency and location
-|Setting| Explanation
-|-|-|-|
-|**Test frequency**| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.|
-|**Test locations**| Are the places from where our servers send web requests to your URL. **Our minimum number of recommended test locations is five** in order to insure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
+|Setting| Description |
+|-|-|
+|Test frequency| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.|
+|Test locations| The places from where our servers send web requests to your URL. *Our minimum number of recommended test locations is five* to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
### Success criteria
-|Setting| Explanation
-|-|-|-|
-| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site have not been received within this period. If you selected **Parse dependent requests**, then all the images, style files, scripts, and other dependent resources must have been received within this period.|
-| **HTTP response** | The returned status code that is counted as a success. 200 is the code that indicates that a normal web page has been returned.|
-| **Content match** | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes you might have to update it. **Only English characters are supported with content match** |
+|Setting| Description|
+|-||
+| Test timeout |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, all the images, style files, scripts, and other dependent resources must have been received within this period.|
+| HTTP response | The returned status code that's counted as a success. The code 200 indicates that a normal webpage has been returned.|
+| Content match | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes, you might have to update it. *Only English characters are supported with content match.* |
### Alerts
-|Setting| Explanation
-|-|-|-|
-|**Near-realtime (Preview)** | We recommend using Near-realtime alerts. Configuring this type of alert is done after your availability test is created. |
-|**Alert location threshold**|We recommend a minimum of 3/5 locations. The optimal relationship between alert location threshold and the number of test locations is **alert location threshold** = **number of test locations - 2, with a minimum of five test locations.**|
+|Setting| Description|
+|-||
+|Near real time (preview) | We recommend using near real time alerts. Configuring this type of alert is done after your availability test is created. |
+|Alert location threshold|We recommend a minimum of 3/5 locations. The optimal relationship between alert location threshold and the number of test locations is **alert location threshold** = **number of test locations - 2**, with a minimum of five test locations.|
## Configuration
-### Plugging time and random numbers into your test
+Follow these configuration steps.
+
+### Plug time and random numbers into your test
-Suppose you're testing a tool that gets time-dependent data such as stocks from an external feed. When you record your web test, you have to use specific times, but you set them as parameters of the test, StartTime and EndTime.
+Suppose you're testing a tool that gets time-dependent data, such as stock prices, from an external feed. When you record your web test, you have to use specific times, but you set them as parameters of the test, `StartTime` and `EndTime`.
-![My awesome stock app screenshot](./media/availability-multistep/app-insights-72webtest-parameters.png)
+![Screenshot that shows a stock app.](./media/availability-multistep/app-insights-72webtest-parameters.png)
-When you run the test, you'd like EndTime always to be the present time, and StartTime should be 15 minutes ago.
+When you run the test, you want `EndTime` always to be the present time. `StartTime` should be 15 minutes prior.
-The Web Test Date Time Plugin provides the way to handle parameterize times.
+The Web Test Date Time Plug-in provides a way to parameterize times.
-1. Add a web test plug-in for each variable parameter value you want. In the web test toolbar, choose **Add Web Test Plugin**.
-
- ![Add Web Test Plug-in](./media/availability-multistep/app-insights-72webtest-plugin-name.png)
-
- In this example, we use two instances of the Date Time Plug-in. One instance is for "15 minutes ago" and another for "now."
+1. Add a Web Test Plug-in for each variable parameter value you want. On the web test toolbar, select **Add Web Test Plug-in**.
-2. Open the properties of each plug-in. Give it a name and set it to use the current time. For one of them, set Add Minutes = -15.
+ ![Screenshot that shows the Add Web Test Plug-in.](./media/availability-multistep/app-insights-72webtest-plugin-name.png)
- ![Context Parameters](./media/availability-multistep/app-insights-72webtest-plugin-parameters.png)
+ In this example, we use two instances of the Date Time Plug-in. One instance is for "15 minutes ago" and another is for "now."
-3. In the web test parameters, use {{plug-in name}} to reference a plug-in name.
+1. Open the properties of each plug-in. Give it a name and set it to use the current time. For one of them, set **Add Minutes = -15**.
- ![StartTime](./media/availability-multistep/app-insights-72webtest-plugins.png)
+ ![Screenshot that shows context parameters.](./media/availability-multistep/app-insights-72webtest-plugin-parameters.png)
-Now, upload your test to the portal. It will use the dynamic values on every run of the test.
+1. In the web test parameters, use `{{plug-in name}}` to reference a plug-in name.
-### Dealing with sign-in
+ ![Screenshot that shows StartTime.](./media/availability-multistep/app-insights-72webtest-plugins.png)
+
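For example, if the two plug-in instances are named `StartTime` and `EndTime` (hypothetical names chosen for this walkthrough), a parameterized query string in the web test might reference them like this:

```
https://fabrikam.com/api/stocks?start={{StartTime}}&end={{EndTime}}
```

The endpoint here is a placeholder; the point is the `{{...}}` syntax, which is replaced with each plug-in's value on every run.
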
+Now, upload your test to the portal. It will use dynamic values on every run of the test.
+
+### Consider sign-in
If your users sign in to your app, you have various options for simulating sign-in so that you can test pages behind the sign-in. The approach you use depends on the type of security provided by the app.
-In all cases, you should create an account in your application just for the purpose of testing. If possible, restrict the permissions of this test account so that there's no possibility of the web tests affecting real users.
+In all cases, create an account in your application only for testing. If possible, restrict the permissions of this test account so that there's no possibility of the web tests affecting real users.
-**Simple username and password**
+#### Simple username and password
Record a web test in the usual way. Delete cookies first.
-**SAML authentication**
+#### SAML authentication
|Property name| Description|
-|-|--|
-| Audience Uri | The audience URI for the SAML token. This is the URI for the Access Control Service (ACS) - including ACS namespace and host name. |
-| Certificate Password | The password for the client certificate which will grant access to the embedded private key. |
-| Client Certificate | The client certificate value with private key in Base64 encoded format. |
-| Name Identifier | The name identifier for the token |
-| Not After | The timespan for which the token will be valid. The default is 5 minutes. |
-| Not Before | The timespan for which a token created in the past will be valid (to address time skews). The default is (negative) 5 minutes. |
-| Target Context Parameter Name | The context parameter that will receive the generated assertion. |
-
+|-|-|
+| Audience URI | The audience URI for the SAML token. This URI is for the Access Control service, including the Access Control namespace and host name. |
+| Certificate password | The password for the client certificate, which will grant access to the embedded private key. |
+| Client certificate | The client certificate value with private key in Base64-encoded format. |
+| Name identifier | The name identifier for the token. |
+| Not after | The timespan for which the token will be valid. The default is 5 minutes. |
+| Not before | The timespan for which a token created in the past will be valid (to address time skews). The default is (negative) 5 minutes. |
+| Target context parameter name | The context parameter that will receive the generated assertion. |
-**Client secret**
-If your app has a sign-in route that involves a client secret, use that route. Azure Active Directory (AAD) is an example of a service that provides a client secret sign-in. In AAD, the client secret is the App Key.
+#### Client secret
+If your app has a sign-in route that involves a client secret, use that route. Azure Active Directory (Azure AD) is an example of a service that provides a client secret sign-in. In Azure AD, the client secret is the app key.
-Here's a sample web test of an Azure web app using an app key:
+Here's a sample web test of an Azure web app using an app key.
-![Sample screenshot](./media/availability-multistep/client-secret.png)
+![Screenshot that shows a sample.](./media/availability-multistep/client-secret.png)
-Get token from AAD using client secret (AppKey).
-Extract bearer token from response.
-Call API using bearer token in the authorization header.
-Make sure that the web test is an actual client - that is, it has its own app in AAD - and use its clientId + app key. Your service under test also has its own app in AAD: the appID URI of this app is reflected in the web test in the resource field.
+1. Get a token from Azure AD by using the client secret (the app key).
+1. Extract a bearer token from the response.
+1. Call the API by using the bearer token in the authorization header.
+1. Make sure that the web test is an actual client; that is, it has its own app in Azure AD. Use its client ID and app key. Your service under test also has its own app in Azure AD. The app ID URI of this app is reflected in the web test in the resource field.
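As a rough sketch of the same flow outside a recorded web test, the token exchange can be exercised from PowerShell. The tenant ID, client ID, app key, and URLs below are placeholders, not values from this article:

```powershell
# Hypothetical values: replace the tenant, app registration, and URLs with your own.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/<tenant-id>/oauth2/token" `
    -Body @{
        grant_type    = "client_credentials"
        client_id     = "<web-test-client-id>"                    # the web test's own app in Azure AD
        client_secret = "<app-key>"
        resource      = "https://<service-under-test-app-id-uri>" # the app ID URI in the resource field
    }

# Extract the bearer token from the response.
$bearerToken = $tokenResponse.access_token

# Call the API by using the bearer token in the authorization header.
Invoke-RestMethod -Uri "https://<your-app>/api/values" `
    -Headers @{ Authorization = "Bearer $bearerToken" }
```
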
-### Open Authentication
-An example of open authentication is signing in with your Microsoft or Google account. Many apps that use OAuth provide the client secret alternative, so your first tactic should be to investigate that possibility.
+### Open authentication
+An example of open authentication is the act of signing in with your Microsoft or Google account. Many apps that use OAuth provide the client secret alternative, so your first tactic should be to investigate that possibility.
-If your test must sign in using OAuth, the general approach is:
+If your test must sign in by using OAuth, the general approach is:
-Use a tool such as Fiddler to examine the traffic between your web browser, the authentication site, and your app.
-Perform two or more sign-ins using different machines or browsers, or at long intervals (to allow tokens to expire).
-By comparing different sessions, identify the token passed back from the authenticating site, that is then passed to your app server after sign-in.
-Record a web test using Visual Studio.
-Parameterize the tokens, setting the parameter when the token is returned from the authenticator, and using it in the query to the site. (Visual Studio attempts to parameterize the test, but does not correctly parameterize the tokens.)
+1. Use a tool such as Fiddler to examine the traffic between your web browser, the authentication site, and your app.
+1. Perform two or more sign-ins using different machines or browsers, or at long intervals (to allow tokens to expire).
+1. By comparing different sessions, identify the token passed back from the authenticating site that's then passed to your app server after sign-in.
+1. Record a web test by using Visual Studio.
+1. Parameterize the tokens. Set the parameter when the token is returned from the authenticator, and use it in the query to the site. (Visual Studio attempts to parameterize the test, but doesn't correctly parameterize the tokens.)
## Troubleshooting
-Dedicated [troubleshooting article](troubleshoot-availability.md).
+For troubleshooting help, see the dedicated [troubleshooting](troubleshoot-availability.md) article.
## Next steps
-* [Availability Alerts](availability-alerts.md)
-* [Url ping web tests](monitor-web-app-availability.md)
+* [Availability alerts](availability-alerts.md)
+* [URL ping web tests](monitor-web-app-availability.md)
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
Title: Application Insights availability tests description: Set up recurring web tests to monitor availability and responsiveness of your app or website. Previously updated : 07/13/2021 Last updated : 11/15/2022
azure-monitor Availability Private Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md
Title: Private availability testing - Azure Monitor Application Insights description: Learn how to use availability tests on internal servers that run behind a firewall with private testing. Previously updated : 05/14/2021 Last updated : 11/15/2022
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
Title: Availability Standard test - Azure Monitor Application Insights description: Set up Standard tests in Application Insights to check for availability of a website with a single request test. Previously updated : 07/13/2021 Last updated : 11/15/2022 # Standard test
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Azure Application Insights
-description: Application performance monitoring for Azure VM and Azure virtual machine scale sets. Chart load and response time, dependency information, and set alerts on performance.
+ Title: Monitor performance on Azure VMs - Application Insights
+description: Application performance monitoring for Azure Virtual Machines and Azure Virtual Machine Scale Sets. Chart load and response time, dependency information, and set alerts on performance.
Previously updated : 10/31/2022 Last updated : 11/15/2022 ms.devlang: csharp, java, javascript, python
-# Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets
+# Deploy Application Insights Agent on virtual machines and virtual machine scale sets
-Enabling monitoring for your .NET or Java based web applications running on [Azure virtual machines](https://azure.microsoft.com/services/virtual-machines/) and [Azure virtual machine scale sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
+Enabling monitoring for your .NET or Java-based web applications running on [Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) and [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
-This article walks you through enabling Application Insights monitoring using the Application Insights Agent and provides preliminary guidance for automating the process for large-scale deployments.
-> [!IMPORTANT]
-> **Java** based applications running on Azure VMs and VMSS are monitored with the **[Application Insights Java 3.0 agent](./java-in-process-agent.md)**, which is generally available.
+This article walks you through enabling Application Insights monitoring by using Application Insights Agent. It also provides preliminary guidance for automating the process for large-scale deployments.
+
+Java-based applications running on Azure Virtual Machines and Azure Virtual Machine Scale Sets are monitored with the [Application Insights Java 3.0 agent](./java-in-process-agent.md), which is generally available.
> [!IMPORTANT]
-> Azure Application Insights Agent for ASP.NET and ASP.NET Core applications running on **Azure VMs and VMSS** is currently in public preview. For monitoring your ASP.NET applications running **on-premises**, use the [Azure Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
-> The preview version for Azure VMs and VMSS is provided without a service-level agreement, and we don't recommend it for production workloads. Some features might not be supported, and some might have constrained capabilities.
+> Application Insights Agent for ASP.NET and ASP.NET Core applications running on Azure Virtual Machines and Azure Virtual Machine Scale Sets is currently in public preview. For monitoring your ASP.NET applications running on-premises, use [Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
+>
+> The preview version for Azure Virtual Machines and Azure Virtual Machine Scale Sets is provided without a service-level agreement. We don't recommend it for production workloads. Some features might not be supported, and some might have constrained capabilities.
+>
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

## Enable Application Insights
-Auto-instrumentation is easy to enable with no advanced configuration required.
+Auto-instrumentation is easy to enable. Advanced configuration isn't required.
For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).

> [!NOTE]
-> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications and Java. Use an SDK to instrument Node.js and Python applications hosted on an Azure virtual machines and virtual machine scale sets.
+> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure Virtual Machines and Azure Virtual Machine Scale Sets.
### [.NET Framework](#tab/net)
-The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
+The Application Insights Agent autocollects the same dependency signals out-of-the-box as the SDK. To learn more, see [Dependency autocollection](./auto-collect-dependencies.md#net).
-### [.NET Core / .NET](#tab/core)
+### [.NET Core/.NET](#tab/core)
-The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
+The Application Insights Agent autocollects the same dependency signals out-of-the-box as the SDK. To learn more, see [Dependency autocollection](./auto-collect-dependencies.md#net).
### [Java](#tab/Java)
-For Java, **[Application Insights Java 3.0 agent](./java-in-process-agent.md)** is the recommended approach. The most popular libraries and frameworks, as well as logs and dependencies are [auto-collected](./java-in-process-agent.md#autocollected-requests), with a multitude of [other configurations](./java-standalone-config.md)
+We recommend [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. The most popular libraries, frameworks, logs, and dependencies are [autocollected](./java-in-process-agent.md#autocollected-requests) along with many [other configurations](./java-standalone-config.md).
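For example, attaching the agent typically amounts to a single JVM argument such as `-javaagent:path/to/applicationinsights-agent-3.x.x.jar` (the path and version here are placeholders).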
### [Node.js](#tab/nodejs)
To monitor Python apps, use the [SDK](./opencensus-python.md).
-## Manage Application Insights Agent for .NET applications on Azure virtual machines using PowerShell
+## Manage Application Insights Agent for .NET applications on virtual machines by using PowerShell
-> [!NOTE]
-> Before installing the Application Insights Agent, you'll need a connection string. [Create a new Application Insights Resource](./create-new-resource.md) or copy the connection string from an existing application insights resource.
+Before you install Application Insights Agent, you'll need a connection string. [Create a new Application Insights resource](./create-new-resource.md) or copy the connection string from an existing Application Insights resource.
> [!NOTE]
-> New to PowerShell? Check out the [Get Started Guide](/powershell/azure/get-started-azureps).
+> If you're new to PowerShell, see the [Get Started Guide](/powershell/azure/get-started-azureps).
+
+Install or update Application Insights Agent as an extension for virtual machines:
-Install or update the Application Insights Agent as an extension for Azure virtual machines
```powershell $publicCfgJsonString = ' {
Set-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>"
```

> [!NOTE]
-> You may install or update the Application Insights Agent as an extension across multiple Virtual Machines at-scale using a PowerShell loop.
+> You can install or update Application Insights Agent as an extension across multiple virtual machines at scale by using a PowerShell loop.
+
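For example, such a loop might look like the following sketch. The publisher and extension type are assumptions inferred from the log path shown later in the Troubleshooting section, and `$publicCfgJsonString` is the settings JSON defined earlier; validate all values for your environment:

```powershell
# Hypothetical at-scale loop: apply the extension to every VM in a resource group.
# Publisher and extension type are assumed from the log path in Troubleshooting.
$vms = Get-AzVM -ResourceGroupName "<myVmResourceGroup>"
foreach ($vm in $vms) {
    Set-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name `
        -Location $vm.Location `
        -Name "ApplicationMonitoring" `
        -Publisher "Microsoft.Azure.Diagnostics" `
        -ExtensionType "ApplicationMonitoringWindows" `
        -TypeHandlerVersion "2.8" `
        -SettingString $publicCfgJsonString
}
```
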
+Uninstall Application Insights Agent extension from a virtual machine:
-Uninstall Application Insights Agent extension from Azure virtual machine
```powershell
Remove-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name "ApplicationMonitoring"
```
-Query Application Insights Agent extension status for Azure virtual machine
+Query Application Insights Agent extension status for a virtual machine:
+
```powershell
Get-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name ApplicationMonitoring -Status
```
-Get list of installed extensions for Azure virtual machine
+Get a list of installed extensions for a virtual machine:
+ ```powershell Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions"
Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<my
# Location : southcentralus # ResourceId : /subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions/ApplicationMonitoring ```
-You may also view installed extensions in the [Azure virtual machine section](../../virtual-machines/extensions/overview.md) in the Portal.
+
+You can also view installed extensions in the [Azure Virtual Machine section](../../virtual-machines/extensions/overview.md) of the Azure portal.
> [!NOTE]
-> Verify installation by clicking on Live Metrics Stream within the Application Insights Resource associated with the connection string you used to deploy the Application Insights Agent Extension. If you are sending data from multiple Virtual Machines, select the target Azure virtual machines under Server Name. It may take up to a minute for data to begin flowing.
+> Verify installation by selecting **Live Metrics Stream** within the Application Insights resource associated with the connection string you used to deploy the Application Insights Agent extension. If you're sending data from multiple virtual machines, select the target virtual machines under **Server Name**. It might take up to a minute for data to begin flowing.
+
+## Manage Application Insights Agent for .NET applications on virtual machine scale sets by using PowerShell
-## Manage Application Insights Agent for .NET applications on Azure virtual machine scale sets using PowerShell
+Install or update Application Insights Agent as an extension for a virtual machine scale set:
-Install or update the Application Insights Agent as an extension for Azure virtual machine scale set
```powershell $publicCfgHashtable = @{
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitoringWi
Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
-# Note: depending on your update policy, you might need to run Update-AzVmssInstance for each instance
+# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance.
```
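If your scale set uses a manual upgrade policy, a per-instance update loop might look like this sketch (resource names are placeholders):

```powershell
# Hypothetical: push the updated scale set model to each instance.
$instances = Get-AzVmssVM -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
foreach ($instance in $instances) {
    Update-AzVmssInstance -ResourceGroupName "<myResourceGroup>" `
        -VMScaleSetName "<myVmssName>" -InstanceId $instance.InstanceId
}
```
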
-Uninstall application monitoring extension from Azure virtual machine scale sets
+Uninstall the application monitoring extension from virtual machine scale sets:
+ ```powershell $vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
Remove-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitorin
Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
-# Note: depending on your update policy, you might need to run Update-AzVmssInstance for each instance
+# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance.
```
-Query application monitoring extension status for Azure virtual machine scale sets
+Query the application monitoring extension status for virtual machine scale sets:
+
```powershell
# Not supported by extensions framework
```
-Get list of installed extensions for Azure virtual machine scale sets
+Get a list of installed extensions for virtual machine scale sets:
+ ```powershell Get-AzResource -ResourceId /subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions
Get-AzResource -ResourceId /subscriptions/<mySubscriptionId>/resourceGroups/<myR
## Troubleshooting
-Find troubleshooting tips for Application Insights Monitoring Agent Extension for .NET applications running on Azure virtual machines and virtual machine scale sets.
+Find troubleshooting tips for the Application Insights Monitoring Agent extension for .NET applications running on Azure virtual machines and virtual machine scale sets.
> [!NOTE]
-> The steps below do not apply to Node.js and Python applications, which require SDK instrumentation.
+> The following steps don't apply to Node.js and Python applications, which require SDK instrumentation.
Extension execution output is logged to files found in the following directories:
+
```Windows
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWindows\<version>\
```
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWi
### 2.8.44

-- Updated ApplicationInsights .NET/.NET Core SDK to 2.20.1 - red field.
-- Enabled SQL query collection.
-- Enabled support for Azure Active Directory authentication.
+- Updated Application Insights .NET/.NET Core SDK to 2.20.1 - red field
+- Enabled SQL query collection
+- Enabled support for Azure Active Directory authentication
### 2.8.42

-- Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1 - red field.
+Updated Application Insights .NET/.NET Core SDK to 2.18.1 - red field
### 2.8.41

-- Added ASP.NET Core Auto-Instrumentation feature.
+Added ASP.NET Core auto-instrumentation feature
## Next steps
-* Learn how to [deploy an application to an Azure virtual machine scale set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
-* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
+
+* Learn how to [deploy an application to a virtual machine scale set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
+* [Set up availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Title: Monitor Azure app services performance Java | Microsoft Docs description: Application performance monitoring for Azure app services using Java. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 11/09/2022 Last updated : 11/15/2022 ms.devlang: csharp
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
Title: Monitor Azure app services performance Node.js | Microsoft Docs description: Application performance monitoring for Azure app services using Node.js. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 11/15/2022 ms.devlang: javascript
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
Title: Monitor Azure App Service performance | Microsoft Docs description: Application performance monitoring for Azure App Service. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 11/15/2022
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn about the steps required to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 08/23/2022 Last updated : 11/15/2022
No. Migration won't affect existing API access to data. After migration, you'll
### Will there be any impact on Live Metrics or other monitoring experiences?
-No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor--diagnose-with-1-second-latency) or other monitoring experiences.
+No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and-diagnose-with-1-second-latency) or other monitoring experiences.
### What happens with continuous export after migration?
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
Title: Create a new Azure Application Insights resource | Microsoft Docs description: Manually set up Application Insights monitoring for a new live application. Previously updated : 02/10/2021 Last updated : 11/15/2022
azure-monitor Custom Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-endpoints.md
Title: Azure Application Insights override default SDK endpoints description: Modify default Azure Monitor Application Insights SDK endpoints for regions like Azure Government. Previously updated : 07/26/2019 Last updated : 11/14/2022 ms.devlang: csharp, java, javascript, python
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor | Microsoft Docs description: This article discusses server firewall exceptions that are required by Azure Monitor Previously updated : 08/19/2022 Last updated : 11/15/2022
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
Title: Application Insights IP address collection | Microsoft Docs description: Understand how Application Insights handles IP addresses and geolocation. Previously updated : 09/23/2020 Last updated : 11/15/2022
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Title: 'Quickstart: Java web app analytics with Azure Application Insights' description: 'Application Performance Monitoring for Java web apps with Application Insights. ' Previously updated : 11/22/2020 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java In Process Agent Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent-redirect.md
Title: Azure Monitor Application Insights Java (redirect to OpenTelemetry) description: Redirect to OpenTelemetry agent Previously updated : 07/22/2022 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Title: Azure Monitor Application Insights Java description: Application performance monitoring for Java applications running in any environment without requiring code modification. Distributed tracing and application map. Previously updated : 11/12/2022 Last updated : 11/14/2022 ms.devlang: java
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 11/12/2022 Last updated : 11/14/2022 ms.devlang: java
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Title: Adding the JVM arg - Azure Monitor Application Insights for Java description: How to add the JVM arg that enables Azure Monitor Application Insights for Java Previously updated : 11/12/2022 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 11/12/2022 Last updated : 11/14/2022 ms.devlang: java
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
Title: Java Profiler for Azure Monitor Application Insights description: How to configure the Azure Monitor Application Insights for Java Profiler Previously updated : 07/19/2022 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
Title: Sampling overrides (preview) - Azure Monitor Application Insights for Java description: Learn to configure sampling overrides in Azure Monitor Application Insights for Java. Previously updated : 03/22/2021 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 11/12/2022 Last updated : 11/15/2022 ms.devlang: java
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md
In JavaScript correlation is turned off by default in order to minimize the tele
- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [NPM Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto-Collection Plugin.
- Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
- Find click data under content field within customDimensions attribute in CustomEvents table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). For more information, see [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871).
-- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md#integrating-queries) to create custom visualizations of click data.
+- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md#integrate-queries) to create custom visualizations of click data.
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Title: Azure Application Insights for JavaScript web apps description: Get page view and session counts, web client data, and single-page applications and track usage patterns. Detect exceptions and performance issues in JavaScript webpages. Previously updated : 08/06/2020 Last updated : 11/15/2022 ms.devlang: javascript
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
Title: Monitor applications on Azure Kubernetes Service (AKS) with Application Insights - Azure Monitor | Microsoft Docs description: Azure Monitor seamlessly integrates with your application running on Kubernetes, and allows you to spot the problems with your apps in no time. Previously updated : 05/13/2020 Last updated : 11/15/2022
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics - Application Insights - Azure Monitor description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Previously updated : 05/31/2022 Last updated : 11/15/2022 ms.devlang: csharp
-# Live Metrics: Monitor & Diagnose with 1-second latency
+# Live Metrics: Monitor and diagnose with 1-second latency
-Monitor your live, in-production web application by using Live Metrics (also known as QuickPulse) from [Application Insights](./app-insights-overview.md). Select and filter metrics and performance counters to watch in real time, without any disturbance to your service. Inspect stack traces from sample failed requests and exceptions. Together with [Profiler](./profiler.md) and [Snapshot debugger](./snapshot-debugger.md), Live Metrics provides a powerful and non-invasive diagnostic tool for your live website.
+Monitor your live, in-production web application by using Live Metrics (also known as QuickPulse) from [Application Insights](./app-insights-overview.md). You can select and filter metrics and performance counters to watch in real time, without any disturbance to your service. You can also inspect stack traces from sample failed requests and exceptions. Together with [Profiler](./profiler.md) and [Snapshot Debugger](./snapshot-debugger.md), Live Metrics provides a powerful and noninvasive diagnostic tool for your live website.
> [!NOTE]
> Live Metrics only supports TLS 1.2. For more information, see [Troubleshooting](#troubleshooting).

With Live Metrics, you can:
-* Validate a fix while it's released, by watching performance and failure counts.
-* Watch the effect of test loads, and diagnose issues live.
-* Focus on particular test sessions or filter out known issues, by selecting and filtering the metrics you want to watch.
+* Validate a fix while it's released by watching performance and failure counts.
+* Watch the effect of test loads and diagnose issues live.
+* Focus on particular test sessions or filter out known issues by selecting and filtering the metrics you want to watch.
* Get exception traces as they happen. * Experiment with filters to find the most relevant KPIs. * Monitor any Windows performance counter live.
-* Easily identify a server that is having issues, and filter all the KPI/live feed to just that server.
+* Easily identify a server that's having issues and filter all the KPI/live feed to just that server.
-![Live Metrics tab](./media/live-stream/live-metric.png)
+![Screenshot that shows the Live Metrics tab.](./media/live-stream/live-metric.png)
-Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions, Java, and Node.js apps.
+Live Metrics is currently supported for ASP.NET, ASP.NET Core, Azure Functions, Java, and Node.js apps.
> [!NOTE]
-> The number of monitored server instances displayed by Live Metrics may be lower than the actual number of instances allocated for the application. This is because many modern web servers will unload applications that do not receive requests over a period of time in order to conserve resources. Since Live Metrics only counts servers that are currently running the application, servers that have already unloaded the process will not be included in that total.
+> The number of monitored server instances displayed by Live Metrics might be lower than the actual number of instances allocated for the application. This mismatch is because many modern web servers will unload applications that don't receive requests over a period of time to conserve resources. Because Live Metrics only counts servers that are currently running the application, servers that have already unloaded the process won't be included in that total.
## Get started
-1. Follow language specific guidelines to enable Live Metrics.
- * [ASP.NET](./asp-net.md) - Live Metrics is enabled by default.
- * [ASP.NET Core](./asp-net-core.md) - Live Metrics is enabled by default.
- * [.NET/.NET Core Console/Worker](./worker-service.md) - Live Metrics is enabled by default.
- * [.NET Applications - Enable using code](#enable-live-metrics-using-code-for-any-net-application).
- * [Java](./java-in-process-agent.md) - Live Metrics is enabled by default.
+> [!IMPORTANT]
+> Monitoring ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications requires Application Insights version 2.8.0 or above. To enable Application Insights, ensure that it's activated in the Azure portal and that the Application Insights NuGet package is included. Without the NuGet package, some telemetry is sent to Application Insights, but that telemetry won't show in Live Metrics.
+
+1. Follow language-specific guidelines to enable Live Metrics:
+ * [ASP.NET](./asp-net.md): Live Metrics is enabled by default.
+ * [ASP.NET Core](./asp-net-core.md): Live Metrics is enabled by default.
+ * [.NET/.NET Core Console/Worker](./worker-service.md): Live Metrics is enabled by default.
+ * [.NET Applications: Enable using code](#enable-live-metrics-by-using-code-for-any-net-application).
+ * [Java](./java-in-process-agent.md): Live Metrics is enabled by default.
* [Node.js](./nodejs.md#live-metrics)
-2. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app, then open Live Stream.
-
-3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data such as customer names in your filters.
+1. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app. Then open Live Stream.
-> [!IMPORTANT]
-> Monitoring ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications require Application Insights version 2.8.0 or above. To enable Application Insights ensure it is both activated in the Azure Portal and that the Application Insights NuGet package is included. Without the NuGet package some telemetry is sent to Application Insights but that telemetry will not show in Live Metrics.
+1. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data like customer names in your filters.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-### Enable Live Metrics using code for any .NET application
+### Enable Live Metrics by using code for any .NET application
> [!NOTE]
-> Live Metrics is enabled by default when onboarding using the recommended instructions for .NET Applications.
-
-How to manually set up Live Metrics:
+> Live Metrics is enabled by default when you onboard it by using the recommended instructions for .NET applications.
-1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector)
-2. The following sample console app code shows setting up Live Metrics.
+To manually set up Live Metrics:
-```csharp
-using Microsoft.ApplicationInsights;
-using Microsoft.ApplicationInsights.Extensibility;
-using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
-using System;
-using System.Threading.Tasks;
+1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector).
+1. The following sample console app code shows setting up Live Metrics:
-namespace LiveMetricsDemo
-{
- class Program
+ ```csharp
+ using Microsoft.ApplicationInsights;
+ using Microsoft.ApplicationInsights.Extensibility;
+ using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
+ using System;
+ using System.Threading.Tasks;
+
+ namespace LiveMetricsDemo
{
- static void Main(string[] args)
+ class Program
{
- // Create a TelemetryConfiguration instance.
- TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
- config.InstrumentationKey = "INSTRUMENTATION-KEY-HERE";
- QuickPulseTelemetryProcessor quickPulseProcessor = null;
- config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
- .Use((next) =>
- {
- quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
- return quickPulseProcessor;
- })
- .Build();
-
- var quickPulseModule = new QuickPulseTelemetryModule();
-
- // Secure the control channel.
- // This is optional, but recommended.
- quickPulseModule.AuthenticationApiKey = "YOUR-API-KEY-HERE";
- quickPulseModule.Initialize(config);
- quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
-
- // Create a TelemetryClient instance. It is important
- // to use the same TelemetryConfiguration here as the one
- // used to setup Live Metrics.
- TelemetryClient client = new TelemetryClient(config);
-
- // This sample runs indefinitely. Replace with actual application logic.
- while (true)
+ static void Main(string[] args)
{
- // Send dependency and request telemetry.
- // These will be shown in Live Metrics.
- // CPU/Memory Performance counter is also shown
- // automatically without any additional steps.
- client.TrackDependency("My dependency", "target", "http://sample",
- DateTimeOffset.Now, TimeSpan.FromMilliseconds(300), true);
- client.TrackRequest("My Request", DateTimeOffset.Now,
- TimeSpan.FromMilliseconds(230), "200", true);
- Task.Delay(1000).Wait();
+ // Create a TelemetryConfiguration instance.
+ TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
+ config.InstrumentationKey = "INSTRUMENTATION-KEY-HERE";
+ QuickPulseTelemetryProcessor quickPulseProcessor = null;
+ config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
+ .Use((next) =>
+ {
+ quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
+ return quickPulseProcessor;
+ })
+ .Build();
+
+ var quickPulseModule = new QuickPulseTelemetryModule();
+
+ // Secure the control channel.
+ // This is optional, but recommended.
+ quickPulseModule.AuthenticationApiKey = "YOUR-API-KEY-HERE";
+ quickPulseModule.Initialize(config);
+ quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
+
+ // Create a TelemetryClient instance. It is important
+ // to use the same TelemetryConfiguration here as the one
+ // used to set up Live Metrics.
+ TelemetryClient client = new TelemetryClient(config);
+
+ // This sample runs indefinitely. Replace with actual application logic.
+ while (true)
+ {
+ // Send dependency and request telemetry.
+ // These will be shown in Live Metrics.
+ // CPU/Memory Performance counter is also shown
+ // automatically without any additional steps.
+ client.TrackDependency("My dependency", "target", "http://sample",
+ DateTimeOffset.Now, TimeSpan.FromMilliseconds(300), true);
+ client.TrackRequest("My Request", DateTimeOffset.Now,
+ TimeSpan.FromMilliseconds(230), "200", true);
+ Task.Delay(1000).Wait();
+ }
} } }
-}
-```
+ ```
-While the above sample is for a console app, the same code can be used in any .NET applications. If any other TelemetryModules are enabled which auto-collects telemetry, it's important to ensure the same configuration used for initializing those modules is used for Live Metrics module as well.
+The preceding sample is for a console app, but the same code can be used in any .NET applications. If any other telemetry modules are enabled to autocollect telemetry, it's important to ensure that the same configuration used for initializing those modules is used for the Live Metrics module.
-## How does Live Metrics differ from Metrics Explorer and Analytics?
+## How does Live Metrics differ from metrics explorer and Log Analytics?
-| |Live Stream | Metrics Explorer and Analytics |
+| Capabilities |Live Stream | Metrics explorer and Log Analytics |
|-|-|-|
-|**Latency**|Data displayed within one second|Aggregated over minutes|
-|**No retention**|Data persists while it's on the chart, and is then discarded|[Data retained for 90 days](./data-retention-privacy.md#how-long-is-the-data-kept)|
-|**On demand**|Data is only streamed while the Live Metrics pane is open |Data is sent whenever the SDK is installed and enabled|
-|**Free**|There's no charge for Live Stream data|Subject to [pricing](../logs/cost-logs.md#application-insights-billing)
-|**Sampling**|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events may be [sampled](./api-filtering-sampling.md)|
-|**Control channel**|Filter control signals are sent to the SDK. We recommend you secure this channel.|Communication is one way, to the portal|
+|Latency|Data displayed within one second.|Aggregated over minutes.|
+|No retention|Data persists while it's on the chart and is then discarded.|[Data retained for 90 days.](./data-retention-privacy.md#how-long-is-the-data-kept)|
+|On demand|Data is only streamed while the Live Metrics pane is open. |Data is sent whenever the SDK is installed and enabled.|
+|Free|There's no charge for Live Stream data.|Subject to [pricing](../logs/cost-logs.md#application-insights-billing).
+|Sampling|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events can be [sampled](./api-filtering-sampling.md).|
+|Control channel|Filter control signals are sent to the SDK. We recommend you secure this channel.|Communication is one way, to the portal.|
## Select and filter your metrics
-(Available with ASP.NET, ASP.NET Core, and Azure Functions (v2).)
+These capabilities are available with ASP.NET, ASP.NET Core, and Azure Functions (v2).
-You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Select the filter control that shows when you mouse-over any of the charts. The following chart is plotting a custom Request count KPI with filters on URL and Duration attributes. Validate your filters with the Stream Preview section that shows a live feed of telemetry that matches the criteria you've specified at any point in time.
+You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Select the filter control that appears when you hover over any of the charts. The following chart plots a custom **Request** count KPI with filters on **URL** and **Duration** attributes. Validate your filters with the stream preview section that shows a live feed of telemetry that matches the criteria you've specified at any point in time.
-![Filter request rate](./media/live-stream/filter-request.png)
+![Screenshot that shows the Filter request rate.](./media/live-stream/filter-request.png)
-You can monitor a value different from Count. The options depend on the type of stream, which could be any Application Insights telemetry: requests, dependencies, exceptions, traces, events, or metrics. It can be your own [custom measurement](./api-custom-events-metrics.md#properties):
+You can monitor a value different from **Count**. The options depend on the type of stream, which could be any Application Insights telemetry like requests, dependencies, exceptions, traces, events, or metrics. It can also be your own [custom measurement](./api-custom-events-metrics.md#properties).
-![Query builder on request rate with custom metric](./media/live-stream/query-builder-request.png)
+![Screenshot that shows the Query Builder on Request Rate with a custom metric.](./media/live-stream/query-builder-request.png)
-In addition to Application Insights telemetry, you can also monitor any Windows performance counter by selecting that from the stream options, and providing the name of the performance counter.
+Along with Application Insights telemetry, you can also monitor any Windows performance counter. Select it from the stream options and provide the name of the performance counter.
-Live Metrics are aggregated at two points: locally on each server, and then across all servers. You can change the default at either by selecting other options in the respective drop-downs.
+Live Metrics are aggregated at two points: locally on each server and then across all servers. You can change the default at either one by selecting other options in the respective dropdown lists.
-## Sample Telemetry: Custom Live Diagnostic Events
+## Sample telemetry: Custom live diagnostic events
By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Select the filter icon to see the applied criteria at any point in time.
-![Filter button](./media/live-stream/filter.png)
+![Screenshot that shows the Filter button.](./media/live-stream/filter.png)
-As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we're selecting specific request failures, and events.
+As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we're selecting specific request failures and events.
-![Query Builder](./media/live-stream/query-builder.png)
+![Screenshot that shows the Query Builder.](./media/live-stream/query-builder.png)
> [!NOTE]
-> Currently, for Exception message-based criteria, use the outermost exception message. In the preceding example, to filter out the benign exception with inner exception message (follows the "<--" delimiter) "The client disconnected." use a message not-contains "Error reading request content" criteria.
+> Currently, for exception message-based criteria, use the outermost exception message. In the preceding example, to filter out the benign exception with an inner exception message (follows the "<--" delimiter) "The client disconnected," use a message not-contains "Error reading request content" criteria.
-See the details of an item in the live feed by clicking it. You can pause the feed either by clicking **Pause** or simply scrolling down, or clicking an item. Live feed will resume after you scroll back to the top, or by clicking the counter of items collected while it was paused.
+To see the details of an item in the live feed, select it. You can pause the feed either by selecting **Pause** or by scrolling down and selecting an item. Live feed resumes after you scroll back to the top, or when you select the counter of items collected while it was paused.
-![Screenshot shows the Sample telemetry window with an exception selected and the exception details displayed at the bottom of the window.](./media/live-stream/sample-telemetry.png)
+![Screenshot that shows the Sample telemetry window with an exception selected and the exception details displayed at the bottom of the window.](./media/live-stream/sample-telemetry.png)
## Filter by server instance
-If you want to monitor a particular server role instance, you can filter by server. To filter, select the server name under *Servers*.
+If you want to monitor a particular server role instance, you can filter by server. To filter, select the server name under **Servers**.
-![Sampled live failures](./media/live-stream/filter-by-server.png)
+![Screenshot that shows the Sampled live failures.](./media/live-stream/filter-by-server.png)
## Secure the control channel
-Live Metrics custom filters allow you to control which of your application's telemetry is streamed to the Live Metrics view in Azure portal. The filters criteria is sent to the apps that are instrumented with the Application Insights SDK. The filter value could potentially contain sensitive information such as CustomerID. To keep this value secured and prevent potential disclosure to unauthorized applications, you have two options:
+Live Metrics custom filters allow you to control which of your application's telemetry is streamed to the Live Metrics view in the Azure portal. The filter criteria are sent to the apps that are instrumented with the Application Insights SDK. The filter value could potentially contain sensitive information, such as the customer ID. To keep this value secured and prevent potential disclosure to unauthorized applications, you have two options:
-- Recommended: Secure Live Metrics channel using [Azure AD authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication)-- Legacy (no longer recommended): Set up an authenticated channel by configuring a secret API key as explained below
+- **Recommended:** Secure the Live Metrics channel by using [Azure Active Directory (Azure AD) authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication).
+- **Legacy (no longer recommended):** Set up an authenticated channel by configuring a secret API key as explained in the "Legacy option" section.
> [!NOTE]
-> On 30 September 2025, API keys used to stream live metrics telemetry into application insights will be retired. After that date, applications which use API keys will no longer be able to send live metrics data to your application insights resource. Authenticated telemetry ingestion for live metrics streaming to application insights will need to be done with [Azure AD authentication for application insights](./azure-ad-authentication.md).
+> On September 30, 2025, API keys used to stream Live Metrics telemetry into Application Insights will be retired. After that date, applications that use API keys won't be able to send Live Metrics data to your Application Insights resource. Authenticated telemetry ingestion for Live Metrics streaming to Application Insights will need to be done with [Azure AD authentication for Application Insights](./azure-ad-authentication.md).
-It's possible to try custom filters without having to set up an authenticated channel. Simply click on any of the filter icons and authorize the connected servers. Notice that if you choose this option, you'll have to authorize the connected servers once every new session or when a new server comes online.
+It's possible to try custom filters without having to set up an authenticated channel. Select any of the filter icons and authorize the connected servers. If you choose this option, you'll have to authorize the connected servers once every new session or whenever a new server comes online.
> [!WARNING]
-> We strongly discourage the use of unsecured channels and will disable this option 6 months after you start using it. The "Authorize connected servers" dialog displays the date (highlighted below) after which this option will be disabled.
+> We strongly discourage the use of unsecured channels and will disable this option six months after you start using it. The **Authorize connected servers** dialog displays the date after which this option will be disabled.
++
+### Legacy option: Create an API key
+
+1. Select the **API Access** tab and then select **Create API key**.
+
+ ![Screenshot that shows selecting the API Access tab and the Create API key button.](./media/live-stream/api-key.png)
+1. Select the **Authenticate SDK control channel** checkbox and then select **Generate key**.
-### Legacy option: Create API key
+ ![Screenshot that shows the Create API key pane. Select Authenticate SDK control channel checkbox and then select Generate key.](./media/live-stream/create-api-key.png)
-![API key > Create API key](./media/live-stream/api-key.png)
-![Create API Key tab. Select "authenticate SDK control channel" then "generate key"](./media/live-stream/create-api-key.png)
+### Add an API key to configuration
-### Add API key to Configuration
+You can add an API key to configuration for ASP.NET, ASP.NET Core, WorkerService, and Azure Functions apps.
#### ASP.NET
-In the applicationinsights.config file, add the AuthenticationApiKey to the QuickPulseTelemetryModule:
+In the *applicationinsights.config* file, add `AuthenticationApiKey` to `QuickPulseTelemetryModule`:
```xml <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
In the applicationinsights.config file, add the AuthenticationApiKey to the Quic
#### ASP.NET Core
-For [ASP.NET Core](./asp-net-core.md) applications, follow the instructions below.
+For [ASP.NET Core](./asp-net-core.md) applications, follow these instructions.
-Modify `ConfigureServices` of your Startup.cs file as follows:
+Modify `ConfigureServices` of your *Startup.cs* file as shown.
-Add the following namespace.
+Add the following namespace:
```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
```
-Then modify `ConfigureServices` method as below.
+Then modify the `ConfigureServices` method:
```csharp
public void ConfigureServices(IServiceCollection services)
{
-    // existing code which include services.AddApplicationInsightsTelemetry() to enable Application Insights.
+    // Existing code which includes services.AddApplicationInsightsTelemetry() to enable Application Insights.
    services.ConfigureTelemetryModule<QuickPulseTelemetryModule>((module, o) => module.AuthenticationApiKey = "YOUR-API-KEY-HERE");
}
```
-More information on configuring ASP.NET Core applications can be found in our guidance on [configuring telemetry modules in ASP.NET Core](./asp-net-core.md#configuring-or-removing-default-telemetrymodules).
+For more information on how to configure ASP.NET Core applications, see [Configuring telemetry modules in ASP.NET Core](./asp-net-core.md#configuring-or-removing-default-telemetrymodules).
#### WorkerService
-For [WorkerService](./worker-service.md) applications, follow the instructions below.
+For [WorkerService](./worker-service.md) applications, follow these instructions.
-Add the following namespace.
+Add the following namespace:
```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
```
-Next, add the following line before the call `services.AddApplicationInsightsTelemetryWorkerService`.
+Next, add the following line before the call to `services.AddApplicationInsightsTelemetryWorkerService`:
```csharp
services.ConfigureTelemetryModule<QuickPulseTelemetryModule>((module, o) => module.AuthenticationApiKey = "YOUR-API-KEY-HERE");
```
-More information on configuring WorkerService applications can be found in our guidance on [configuring telemetry modules in WorkerServices](./worker-service.md#configure-or-remove-default-telemetry-modules).
+For more information on how to configure WorkerService applications, see [Configuring telemetry modules in WorkerServices](./worker-service.md#configure-or-remove-default-telemetry-modules).
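To see where these pieces fit together, here's a minimal sketch of a worker's *Program.cs*. Only the two `services.*` telemetry lines come from the steps above; the `Worker` hosted service is an illustrative placeholder for your own `BackgroundService`:

```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // Configure the Live Metrics (QuickPulse) module first...
        services.ConfigureTelemetryModule<QuickPulseTelemetryModule>(
            (module, o) => module.AuthenticationApiKey = "YOUR-API-KEY-HERE");

        // ...then enable Application Insights for the worker.
        services.AddApplicationInsightsTelemetryWorkerService();

        // "Worker" is a placeholder for your own BackgroundService implementation.
        services.AddHostedService<Worker>();
    })
    .Build();

host.Run();
```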
-#### Azure Function Apps
+#### Azure Functions apps
-For Azure Function Apps (v2), securing the channel with an API key can be accomplished with an environment variable.
+For Azure Functions apps (v2), you can secure the channel with an API key by using an environment variable.
-Create an API key from within your Application Insights resource and go to **Settings > Configuration** for your Function App. Select **New application setting** and enter a name of `APPINSIGHTS_QUICKPULSEAUTHAPIKEY` and a value that corresponds to your API key.
+Create an API key from within your Application Insights resource and go to **Settings** > **Configuration** for your Azure Functions app. Select **New application setting**, enter a name of `APPINSIGHTS_QUICKPULSEAUTHAPIKEY`, and enter a value that corresponds to your API key.
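If you script your deployments, you can set the same application setting without the portal. The following is a sketch using the `Az.Functions` PowerShell module; the function app and resource group names are placeholders:

```azurepowershell
# Placeholders: substitute your own function app and resource group names.
Update-AzFunctionAppSetting -Name "MyFunctionApp" -ResourceGroupName "MyResourceGroup" `
    -AppSetting @{ "APPINSIGHTS_QUICKPULSEAUTHAPIKEY" = "<your-api-key>" }
```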
## Supported features table
-| Language | Basic Metrics | Performance metrics | Custom filtering | Sample telemetry | CPU split by process |
+| Language | Basic metrics | Performance metrics | Custom filtering | Sample telemetry | CPU split by process |
|-|:--|:--|:--|:--|:--|
| .NET Framework | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) |
| .NET Core (target=.NET Framework)| Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) |
-| .NET Core (target=.NET Core) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported* | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | **Not Supported** |
-| Azure Functions v2 | Supported | Supported | Supported | Supported | **Not Supported** |
-| Java | Supported (V2.0.0+) | Supported (V2.0.0+) | **Not Supported** | Supported (V3.2.0+) | **Not Supported** |
-| Node.js | Supported (V1.3.0+) | Supported (V1.3.0+) | **Not Supported** | Supported (V1.3.0+) | **Not Supported** |
+| .NET Core (target=.NET Core) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported* | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | **Not supported** |
+| Azure Functions v2 | Supported | Supported | Supported | Supported | **Not supported** |
+| Java | Supported (V2.0.0+) | Supported (V2.0.0+) | **Not supported** | Supported (V3.2.0+) | **Not supported** |
+| Node.js | Supported (V1.3.0+) | Supported (V1.3.0+) | **Not supported** | Supported (V1.3.0+) | **Not supported** |
Basic metrics include request, dependency, and exception rate. Performance metrics (performance counters) include memory and CPU. Sample telemetry shows a stream of detailed information for failed requests and dependencies, exceptions, events, and traces.
- \* PerfCounters support varies slightly across versions of .NET Core that don't target the .NET Framework:
+ PerfCounters support varies slightly across versions of .NET Core that don't target the .NET Framework:
-- PerfCounters metrics are supported when running in Azure App Service for Windows. (AspNetCore SDK Version 2.4.1 or higher)
-- PerfCounters are supported when app is running in ANY Windows machines (VM or Cloud Service or on-premises etc.) (AspNetCore SDK Version 2.7.1 or higher), but for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
-- PerfCounters are supported when app is running ANYWHERE (Linux, Windows, app service for Linux, containers, etc.) in the latest versions, but only for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+- PerfCounters metrics are supported when running in Azure App Service for Windows (ASP.NET Core SDK version 2.4.1 or higher).
+- PerfCounters are supported when the app is running on *any* Windows machine (VM, Azure Cloud Service, or on-premises) with ASP.NET Core SDK version 2.7.1 or higher, but only for apps that target .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+- PerfCounters are supported when the app is running *anywhere* (such as Linux, Windows, App Service for Linux, or containers) in the latest versions, but only for apps that target .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
## Troubleshooting
-Live Metrics uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check the [outgoing ports for Live Metrics](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
+Live Metrics uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check that [outgoing ports for Live Metrics](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
-As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you're using an older version of TLS, Live Metrics won't display any data. For applications based on .NET Framework 4.5.1, refer to [How to enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support newer TLS version.
+As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you're using an older version of TLS, Live Metrics won't display any data. For applications based on .NET Framework 4.5.1, see [Enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support the newer TLS version.
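If you can't immediately apply the client configuration described in that article, one stopgap for .NET Framework apps is to opt the process into TLS 1.2 in code at startup. This is a sketch, not a substitute for the linked guidance:

```csharp
using System.Net;

// Opt in to TLS 1.2 (in addition to the existing defaults) before any
// Live Metrics traffic is sent, for example in Application_Start.
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
```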
### Missing configuration for .NET
-1. Verify you're using the latest version of the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector)
-2. Edit the `ApplicationInsights.config` file
- * Verify that the connection string points to the Application Insights resource you're using
- * Locate the `QuickPulseTelemetryModule` configuration option; if it isn't there, add it
- * Locate the `QuickPulseTelemetryProcessor` configuration option; if it isn't there, add it
-
- ```xml
-<TelemetryModules>
-<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.
-QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector"/>
-</TelemetryModules>
-
-<TelemetryProcessors>
-<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.
-QuickPulse.QuickPulseTelemetryProcessor, Microsoft.AI.PerfCounterCollector"/>
-<TelemetryProcessors>
-````
-3. Restart the application
+1. Verify that you're using the latest version of the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector).
+1. Edit the `ApplicationInsights.config` file:
+ * Verify that the connection string points to the Application Insights resource you're using.
+ * Locate the `QuickPulseTelemetryModule` configuration option. If it isn't there, add it.
+ * Locate the `QuickPulseTelemetryProcessor` configuration option. If it isn't there, add it.
+
+ ```xml
+ <TelemetryModules>
+ <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.
+ QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector"/>
+ </TelemetryModules>
+
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.
+ QuickPulse.QuickPulseTelemetryProcessor, Microsoft.AI.PerfCounterCollector"/>
+    </TelemetryProcessors>
+    ```
+1. Restart the application.
## Next steps
-* [Monitoring usage with Application Insights](./usage-overview.md)
-* [Using Diagnostic Search](./diagnostic-search.md)
+* [Monitor usage with Application Insights](./usage-overview.md)
+* [Use Diagnostic Search](./diagnostic-search.md)
* [Profiler](./profiler.md)
-* [Snapshot debugger](./snapshot-debugger.md)
+* [Snapshot Debugger](./snapshot-debugger.md)
azure-monitor Mobile Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/mobile-center-quickstart.md
Title: Monitor mobile or universal Windows apps with Azure Monitor Application Insights description: Provides instructions to quickly set up a mobile or universal Windows app for monitoring with Azure Monitor Application Insights and App Center Previously updated : 07/21/2022 Last updated : 11/15/2022 ms.devlang: java, swift
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Title: Monitor applications running on Azure Functions with Application Insights - Azure Monitor | Microsoft Docs description: Azure Monitor seamlessly integrates with your application running on Azure Functions, and allows you to monitor the performance and spot the problems with your apps in no time. Previously updated : 08/27/2021 Last updated : 11/14/2022
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-web-app-availability.md
Title: Monitor availability with URL ping tests - Azure Monitor description: Set up ping tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly. Previously updated : 07/13/2021 Last updated : 11/15/2022
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Title: Monitor Node.js services with Application Insights | Microsoft Docs description: Monitor performance and diagnose problems in Node.js services with Application Insights. Previously updated : 10/12/2021 Last updated : 11/15/2022 ms.devlang: javascript
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor. Previously updated : 8/19/2022 Last updated : 11/15/2022 ms.devlang: python
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 10/21/2022 Last updated : 11/15/2022 ms.devlang: csharp, javascript, python
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
Title: Application Insights Overview dashboard | Microsoft Docs description: Monitor applications with Application Insights and Overview dashboard functionality. Previously updated : 06/03/2019 Last updated : 11/15/2022 # Application Insights Overview dashboard
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
Title: 'Application Insights: Languages, platforms, and integrations | Microsoft Docs' description: Languages, platforms, and integrations that are available for Application Insights. Previously updated : 10/24/2022 Last updated : 11/15/2022
azure-monitor Powershell Azure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell-azure-diagnostics.md
- Title: Using PowerShell to setup Application Insights in an Azure | Microsoft Docs
-description: Automate configuring Azure Diagnostics to pipe data to Application Insights.
- Previously updated : 08/06/2019 -
-ms.reviwer: cogoodson
--
-# Using PowerShell to set up Application Insights for Azure Cloud Services
-
-[Microsoft Azure](https://azure.com) can be [configured to send Azure Diagnostics](../agents/diagnostics-extension-to-application-insights.md) to [Azure Application Insights](./app-insights-overview.md). The diagnostics relate to Azure Cloud Services and Azure VMs. They complement the telemetry that you send from within the app using the Application Insights SDK. As part of automating the process of creating new resources in Azure, you can configure diagnostics using PowerShell.
-
-## Azure template
-If the web app is in Azure and you create your resources using an Azure Resource Manager template, you can configure Application Insights by adding this to the resources node:
-
-```json
-{
- resources: [
- /* Create Application Insights resource */
- {
- "apiVersion": "2015-05-01",
- "type": "microsoft.insights/components",
- "name": "nameOfAIAppResource",
- "location": "centralus",
- "kind": "web",
- "properties": { "ApplicationId": "nameOfAIAppResource" },
- "dependsOn": [
- "[concat('Microsoft.Web/sites/', myWebAppName)]"
- ]
- }
- ]
-}
-```
-
-* `nameOfAIAppResource` - a name for the Application Insights resource
-* `myWebAppName` - the ID of the web app
-
-## Enable diagnostics extension as part of deploying a Cloud Service
-The `New-AzureDeployment` cmdlet has a parameter `ExtensionConfiguration`, which takes an array of diagnostics configurations. These can be created using the `New-AzureServiceDiagnosticsExtensionConfig` cmdlet. For example:
-
-```azurepowershell
-$service_package = "CloudService.cspkg"
-$service_config = "ServiceConfiguration.Cloud.cscfg"
-$diagnostics_storagename = "myservicediagnostics"
-$webrole_diagconfigpath = "MyService.WebRole.PubConfig.xml"
-$workerrole_diagconfigpath = "MyService.WorkerRole.PubConfig.xml"
-
-$primary_storagekey = (Get-AzStorageKey `
- -StorageAccountName "$diagnostics_storagename").Primary
-$storage_context = New-AzStorageContext `
- -StorageAccountName $diagnostics_storagename `
- -StorageAccountKey $primary_storagekey
-
-$webrole_diagconfig = `
- New-AzureServiceDiagnosticsExtensionConfig `
- -Role "WebRole" -Storage_context $storageContext `
- -DiagnosticsConfigurationPath $webrole_diagconfigpath
-$workerrole_diagconfig = `
- New-AzureServiceDiagnosticsExtensionConfig `
- -Role "WorkerRole" `
- -StorageContext $storage_context `
- -DiagnosticsConfigurationPath $workerrole_diagconfigpath
-
- New-AzureDeployment `
- -ServiceName $service_name `
- -Slot Production `
- -Package $service_package `
- -Configuration $service_config `
- -ExtensionConfiguration @($webrole_diagconfig,$workerrole_diagconfig)
-```
-
-## Enable diagnostics extension on an existing Cloud Service
-On an existing service, use `Set-AzureServiceDiagnosticsExtension`.
-
-```azurepowershell
-$service_name = "MyService"
-$diagnostics_storagename = "myservicediagnostics"
-$webrole_diagconfigpath = "MyService.WebRole.PubConfig.xml"
-$workerrole_diagconfigpath = "MyService.WorkerRole.PubConfig.xml"
-$primary_storagekey = (Get-AzStorageKey `
- -StorageAccountName "$diagnostics_storagename").Primary
-$storage_context = New-AzStorageContext `
- -StorageAccountName $diagnostics_storagename `
- -StorageAccountKey $primary_storagekey
-
-Set-AzureServiceDiagnosticsExtension `
- -StorageContext $storage_context `
- -DiagnosticsConfigurationPath $webrole_diagconfigpath `
- -ServiceName $service_name `
- -Slot Production `
- -Role "WebRole"
-Set-AzureServiceDiagnosticsExtension `
- -StorageContext $storage_context `
- -DiagnosticsConfigurationPath $workerrole_diagconfigpath `
- -ServiceName $service_name `
- -Slot Production `
- -Role "WorkerRole"
-```
-
-## Get current diagnostics extension configuration
-
-```azurepowershell
-Get-AzureServiceDiagnosticsExtension -ServiceName "MyService"
-```
--
-## Remove diagnostics extension
-
-```azurepowershell
-Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService"
-```
-
-If you enabled the diagnostics extension using either `Set-AzureServiceDiagnosticsExtension` or `New-AzureServiceDiagnosticsExtensionConfig` without the Role parameter, then you can remove the extension using `Remove-AzureServiceDiagnosticsExtension` without the Role parameter. If the Role parameter was used when enabling the extension then it must also be used when removing the extension.
-
-To remove the diagnostics extension from each individual role:
-
-```azurepowershell
-Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService" -Role "WebRole"
-```
--
-## See also
-* [Monitor Azure Cloud Services apps with Application Insights](./azure-web-apps-net-core.md)
-* [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md)
--
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Other automation articles:
* [Create an Application Insights resource](./create-new-resource.md#creating-a-resource-automatically) - quick method without using a template.
* [Create web tests](../alerts/resource-manager-alerts-metric.md#availability-test-with-metric-alert)
-* [Send Azure Diagnostics to Application Insights](powershell-azure-diagnostics.md)
+* [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md)
* [Create release annotations](annotations.md)
azure-monitor Resource Manager Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-web-app.md
Title: Resource Manager template samples for Azure App Service + Application Ins
description: Sample Azure Resource Manager templates to deploy an Azure App Service with an Application Insights resource. Previously updated : 07/11/2022 Last updated : 11/15/2022
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Title: Telemetry sampling in Azure Application Insights | Microsoft Docs description: How to keep the volume of telemetry under control. Previously updated : 08/26/2021 Last updated : 11/15/2022
builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Depend
### Configuring adaptive sampling for ASP.NET Core applications
-ASP.NET Core applications may be configured in code or through the `appsettings.json` file. For more information, see [Configuration in ASP.NET Core](https://learn.microsoft.com/aspnet/core/fundamentals/configuration).
+ASP.NET Core applications may be configured in code or through the `appsettings.json` file. For more information, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration).
Adaptive sampling is enabled by default for all ASP.NET Core applications. You can disable or customize the sampling behavior.
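For example, to disable adaptive sampling from *appsettings.json*, set `EnableAdaptiveSampling` under the `ApplicationInsights` section, which the SDK binds to its service options. A minimal sketch:

```json
{
  "ApplicationInsights": {
    "EnableAdaptiveSampling": false
  }
}
```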
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Application Insights | Microsoft Docs description: This article shows how to use connection strings. Previously updated : 04/13/2022 Last updated : 11/15/2022
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
Title: Application Insights SDK support guidance
description: Support guidance for Application Insights legacy and preview SDKs Previously updated : 08/22/2022 Last updated : 11/15/2022
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Title: How to design your Application Insights deployment - One vs many resources?
+ Title: 'Design your Application Insights deployment: One vs. many resources?'
description: Direct telemetry to different resources for development, test, and production stamps. Previously updated : 11/01/2022 Last updated : 11/15/2022
-# How many Application Insights resources should I deploy
+# How many Application Insights resources should I deploy?
When you're developing the next version of a web application, you don't want to mix up the [Application Insights](../../azure-monitor/app/app-insights-overview.md) telemetry from the new version and the already released version.
-To avoid confusion, send the telemetry from different development stages to separate Application Insights resources, with separate instrumentation keys (ikeys).
+To avoid confusion, send the telemetry from different development stages to separate Application Insights resources with separate instrumentation keys.
-To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to [set the ikey dynamically in code](#dynamic-ikey) instead of in the configuration file.
+To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to [set the instrumentation key dynamically in code](#dynamic-instrumentation-key) instead of in the configuration file.
-(If your system is an Azure Cloud Service, there's [another method of setting separate ikeys](../../azure-monitor/app/azure-web-apps-net-core.md).)
+If your system is an instance of Azure Cloud Services, there's [another method of setting separate instrumentation keys](../../azure-monitor/app/azure-web-apps-net-core.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

## About resources and instrumentation keys
-When you set up Application Insights monitoring for your web app, you create an Application Insights *resource* in Microsoft Azure. You open this resource in the Azure portal in order to see and analyze the telemetry collected from your app. The resource is identified by an *instrumentation key* (ikey). When you install the Application Insights package to monitor your app, you configure it with the instrumentation key, so that it knows where to send the telemetry.
+When you set up Application Insights monitoring for your web app, you create an Application Insights resource in Azure. You open this resource in the Azure portal to see and analyze the telemetry collected from your app. The resource is identified by an instrumentation key. When you install the Application Insights package to monitor your app, you configure it with the instrumentation key so that it knows where to send the telemetry.
-Each Application Insights resource comes with metrics that are available out-of-box. If separate components report to the same Application Insights resource, these metrics may not make sense to dashboard/alert on.
+Each Application Insights resource comes with metrics that are available out of the box. If separate components report to the same Application Insights resource, it might not make sense to alert on these metrics.
### When to use a single Application Insights resource
+Use a single Application Insights resource:
- For application components that are deployed together. These applications are usually developed by a single team and managed by the same set of DevOps/ITOps users.
-- If it makes sense to aggregate Key Performance Indicators (KPIs) such as response durations, failure rates in dashboard etc., across all of them by default (you can choose to segment by role name in the Metrics Explorer experience).
-- If there's no need to manage Azure role-based access control (Azure RBAC) differently between the application components.
+- If it makes sense to aggregate key performance indicators, such as response durations or failure rates in a dashboard, across all of them by default. You can choose to segment by role name in the metrics explorer.
+- If there's no need to manage Azure role-based access control differently between the application components.
- If you don't need metrics alert criteria that are different between the components.
- If you don't need to manage continuous exports differently between the components.
- If you don't need to manage billing/quotas differently between the components.
Each Application Insights resource comes with metrics that are available out-of-
- If it's okay to have the same smart detection and work item integration settings across all roles.

> [!NOTE]
-> If you want to consolidate multiple Application Insights Resources, you may point your existing application components to a new, consolidated Application Insights Resource. The telemetry stored in your old resource will not be transfered to the new resource, so only delete the old resource when you have enough telemetry in the new resource for business continuity.
+> If you want to consolidate multiple Application Insights resources, you can point your existing application components to a new, consolidated Application Insights resource. The telemetry stored in your old resource won't be transferred to the new resource. Only delete the old resource when you have enough telemetry in the new resource for business continuity.
+
+### Other considerations
-### Other things to keep in mind
+Be aware that:
-- You may need to add custom code to ensure that meaningful values are set into the [Cloud_RoleName](./app-map.md?tabs=net#set-or-override-cloud-role-name) attribute. Without meaningful values set for this attribute, *NONE* of the portal experiences will work.
-- For Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure Role Environment and sets these. For all other types of apps, you'll likely need to set this explicitly.
-- Live Metrics experience doesn't support splitting by role name.
+- You might need to add custom code to ensure that meaningful values are set into the [Cloud_RoleName](./app-map.md?tabs=net#set-or-override-cloud-role-name) attribute. Without meaningful values set for this attribute, none of the portal experiences will work.
+- For Azure Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure role environment and sets the attribute. For all other types of apps, you'll likely need to set it explicitly.
+- Live Metrics doesn't support splitting by role name.
-## <a name="dynamic-ikey"></a> Dynamic instrumentation key
+## <a name="dynamic-instrumentation-key"></a> Dynamic instrumentation key
-To make it easier to change the ikey as the code moves between stages of production, reference the key dynamically in code instead of using a hardcoded/static value.
+To make it easier to change the instrumentation key as the code moves between stages of production, reference the key dynamically in code instead of using a hardcoded or static value.
-Set the key in an initialization method, such as global.aspx.cs in an ASP.NET service:
+Set the key in an initialization method, such as `Global.asax.cs`, in an ASP.NET service:
```csharp
protected void Application_Start()
...
```
-In this example, the ikeys for the different resources are placed in different versions of the web configuration file. Swapping the web configuration file - which you can do as part of the release script - will swap the target resource.
+In this example, the instrumentation keys for the different resources are placed in different versions of the web configuration file. Swapping the web configuration file, which you can do as part of the release script, will swap the target resource.
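A minimal sketch of that pattern, assuming the active *web.config* carries the key in an `appSettings` entry named `ikey` (the setting name is illustrative):

```csharp
protected void Application_Start()
{
    // Read the instrumentation key from the active configuration file
    // instead of hardcoding it; the release script swaps web.config.
    Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey =
        System.Web.Configuration.WebConfigurationManager.AppSettings["ikey"];

    // ...rest of your startup code.
}
```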
-### Web pages
-The iKey is also used in your app's web pages, in the [script that you got from the quickstart pane](../../azure-monitor/app/javascript.md). Instead of coding it literally into the script, generate it from the server state. For example, in an ASP.NET app:
+### Webpages
+The instrumentation key is also used in your app's webpages, in the [script that you got from the quickstart pane](../../azure-monitor/app/javascript.md). Instead of coding it literally into the script, generate it from the server state. For example, in an ASP.NET app:
```javascript
<script type="text/javascript">
-// Standard Application Insights web page script:
+// Standard Application Insights webpage script:
var appInsights = window.appInsights || function(config){ ...
// Modify this part:
}({instrumentationKey:
//...
```
-## Create additional Application Insights resources
+## Create more Application Insights resources
-To create an Applications Insights resource follow the [resource creation guide](./create-new-resource.md).
+To create an Applications Insights resource, see [Create an Application Insights resource](./create-new-resource.md).
-### Getting the instrumentation key
+### Get the instrumentation key
The instrumentation key identifies the resource that you created. You need the instrumentation keys of all the resources to which your app will send data.
-## Filter on build number
+## Filter on the build number
When you publish a new version of your app, you'll want to be able to separate the telemetry from different builds.
-You can set the Application Version property so that you can filter [search](../../azure-monitor/app/diagnostic-search.md) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results.
+You can set the **Application Version** property so that you can filter [search](../../azure-monitor/app/diagnostic-search.md) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results.
-There are several different methods of setting the Application Version property.
+There are several different methods of setting the **Application Version** property.
* Set directly: `telemetryClient.Context.Component.Version = typeof(MyProject.MyClass).Assembly.GetName().Version;`
-* Wrap that line in a [telemetry initializer](../../azure-monitor/app/api-custom-events-metrics.md#defaults) to ensure that all TelemetryClient instances are set consistently.
-* [ASP.NET] Set the version in `BuildInfo.config`. The web module will pick up the version from the BuildLabel node. Include this file in your project and remember to set the Copy Always property in Solution Explorer.
+* Wrap that line in a [telemetry initializer](../../azure-monitor/app/api-custom-events-metrics.md#defaults) to ensure that all `TelemetryClient` instances are set consistently, as in the sketch after this list.
+* ASP.NET: Set the version in `BuildInfo.config`. The web module will pick up the version from the `BuildLabel` node. Include this file in your project and remember to set the **Copy Always** property in Solution Explorer.
```xml <?xml version="1.0" encoding="utf-8"?>
There are several different methods of setting the Application Version property.
</DeploymentEvent> ```
-* [ASP.NET] Generate BuildInfo.config automatically in MSBuild. To do this, add a few lines to your `.csproj` file:
+
+* ASP.NET: Generate `BuildInfo.config` automatically in the Microsoft Build Engine. Add a few lines to your `.csproj` file:
```xml <PropertyGroup>
There are several different methods of setting the Application Version property.
</PropertyGroup> ```
- This generates a file called *yourProjectName*.BuildInfo.config. The Publish process renames it to BuildInfo.config.
+ This step generates a file called *yourProjectName*`.BuildInfo.config`. The Publish process renames it to `BuildInfo.config`.
- The build label contains a placeholder (AutoGen_...) when you build with Visual Studio. But when built with MSBuild, it's populated with the correct version number.
+ The build label contains a placeholder (*AutoGen_...*) when you build with Visual Studio. But when built with the Microsoft Build Engine, it's populated with the correct version number.
- To allow MSBuild to generate version numbers, set the version like `1.0.*` in AssemblyReference.cs
+    To allow the Microsoft Build Engine to generate version numbers, set the version like `1.0.*` in `AssemblyInfo.cs`.
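Here's the telemetry initializer mentioned in the second bullet above, as a minimal sketch. The class name is illustrative; register one instance at startup so that every `TelemetryClient` stamps the same version:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Illustrative initializer that stamps the assembly version on every telemetry item.
public class ComponentVersionInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Component.Version))
        {
            telemetry.Context.Component.Version =
                typeof(ComponentVersionInitializer).Assembly.GetName().Version.ToString();
        }
    }
}
```

Register it once at startup, for example: `TelemetryConfiguration.Active.TelemetryInitializers.Add(new ComponentVersionInitializer());`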
## Version and release tracking

To track the application version, make sure `buildinfo.config` is generated by your Microsoft Build Engine process. In your `.csproj` file, add:
To track the application version, make sure `buildinfo.config` is generated by y
</PropertyGroup> ```
-When it has the build info, the Application Insights web module automatically adds **Application version** as a property to every item of telemetry. That allows you to filter by version when you perform [diagnostic searches](../../azure-monitor/app/diagnostic-search.md), or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md).
+When the Application Insights web module has the build information, it automatically adds **Application Version** as a property to every item of telemetry. For this reason, you can filter by version when you perform [diagnostic searches](../../azure-monitor/app/diagnostic-search.md) or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md).
-However, notice that the build version number is generated only by the Microsoft Build Engine, not by the developer build from Visual Studio.
+The build version number is generated only by the Microsoft Build Engine, not by the developer build from Visual Studio.
### Release annotations
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Title: Application Insights Agent overview | Microsoft Docs description: Learn how to use Application Insights Agent to monitor website performance without redeploying the website. It works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Previously updated : 09/16/2019 Last updated : 11/15/2022
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
Title: Application Insights transaction diagnostics | Microsoft Docs description: This article explains Application Insights end-to-end transaction diagnostics. Previously updated : 10/31/2022 Last updated : 11/15/2022
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md
Title: Create custom dashboards in Application Insights | Microsoft Docs description: This tutorial shows you how to create custom KPI dashboards using Application Insights. Previously updated : 09/30/2020 Last updated : 11/15/2022
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
description: Application Insights SDK tutorial to monitor ASP.NET Core web appli
ms.devlang: csharp Previously updated : 08/22/2022 Last updated : 11/15/2022
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md
Title: Diagnose performance issues using Application Insights | Microsoft Docs description: Tutorial to find and diagnose performance issues in your application by using Application Insights. Previously updated : 06/15/2020 Last updated : 11/15/2022
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md
Title: Application Insights Funnels
-description: Learn how you can use Funnels to discover how customers are interacting with your application.
+ Title: Application Insights funnels
+description: Learn how you can use funnels to discover how customers are interacting with your application.
Previously updated : 10/24/2022 Last updated : 11/15/2022
-# Discover how customers are using your application with Application Insights Funnels
+# Discover how customers are using your application with Application Insights funnels
-Understanding the customer experience is of the utmost importance to your business. If your application involves multiple stages, you need to know if most customers are progressing through the entire process, or if they're ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights Funnels to gain insights into your users, and monitor step-by-step conversion rates.
+Understanding the customer experience is of great importance to your business. If your application involves multiple stages, you need to know if customers are progressing through the entire process or ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights funnels to gain insights into your users and monitor step-by-step conversion rates.
## Create your funnel
-Before you create your funnel, decide on the question you want to answer. For example, you might want to know how many users are viewing the home page, viewing a customer profile, and creating a ticket.
+Before you create your funnel, decide on the question you want to answer. For example, you might want to know how many users view the home page, view a customer profile, and create a ticket.
To create a funnel:
-1. In the **Funnels** tab, select **Edit**.
-1. Choose your *Top step*.
+1. On the **Funnels** tab, select **Edit**.
+1. Choose your **Top Step**.
- :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot of the Funnel tab and selecting steps on the edit tab." lightbox="./media/usage-funnels/funnel.png":::
+ :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot that shows the Funnel tab and selecting steps on the Edit tab." lightbox="./media/usage-funnels/funnel.png":::
-1. To apply filters to the step select **Add filters**, which will appear after you choose an item for the top step.
-1. Then choose your *Second step* and so on.
+1. To apply filters to the step, select **Add filters**. This option appears after you choose an item for the top step.
+1. Then choose your **Second Step** and so on.
-> [!NOTE]
-> Funnels are limited to a maximum of six steps.
+ > [!NOTE]
+ > Funnels are limited to a maximum of six steps.
-1. Select the **View** tab to see your funnel results
+1. Select the **View** tab to see your funnel results.
- :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot of the funnel tab on view tab showing results from the top and second step." lightbox="./media/usage-funnels/funnel-2.png":::
+ :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot that shows the Funnels View tab that shows results from the top and second steps." lightbox="./media/usage-funnels/funnel-2.png":::
-1. To save your funnel to view at another time, select **Save** at the top. You can use **Open** to open your saved funnels.
+1. To save your funnel to view at another time, select **Save** at the top. Use **Open** to open your saved funnels.
### Funnels features

-- If your app is sampled, you'll see a sampling banner. Selecting the banner opens a context pane, explaining how to turn sampling off.
-- Select a step to see more details on the right.
-- The historical conversion graph shows the conversion rates over the last 90 days.
-- Understand your users better by accessing the users tool. You can use filters in each step.
+Funnels have the following features:
+
+- If your app is sampled, you'll see a sampling banner. Selecting the banner opens a context pane that explains how to turn off sampling.
+- Select a step to see more details on the right.
+- The historical conversion graph shows the conversion rates over the last 90 days.
+- Understand your users better by accessing the users tool. You can use filters in each step.
## Next steps
+
* [Usage overview](usage-overview.md)
- * [Users, Sessions, and Events](usage-segmentation.md)
+ * [Users, sessions, and events](usage-segmentation.md)
* [Retention](usage-retention.md) * [Workbooks](../visualize/workbooks-overview.md) * [Add user context](./usage-overview.md)
- * [Export to Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md)
+ * [Export to Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md)
azure-monitor Web App Extension Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/web-app-extension-release-notes.md
Title: Release Notes for Azure web app extension - Application Insights description: Releases notes for Azure Web Apps Extension for runtime instrumentation with Application Insights. Previously updated : 06/26/2020 Last updated : 11/15/2022
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito
ms.devlang: csharp Previously updated : 05/12/2022 Last updated : 11/15/2022
azure-monitor Autoscale Common Scale Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-scale-patterns.md
Title: Overview of common autoscale patterns description: Learn some of the common patterns to auto scale your resource in Azure.+ Previously updated : 04/22/2022 Last updated : 11/17/2022 -++ # Overview of common autoscale patterns
-This article describes some of the common patterns to scale your resource in Azure.
-Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
+Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time.
-## Lets get started
+Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](./autoscale-overview.md#supported-services-for-autoscale).
-This article assumes that you are familiar with auto scale. You can [get started here to scale your resource][1]. The following are some of the common scale patterns.
+This article describes some of the common patterns you can use to scale your resources in Azure.
+## Prerequisites
-## Scale based on CPU
+This article assumes that you're familiar with autoscale. [Get started here to scale your resource](./autoscale-get-started.md).
-You have a web app (/VMSS/cloud service role) and
+## Scale based on metrics
-- You want to scale out/scale in based on CPU.
-- Additionally, you want to ensure there is a minimum number of instances.
-- Also, you want to ensure that you set a maximum limit to the number of instances you can scale to.
+Scale your resource based on metrics produced by the resource itself or any other resource.
+For example:
+* Scale your Virtual Machine Scale Set based on the CPU usage of the virtual machine.
+* Ensure a minimum number of instances.
+* Set a maximum limit on the number of instances.
-[![Scale based on CPU](./media/autoscale-common-scale-patterns/scale-based-on-cpu.png)](./media/autoscale-common-scale-patterns/scale-based-on-cpu.png#lightbox)
+The image below shows a default scale condition for a Virtual Machine Scale Set:
+ * The **Scale rule** tab shows that the metric source is the scale set itself and the metric used is Percentage CPU.
+ * The minimum number of instances running is set to 2.
+ * The maximum number of instances is set to 10.
+ * When the scale set starts, the default number of instances is 3.
-## Scale differently on weekdays vs weekends
-You have a web app (/VMSS/cloud service role) and
+## Scale based on another resource's metric
-- You want 3 instances by default (on weekdays)
-- You don't expect traffic on weekends and hence you want to scale down to 1 instance on weekends.
+Scale a resource based on the metrics from a different resource.
+The image below shows a scale rule that is scaling a Virtual Machine Scale Set based on the number of allocated ports on a load balancer.
-[![Scale differently on weekdays vs weekends](./media/autoscale-common-scale-patterns/scale-differently-on-weekends.png)](./media/autoscale-common-scale-patterns/scale-differently-on-weekends.png#lightbox)
-## Scale differently during holidays
+## Scale differently on weekends
-You have a web app (/VMSS/cloud service role) and
+You can scale your resources differently on different days of the week.
+For example, you have a web app and want to:
+- Set a minimum of 3 instances on weekdays.
+- Scale down to 1 instance on weekends when there's less traffic.
-- You want to scale up/down based on CPU usage by default
-- However, during holiday season (or specific days that are important for your business) you want to override the defaults and have more capacity at your disposal.
-[![Scale differently on holidays](./media/autoscale-common-scale-patterns/scale-for-holiday.png)](./media/autoscale-common-scale-patterns/scale-for-holiday.png#lightbox)
+## Scale differently during specific events
-## Scale based on custom metric
+You can set your scale rules and instance limits differently for specific events.
+For example:
+- Set a minimum of 3 instances by default.
+- For the week of Black Friday, set the minimum number of instances to 10 to handle the anticipated traffic.
-You have a web front end and an API tier that communicates with the backend.
-- You want to scale the API tier based on custom events in the front end (example: You want to scale your checkout process based on the number of items in the shopping cart)
+## Scale based on custom metrics
+Scale by custom metrics generated by your application.
+For example, you have a web front end and an API tier that communicates with the backend, and you want to scale the API tier based on custom events in the front end.
-![Scale based on custom metric][5]
-<!--Reference-->
-[1]: ./autoscale-get-started.md
-[2]: ./media/autoscale-common-scale-patterns/scale-based-on-cpu.png
-[3]: ./media/autoscale-common-scale-patterns/weekday-weekend-scale.png
-[4]: ./media/autoscale-common-scale-patterns/holidays-scale.png
-[5]: ./media/autoscale-common-scale-patterns/custom-metric-scale.png
+## Next steps
+
+Learn more about autoscale by referring to the following articles:
+
+* [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
+* [Azure Monitor autoscale custom metrics](./autoscale-custom-metric.md)
+* [Autoscale with multiple profiles](./autoscale-multiprofile.md)
+* [Flapping in Autoscale](./autoscale-custom-metric.md)
+* [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)
+* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
The full list of configurable fields and descriptions is available in the [Autos
For code examples, see
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-template.md)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
## Horizontal vs vertical scaling

Autoscale scales horizontally, which is an increase or decrease in the number of resource instances. For example, in a Virtual Machine Scale Set, scaling out means adding more virtual machines, and scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation because it allows you to run a large number of VMs to handle load.
In contrast, vertical scaling, keeps the same number of resources constant, but
The following services are supported by autoscale:
-| Service | Schema & Documentation |
-| | |
-| Azure Virtual machines scale sets |[Overview of autoscale with Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
-| Web apps |[Scaling Web Apps](autoscale-get-started.md) |
-| Azure API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md)
-| Azure Data Explorer Clusters|[Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling)|
-| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
-| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
- Spring Cloud |[Set up autoscale for microservice applications](../../spring-apps/how-to-setup-autoscale.md)|
-| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
-| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)|
-| Logic Apps - Integration Service Environment(ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
+| Service | Schema & Documentation |
+||--|
+| Azure Virtual machines scale sets | [Overview of autoscale with Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
+| Web apps | [Scaling Web Apps](autoscale-get-started.md) |
+| Azure API Management service | [Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) |
+| Azure Data Explorer Clusters | [Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling) |
+| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
+| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
+| Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/how-to-setup-autoscale.md) |
+| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
+| Service Bus | [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md) |
+| Logic Apps - Integration Service Environment(ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
## Next steps
To learn more about autoscale, see the following resources:
* [Azure Monitor autoscale common metrics](autoscale-common-metrics.md) * [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
-* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest)
-* [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
-* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
-* [REST API reference. Autoscale Settings](https://learn.microsoft.com/rest/api/monitor/autoscale-settings).
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-template.md)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
+* [Autoscale CLI reference](/cli/azure/monitor/autoscale)
+* [ARM template resource definition](/azure/templates/microsoft.insights/autoscalesettings)
+* [PowerShell Az.Monitor Reference](/powershell/module/az.monitor/#monitor)
+* [REST API reference. Autoscale Settings](/rest/api/monitor/autoscale-settings).
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Your retention requirement might be for compliance reasons or for occasional inv
You can configure retention and archiving for all tables in a workspace or configure each table separately. The options allow you to optimize your costs by setting only the retention you require for each data type.
-### Configure Basic Logs (preview)
+### Configure Basic Logs
You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as [Basic Logs](logs/basic-logs-configure.md).
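Tables on the Basic Logs plan remain queryable, but only with a reduced set of KQL operators. The following is a minimal sketch, assuming the `ContainerLogV2` table has already been configured as Basic Logs; the container name is a hypothetical placeholder:

```Kusto
// Query a table configured as Basic Logs. Basic Logs queries support a
// limited set of KQL operators (for example: where, extend, project).
ContainerLogV2
| where ContainerName == "my-app"   // hypothetical container name
| project TimeGenerated, PodName, LogMessage
```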
azure-monitor Change Analysis Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md
In this guide, you'll learn the two ways to enable Change Analysis for Azure Fun
For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see the [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool) section.

> [!NOTE]
-> You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+> You may not immediately see web app in-guest file changes and configuration changes. Prepare for downtime and restart your web app to view changes within 30 minutes. If you still can't see changes, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
1. Navigate to Azure Monitor's Change Analysis UI in the portal.
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
This error message likely indicates a temporary internet connectivity issue, since:
* The UI sent the resource provider registration request.
* You've resolved your [permissions issue](#you-dont-have-enough-permissions-to-register-microsoftchangeanalysis-resource-provider).
-Try refreshing the page and checking your internet connection. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+Try refreshing the page and checking your internet connection. If the error persists, [submit an Azure support ticket](https://azure.microsoft.com/support/).
### This is taking longer than expected.
-You'll receive this error message when the registration takes longer than 2 minutes. While unusual, it doesn't mean something went wrong. Restart your web app to see your registration changes. Changes should show up within a few hours of app restart.
+You'll receive this error message when the registration takes longer than 2 minutes. While unusual, it doesn't mean something went wrong.
-If your changes still don't show after 6 hours, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+1. Prepare for downtime.
+1. Restart your web app to see your registration changes.
+
+Changes should show up within a few hours of app restart. If your changes still don't show after 6 hours, [submit an Azure support ticket](https://azure.microsoft.com/support/).
## Azure Lighthouse subscription is not supported.
Often, this message includes: `Azure Lighthouse subscription is not supported, t
Azure Lighthouse allows for cross-tenant resource administration. However, cross-tenant support needs to be built for each resource provider. Currently, Change Analysis has not built this support. If you're signed into one tenant, you can't query for resource or subscription changes whose home is in another tenant.
-If this is a blocking issue for you, we'd like to hear your feedback! [Contact the Change Analysis help team](mailto:changeanalysishelp@microsoft.com) to describe how you're trying to use Change Analysis.
+If this is a blocking issue for you, [submit an Azure support ticket](https://azure.microsoft.com/support/) to describe how you're trying to use Change Analysis.
## An error occurred while getting changes. Please refresh this page or come back later to view changes.
When changes can't be loaded, Azure Monitor's Change Analysis service presents t
- Internet connectivity error from the client device.
- Change Analysis service being temporarily unavailable.
-Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+Refreshing the page after a few minutes usually fixes this issue. If the error persists, [submit an Azure support ticket](https://azure.microsoft.com/support/).
## Only partial data loaded.

This error message may occur in the Azure portal when loading change data via the Change Analysis home page. Typically, the Change Analysis service calculates and returns all change data. However, in a network failure or a temporary outage of service, you may receive an error message indicating only partial data was loaded.
-To load all change data, try waiting a few minutes and refreshing the page. If you are still only receiving partial data, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+To load all change data, try waiting a few minutes and refreshing the page. If you are still only receiving partial data, [submit an Azure support ticket](https://azure.microsoft.com/support/).
## You don't have enough permissions to view some changes. Contact your Azure subscription administrator.
This general unauthorized error message occurs when the current user doesn't hav
## Cannot see in-guest changes for newly enabled Web App.
-You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+You may not immediately see web app in-guest file changes and configuration changes.
+
+1. Prepare for brief downtime.
+1. Restart your web app.
+
+You should be able to view changes within 30 minutes. If not, [submit an Azure support ticket](https://azure.microsoft.com/support/).
## Diagnose and solve problems tool for virtual machines
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
Click into a change to view the full Resource Manager snippet and other properties.
:::image type="content" source="./media/change-analysis/change-details.png" alt-text="Screenshot of change details":::
-Send any feedback to the [Change Analysis team](mailto:changeanalysisteam@microsoft.com) from the Change Analysis blade:
+Send feedback from the Change Analysis blade:
:::image type="content" source="./media/change-analysis/change-analysis-feedback.png" alt-text="Screenshot of feedback button in Change Analysis tab":::
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
If you are utilizing [Prometheus metric scraping](container-insights-prometheus.
### Configure Basic Logs
-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs (preview)](../best-practices-cost.md#configure-basic-logs-preview). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
Use a transformation to add information to data that provides business context o
## Supported tables

Transformations may be applied to the following tables in a Log Analytics workspace.
-- Any Azure table listed in [Tables that support time transformations in Azure Monitor Logs (preview)](../logs/tables-feature-support.md)
+- Any Azure table listed in [Tables that support transformations in Azure Monitor Logs (preview)](../logs/tables-feature-support.md)
- Any custom table
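A transformation itself is a KQL statement applied at ingestion time, with the virtual table `source` representing the incoming records. The following is a rough sketch under assumed column names; they aren't tied to a specific table's schema:

```Kusto
// Illustrative ingestion-time transformation for a data collection rule (DCR):
// filter out noisy records and add a column that provides business context.
source
| where SeverityLevel != "Verbose"        // reduce ingestion cost (assumed column)
| extend Team = "Contoso-Ops"             // assumed business-context value
| project TimeGenerated, SeverityLevel, Team, Message
```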
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The following table provides unique requirements for each destination including
| Destination | Requirements |
|:|:|
| Log Analytics workspace | The workspace doesn't need to be in the same region as the resource being monitored.|
-| Storage account | Don't use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](/azure/storage/common/storage-account-overview#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](/azure/storage/common/storage-redundancy#locally-redundant-storage) (locally redundant storage) storage accounts are not supported as a log or metric destination.|
+| Storage account | Don't use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](../../storage/common/storage-account-overview.md#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](../../storage/common/storage-redundancy.md#locally-redundant-storage) (locally redundant storage) storage accounts are not supported as a log or metric destination.|
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.|
| Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details. |
Every effort is made to ensure all log data is sent correctly to your destinatio
## Next step
-[Read more about Azure platform logs](./platform-logs-overview.md)
+[Read more about Azure platform logs](./platform-logs-overview.md)
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Last updated 07/27/2022
The Diagnostic Settings Storage Retention feature is being deprecated. To configure retention for logs and metrics, use Azure Storage Lifecycle Management.
-This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal) for retention.
+This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal) for retention.
> [!IMPORTANT]
> **Deprecation Timeline.**
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
The only requirement to enable Azure Monitor managed service for Prometheus is t
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.

## Rules and alerts
-Azure Monitor managed service for Prometheus supports recording rules and alert rules using PromQL queries. Metrics recorded by recording rules are stored back in the Azure Monitor workspace and can be queried by dashboard or by other rules. Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](/azure/azure-monitor/alerts/action-groups) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types. For your AKS cluster, a set of [predefined Prometheus alert rules](/azure/azure-monitor/containers/container-insights-metric-alerts) and [recording rules ](/azure/azure-monitor/essentials/prometheus-metrics-scrape-default#recording-rules)is provided to allow easy quick start.
+Azure Monitor managed service for Prometheus supports recording rules and alert rules using PromQL queries. Metrics recorded by recording rules are stored back in the Azure Monitor workspace and can be queried by dashboard or by other rules. Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](../alerts/action-groups.md) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types. For your AKS cluster, a set of [predefined Prometheus alert rules](../containers/container-insights-metric-alerts.md) and [recording rules](./prometheus-metrics-scrape-default.md#recording-rules) is provided to allow an easy quick start.
## Limitations

See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance-related service limits for Azure Monitor workspaces.
Following are links to Prometheus documentation.
- [Enable Azure Monitor managed service for Prometheus](prometheus-metrics-enable.md).
- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).
-- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
-
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Log Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md
Title: Log Analytics integration with Power BI and Excel
-description: How to send results from Log Analytics to Power BI
+description: Learn how to send results from Log Analytics to Power BI.
Last updated 06/22/2022
# Log Analytics integration with Power BI
-This article focuses on ways to feed data from Log Analytics into Microsoft Power BI to create more visually appealing reports and dashboards.
+This article focuses on ways to feed data from Log Analytics into Power BI to create more visually appealing reports and dashboards.
-## Background
+## Background
-Azure Monitor Logs is a platform that provides an end-to-end solution for ingesting logs. [Azure Monitor Log Analytics](../data-platform.md) is the interface to query these logs. For more information on the entire Azure Monitor data platform including Log Analytics, see [Azure Monitor data platform](../data-platform.md).
-
-Microsoft Power BI is Microsoft's data visualization platform. For more information on how to get started, see [Power BI's homepage](https://powerbi.microsoft.com/).
+Azure Monitor Logs is a platform that provides an end-to-end solution for ingesting logs. [Azure Monitor Log Analytics](../data-platform.md) is the interface to query these logs. For more information on the entire Azure Monitor data platform including Log Analytics, see [Azure Monitor data platform](../data-platform.md).
+Power BI is the Microsoft data visualization platform. For more information on how to get started, see the [Power BI home page](https://powerbi.microsoft.com/).
In general, you can use free Power BI features to integrate and create visually appealing reports and dashboards.
-More advanced features may require purchasing a Power BI Pro or premium account. These features include:
+More advanced features might require purchasing a Power BI Pro or Premium account. These features include:
-For more information, see [learn more about Power BI pricing and features](https://powerbi.microsoft.com/pricing/)
+ - Sharing your work.
+ - Scheduled refreshes.
+ - Power BI apps.
+ - Dataflows and incremental refresh.
-## Integrating queries
+For more information, see [Learn more about Power BI pricing and features](https://powerbi.microsoft.com/pricing/).
-Power BI uses the [M query language](/powerquery-m/power-query-m-language-specification/) as its main querying language.
+## Integrate queries
-Log Analytics queries can be exported to M and used in Power BI directly. After running a successful query, select the **Export to Power BI (M query)** from the **Export** button in Log Analytics UI top action bar.
+Power BI uses the [M query language](/powerquery-m/power-query-m-language-specification/) as its main querying language.
+Log Analytics queries can be exported to M and used in Power BI directly. After you run a successful query, select **Export to Power BI (M query)** from the **Export** dropdown list in the Log Analytics top toolbar.
Log Analytics creates a .txt file containing the M code that can be used directly in Power BI.
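As an illustration, any query that runs successfully in Log Analytics can be exported this way. A simple, hypothetical example of a query you might export:

```Kusto
// Example query to export: agent heartbeat counts per computer over the last day.
Heartbeat
| where TimeGenerated > ago(1d)
| summarize HeartbeatCount = count() by Computer
```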
-## Connecting your logs to a dataset
+## Connect your logs to a dataset
-A Power BI dataset is a source of data ready for reporting and visualization. To connect a Log Analytics query to a dataset, copy the M code exported from Log Analytics into a blank query in Power BI.
+A Power BI dataset is a source of data ready for reporting and visualization. To connect a Log Analytics query to a dataset, copy the M code exported from Log Analytics into a blank query in Power BI.
-For more information, see [Understanding Power BI datasets](/power-bi/service-datasets-understand/).
+For more information, see [Understanding Power BI datasets](/power-bi/service-datasets-understand/).
-## Collect data with Power BI dataflows
+## Collect data with Power BI dataflows
-Power BI dataflows also allow you to collect and store data. For more information, see [Power BI Dataflows](/power-bi/service-dataflows-overview).
+Power BI dataflows also allow you to collect and store data. For more information, see [Power BI dataflows](/power-bi/service-dataflows-overview).
A dataflow is a type of "cloud ETL" designed to help you collect and prep your data. A dataset is the "model" designed to help you connect different entities and model them for your needs.
-## Incremental refresh
-
-Both Power BI datasets and Power BI dataflows have an incremental refresh option. Power BI dataflows and Power BI datasets support this feature. To use incremental refresh on dataflows, you need Power BI Premium.
+## Incremental refresh
+Both Power BI datasets and Power BI dataflows have an incremental refresh option. To use incremental refresh on dataflows, you need Power BI Premium.
-Incremental refresh runs small queries and updates smaller amounts of data per run instead of ingesting all of the data again and again when you run the query. You have the option to save large amounts of data, but add a new increment of data every time the query is run. This behavior is ideal for longer running reports.
+Incremental refresh runs small queries and updates smaller amounts of data per run instead of ingesting all the data again and again when you run the query. You can save large amounts of data but add a new increment of data every time the query is run. This behavior is ideal for longer-running reports.
-Power BI incremental refresh relies on the existence of a *datetime* filed in the result set. Before configuring incremental refresh, make sure your Log Analytics query result set includes at least one *datetime* filed.
+Power BI incremental refresh relies on the existence of a **datetime** field in the result set. Before you configure incremental refresh, make sure your Log Analytics query result set includes at least one **datetime** field.
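For example, a result set like the following sketch carries a **datetime** column (`TimeGenerated`) that incremental refresh can partition and filter on; the projected columns are illustrative:

```Kusto
// TimeGenerated gives Power BI a datetime field to drive incremental refresh.
Usage
| project TimeGenerated, DataType, Quantity
```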
-To learn more and how to configure incremental refresh, see [Power BI Datasets and Incremental refresh](/power-bi/service-premium-incremental-refresh) and [Power BI dataflows and incremental refresh](/power-bi/service-dataflows-incremental-refresh).
+To learn more and how to configure incremental refresh, see [Power BI datasets and incremental refresh](/power-bi/service-premium-incremental-refresh) and [Power BI dataflows and incremental refresh](/power-bi/service-dataflows-incremental-refresh).
## Reports and dashboards

After your data is sent to Power BI, you can continue to use Power BI to create reports and dashboards.
-For more information, see [this guide on how to create your first Power BI model and report](/training/modules/build-your-first-power-bi-report/).
+For more information, see [Create and share your first Power BI report](/training/modules/build-your-first-power-bi-report/).
## Excel integration
-You can use the same M integration used in Power BI to integrate with an Excel spreadsheet. For more information, see this [guide on how to integrate with excel](https://support.microsoft.com/office/import-data-from-external-data-sources-power-query-be4330b3-5356-486c-a168-b68e9e616f5a) and then paste the M query exported from Log Analytics.
+You can use the same M integration used in Power BI to integrate with an Excel spreadsheet. For more information, see [Import data from data sources (Power Query)](https://support.microsoft.com/office/import-data-from-external-data-sources-power-query-be4330b3-5356-486c-a168-b68e9e616f5a). Then paste the M query exported from Log Analytics.
-Additional information can be found in [Integrate Log Analytics and Excel](log-excel.md)
+For more information, see [Integrate Log Analytics and Excel](log-excel.md).
## Next steps
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
Title: Export data from Log Analytics workspace to Azure Storage Account using Logic App
-description: Describes a method to use Azure Logic Apps to query data from a Log Analytics workspace and send to Azure Storage.
-
+ Title: Export data from a Log Analytics workspace to a storage account by using Logic Apps
+description: This article describes a method to use Azure Logic Apps to query data from a Log Analytics workspace and send it to Azure Storage.
+
Last updated 03/01/2022
-# Export data from Log Analytics workspace to Azure Storage Account using Logic App
-This article describes a method to use [Azure Logic App](../../logic-apps/index.yml) to query data from a Log Analytics workspace in Azure Monitor and send to Azure Storage. Use this process when you need to export your Azure Monitor Log data for auditing and compliance scenarios or to allow another service to retrieve this data.
+# Export data from a Log Analytics workspace to a storage account by using Logic Apps
+This article describes a method to use [Azure Logic Apps](../../logic-apps/index.yml) to query data from a Log Analytics workspace in Azure Monitor and send it to Azure Storage. Use this process when you need to export your Azure Monitor Logs data for auditing and compliance scenarios or to allow another service to retrieve this data.
## Other export methods
-The method described in this article describes a scheduled export from a log query using a Logic App. Other options to export data for particular scenarios include the following:
+The method discussed in this article describes a scheduled export from a log query by using a logic app. Other options to export data for particular scenarios include:
-- To export data from your Log Analytics workspace to an Azure Storage Account or Event Hubs, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor](logs-data-export.md)
-- One time export using a Logic App. See [Azure Monitor Logs connector for Logic Apps and Power Automate](logicapp-flow-connector.md).
-- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
+- To export data from your Log Analytics workspace to a storage account or Azure Event Hubs, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor](logs-data-export.md).
+- One-time export by using a logic app. See [Azure Monitor Logs connector for Logic Apps and Power Automate](logicapp-flow-connector.md).
+- One-time export to a local machine by using a PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
## Overview
-This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs) which lets you run a log query from a Logic App and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob) is used in this procedure to send the query output to Azure storage.
+This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs), which lets you run a log query from a logic app and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob) is used in this procedure to send the query output to storage.
-[![Logic app overview](media/logs-export-logic-app/logic-app-overview.png "Screenshot of Logic app flow.")](media/logs-export-logic-app/logic-app-overview.png#lightbox)
+[![Screenshot that shows a Logic Apps overview.](media/logs-export-logic-app/logic-app-overview.png "Screenshot that shows a Logic Apps flow.")](media/logs-export-logic-app/logic-app-overview.png#lightbox)
-When you export data from a Log Analytics workspace, you should limit the amount of data processed by your Logic App workflow, by filtering and aggregating your log data in query, to reduce to the required data. For example, if you need to export sign-in events, you should filter for required events and project only the required fields. For example:
+When you export data from a Log Analytics workspace, limit the amount of data processed by your Logic Apps workflow. Filter and aggregate your log data in the query to reduce the required data. For example, if you need to export sign-in events, filter for required events and project only the required fields. For example:
```Kusto
SecurityEvent
| project TimeGenerated , Account , AccountType , Computer
```
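Aggregating before export reduces the volume even further. The following is a hedged variant that exports hourly counts instead of raw rows; the event ID filter is an illustrative example for successful Windows sign-ins:

```Kusto
SecurityEvent
| where EventID == 4624   // illustrative filter: successful sign-in events
| summarize SignInCount = count() by Account, Computer, bin(TimeGenerated, 1h)
```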
-When you export the data on a schedule, use the ingestion_time() function in your query to ensure that you don't miss late arriving data. If data is delayed due to network or platform issues, using the ingestion time ensures that data is included in the next Logic App execution. See *Add Azure Monitor Logs action* under [Logic App procedure](#logic-app-procedure) for an example.
+When you export the data on a schedule, use the `ingestion_time()` function in your query to ensure that you don't miss late-arriving data. If data is delayed because of network or platform issues, using the ingestion time ensures that data is included in the next Logic Apps execution. For an example, see the step "Add Azure Monitor Logs action" in the [Logic Apps procedure](#logic-apps-procedure) section.
## Prerequisites
-Following are prerequisites that must be completed before this procedure.
--- Log Analytics workspace--The user who creates the Logic App must have at least read permission to the workspace. -- Azure Storage Account--The Storage Account doesnΓÇÖt have to be in the same subscription as your Log Analytics workspace. The user who creates the Logic App must have write permission to the Storage Account.
+The following prerequisites must be completed before you start this procedure:
+- **Log Analytics workspace**: The user who creates the logic app must have at least read permission to the workspace.
+- **Storage account**: The storage account doesn't have to be in the same subscription as your Log Analytics workspace. The user who creates the logic app must have write permission to the storage account.
## Connector limits
-Log Analytics workspace and log queries in Azure Monitor are multitenancy services that include limits, to protect and isolate customers, and maintain quality of service. When querying for a large amount of data, you should consider the following limits, which can affect how you configure the Logic App recurrence and your log query:
-- Log queries cannot return more than 500,000 rows.
-- Log queries cannot return more than 64,000,000 bytes.
-- Log queries cannot run longer than 10 minutes by default. 
-- Log Analytics connector is limited to 100 call per minute.
-
-## Logic App procedure
-
-1. **Create container in the Storage Account**
-
- Use the procedure in [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to add a container to your Storage Account to hold the exported data. The name used for the container in this article is **loganalytics-data**, but you can use any name.
-
-1. **Create Logic App**
-
- 1. Go to **Logic Apps** in the Azure portal and click **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new Logic App and then give it a unique name. You can turn on **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
-\
- [![Create Logic App](media/logs-export-logic-app/create-logic-app.png "Screenshot of Logic App resource create.")](media/logs-export-logic-app/create-logic-app.png#lightbox)
-
- 2. Click **Review + create** and then **Create**. When the deployment is complete, click **Go to resource** to open the **Logic Apps Designer**.
-
-2. **Create a trigger for the Logic App**
-
- 1. Under **Start with a common trigger**, select **Recurrence**. This creates a Logic App that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.
- \
- [![Recurrence action](media/logs-export-logic-app/recurrence-action.png "Screenshot of recurrence action create.")](media/logs-export-logic-app/recurrence-action.png#lightbox)
-
-3. **Add Azure Monitor Logs action**
-
- The Azure Monitor Logs action lets you specify the query to run. The log query used in this example is optimized for hourly recurrence and collects the data ingested for the particular execution time. For example, if the workflow runs at 4:35, the time range would be 3:00 to 4:00. If you change the Logic App to run at a different frequency, you need the change the query as well. For example, if you set the recurrence to run daily, you would set startTime in the query to startofday(make_datetime(year,month,day,0,0)).
-
- You will be prompted to select a tenant to grant access to the Log Analytics workspace with the account that the workflow will use to run the query.
-
- 1. Click **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.
- \
- [![Azure Monitor Logs action](media/logs-export-logic-app/select-azure-monitor-connector.png "Screenshot of Azure Monitor Logs action create.")](media/logs-export-logic-app/select-azure-monitor-connector.png#lightbox)
-
- 1. Click **Azure Log Analytics – Run query and list results**.
- \
- [![Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png "Screenshot of a new action being added to a step in the Logic App Designer.")](media/logs-export-logic-app/select-query-action-list.png#lightbox)
-
- 2. Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
-
- 3. Add the following log query to the **Query** window.
-
- ```Kusto
- let dt = now();
- let year = datetime_part('year', dt);
- let month = datetime_part('month', dt);
- let day = datetime_part('day', dt);
- let hour = datetime_part('hour', dt);
- let startTime = make_datetime(year,month,day,hour,0)-1h;
- let endTime = startTime + 1h - 1tick;
- AzureActivity
- | where ingestion_time() between(startTime .. endTime)
- | project
- TimeGenerated,
- BlobTime = startTime,
- OperationName ,
- OperationNameValue ,
- Level ,
- ActivityStatus ,
- ResourceGroup ,
- SubscriptionId ,
- Category ,
- EventSubmissionTimestamp ,
- ClientIpAddress = parse_json(HTTPRequest).clientIpAddress ,
- ResourceId = _ResourceId
- ```
-
- 4. The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. This should be set to a value greater than the time range selected in the query. Since this query isn't using the **TimeGenerated** column, then **Set in query** option isn't available. See [Query scope](./scope.md) for more details about the time range. Select **Last 4 hours** for the **Time Range**. This will ensure that any records with an ingestion time larger than **TimeGenerated** will be included in the results.
- \
- [![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png "of the settings for the new Azure Monitor Logs action named Run query and visualize results.")](media/logs-export-logic-app/run-query-list-action.png#lightbox)
-
-4. **Add Parse JSON activity (optional)**
-
- The output from the **Run query and list results** action is formatted in JSON. You can parse this data and manipulate it as part of the preparation for **Compose** action.
-
- You can provide a JSON schema that describes the payload you expect to receive. The designer parses JSON content by using this schema and generates user-friendly tokens that represent the properties in your JSON content. You can then easily reference and use those properties throughout your Logic App's workflow.
-
- You can use a sample output from **Run query and list results** step. Click **Run Trigger** in Logic App ribbon, then **Run**, download and save an output record. For the sample query in previous stem, you can use the following sample output:
+Log Analytics workspace and log queries in Azure Monitor are multitenancy services that include limits to protect and isolate customers and maintain quality of service. When you query for a large amount of data, consider the following limits, which can affect how you configure the Logic Apps recurrence and your log query:
+
+- Log queries can't return more than 500,000 rows.
+- Log queries can't return more than 64,000,000 bytes.
+- Log queries can't run longer than 10 minutes by default.
+- Log Analytics connector is limited to 100 calls per minute.
+
+## Logic Apps procedure
+
+The following sections walk you through the procedure.
+
+### Create a container in the storage account
+
+Use the procedure in [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to add a container to your storage account to hold the exported data. The name used for the container in this article is **loganalytics-data**, but you can use any name.
+
+### Create a logic app
+
+1. Go to **Logic Apps** in the Azure portal and select **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new Logic App. Then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor Logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
+
+ [![Screenshot that shows creating a logic app.](media/logs-export-logic-app/create-logic-app.png "Screenshot that shows creating a Logic Apps resource.")](media/logs-export-logic-app/create-logic-app.png#lightbox)
+
+1. Select **Review + create** and then select **Create**. After the deployment is finished, select **Go to resource** to open the **Logic Apps Designer**.
+
+### Create a trigger for the logic app
+
+Under **Start with a common trigger**, select **Recurrence**. This setting creates a logic app that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day**. In the **Interval** box, enter **1** to run the workflow once per day.
+
+[![Screenshot that shows a Recurrence action.](media/logs-export-logic-app/recurrence-action.png "Screenshot that shows creating a recurrence action.")](media/logs-export-logic-app/recurrence-action.png#lightbox)
+
+### Add an Azure Monitor Logs action
+
+The Azure Monitor Logs action lets you specify the query to run. The log query used in this example is optimized for hourly recurrence. It collects the data ingested for the particular execution time. For example, if the workflow runs at 4:35, the time range would be 3:00 to 4:00. If you change the logic app to run at a different frequency, you need to change the query too. For example, if you set the recurrence to run daily, you set `startTime` in the query to `startofday(make_datetime(year,month,day,0,0))`.
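For example, a daily variant of the hourly query shown in the following steps might bound the window to the previous full day. This is a sketch under that assumption, not part of the original walkthrough:

```Kusto
// Daily variant (sketch): export data ingested during the previous full day.
let dt = now();
let year = datetime_part('year', dt);
let month = datetime_part('month', dt);
let day = datetime_part('day', dt);
let startTime = startofday(make_datetime(year, month, day, 0, 0)) - 1d;
let endTime = startTime + 1d - 1tick;
AzureActivity
| where ingestion_time() between (startTime .. endTime)
```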
+
+You're prompted to select a tenant to grant access to the Log Analytics workspace with the account that the workflow will use to run the query.
+
+1. Select **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, enter **azure monitor**. Then select **Azure Monitor Logs**.
+
+    [![Screenshot that shows an Azure Monitor Logs action.](media/logs-export-logic-app/select-azure-monitor-connector.png "Screenshot that shows creating an Azure Monitor Logs action.")](media/logs-export-logic-app/select-azure-monitor-connector.png#lightbox)
+
+1. Select **Azure Log Analytics – Run query and list results**.
+
+ [![Screenshot that shows Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png "Screenshot that shows a new action being added to a step in the Logic Apps Designer.")](media/logs-export-logic-app/select-query-action-list.png#lightbox)
+
+1. Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select **Log Analytics Workspace** for the **Resource Type**. Then select the workspace name under **Resource Name**.
+
+1. Add the following log query to the **Query** window:
+
+ ```Kusto
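+    // Derive the previous full hour from the workflow's execution time.
+    // For example, a run at 4:35 exports data ingested between 3:00 and 4:00.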
+ let dt = now();
+ let year = datetime_part('year', dt);
+ let month = datetime_part('month', dt);
+ let day = datetime_part('day', dt);
+ let hour = datetime_part('hour', dt);
+ let startTime = make_datetime(year,month,day,hour,0)-1h;
+ let endTime = startTime + 1h - 1tick;
+ AzureActivity
+ | where ingestion_time() between(startTime .. endTime)
+ | project
+ TimeGenerated,
+ BlobTime = startTime,
+ OperationName ,
+ OperationNameValue ,
+ Level ,
+ ActivityStatus ,
+ ResourceGroup ,
+ SubscriptionId ,
+ Category ,
+ EventSubmissionTimestamp ,
+ ClientIpAddress = parse_json(HTTPRequest).clientIpAddress ,
+ ResourceId = _ResourceId
+ ```
+
+1. The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. The value should be greater than the time range selected in the query. Because this query isn't using the **TimeGenerated** column, the **Set in query** option isn't available. For more information about the time range, see [Query scope](./scope.md). Select **Last 4 hours** for the **Time Range**. This setting ensures that any records with an ingestion time larger than **TimeGenerated** will be included in the results.
+
+ [![Screenshot that shows the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png "Screenshot that shows the settings for the Azure Monitor Logs action named Run query.")](media/logs-export-logic-app/run-query-list-action.png#lightbox)
+
+### Add a Parse JSON action (optional)
+
+The output from the **Run query and list results** action is formatted in JSON. You can parse this data and manipulate it as part of the preparation for the **Compose** action.
+
+You can provide a JSON schema that describes the payload you expect to receive. The designer parses JSON content by using this schema and generates user-friendly tokens that represent the properties in your JSON content. You can then easily reference and use those properties throughout your Logic App's workflow.
+
+You can use a sample output from the **Run query and list results** step.
+
+1. Select **Run Trigger** in the Logic Apps ribbon. Then select **Run**. Download and save an output record. For the sample query in the previous step, you can use the following sample output:
```json
{
    ...
}
```
Log Analytics workspace and log queries in Azure Monitor are multitenancy servic
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **json** and then select **Parse JSON**.
- \
- [![Select Parse JSON operator](media/logs-export-logic-app/select-parse-json.png "Screenshot of Parse JSON operator.")](media/logs-export-logic-app/select-parse-json.png#lightbox)
+1. Select **+ New step** and then select **+ Add an action**. Under **Choose an operation**, enter **json** and then select **Parse JSON**.
+
+ [![Screenshot that shows selecting a Parse JSON operator.](media/logs-export-logic-app/select-parse-json.png "Screenshot that shows the Parse JSON operator.")](media/logs-export-logic-app/select-parse-json.png#lightbox)
+
+1. Select the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This output is from the log query.
+
+ [![Screenshot that shows selecting a Body.](media/logs-export-logic-app/select-body.png "Screenshot that shows a Parse JSON Content setting with the output Body from the previous step.")](media/logs-export-logic-app/select-body.png#lightbox)
+
+1. Copy the sample record saved earlier. Select **Use sample payload to generate schema** and paste.
+
+ [![Screenshot that shows parsing a JSON payload.](media/logs-export-logic-app/parse-json-payload.png "Screenshot that shows a Parse JSON schema.")](media/logs-export-logic-app/parse-json-payload.png#lightbox)
+
+### Add the Compose action
+
+The **Compose** action takes the parsed JSON output and creates the object that you need to store in the blob.
+
+1. Select **+ New step**, and then select **+ Add an action**. Under **Choose an operation**, enter **compose**. Then select the **Compose** action.
+
+ [![Screenshot that shows selecting a Compose action.](media/logs-export-logic-app/select-compose.png "Screenshot that shows a Compose action.")](media/logs-export-logic-app/select-compose.png#lightbox)
+
+1. Select the **Inputs** box to display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This parsed output is from the log query.
+
+ [![Screenshot that shows selecting a body for a Compose action.](media/logs-export-logic-app/select-body-compose.png "Screenshot that shows a body for Compose action.")](media/logs-export-logic-app/select-body-compose.png#lightbox)
+
+### Add the Create blob action
- 1. Click in the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This is the output from the log query.
- \
- [![Select Body](media/logs-export-logic-app/select-body.png "Screenshot of Par JSON Content setting with output Body from previous step.")](media/logs-export-logic-app/select-body.png#lightbox)
+The **Create blob** action writes the composed JSON to storage.
- 1. Copy the sample record saved earlier, click **Use sample payload to generate schema** and paste.
-\
- [![Parse JSON payload](media/logs-export-logic-app/parse-json-payload.png "Screenshot of Parse JSON schema.")](media/logs-export-logic-app/parse-json-payload.png#lightbox)
+1. Select **+ New step**, and then select **+ Add an action**. Under **Choose an operation**, enter **blob**. Then select the **Create blob** action.
-5. **Add the Compose action**
-
- The **Compose** action takes the parsed JSON output and creates the object that you need to store in the blob.
+ [![Screenshot that shows selecting the Create Blob action.](media/logs-export-logic-app/select-create-blob.png "Screenshot that shows creating a Blob storage action.")](media/logs-export-logic-app/select-create-blob.png#lightbox)
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **compose** and then select the **Compose** action.
- \
- [![Select Compose action](media/logs-export-logic-app/select-compose.png "Screenshot of Compose action.")](media/logs-export-logic-app/select-compose.png#lightbox)
+1. Enter a name for the connection to your storage account in **Connection Name**. Then select the folder icon in the **Folder path** box to select the container in your storage account. Select **Blob name** to see a list of values from previous activities. Select **Expression** and enter an expression that matches your time interval. For this query, which is run hourly, the following expression sets the blob name to the previous hour:
- 1. Click the **Inputs** box display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This is the parsed output from the log query.
- \
- [![Select body for Compose action](media/logs-export-logic-app/select-body-compose.png "Screenshot of body for Compose action.")](media/logs-export-logic-app/select-body-compose.png#lightbox)
+ ```json
+ subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour')
+ ```
-6. **Add the Create Blob action**
-
- The Create Blob action writes the composed JSON to storage.
+ [![Screenshot that shows a blob expression.](media/logs-export-logic-app/blob-expression.png "Screenshot that shows a Blob action connection.")](media/logs-export-logic-app/blob-expression.png#lightbox)
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **blob** and then select the **Create Blob** action.
- \
- [![Select Create blob](media/logs-export-logic-app/select-create-blob.png "Screenshot of blob storage action create.")](media/logs-export-logic-app/select-create-blob.png#lightbox)
+1. Select the **Blob content** box to display a list of values from previous activities. Then select **Outputs** in the **Compose** section.
- 1. Type a name for the connection to your Storage Account in **Connection Name** and then click the folder icon in the **Folder path** box to select the container in your Storage Account. Click the **Blob name** to see a list of values from previous activities. Click **Expression** and enter an expression that matches your time interval. For this query which is run hourly, the following expression sets the blob name per previous hour:
+ [![Screenshot that shows creating a blob expression.](media/logs-export-logic-app/create-blob.png "Screenshot that shows a Blob action output configuration.")](media/logs-export-logic-app/create-blob.png#lightbox)
- ```json
- subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour')
- ```
- \
- [![Blob expression](media/logs-export-logic-app/blob-expression.png "Screenshot of blob action connection.")](media/logs-export-logic-app/blob-expression.png#lightbox)
+### Test the logic app
- 2. Click the **Blob content** box to display a list of values from previous activities and then select **Outputs** in the **Compose** section.
- \
- [![Create blob expression](media/logs-export-logic-app/create-blob.png "Screenshot of blob action output configuration.")](media/logs-export-logic-app/create-blob.png#lightbox)
+To test the workflow, select **Run**. If the workflow has errors, they're indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md), if necessary.
+[![Screenshot that shows Runs history.](media/logs-export-logic-app/runs-history.png "Screenshot that shows trigger run history.")](media/logs-export-logic-app/runs-history.png#lightbox)
-7. **Test the Logic App**
-
- Test the workflow by clicking **Run**. If the workflow has errors, it will be indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md) if necessary.
- \
- [![Runs history](media/logs-export-logic-app/runs-history.png "Screenshot of trigger run history.")](media/logs-export-logic-app/runs-history.png#lightbox)
+### View logs in storage
+Go to the **Storage accounts** menu in the Azure portal and select your storage account. Select the **Blobs** tile. Then select the container you specified in the **Create blob** action. Select one of the blobs and then select **Edit blob**.
-8. **View logs in Storage**
-
- Go to the **Storage accounts** menu in the Azure portal and select your Storage Account. Click the **Blobs** tile and select the container you specified in the Create blob action. Select one of the blobs and then **Edit blob**.
- \
- [![Blob data](media/logs-export-logic-app/blob-data.png "Screenshot of sample data exported to blob.")](media/logs-export-logic-app/blob-data.png#lightbox)
+[![Screenshot that shows blob data.](media/logs-export-logic-app/blob-data.png "Screenshot that shows sample data exported to a blob.")](media/logs-export-logic-app/blob-data.png#lightbox)
## Next steps

- Learn more about [log queries in Azure Monitor](./log-query-overview.md).
-- Learn more about [Logic Apps](../../logic-apps/index.yml)
+- Learn more about [Logic Apps](../../logic-apps/index.yml).
- Learn more about [Power Automate](https://flow.microsoft.com).
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
If your Private Link setup was created before April 19, 2021, it won't reach the
### Collecting custom logs and IIS logs over Private Link

Storage accounts are used in the ingestion process of custom logs. By default, service-managed storage accounts are used. However, to ingest custom logs on private links, you must use your own storage accounts and associate them with Log Analytics workspace(s).
-For more information on connecting your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) and specifically [Using Private Links](private-storage.md#using-private-links) and [Link storage accounts to your Log Analytics workspace](private-storage.md#link-storage-accounts-to-your-log-analytics-workspace).
+For more information on connecting your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) and specifically [Use Private Links](private-storage.md#use-private-links) and [Link storage accounts to your Log Analytics workspace](private-storage.md#link-storage-accounts-to-your-log-analytics-workspace).
### Automation

If you use Log Analytics solutions that require an Automation account (such as Update Management, Change Tracking, or Inventory), you should also create a Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
Title: Using customer-managed storage accounts in Azure Monitor Log Analytics
-description: Use your own storage account for Log Analytics scenarios
+ Title: Use customer-managed storage accounts in Azure Monitor Log Analytics
+description: Use your own Azure Storage account for Azure Monitor Log Analytics scenarios.
Last updated 04/04/2022
-# Using customer-managed storage accounts in Azure Monitor Log Analytics
+# Use customer-managed storage accounts in Azure Monitor Log Analytics
-Log Analytics relies on Azure Storage in various scenarios. This use is typically managed automatically. However, some cases require you to provide and manage your own storage account, also referred to as a customer-managed storage account. This document covers the use of customer-managed storage for WAD/LAD logs, Private Link, and customer-managed key (CMK) encryption.
+Log Analytics relies on Azure Storage in various scenarios. This use is typically managed automatically. But some cases require you to provide and manage your own storage account, which is also known as a customer-managed storage account. This article covers the use of customer-managed storage for WAD/LAD logs, Azure Private Link, and customer-managed key (CMK) encryption.
> [!NOTE]
-> We recommend that you don't take a dependency on the contents Log Analytics uploads to customer-managed storage, given that formatting and content may change.
+> We recommend that you don't take a dependency on the contents that Log Analytics uploads to customer-managed storage because formatting and content might change.
-## Ingesting Azure Diagnostics extension logs (WAD/LAD)
-The Azure Diagnostics extension agents (also called WAD and LAD for Windows and Linux agents respectively) collect various operating system logs and store them on a customer-managed storage account. You can then ingest these logs into Log Analytics to review and analyze them.
-### How to collect Azure Diagnostics extension logs from your storage account
-Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](../agents/diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/storage-insights/create-or-update).
+## Ingest Azure Diagnostics extension logs (WAD/LAD)
+The Azure Diagnostics extension agents (also called WAD and LAD for Windows and Linux agents, respectively) collect various operating system logs and store them on a customer-managed storage account. You can then ingest these logs into Log Analytics to review and analyze them.
+
+### Collect Azure Diagnostics extension logs from your storage account
+Connect the storage account to your Log Analytics workspace as a storage data source by using the [Azure portal](../agents/diagnostics-extension-logs.md#collect-logs-from-azure-storage). You can also call the [Storage Insights API](/rest/api/loganalytics/storage-insights/create-or-update).
+
+Supported data types are:
-Supported data types:
* [Syslog](../agents/data-sources-syslog.md)
* [Windows events](../agents/data-sources-windows-events.md)
-* Service Fabric
-* [ETW Events](../agents/data-sources-event-tracing-windows.md)
-* [IIS Logs](../agents/data-sources-iis-logs.md)
+* Azure Service Fabric
+* [Event Tracing for Windows (ETW) events](../agents/data-sources-event-tracing-windows.md)
+* [IIS logs](../agents/data-sources-iis-logs.md)
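
To script the API option described above, one possibility is to issue the Storage Insights REST call through `az rest`. This is a rough sketch, not the documented procedure: the resource names, the container and table names, and the `2020-08-01` API version are placeholder assumptions to adjust for your environment.

```azurecli
# Hypothetical names throughout; substitute your subscription, resource group,
# workspace, and storage account before running.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/storageInsightConfigs/myStorageInsight?api-version=2020-08-01" \
  --body '{
    "properties": {
      "storageAccount": {
        "id": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>",
        "key": "<storage-account-key>"
      },
      "containers": ["wad-iis-logfiles"],
      "tables": ["WADWindowsEventLogsTable"]
    }
  }'
```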
-## Using Private links
-Customer-managed storage accounts are used to ingest Custom logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
+## Use private links
+Customer-managed storage accounts are used to ingest custom logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
> [!IMPORTANT]
-> Collection of IIS logs is not supported with private link.
+> Collection of IIS logs isn't supported with private links.
+
+### Use a customer-managed storage account over a private link
+
+Meet the following requirements.
-### Using a customer-managed storage account over a Private Link
#### Workspace requirements
-When connecting to Azure Monitor over a private link, Log Analytics agents are only able to send logs to workspaces accessible over a private link. This requirement means you should:
-* Configure an Azure Monitor Private Link Scope (AMPLS) object
-* Connect it to your workspaces
-* Connect the AMPLS to your network over a private link.
+When you connect to Azure Monitor over a private link, Log Analytics agents are only able to send logs to workspaces accessible over a private link. This requirement means you should:
-For more information on the AMPLS configuration procedure, see [Use Azure Private Link to securely connect networks to Azure Monitor](./private-link-security.md).
+* Configure an Azure Monitor Private Link Scope (AMPLS) object.
+* Connect it to your workspaces.
+* Connect the AMPLS to your network over a private link.
+
+For more information on the AMPLS configuration procedure, see [Use Azure Private Link to securely connect networks to Azure Monitor](./private-link-security.md).
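
As a minimal sketch of that procedure with the Azure CLI (the scope and workspace names are illustrative, and the private endpoint that connects the AMPLS to your network is omitted):

```azurecli
# Create the Azure Monitor Private Link Scope (AMPLS).
az monitor private-link-scope create --name my-ampls --resource-group my-rg

# Connect a Log Analytics workspace to the AMPLS as a scoped resource.
az monitor private-link-scope scoped-resource create \
  --name my-workspace-link \
  --resource-group my-rg \
  --scope-name my-ampls \
  --linked-resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
```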
#### Storage account requirements
For the storage account to successfully connect to your private link, it must:
-* Be located on your VNet or a peered network, and connected to your VNet over a private link.
-* Be located on the same region as the workspace it's linked to.
-* Allow Azure Monitor to access the storage account. If you chose to allow only select networks to access your storage account, you should select the exception: "Allow trusted Microsoft services to access this storage account".
-![Storage account trust MS services image](./media/private-storage/storage-trust.png)
-* If your workspace handles traffic from other networks as well, you should configure the storage account to allow incoming traffic coming from the relevant networks/internet.
-* Coordinate TLS version between the agents and the storage account - It's recommended that you send data to Log Analytics using TLS 1.2 or higher. Review [platform-specific guidance](./data-security.md#sending-data-securely-using-tls-12), and if required [configure your agents to use TLS 1.2](../agents/agent-windows.md#configure-agent-to-use-tls-12). If for some reason that's not possible, configure the storage account to accept TLS 1.0.
-
-### Using a customer-managed storage account for CMK data encryption
-Azure Storage encrypts all data at rest in a storage account. By default, it uses Microsoft-managed keys (MMK) to encrypt the data; However, Azure Storage also allows you to use CMK from Azure Key vault to encrypt your storage data. You can either import your own keys into Azure Key Vault, or you can use the Azure Key Vault APIs to generate keys.
+
+* Be located on your virtual network or a peered network and connected to your virtual network over a private link.
+* Be located on the same region as the workspace it's linked to.
+* Allow Azure Monitor to access the storage account. If you chose to allow only select networks to access your storage account, select the exception **Allow trusted Microsoft services to access this storage account**.
+
+ ![Screenshot that shows Storage account trust Microsoft services.](./media/private-storage/storage-trust.png)
+
+If your workspace handles traffic from other networks, configure the storage account to allow incoming traffic coming from the relevant networks/internet.
+
+Coordinate the TLS version between the agents and the storage account. We recommend that you send data to Log Analytics by using TLS 1.2 or higher. Review the [platform-specific guidance](./data-security.md#sending-data-securely-using-tls-12). If required, [configure your agents to use TLS 1.2](../agents/agent-windows.md#configure-agent-to-use-tls-12). If that's not possible, configure the storage account to accept TLS 1.0.
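
For example, a hedged CLI sketch that sets both the trusted-services exception and the minimum TLS version (the account and group names are placeholders):

```azurecli
# Let trusted Microsoft services, including Azure Monitor, bypass the network rules.
az storage account update --name mystorageaccount --resource-group my-rg --bypass AzureServices

# Require TLS 1.2 or higher on incoming connections.
az storage account update --name mystorageaccount --resource-group my-rg --min-tls-version TLS1_2
```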
+
+### Use a customer-managed storage account for CMK data encryption
+Azure Storage encrypts all data at rest in a storage account. By default, it uses Microsoft-managed keys (MMKs) to encrypt the data. However, Azure Storage also allows you to use CMKs from Azure Key Vault to encrypt your storage data. You can either import your own keys into Key Vault or use the Key Vault APIs to generate keys.
+ #### CMK scenarios that require a customer-managed storage account
-* Encrypting log-alert queries with CMK
-* Encrypting saved queries with CMK
-#### How to apply CMK to customer-managed storage accounts
+A customer-managed storage account is required for:
+
+* Encrypting log-alert queries with CMKs.
+* Encrypting saved queries with CMKs.
+
+#### Apply CMKs to customer-managed storage accounts
+
+Follow this guidance to apply CMKs to customer-managed storage accounts.
+ ##### Storage account requirements
-The storage account and the key vault must be in the same region, but they can be in different subscriptions. For more information about Azure Storage encryption and key management, see [Azure Storage encryption for data at rest](../../storage/common/storage-service-encryption.md).
+The storage account and the key vault must be in the same region, but they can be in different subscriptions. For more information about Azure Storage encryption and key management, see [Azure Storage encryption for data at rest](../../storage/common/storage-service-encryption.md).
-##### Apply CMK to your storage accounts
-To configure your Azure Storage account to use CMK with Azure Key Vault, use the [Azure portal](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), [PowerShell](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), or the [CLI](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json).
+##### Apply CMKs to your storage accounts
+To configure your Azure Storage account to use CMKs with Key Vault, use the [Azure portal](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), [PowerShell](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), or the [Azure CLI](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json).
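
For instance, a minimal CLI sketch, assuming the key vault, the key, and the storage account's access to the vault are already in place (names are placeholders):

```azurecli
# Point the storage account's encryption at a Key Vault key (CMK).
az storage account update \
  --name mystorageaccount \
  --resource-group my-rg \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault "https://mykeyvault.vault.azure.net" \
  --encryption-key-name mykey
```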
## Link storage accounts to your Log Analytics workspace
> [!NOTE]
-> - Delending if you link storage account for queries, or for log alerts, existing queries will be removed from workspace. Copy saved searches and log alerts that you need before this configuration. You can find directions for moving saved queries and log alerts in [workspace move procedure](./move-workspace-region.md).
-> - You can connect up to five storage accounts for the ingestion of Custom logs & IIS logs, and one storage account for Saved queries and Saved log alert queries (each).
-
-### Using the Azure portal
-On the Azure portal, open your Workspace' menu and select *Linked storage accounts*. A blade will open, showing the linked storage accounts by the use cases mentioned above (Ingestion over Private Link, applying CMK to saved queries or to alerts).
-![Linked storage accounts blade image](./media/private-storage/all-linked-storage-accounts.png)
-Selecting an item on the table will open its storage account details, where you can set or update the linked storage account for this type.
-![Link a storage account blade image](./media/private-storage/link-a-storage-account-blade.png)
+> If you link a storage account for queries, or for log alerts, existing queries will be removed from the workspace. Copy saved searches and log alerts that you need before you undertake this configuration. For directions on moving saved queries and log alerts, see [Workspace move procedure](./move-workspace-region.md).
+>
+> You can connect up to:
+> - Five storage accounts for the ingestion of custom logs and IIS logs.
+> - One storage account for saved queries.
+> - One storage account for saved log alert queries.
+
+### Use the Azure portal
+On the Azure portal, open your workspace menu and select **Linked storage accounts**. A pane shows the linked storage accounts by the use cases previously mentioned (ingestion over Private Link, applying CMKs to saved queries or to alerts).
+
+![Screenshot that shows the Linked storage accounts pane.](./media/private-storage/all-linked-storage-accounts.png)
+
+Selecting an item on the table opens its storage account details, where you can set or update the linked storage account for this type.
+
+![Screenshot that shows the Link storage account pane.](./media/private-storage/link-a-storage-account-blade.png)
You can use the same account for different use cases if you prefer.
-### Using the Azure CLI or REST API
+### Use the Azure CLI or REST API
You can also link a storage account to your workspace via the [Azure CLI](/cli/azure/monitor/log-analytics/workspace/linked-storage) or [REST API](/rest/api/loganalytics/linkedstorageaccounts).
-The applicable dataSourceType values are:
-* CustomLogs – to use the storage account for custom logs and IIS logs ingestion
-* Query - to use the storage account to store saved queries (required for CMK encryption)
-* Alerts - to use the storage account to store log-based alerts (required for CMK encryption)
+The applicable `dataSourceType` values are:
+
+* `CustomLogs`: To use the storage account for custom logs and IIS logs ingestion.
+* `Query`: To use the storage account to store saved queries (required for CMK encryption).
+* `Alerts`: To use the storage account to store log-based alerts (required for CMK encryption).
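
As an illustrative sketch of the CLI form (the workspace, group, and account names are assumptions):

```azurecli
# Link a storage account for custom log and IIS log ingestion (dataSourceType CustomLogs).
az monitor log-analytics workspace linked-storage create \
  --resource-group my-rg \
  --workspace-name my-workspace \
  --type CustomLogs \
  --storage-accounts mystorageaccount
```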
+## Manage linked storage accounts
-## Managing linked storage accounts
+Follow this guidance to manage your linked storage accounts.
### Create or modify a link
-When you link a storage account to a workspace, Log Analytics will start using it instead of the storage account owned by the service. You can
-* Register multiple storage accounts to spread the load of logs between them
-* Reuse the same storage account for multiple workspaces
+When you link a storage account to a workspace, Log Analytics will start using it instead of the storage account owned by the service. You can:
+
+* Register multiple storage accounts to spread the load of logs between them.
+* Reuse the same storage account for multiple workspaces.
### Unlink a storage account
-To stop using a storage account, unlink the storage from the workspace.
-Unlinking all storage accounts from a workspace means Log Analytics will attempt to rely on service-managed storage accounts. If your network has limited access to the internet, these storages may not be available and any scenario that relies on storage will fail.
+To stop using a storage account, unlink the storage from the workspace. Unlinking all storage accounts from a workspace means Log Analytics will attempt to rely on service-managed storage accounts. If your network has limited access to the internet, these storage accounts might not be available and any scenario that relies on storage will fail.
### Replace a storage account
-To replace a storage account used for ingestion,
-1. **Create a link to a new storage account.** The logging agents will get the updated configuration and start sending data to the new storage as well. The process could take a few minutes.
-2. **Then unlink the old storage account so agents will stop writing to the removed account.** The ingestion process keeps reading data from this account until it's all ingested. Don't delete the storage account until you see all logs were ingested.
+To replace a storage account used for ingestion:
+
+1. **Create a link to a new storage account**. The logging agents will get the updated configuration and start sending data to the new storage. The process could take a few minutes.
+2. **Unlink the old storage account so agents will stop writing to the removed account**. The ingestion process keeps reading data from this account until it's all ingested. Don't delete the storage account until you see that all logs were ingested.
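
A rough CLI sketch of this two-step replacement, assuming the `add` and `remove` subcommands of the `linked-storage` group and an existing `CustomLogs` link (all names are placeholders):

```azurecli
# Step 1: add the new storage account alongside the old one.
az monitor log-analytics workspace linked-storage add \
  --resource-group my-rg --workspace-name my-workspace \
  --type CustomLogs --storage-accounts newstorageaccount

# Step 2: once agents pick up the new configuration and the backlog is ingested,
# remove the old account.
az monitor log-analytics workspace linked-storage remove \
  --resource-group my-rg --workspace-name my-workspace \
  --type CustomLogs --storage-accounts oldstorageaccount
```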
+
+### Maintain storage accounts
+
+Follow this guidance to maintain your storage accounts.
-### Maintaining storage accounts
#### Manage log retention
-When using your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences.
+When you use your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences.
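
One hedged way to implement such a policy is a Blob Storage lifecycle management rule; the 30-day window below is an arbitrary assumption:

```azurecli
# Delete log blobs that haven't been modified for 30 days.
az storage account management-policy create \
  --account-name mystorageaccount \
  --resource-group my-rg \
  --policy '{
    "rules": [{
      "enabled": true,
      "name": "expire-old-logs",
      "type": "Lifecycle",
      "definition": {
        "actions": {"baseBlob": {"delete": {"daysAfterModificationGreaterThan": 30}}},
        "filters": {"blobTypes": ["blockBlob"]}
      }
    }]
  }'
```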
#### Consider load
-Storage accounts can handle a certain load of read and write requests before they start throttling requests (For more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage account's capacity and performance review its [Insights in the Azure portal](../../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+Storage accounts can handle a certain load of read and write requests before they start throttling requests. For more information, see [Scalability and performance targets for Azure Blob Storage](../../storage/common/scalability-targets-standard-account.md).
-### Related charges
-Storage accounts are charged by the volume of stored data, the type of the storage, and the type of redundancy. For details see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
+Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register another storage account to spread the load between them. To monitor your storage account's capacity and performance, review its [Insights in the Azure portal](../../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+### Related charges
+Storage accounts are charged by the volume of stored data, the type of storage, and the type of redundancy. For more information, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Azure Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
## Next steps
-- Learn about [using Azure Private Link to securely connect networks to Azure Monitor](private-link-security.md)
-- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
+- Learn about [using Private Link to securely connect networks to Azure Monitor](private-link-security.md).
+- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md).
azure-monitor Tutorial Workspace Transformations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-portal.md
Title: Tutorial - Add workspace transformation to Azure Monitor Logs using Azure portal
-description: Describes how to add a custom transformation to data flowing through Azure Monitor Logs using the Azure portal.
+ Title: 'Tutorial: Add a workspace transformation to Azure Monitor Logs by using the Azure portal'
+description: Describes how to add a custom transformation to data flowing through Azure Monitor Logs by using the Azure portal.
Last updated 07/01/2022
-# Tutorial: Add transformation in workspace data collection rule using the Azure portal (preview)
-This tutorial walks you through configuration of a sample [transformation in a workspace data collection rule](../essentials/data-collection-transformations.md) using the Azure portal. [Transformations](../essentials/data-collection-transformations.md) in Azure Monitor allow you to filter or modify incoming data before it's sent to its destination. Workspace transformations provide support for [ingestion-time transformations](../essentials/data-collection-transformations.md) for workflows that don't yet use the [Azure Monitor data ingestion pipeline](../essentials/data-collection.md).
+# Tutorial: Add a transformation in a workspace data collection rule by using the Azure portal (preview)
+This tutorial walks you through configuration of a sample [transformation in a workspace data collection rule (DCR)](../essentials/data-collection-transformations.md) by using the Azure portal. [Transformations](../essentials/data-collection-transformations.md) in Azure Monitor allow you to filter or modify incoming data before it's sent to its destination. Workspace transformations provide support for [ingestion-time transformations](../essentials/data-collection-transformations.md) for workflows that don't yet use the [Azure Monitor data ingestion pipeline](../essentials/data-collection.md).
-Workspace transformations are stored together in a single [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) for the workspace, called the workspace DCR. Each transformation is associated with a particular table. The transformation will be applied to all data sent to this table from any workflow not using a DCR.
+Workspace transformations are stored together in a single [DCR](../essentials/data-collection-rule-overview.md) for the workspace, which is called the workspace DCR. Each transformation is associated with a particular table. The transformation will be applied to all data sent to this table from any workflow not using a DCR.
> [!NOTE]
-> This tutorial uses the Azure portal to configure a workspace transformation. See [Tutorial: Add transformation in workspace data collection rule to Azure Monitor using resource manager templates (preview)](tutorial-workspace-transformations-api.md) for the same tutorial using resource manager templates and REST API.
+> This tutorial uses the Azure portal to configure a workspace transformation. For the same tutorial using Azure Resource Manager templates and REST API, see [Tutorial: Add transformation in workspace data collection rule to Azure Monitor using resource manager templates (preview)](tutorial-workspace-transformations-api.md).
-In this tutorial, you learn to:
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Configure [workspace transformation](../essentials/data-collection-transformations.md#workspace-transformation-dcr) for a table in a Log Analytics workspace.
+> * Configure a [workspace transformation](../essentials/data-collection-transformations.md#workspace-transformation-dcr) for a table in a Log Analytics workspace.
> * Write a log query for a workspace transformation.

## Prerequisites
-To complete this tutorial, you need the following:
+To complete this tutorial, you need:
-- Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
-- [Permissions to create data collection rule (DCR) objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
-- The table must already have some data.
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
+- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- A table that already has some data.
- The table can't be linked to the [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr).
+## Overview of the tutorial
+In this tutorial, you'll reduce the storage requirement for the `LAQueryLogs` table by filtering out certain records. You'll also remove the contents of a column while parsing the column data to store a piece of data in a custom column. The [LAQueryLogs table](query-audit.md#audit-data) is created when you enable [log query auditing](query-audit.md) in a workspace. You can use this same basic process to create a transformation for any [supported table](tables-feature-support.md) in a Log Analytics workspace.
-## Overview of tutorial
-In this tutorial, you'll reduce the storage requirement for the `LAQueryLogs` table by filtering out certain records. You'll also remove the contents of a column while parsing the column data to store a piece of data in a custom column. The [LAQueryLogs table](query-audit.md#audit-data) is created when you enable [log query auditing](query-audit.md) in a workspace. You can use this same basic process to create a transformation for any [supported table](tables-feature-support.md) in a Log Analytics workspace.
-
-This tutorial will use the Azure portal which provides a wizard to walk you through the process of creating an ingestion-time transformation. The following actions are performed for you when you complete this wizard:
+This tutorial uses the Azure portal, which provides a wizard to walk you through the process of creating an ingestion-time transformation. After you finish the steps, you'll see that the wizard:
-- Updates the table schema with any additional columns from the query.-- Creates a `WorkspaceTransforms` data collection rule (DCR) and links it to the workspace if a default DCR isn't already linked to the workspace.
+- Updates the table schema with any other columns from the query.
+- Creates a `WorkspaceTransforms` DCR and links it to the workspace if a default DCR isn't already linked to the workspace.
- Creates an ingestion-time transformation and adds it to the DCR.

## Enable query audit logs
-You need to enable [query auditing](query-audit.md) for your workspace to create the `LAQueryLogs` table that you'll be working with. This is not required for all ingestion time transformations. It's just to generate the sample data that we'll be working with.
+You need to enable [query auditing](query-audit.md) for your workspace to create the `LAQueryLogs` table that you'll be working with. This step isn't required for all ingestion time transformations. It's just to generate the sample data that we'll be working with.
-1. From the **Log Analytics workspaces** menu in the Azure portal, select **Diagnostic settings** and then **Add diagnostic setting**.
+1. On the **Log Analytics workspaces** menu in the Azure portal, select **Diagnostic settings** > **Add diagnostic setting**.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/diagnostic-settings.png" lightbox="media/tutorial-workspace-transformations-portal/diagnostic-settings.png" alt-text="Screenshot of diagnostic settings.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/diagnostic-settings.png" lightbox="media/tutorial-workspace-transformations-portal/diagnostic-settings.png" alt-text="Screenshot that shows diagnostic settings.":::
-2. Provide a name for the diagnostic setting and select the workspace so that the auditing data is stored in the same workspace. Select the **Audit** category and then click **Save** to save the diagnostic setting and close the diagnostic setting page.
+1. Enter a name for the diagnostic setting. Select the workspace so that the auditing data is stored in the same workspace. Select the **Audit** category and then select **Save** to save the diagnostic setting and close the **Diagnostic setting** page.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/new-diagnostic-setting.png" lightbox="media/tutorial-workspace-transformations-portal/new-diagnostic-setting.png" alt-text="Screenshot of new diagnostic setting.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/new-diagnostic-setting.png" lightbox="media/tutorial-workspace-transformations-portal/new-diagnostic-setting.png" alt-text="Screenshot that shows the new diagnostic setting.":::
-3. Select **Logs** and then run some queries to populate `LAQueryLogs` with some data. These queries don't need to return data to be added to the audit log.
+1. Select **Logs** and then run some queries to populate `LAQueryLogs` with some data. These queries don't need to return data to be added to the audit log.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/sample-queries.png" lightbox="media/tutorial-workspace-transformations-portal/sample-queries.png" alt-text="Screenshot of sample log queries.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/sample-queries.png" lightbox="media/tutorial-workspace-transformations-portal/sample-queries.png" alt-text="Screenshot that shows sample log queries.":::
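
If you'd rather script this step than use the portal, a rough CLI equivalent is sketched below; the setting name is an assumption, and the workspace is used as its own destination to mirror the portal steps:

```azurecli
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group my-rg --workspace-name my-workspace --query id -o tsv)

# Route the workspace's own Audit category back into the same workspace.
az monitor diagnostic-settings create \
  --name audit-to-workspace \
  --resource "$WORKSPACE_ID" \
  --workspace "$WORKSPACE_ID" \
  --logs '[{"category": "Audit", "enabled": true}]'
```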
-## Add transformation to the table
+## Add a transformation to the table
Now that the table's created, you can create the transformation for it.
-1. From the **Log Analytics workspaces** menu in the Azure portal, select **Tables (preview)**. Locate the `LAQueryLogs` table and select **Create transformation**.
+1. On the **Log Analytics workspaces** menu in the Azure portal, select **Tables (preview)**. Locate the `LAQueryLogs` table and select **Create transformation**.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/create-transformation.png" lightbox="media/tutorial-workspace-transformations-portal/create-transformation.png" alt-text="Screenshot of creating a new transformation.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/create-transformation.png" lightbox="media/tutorial-workspace-transformations-portal/create-transformation.png" alt-text="Screenshot that shows creating a new transformation.":::
+1. Because this transformation is the first one in the workspace, you must create a [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr). If you create transformations for other tables in the same workspace, they'll be stored in this same DCR. Select **Create a new data collection rule**. The **Subscription** and **Resource group** will already be populated for the workspace. Enter a name for the DCR and select **Done**.
-2. Since this is the first transformation in the workspace, you need to create a [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr). If you create transformations for other tables in the same workspace, they will be stored in this same DCR. Click **Create a new data collection rule**. The **Subscription** and **Resource group** will already be populated for the workspace. Provide a name for the DCR and click **Done**.
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/new-data-collection-rule.png" lightbox="media/tutorial-workspace-transformations-portal/new-data-collection-rule.png" alt-text="Screenshot that shows creating a new data collection rule.":::
- :::image type="content" source="media/tutorial-workspace-transformations-portal/new-data-collection-rule.png" lightbox="media/tutorial-workspace-transformations-portal/new-data-collection-rule.png" alt-text="Screenshot of creating a new data collection rule.":::
+1. Select **Next** to view sample data from the table. As you define the transformation, the result will be applied to the sample data. For this reason, you can evaluate the results before you apply it to actual data. Select **Transformation editor** to define the transformation.
-3. Click **Next** to view sample data from the table. As you define the transformation, the result will be applied to the sample data allowing you to evaluate the results before applying it to actual data. Click **Transformation editor** to define the transformation.
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/sample-data.png" lightbox="media/tutorial-workspace-transformations-portal/sample-data.png" alt-text="Screenshot that shows sample data from the log table.":::
- :::image type="content" source="media/tutorial-workspace-transformations-portal/sample-data.png" lightbox="media/tutorial-workspace-transformations-portal/sample-data.png" alt-text="Screenshot of sample data from the log table.":::
+1. In the transformation editor, you can see the transformation that will be applied to the data prior to its ingestion into the table. The incoming data is represented by a virtual table named `source`, which has the same set of columns as the destination table itself. The transformation initially contains a simple query that returns the `source` table with no changes.
-4. In the transformation editor, you can see the transformation that will be applied to the data prior to its ingestion into the table. The incoming data is represented by a virtual table named `source`, which has the same set of columns as the destination table itself. The transformation initially contains a simple query returning the `source` table with no changes.
-
-5. Modify the query to the following:
+1. Modify the query to the following example:
``` kusto
source
| where QueryText !contains 'LAQueryLogs'
| extend Context = parse_json(RequestContext)
| extend Workspace_CF = tostring(Context['workspaces'][0])
| project-away RequestContext, Context
```
- This makes the following changes:
-
- - Drop rows related to querying the `LAQueryLogs` table itself to save space since these log entries aren't useful.
- - Add a column for the name of the workspace that was queried.
- - Remove data from the `RequestContext` column to save space.
-
+ The modification makes the following changes:
+ - Rows related to querying the `LAQueryLogs` table itself were dropped to save space because these log entries aren't useful.
+ - A column for the name of the workspace that was queried was added.
+ - Data from the `RequestContext` column was removed to save space.
> [!Note]
- > Using the Azure portal, the output of the transformation will initiate changes to the table schema if required. Columns will be added to match the transformation output if they don't already exist. Make sure that your output doesn't contain any additional columns that you don't want added to the table. If the output does not include columns that are already in the table, those columns will not be removed, but data will not be added.
+ > Using the Azure portal, the output of the transformation will initiate changes to the table schema if required. Columns will be added to match the transformation output if they don't already exist. Make sure that your output doesn't contain any columns that you don't want added to the table. If the output doesn't include columns that are already in the table, those columns won't be removed, but data won't be added.
>
- > Any custom columns added to a built-in table must end in *_CF*. Columns added to a custom table (a table with a name that ends in *_CL*) does not need to have this suffix.
+ > Any custom columns added to a built-in table must end in `_CF`. Columns added to a custom table don't need to have this suffix. A custom table has a name that ends in `_CL`.
-6. Copy the query into the transformation editor and click **Run** to view results from the sample data. You can verify that the new `Workspace_CF` column is in the query.
+1. Copy the query into the transformation editor and select **Run** to view results from the sample data. You can verify that the new `Workspace_CF` column is in the query.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/transformation-editor.png" lightbox="media/tutorial-workspace-transformations-portal/transformation-editor.png" alt-text="Screenshot of transformation editor.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/transformation-editor.png" lightbox="media/tutorial-workspace-transformations-portal/transformation-editor.png" alt-text="Screenshot that shows the transformation editor.":::
-7. Click **Apply** to save the transformation and then **Next** to review the configuration. Click **Create** to update the data collection rule with the new transformation.
+1. Select **Apply** to save the transformation and then select **Next** to review the configuration. Select **Create** to update the DCR with the new transformation.
- :::image type="content" source="media/tutorial-workspace-transformations-portal/save-transformation.png" lightbox="media/tutorial-workspace-transformations-portal/save-transformation.png" alt-text="Screenshot of saving transformation.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-portal/save-transformation.png" lightbox="media/tutorial-workspace-transformations-portal/save-transformation.png" alt-text="Screenshot that shows saving the transformation.":::
-## Test transformation
-Allow about 30 minutes for the transformation to take effect and then test it by running a query against the table. Only data sent to the table after the transformation was applied will be affected.
+## Test the transformation
+Allow about 30 minutes for the transformation to take effect and then test it by running a query against the table. Only data sent to the table after the transformation was applied will be affected.
-For this tutorial, run some sample queries to send data to the `LAQueryLogs` table. Include some queries against `LAQueryLogs` so you can verify that the transformation filters these records. Notice that the output has the new `Workspace_CF` column, and there are no records for `LAQueryLogs`.
+For this tutorial, run some sample queries to send data to the `LAQueryLogs` table. Include some queries against `LAQueryLogs` so that you can verify that the transformation filters these records. Now the output has the new `Workspace_CF` column, and there are no records for `LAQueryLogs`.
## Troubleshooting
-This section describes different error conditions you may receive and how to correct them.
+This section describes different error conditions you might receive and how to correct them.
### IntelliSense in Log Analytics not recognizing new columns in the table
-The cache that drives IntelliSense may take up to 24 hours to update.
+The cache that drives IntelliSense might take up to 24 hours to update.
### Transformation on a dynamic column isn't working
-There is currently a known issue affecting dynamic columns. A temporary workaround is to explicitly parse dynamic column data using `parse_json()` prior to performing any operations against them.
+A known issue currently affects dynamic columns. A temporary workaround is to explicitly parse dynamic column data by using `parse_json()` prior to performing any operations against them.
## Next steps
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
This article is a reference of the different applications and services that are
Azure Monitor data is collected and stored based on resource provider namespaces. Each resource in Azure has a unique ID. The resource provider namespace is part of all unique IDs. For example, a key vault resource ID would be similar to `/subscriptions/d03b04c7-d1d4-eeee-aaaa-87b6fcb38b38/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys`. *Microsoft.KeyVault* is the resource provider namespace. *Microsoft.KeyVault/vaults/* is the resource provider.
-For a list of Azure resource provider namespaces, see [Resource providers for Azure services](/azure/azure-resource-manager/management/azure-services-resource-providers).
+For a list of Azure resource provider namespaces, see [Resource providers for Azure services](../azure-resource-manager/management/azure-services-resource-providers.md).
For a list of resource providers that support Azure Monitor
Azure Monitor can collect data from resources outside of Azure by using the meth
- Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md).
- Complete a [tutorial on monitoring an Azure resource](essentials/tutorial-resource-logs.md).
- Complete a [tutorial on writing a log query to analyze data in Azure Monitor Logs](essentials/tutorial-resource-logs.md).
-- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md).
+- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md).
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
|Sub-service| Article | Description |
||||
|General|Table of contents|We have updated the Azure Monitor Table of Contents. The new TOC structure better reflects the customer experience and makes it easier for users to navigate and discover our content.|
-Alerts|[Connect Azure to ITSM tools by using IT Service Management](https://docs.microsoft.com/azure/azure-monitor/alerts/itsmc-definition)|Deprecating support for sending ITSM actions and events to ServiceNow. Instead, use ITSM actions in action groups based on Azure alerts to create work items in your ITSM tool.|
-Alerts|[Create a new alert rule](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-create-new-alert-rule)|New PowerShell commands to create and manage log alerts.|
+Alerts|[Connect Azure to ITSM tools by using IT Service Management](./alerts/itsmc-definition.md)|Deprecating support for sending ITSM actions and events to ServiceNow. Instead, use ITSM actions in action groups based on Azure alerts to create work items in your ITSM tool.|
+Alerts|[Create a new alert rule](./alerts/alerts-create-new-alert-rule.md)|New PowerShell commands to create and manage log alerts.|
Alerts|[Types of Azure Monitor alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-types)|Updated to include Prometheus alerts.|
-Alerts|[Customize alert notifications using Logic Apps](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-logic-apps)|New: How to use alerts to send emails or Teams posts using logic apps|
-Application-insights|[Sampling in Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/sampling)|The "When to use sampling" and "How sampling works" sections have been prioritized as prerequisite information for the rest of the article.|
-Application-insights|[What is auto-instrumentation for Azure Monitor Application Insights?](https://docs.microsoft.com/azure/azure-monitor/app/codeless-overview)|The auto-instrumentation overview has been visually overhauled with links and footnotes.|
-Application-insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview)](https://docs.microsoft.com/azure/azure-monitor/app/opentelemetry-enable)|Open Telemetry Metrics are now available for .NET, Node.js and Python applications.|
-Application-insights|[Find and diagnose performance issues with Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-performance)|The URL Ping (Classic) Test has been replaced with the Standard Test step-by-step instructions.|
-Application-insights|[Application Insights API for custom events and metrics](https://docs.microsoft.com/azure/azure-monitor/app/api-custom-events-metrics)|Flushing information was added to the FAQ.|
-Application-insights|[Azure AD authentication for Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/azure-ad-authentication)|We updated the `TelemetryConfiguration` code sample using .NET.|
-Application-insights|[Using Azure Monitor Application Insights with Spring Boot](https://docs.microsoft.com/azure/azure-monitor/app/java-spring-boot)|Spring Boot information was updated to 3.4.2.|
-Application-insights|[Configuration options: Azure Monitor Application Insights for Java](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-config)|New features include Capture Log4j Markers and Logback Markers as custom properties on the corresponding trace (log message) telemetry.|
-Application-insights|[Create custom KPI dashboards using Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-app-dashboards)|This article has been refreshed with new screenshots and instructions.|
-Application-insights|[Share Azure dashboards by using Azure role-based access control](https://docs.microsoft.com/azure/azure-portal/azure-portal-dashboard-share-access)|This article has been refreshed with new screenshots and instructions.|
-Application-insights|[Application Monitoring for Azure App Service and ASP.NET](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps-net)|Important notes added regarding System.IO.FileNotFoundException after 2.8.44 auto-instrumentation upgrade.|
-Application-insights|[Geolocation and IP address handling](https://docs.microsoft.com/azure/azure-monitor/app/ip-collection)| Geolocation lookup information has been updated.|
-Containers|[Metric alert rules in Container insights (preview)](https://docs.microsoft.com/azure/azure-monitor/containers/container-insights-metric-alerts)|Container insights metric Alerts|
+Alerts|[Customize alert notifications using Logic Apps](./alerts/alerts-logic-apps.md)|New: How to use alerts to send emails or Teams posts using logic apps|
+Application-insights|[Sampling in Application Insights](./app/sampling.md)|The "When to use sampling" and "How sampling works" sections have been prioritized as prerequisite information for the rest of the article.|
+Application-insights|[What is auto-instrumentation for Azure Monitor Application Insights?](./app/codeless-overview.md)|The auto-instrumentation overview has been visually overhauled with links and footnotes.|
+Application-insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview)](./app/opentelemetry-enable.md)|Open Telemetry Metrics are now available for .NET, Node.js and Python applications.|
+Application-insights|[Find and diagnose performance issues with Application Insights](./app/tutorial-performance.md)|The URL Ping (Classic) Test has been replaced with the Standard Test step-by-step instructions.|
+Application-insights|[Application Insights API for custom events and metrics](./app/api-custom-events-metrics.md)|Flushing information was added to the FAQ.|
+Application-insights|[Azure AD authentication for Application Insights](./app/azure-ad-authentication.md)|We updated the `TelemetryConfiguration` code sample using .NET.|
+Application-insights|[Using Azure Monitor Application Insights with Spring Boot](./app/java-spring-boot.md)|Spring Boot information was updated to 3.4.2.|
+Application-insights|[Configuration options: Azure Monitor Application Insights for Java](./app/java-standalone-config.md)|New features include Capture Log4j Markers and Logback Markers as custom properties on the corresponding trace (log message) telemetry.|
+Application-insights|[Create custom KPI dashboards using Application Insights](./app/tutorial-app-dashboards.md)|This article has been refreshed with new screenshots and instructions.|
+Application-insights|[Share Azure dashboards by using Azure role-based access control](../azure-portal/azure-portal-dashboard-share-access.md)|This article has been refreshed with new screenshots and instructions.|
+Application-insights|[Application Monitoring for Azure App Service and ASP.NET](./app/azure-web-apps-net.md)|Important notes added regarding System.IO.FileNotFoundException after 2.8.44 auto-instrumentation upgrade.|
+Application-insights|[Geolocation and IP address handling](./app/ip-collection.md)| Geolocation lookup information has been updated.|
+Containers|[Metric alert rules in Container insights (preview)](./containers/container-insights-metric-alerts.md)|Container insights metric Alerts|
Containers|[Custom metrics collected by Container insights](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-custom-metrics?tabs=portal)|New article.|
Containers|[Overview of Container insights in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-overview)|Rewritten to simplify onboarding options.|
Containers|[Enable Container insights for Azure Kubernetes Service (AKS) cluster](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli)|Updated to combine new and existing clusters.|
Containers Prometheus|[Query logs from Container insights](https://learn.microso
Containers Prometheus|[Collect Prometheus metrics with Container insights](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-prometheus?tabs=cluster-wide)|Updated to include Azure Monitor managed service for Prometheus.|
Essentials Prometheus|[Metrics in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/essentials/data-platform-metrics)|Updated to include Azure Monitor managed service for Prometheus|
Essentials Prometheus|<ul> <li> [Azure Monitor workspace overview (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/azure-monitor-workspace-overview?tabs=azure-portal) </li><li> [Overview of Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-overview) </li><li>[Rule groups in Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-rule-groups)</li><li>[Remote-write in Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-remote-write-managed-identity) </li><li>[Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-grafana)</li><li>[Troubleshoot collection of Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-troubleshoot)</li><li>[Default Prometheus metrics configuration in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-default)</li><li>[Scrape Prometheus metrics at scale in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-scale)</li><li>[Customize scraping of Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration)</li><li>[Create, validate and troubleshoot custom configuration file for Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-validate)</li><li>[Minimal Prometheus ingestion profile in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration-minimal)</li><li>[Collect Prometheus metrics from AKS cluster (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-enable)</li><li>[Send Prometheus metrics to multiple Azure Monitor workspaces (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-multiple-workspaces) </li></ul> |New articles. Public preview of Azure Monitor managed service for Prometheus|
-Essentials Prometheus|[Azure Monitor managed service for Prometheus remote write - managed identity (preview)](https://docs.microsoft.com/azure/azure-monitor/essentials/prometheus-remote-write-managed-identity)|Addition: Verify Prometheus remote write is working correctly|
-Essentials|[Azure resource logs](https://docs.microsoft.com/azure/azure-monitor/essentials/resource-logs)|Clarification: Which blobs logs are written to, and when|
+Essentials Prometheus|[Azure Monitor managed service for Prometheus remote write - managed identity (preview)](./essentials/prometheus-remote-write-managed-identity.md)|Addition: Verify Prometheus remote write is working correctly|
+Essentials|[Azure resource logs](./essentials/resource-logs.md)|Clarification: Which blobs logs are written to, and when|
Essentials|[Resource Manager template samples for Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/resource-manager-samples?tabs=portal)|Added template deployment methods.|
Essentials|[Azure Monitor service limits](https://learn.microsoft.com/azure/azure-monitor/service-limits)|Added Azure Monitor managed service for Prometheus|
-Logs|[Manage access to Log Analytics workspaces](https://docs.microsoft.com/azure/azure-monitor/logs/manage-access)|Table-level role-based access control (RBAC) lets you give specific users or groups read access to particular tables.|
-Logs|[Configure Basic Logs in Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/logs/basic-logs-configure)|General availability of the Basic Logs data plan, retention and archiving, search job, and the table management user experience in the Azure portal.|
+Logs|[Manage access to Log Analytics workspaces](./logs/manage-access.md)|Table-level role-based access control (RBAC) lets you give specific users or groups read access to particular tables.|
+Logs|[Configure Basic Logs in Azure Monitor](./logs/basic-logs-configure.md)|General availability of the Basic Logs data plan, retention and archiving, search job, and the table management user experience in the Azure portal.|
Logs|[Guided project - Analyze logs in Azure Monitor with KQL - Training](https://learn.microsoft.com/training/modules/analyze-logs-with-kql/)|New Learn module. Learn to write KQL queries to retrieve and transform log data to answer common business and operational questions.|
Logs|[Detect and analyze anomalies with KQL in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/logs/kql-machine-learning-azure-monitor)|New tutorial. Walkthrough of how to use KQL for time series analysis and anomaly detection in Azure Monitor Log Analytics. |
-Virtual-machines|[Enable VM insights for a hybrid virtual machine](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-enable-hybrid)|Updated versions of standalone installers.|
-Visualizations|[Retrieve legacy Application Insights workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-retrieve-legacy-workbooks)|New article about how to access legacy workbooks in the Azure portal.|
-Visualizations|[Azure Workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-overview)|New video to see how you can use Azure Workbooks to get insights and visualize your data. |
+Virtual-machines|[Enable VM insights for a hybrid virtual machine](./vm/vminsights-enable-hybrid.md)|Updated versions of standalone installers.|
+Visualizations|[Retrieve legacy Application Insights workbooks](./visualize/workbooks-retrieve-legacy-workbooks.md)|New article about how to access legacy workbooks in the Azure portal.|
+Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to see how you can use Azure Workbooks to get insights and visualize your data. |
## September 2022
Visualizations|[Azure Workbooks](https://docs.microsoft.com/azure/azure-monitor/
|||
|[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](./app/java-in-process-agent.md)|New OpenTelemetry `@WithSpan` annotation guidance.|
|[Capture Application Insights custom metrics with .NET and .NET Core](./app/tutorial-asp-net-custom-metrics.md)|Tutorial steps and images have been updated.|
-|[Configuration options - Azure Monitor Application Insights for Java](/azure/azure-monitor/app/java-in-process-agent)|Connection string guidance updated.|
+|[Configuration options - Azure Monitor Application Insights for Java](./app/java-in-process-agent.md)|Connection string guidance updated.|
|[Enable Application Insights for ASP.NET Core applications](./app/tutorial-asp-net-core.md)|Tutorial steps and images have been updated.|
|[Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](./app/opentelemetry-enable.md)|Our product feedback link at the bottom of each document has been fixed.|
|[Filter and preprocess telemetry in the Application Insights SDK](./app/api-filtering-sampling.md)|Added sample initializer to control which client IP gets used as part of geo-location mapping.|
azure-portal Azure Portal Dashboard Share Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboard-share-access.md
To share access to a dashboard, you must first publish it. When you do so, other
By default, sharing publishes your dashboard to a resource group named **dashboards**. To select a different resource group, clear the checkbox.
-1. To [add optional tags](/azure/azure-resource-manager/management/tag-resources) to the dashboard, enter one or more name/value pairs.
+1. To [add optional tags](../azure-resource-manager/management/tag-resources.md) to the dashboard, enter one or more name/value pairs.
1. Select **Publish**.
For each dashboard that you have published, you can assign Azure RBAC built-in r
* View the list of [Azure built-in roles](../role-based-access-control/built-in-roles.md).
* Learn about [managing groups in Azure AD](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
* Learn more about [managing Azure resources by using the Azure portal](../azure-resource-manager/management/manage-resources-portal.md).
-* [Create a dashboard](azure-portal-dashboards.md) in the Azure portal.
+* [Create a dashboard](azure-portal-dashboards.md) in the Azure portal.
azure-resource-manager Scenarios Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-rbac.md
The following example shows how to create a user-assigned managed identity and a
When you delete a user, group, service principal, or managed identity from Azure AD, it's a good practice to delete any role assignments. They aren't deleted automatically.
-Any role assignments that refer to a deleted principal ID become invalid. If you try to reuse a role assignment's name for another role assignment, the deployment will fail. To work around this behavior, you should either remove the old role assignment before you recreate it, or ensure that you use a unique name when you deploy a new role assignment. [This quickstart template illustrates how you can define a role assignment in a Bicep module and use a principal ID as a seed value for the role assignment name.](https://azure.microsoft.com/resources/templates/key-vault-managed-identity-role-assignment/)
+Any role assignments that refer to a deleted principal ID become invalid. If you try to reuse a role assignment's name for another role assignment, the deployment will fail. To work around this behavior, you should either remove the old role assignment before you recreate it, or ensure that you use a unique name when you deploy a new role assignment. This [quickstart template](/samples/azure/azure-quickstart-templates/key-vault-managed-identity-role-assignment) illustrates how you can define a role assignment in a Bicep module and use a principal ID as a seed value for the role assignment name.
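As a sketch of that pattern (not the linked template itself), the following ARM template resource seeds `guid()` with the principal ID, so each principal gets its own deterministic role assignment name; `principalId` and `roleDefinitionId` here are hypothetical parameters:

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2022-04-01",
  // Seeding the name with the principal ID yields a new name per principal,
  // avoiding collisions with assignments left over from deleted principals.
  "name": "[guid(parameters('principalId'), parameters('roleDefinitionId'), resourceGroup().id)]",
  "properties": {
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', parameters('roleDefinitionId'))]",
    "principalId": "[parameters('principalId')]"
  }
}
```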
## Custom role definitions
Role definition resource names must be unique within the Azure Active Directory
 - [Assign a role at subscription scope](https://azure.microsoft.com/resources/templates/subscription-role-assignment/)
 - [Assign a role at tenant scope](https://azure.microsoft.com/resources/templates/tenant-role-assignment/)
 - [Create a resourceGroup, apply a lock and RBAC](https://azure.microsoft.com/resources/templates/create-rg-lock-role-assignment/)
- - [Create key vault, managed identity, and role assignment](https://azure.microsoft.com/resources/templates/key-vault-managed-identity-role-assignment/)
+ - [Create key vault, managed identity, and role assignment](/samples/azure/azure-quickstart-templates/key-vault-managed-identity-role-assignment)
- Community blog posts - [Create role assignments for different scopes with Bicep](https://4bes.nl/2022/04/24/create-role-assignments-for-different-scopes-with-bicep/), by Barbara Forbes
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
The following scenarios aren't yet supported:
## Azure disk encryption
-You can't move a virtual machine that is integrated with a key vault to implement [Azure Disk Encryption for Linux VMs](../../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../../virtual-machines/windows/disk-encryption-overview.md). To move the VM, you must disable encryption.
+A virtual machine that is integrated with a key vault to implement [Azure Disk Encryption for Linux VMs](../../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../../virtual-machines/windows/disk-encryption-overview.md) can be moved to another resource group when it's in a deallocated state.
+
+However, to move such a virtual machine to another subscription, you must first disable encryption.
# [Azure CLI](#tab/azure-cli)
azure-resource-manager Resource Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-extensions.md
Azure Resource Manager template (ARM template) extensions are small applications
The existing extensions are:
-- [Microsoft.Compute/virtualMachines/extensions](/azure/templates/microsoft.compute/2018-10-01/virtualmachines/extensions)
-- [Microsoft.Compute virtualMachineScaleSets/extensions](/azure/templates/microsoft.compute/2018-10-01/virtualmachinescalesets/extensions)
-- [Microsoft.HDInsight clusters/extensions](/azure/templates/microsoft.hdinsight/2018-06-01-preview/clusters)
-- [Microsoft.Sql servers/databases/extensions](/azure/templates/microsoft.sql/2014-04-01/servers/databases/extensions)
-- [Microsoft.Web/sites/siteextensions](/azure/templates/microsoft.web/2016-08-01/sites/siteextensions)
+- [Microsoft.Compute/virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions)
+- [Microsoft.Compute virtualMachineScaleSets/extensions](/azure/templates/microsoft.compute/virtualmachinescalesets/extensions)
+- [Microsoft.HDInsight clusters/extensions](/azure/templates/microsoft.hdinsight/clusters)
+- [Microsoft.Sql servers/databases/extensions](/azure/templates/microsoft.sql/servers/databases/extensions)
+- [Microsoft.Web/sites/siteextensions](/azure/templates/microsoft.web/sites/siteextensions)
To find out the available extensions, browse to the [template reference](/azure/templates/). In **Filter by title**, enter **extension**.
azure-resource-manager Template Tutorial Add Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-resource.md
Most resources also have a `location` property, which sets the region where you
The other properties vary by resource type and API version. It's important to understand the connection between the API version and the available properties, so let's jump into more detail.
-In this tutorial, you add a storage account to the template. You can see the storage account's API version at [storageAccounts 2021-04-01](/azure/templates/microsoft.storage/2021-04-01/storageaccounts). Notice that you don't add all the properties to your template. Many of the properties are optional. The `Microsoft.Storage` resource provider could release a new API version, but the version you're deploying doesn't have to change. You can continue using that version and know that the results of your deployment are consistent.
+In this tutorial, you add a storage account to the template. You can see the storage account's API version at [storageAccounts 2021-09-01](/azure/templates/microsoft.storage/2021-09-01/storageaccounts). Notice that you don't add all the properties to your template. Many of the properties are optional. The `Microsoft.Storage` resource provider could release a new API version, but the version you're deploying doesn't have to change. You can continue using that version and know that the results of your deployment are consistent.
-If you view an older API version, such as [storageAccounts 2016-05-01](/azure/templates/microsoft.storage/2016-05-01/storageaccounts), you see that a smaller set of properties is available.
+If you view an older [API version](/azure/templates/microsoft.storage/allversions), you might see that a smaller set of properties is available.
If you decide to change the API version for a resource, make sure you evaluate the properties for that version and adjust your template appropriately.
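As an illustration, a minimal storage account resource pinned to that API version might look like the following sketch; the `storageAccountName` parameter is hypothetical, and only properties defined by the pinned `apiVersion` are valid:

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2021-09-01",
  "name": "[parameters('storageAccountName')]",
  "location": "[resourceGroup().location]",
  // The sku and kind shown are common choices; many other properties are optional.
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2"
}
```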
azure-video-indexer Edit Speakers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-speakers.md
# Edit speakers with the Azure Video Indexer website
-Azure Video Indexer identifies speakers in your video but in some cases you may want to edit these names. You can perform the following editing actions, while in the edit mode. The following editing actions only apply to the currently selected video.
-
-- Add new speaker.
-- Rename existing speaker.
-
- The update applies to all speakers identified by this name.
-- Assign a speaker for a transcript line.
+Azure Video Indexer identifies each speaker in a video and attributes each transcribed line to a speaker. The speakers are given a unique identity such as `Speaker #1` and `Speaker #2`. To provide clarity and enrich the transcript quality, you may want to replace the assigned identity with each speaker's actual name. To edit speakers' names, use the edit actions described in this article.
This article demonstrates how to edit speakers with the [Azure Video Indexer website](https://www.videoindexer.ai/). The same editing operations are possible with an API. To use the API, call [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index).
-## Prerequisites
+> [!NOTE]
+> The addition or editing of a speaker name is applied throughout the transcript of the video but is not applied to other videos in your Azure Video Indexer account.
+
+## Start editing
1. Sign in to the [Azure Video Indexer website](https://www.videoindexer.ai/).
2. Select a video.
This action allows adding new speakers that were not identified by Azure Video I
## Rename an existing speaker
-This action allows renaming an existing speaker that was identified by Azure Video Indexer. To rename a speaker from the website for the selected video, do the following:
+This action allows renaming an existing speaker that was identified by Azure Video Indexer. The update applies to all speakers identified by this name.
+
+To rename a speaker from the website for the selected video, do the following:
1. Select the edit mode.
1. Go to the transcript line where the speaker you wish to rename appears.
azure-vmware Upgrade Hcx Azure Vmware Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/upgrade-hcx-azure-vmware-solutions.md
+
+ Title: Upgrade HCX on Azure VMware Solution
+description: This article explains how to upgrade HCX on Azure VMware Solution.
+ Last updated : 11/09/2022++
+# Upgrade HCX on Azure VMware Solution
+
+In this article, you'll learn how to upgrade HCX on Azure VMware Solution for service updates that may include new features, software fixes, or security patches.
+
+You can update HCX Connector and HCX Cloud systems during separate maintenance windows, but for optimal compatibility, it's recommended you update both systems together. Apply service updates during a maintenance window where no new HCX operations are queued up.
+
+>[!IMPORTANT]
+>Starting with HCX 4.4.0, HCX appliances install the VMware Photon Operating System. When upgrading to HCX 4.4.x or later from an HCX version prior to version 4.4.0, you must also upgrade all Service Mesh appliances.
+
+## System requirements
+
+- For system requirements, compatibility, and upgrade prerequisites, see the [VMware HCX release notes](https://docs.vmware.com/en/VMware-HCX/index.html).
+
+- For more information about the upgrade path, see the [Product Interoperability Matrix](https://interopmatrix.vmware.com/Upgrade?productId=660).
+
+- Ensure HCX manager and site pair configurations are healthy.
+
+- As part of HCX update planning, and to ensure that HCX components are updated successfully, review the service update considerations and requirements. For more information, see [Planning for HCX Updates](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-61F5CED2-C347-4A31-8ACB-A4553BFC62E3.html).
+
+- Ensure that you have a backup and snapshot of the HCX Connector in the on-premises environment, if applicable.
+
+### Back up HCX
+- Azure VMware Solution backs up HCX Cloud Manager configuration daily.
++
+- Use the appliance management interface to create a backup of the on-premises HCX Manager. For more information, see [Backing Up HCX Manager](https://docs.vmware.com/en/VMware-HCX/4.4/hcx-user-guide/GUID-6A9D1451-3EF3-4E49-B23E-A9A781E5214A.html). You can use the configuration backup to restore the appliance to its state before the backup. The contents of the backup file supersede configuration changes made before restoring the appliance.
+ 
+- HCX Cloud Manager snapshots are taken automatically during upgrades to HCX 4.4 or later. HCX retains automatic snapshots for 24 hours before deleting them. To take a manual snapshot of HCX Cloud Manager, or for help with reverting from a snapshot, [create a support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+## Upgrade HCX
+The upgrade process consists of two steps:
+1. Upgrade HCX Manager
+ 1. HCX cloud manager  
+ 1. HCX connector (You can update site-paired HCX Managers simultaneously) 
+1. Upgrade HCX Service Mesh appliances
+
+### Upgrade HCX manager
+The HCX update is first applied to the HCX Manager systems.
+
+**What to expect**
+- HCX manager is rebooted as part of the upgrade process.  
+- HCX vCenter Plugins will be updated.  
+- There's no data-plane outage during this procedure.
+
+**Prerequisites**
+- Verify that the HCX Manager system reports healthy connections to the connected vCenter Server and NSX Manager (if applicable).
+- Verify that the HCX Manager system reports healthy connections to the HCX Interconnect service components, and ensure HCX isn't in an out-of-sync state.
+- Verify that Site Pair configurations are healthy.
+- No VM migrations should be in progress during this upgrade.
+
+**Procedure**
+
+To follow the HCX Manager upgrade process, see [Upgrading the HCX Manager](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-02DB88E1-EC81-434B-9AE9-D100E427B31C.html).
+
+### Upgrade HCX Service Mesh appliances
+
+Service Mesh appliances are upgraded independently of the HCX Manager, but they must also be upgraded. These appliances are flagged for available updates whenever the HCX Manager has newer software available.
+
+**What to expect**
+
+- Service VMs will be rebooted as part of the upgrade.
+- There's a small data-plane outage during this procedure.
+- To reduce downtime during HCX Network Extension upgrades, consider an in-service upgrade of the Network Extension appliances.
+
+**Prerequisites**
+- All paired HCX Managers on both the source and the target site are updated and all services have returned to a fully converged state.
+- Service Mesh appliance upgrades must be initiated using the HCX plug-in in vCenter or the HCX Manager console (port 443) at the source site.
+- No VM migrations should be in progress during this upgrade.
+
+**Procedure**
+
+To follow the Service Mesh appliances upgrade process, see [Upgrading the HCX Service Mesh Appliances](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-EF89A098-D09B-4270-9F10-AEFA37CE5C93.html).
+
+## FAQ
+
+### What is the impact of an HCX upgrade?
+
+Apply service updates during a maintenance window where no new HCX operations or migrations are queued up. The upgrade window should account for a brief disruption to the Network Extension service while the appliances are redeployed with the updated code.
+For individual HCX component upgrade impact, see [Planning for HCX Updates](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-61F5CED2-C347-4A31-8ACB-A4553BFC62E3.html).
+
+### Do I need to upgrade the service mesh appliances?
+
+The HCX Service Mesh can be upgraded once all paired HCX Manager systems are updated and all services have returned to a fully converged state. Check the HCX release notes for upgrade requirements. Starting with HCX 4.4.0, HCX appliances install the VMware Photon Operating System. When upgrading to HCX 4.4.x or later from an HCX version prior to version 4.4.0, you must upgrade all Service Mesh appliances.
+
+### How do I roll back HCX upgrade using a snapshot?
+
+See [Rolling Back an Upgrade Using Snapshots](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-B34728B9-B187-48E5-AE7B-74E92D09B98B.html). On the cloud side, open a [support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) to roll back the upgrade.
+
+## Next steps
+[Software Versioning, Skew and Legacy Support Policies](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-skew-policy/GUID-787FB2A1-52AF-483C-B595-CF382E728674.html)
+
+[Updating VMware HCX](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-508A94B2-19F6-47C7-9C0D-2C89A00316B9.html)
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 06/06/2022 Last updated : 11/15/2022
Azure Backup offers two ways to modify protection for a data-source:
In both scenarios, the new policy is applied to all older recovery points, which are in standard tier and archive tier. So, older recovery points might get deleted if there's a policy change.
-When you move recovery points to archive, they're subjected to an early deletion period of 180 days. The charges are prorated. If a recovery point that hasn't stayed in archive for 180 days is deleted, it incurs cost equivalent to 180 minus the number of days it has spent in standard tier.
-
-If you delete recovery points that haven't stayed in archive for a minimum of 180 days, they incur early deletion cost.
+When you move recovery points to archive, they're subjected to an early deletion period of 180 days. The charges are prorated. If you delete a recovery point that hasn't stayed in vault-archive for 180 days, then you're charged for the remaining retention period selected at vault-archive tier price.
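For example, under this proration, a recovery point deleted after spending 30 days in the archive tier would be charged for the remaining 150 days of the 180-day early deletion period at the archive tier price.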
## Stop protection and delete data
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
description: Learn how to create a shareable link to let a user connect to a tar
Previously updated : 09/13/2022 Last updated : 11/16/2022
By default, users in your org will have only read access to shared links. If a u
## Considerations
-* Shareable Links isn't currently supported on peered VNets.
+* Shareable Links isn't currently supported on peered VNets that aren't in the same subscription.
* Shareable Links is not supported for national clouds during preview. * The Standard SKU is required for this feature.
batch Batch Pool Cloud Service To Virtual Machine Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-cloud-service-to-virtual-machine-configuration.md
Last updated 09/03/2021
Currently, Batch pools can be created using either [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) or [cloudServiceConfiguration](/rest/api/batchservice/pool/add#cloudserviceconfiguration). We recommend using Virtual Machine Configuration only, as this configuration supports all Batch capabilities.
-Cloud Services Configuration pools don't support some of the current Batch features, and won't support any newly-added features. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
+Cloud Services Configuration pools don't support some of the current Batch features, and won't support any newly added features. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
If your Batch solutions currently use 'cloudServiceConfiguration' pools, we recommend changing to 'virtualMachineConfiguration' as soon as possible. This will enable you to benefit from all Batch capabilities, such as an expanded [selection of VM series](batch-pool-vm-sizes.md), Linux VMs, [containers](batch-docker-container-workloads.md), [Azure Resource Manager virtual networks](batch-virtual-network.md), and [node disk encryption](disk-encryption.md).
Some of the key differences between the two configurations include:
- 'virtualMachineConfiguration' pool nodes utilize managed OS disks. The [managed disk type](../virtual-machines/disks-types.md) that is used for each node depends on the VM size chosen for the pool. If a 's' VM size is specified for the pool, for example 'Standard_D2s_v3', then a premium SSD is used. If a 'non-s' VM size is specified, for example 'Standard_D2_v3', then a standard HDD is used.

 > [!IMPORTANT]
- > As with Virtual Machines and Virtual Machine Scale Sets, the OS managed disk used for each node incurs a cost, which is additional to the cost of the VMs. 'virtualMachineConfiguration' pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks.There is no OS disk cost for 'cloudServiceConfiguration' nodes, as the OS disk is created on the nodes local SSD.
+ > As with Virtual Machines and Virtual Machine Scale Sets, the OS managed disk used for each node incurs a cost, which is additional to the cost of the VMs. 'virtualMachineConfiguration' pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary disk, to avoid extra costs associated with managed disks. There's no OS disk cost for 'cloudServiceConfiguration' nodes, as the OS disk is created on the node's local disk.
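A rough sketch of requesting an ephemeral OS disk in a pool's 'virtualMachineConfiguration', assuming the Batch REST API shape for `osDisk.ephemeralOSDiskSettings`; the image reference and node agent SKU shown are illustrative, and `placement` set to `CacheDisk` creates the OS disk on the VM cache:

```json
{
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "canonical",
      "offer": "0001-com-ubuntu-server-focal",
      "sku": "20_04-lts",
      "version": "latest"
    },
    "nodeAgentSKUId": "batch.node.ubuntu 20.04",
    "osDisk": {
      "ephemeralOSDiskSettings": { "placement": "CacheDisk" }
    }
  }
}
```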
## Azure Data Factory custom activity pools
Azure Batch pools can be used to run Data Factory custom activities. Any 'cloudS
When creating your new pools to run Data Factory custom activities, follow these practices:

- Pause all pipelines before creating the new pools and deleting the old ones to ensure no executions will be interrupted.
-- The same pool id can be used to avoid linked service configuration changes.
+- The same pool ID can be used to avoid linked service configuration changes.
- Resume pipelines when new pools have been created.

For more information about using Azure Batch to run Data Factory custom activities, see [Azure Batch linked service](../data-factory/compute-linked-services.md#azure-batch-linked-service) and [Custom activities in a Data Factory pipeline](../data-factory/transform-data-using-dotnet-custom-activity.md).
For more information about using Azure Batch to run Data Factory custom activiti
- Learn more about [pool configurations](nodes-and-pools.md#configurations).
- Learn more about [pool best practices](best-practices.md#pools).
-- See the REST API reference for [pool addition](/rest/api/batchservice/pool/add) and [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration).
+- See the REST API reference for [pool addition](/rest/api/batchservice/pool/add) and [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration).
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-virtual-network.md
Title: Provision a pool in a virtual network description: How to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server. Previously updated : 11/14/2022 Last updated : 11/15/2022
or `CloudServiceConfiguration`. `VirtualMachineConfiguration` for Batch pools is
pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).

> [!IMPORTANT]
-> Batch pools can be configured in one of two communication modes. `Classic` communication
-> mode is where the Batch service initiates communication to the compute nodes.
-> [`Simplified` communication mode](simplified-compute-node-communication.md)
+> Batch pools can be configured in one of two node communication modes. Classic node communication mode is
+> where the Batch service initiates communication to the compute nodes.
+> [Simplified](simplified-compute-node-communication.md) node communication mode
> is where the compute nodes initiate communication to the Batch Service.

## Pools in Virtual Machine Configuration
NSG with at least the inbound and outbound security rules that are shown in the
> [!WARNING] > Batch service IP addresses can change over time. Therefore, we highly recommend that you use the
-> BatchNodeManagement.*region* service tag (or a regional variant) for the NSG rules indicated in the
-> following tables. Avoid populating NSG rules with specific Batch service IP addresses.
+> BatchNodeManagement.*region* service tag for the NSG rules indicated in the following tables. Avoid
+> populating NSG rules with specific Batch service IP addresses.
#### Inbound security rules
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 11/14/2022 Last updated : 11/15/2022
This article discusses best practices and useful tips for using the Azure Batch
- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).

-- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':** While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools don't support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+- **`virtualMachineConfiguration` or `cloudServiceConfiguration`:** While you can currently create pools using either
+configuration, new pools should be configured using `virtualMachineConfiguration` and not `cloudServiceConfiguration`.
+All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Service Configuration
+pools don't support all features and no new capabilities are planned. You won't be able to create new
+`cloudServiceConfiguration` pools or add new nodes to existing pools
+[after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
+For more information, see
+[Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+
+- **`classic` or `simplified` node communication mode:** Pools can be configured in one of two node communication modes,
+classic or [simplified](simplified-compute-node-communication.md). In the classic node communication model, the Batch service
+initiates communication to the compute nodes, and compute nodes also require communicating to Azure Storage. In the simplified
+node communication model, compute nodes initiate communication with the Batch service. Due to the reduced scope of
+inbound/outbound connections required, and not requiring Azure Storage outbound access for baseline operation, the recommendation
+is to use the simplified node communication model. Some future improvements to the Batch service will also require the simplified
+node communication model.
- **Job and task run time considerations:** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job isn't long, don't allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job.

- **Multiple compute nodes:** Individual nodes aren't guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.

-- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with. An image without a specified `batchSupportEndOfLife` date indicates that such a date has not been determined yet by the Batch service. Absence of a date does not indicate that the respective image will be supported indefinitely. An EOL date may be added or updated in the future at anytime.
+- **Images with impending end-of-life (EOL) dates:** We strongly recommend avoiding images with impending Batch support
+end of life (EOL) dates. These dates can be discovered via the
+[`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages),
+[PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or
+[Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL
+dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a
+specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is
+derived or aligned with. An image without a specified `batchSupportEndOfLife` date indicates that such a date hasn't been
+determined yet by the Batch service. Absence of a date doesn't indicate that the respective image will be supported
+indefinitely. An EOL date may be added or updated in the future at any time.
- **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can create uniqueness by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
batch Security Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-best-practices.md
Title: Batch security and compliance best practices description: Learn best practices and useful tips for enhancing security with your Azure Batch solutions. Previously updated : 09/01/2021 Last updated : 11/15/2022
Many features are available to help you create a more secure Azure Batch deploym
### Pool configuration
-Many security features are only available for pools configured using [Virtual Machine Configuration](nodes-and-pools.md#configurations), and not for pools with Cloud Services Configuration. We recommend using Virtual Machine Configuration pools, which utilize [virtual machine scale sets](../virtual-machine-scale-sets/overview.md), whenever possible.
+Many security features are only available for pools configured using [Virtual Machine Configuration](nodes-and-pools.md#configurations), and not for pools with Cloud Services Configuration. We recommend using Virtual Machine Configuration pools, which utilize [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), whenever possible.
+
+Pools can also be configured in one of two node communication modes, classic or [simplified](simplified-compute-node-communication.md).
+In the classic node communication model, the Batch service initiates communication to the compute nodes, and compute nodes
+also require communicating to Azure Storage. In the simplified node communication model, compute nodes initiate communication
+with the Batch service. Due to the reduced scope of inbound/outbound connections required, and not requiring Azure Storage
+outbound access for baseline operation, the recommendation is to use the simplified node communication model.
### Batch account authentication
We strongly recommend using Azure AD for Batch account authentication. Some Batc
When creating a Batch account, you can choose between two [pool allocation modes](accounts.md#batch-accounts):

-- **Batch service**: The default option, where the underlying Cloud Service or virtual machine scale set resources used to allocate and manage pool nodes are created in internal subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.
-- **User subscription**: The underlying Cloud Service or virtual machine scale set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources.
+- **Batch service**: The default option, where the underlying Cloud Service or Virtual Machine Scale Set resources used to allocate and manage pool nodes are created on Batch-owned subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.
+- **User subscription**: The underlying Cloud Service or Virtual Machine Scale Set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources.
-With user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription mode is required if you want to create Batch pools using Azure Reserved VM Instances, use Azure Policy on virtual machine scale set resources, and/or manage the core quota on the subscription (shared across all Batch accounts in the subscription). To create a Batch account in user subscription mode, you must also register your subscription with Azure Batch, and associate the account with an Azure Key Vault.
+With user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription mode is required if you want to create Batch pools using Azure Reserved VM Instances, use Azure Policy on Virtual Machine Scale Set resources, and/or manage the core quota on the subscription (shared across all Batch accounts in the subscription). To create a Batch account in user subscription mode, you must also register your subscription with Azure Batch, and associate the account with an Azure Key Vault.
## Restrict network endpoint access

### Batch network endpoints
-Be aware that by default, endpoints with public IP addresses are used to communicate with Batch accounts, Batch pools, and pool nodes.
+By default, endpoints with public IP addresses are used to communicate with Batch accounts, Batch pools, and pool nodes.
### Batch account API
For more information, see [Create an Azure Batch pool in a virtual network](bat
#### Create pools with static public IP addresses
-By default, the public IP addresses associated with pools are dynamic; they are created when a pool is created and IP addresses can be added or removed when a pool is resized. When the task applications running on pool nodes need to access external services, access to those services may need to be restricted to specific IPs. In this case, having dynamic IP addresses will not be manageable.
+By default, the public IP addresses associated with pools are dynamic; they're created when a pool is created
+and IP addresses can be added or removed when a pool is resized. When the task applications running on pool
+nodes need to access external services, access to those services may need to be restricted to specific IPs.
+In this case, having dynamic IP addresses won't be manageable.
You can create static public IP address resources in the same subscription as the Batch account before pool creation. You can then specify these addresses when creating your pool.
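A sketch of how user-managed static IPs appear in the pool's network configuration; the resource ID is a placeholder, and `provision` set to `usermanaged` tells Batch to use the pre-created addresses instead of allocating dynamic ones:

```json
{
  "networkConfiguration": {
    "publicIPAddressConfiguration": {
      "provision": "usermanaged",
      "ipAddressIds": [
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>"
      ]
    }
  }
}
```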
For extra security, encrypt these disks using one of these Azure disk encryption
## Securely access services from compute nodes
-Batch nodes can securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md), which can be used by task applications to access other services. A certificate is used to grant the pool nodes access to Key Vault. By [enabling automatic certificate rotation in your Batch pool](automatic-certificate-rotation.md), the credentials will be automatically renewed. This is the recommended option for Batch nodes to access credentials stored in Azure Key Vault, although you can also [set up Batch nodes to securely access credentials and secrets with a certificate](credential-access-key-vault.md) without automatic certificate rotation.
+Use [Pool managed identities](managed-identity-pools.md) with the appropriate access permissions configured for the
+user-assigned managed identity to access Azure services that support managed identity, including Azure Key Vault. If
+you need to provision certificates on Batch nodes, utilize the available Azure Key Vault VM extension with pool
+Managed Identity to install and manage certificates on your Batch pool. For more information on deploying certificates
+from Azure Key Vault with Managed Identity on Batch pools, see
+[Enable automatic certificate rotation in a Batch pool](automatic-certificate-rotation.md).
## Governance and compliance
These offerings are based on various types of assurances, including formal cert
Depending on your pool allocation mode and the resources to which a policy should apply, use Azure Policy with Batch in one of the following ways:

- Directly, using the Microsoft.Batch/batchAccounts resource. A subset of the properties for a Batch account can be used. For example, your policy can include valid Batch account regions, allowed pool allocation mode, and whether a public network is enabled for accounts.
-- Indirectly, using the Microsoft.Compute/virtualMachineScaleSets resource. Batch accounts with user subscription pool allocation mode can have policy set on the virtual machine scale set resources that are created in the Batch account subscription. For example, allowed VM sizes and ensure certain extensions are run on each pool node.
+- Indirectly, using the Microsoft.Compute/virtualMachineScaleSets resource. Batch accounts with user subscription pool allocation mode can have policy set on the Virtual Machine Scale Set resources that are created in the Batch account subscription. For example, allowed VM sizes and ensure certain extensions are run on each pool node.
## Next steps
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
The following faults are available for use today. Visit the [Fault Providers](./
|-|-|
| Fault Provider | N/A |
| Supported OS Types | N/A |
-| Description | Adds a time delay before, between, or after other actions. Useful for waiting for the impact of a fault to appear in a service or for waiting for an activity outside of the experiment to complete (for example, waiting for autohealing to occur before injecting another fault). |
+| Description | Adds a time delay before, between, or after other actions. This fault is useful for waiting for the impact of a fault to appear in a service, or for waiting for an activity outside of the experiment to complete. For example, waiting for autohealing to occur before injecting another fault. |
| Prerequisites | N/A |
| Urn | urn:csci:microsoft:chaosStudio:timedDelay/1.0 |
| duration | The duration of the delay in ISO 8601 format (Example: PT10M) |
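Because the delay fault takes only a duration, its action is small. A minimal sketch of the action JSON inside an experiment branch, assuming the standard experiment action shape:

```json
{
  "type": "delay",
  "name": "urn:csci:microsoft:chaosStudio:timedDelay/1.0",
  "duration": "PT10M"
}
```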
The following faults are available for use today. Visit the [Fault Providers](./
| Capability Name | CPUPressure-1.0 |
| Target type | Microsoft-Agent |
| Supported OS Types | Windows, Linux |
-| Description | Add CPU pressure up to the specified value on the VM where this fault is injected for the duration of the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the "% Processor Utility" performance counter is used at fault start to determine current CPU percentage and this is subtracted from the pressureLevel defined in the fault so that % Processor Utility will hit approximately the pressureLevel defined in the fault parameters. |
+| Description | Adds CPU pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the "% Processor Utility" performance counter is used at fault start to determine current CPU percentage, which is subtracted from the `pressureLevel` defined in the fault so that % Processor Utility will hit approximately the `pressureLevel` defined in the fault parameters. |
| Prerequisites | **Linux:** Running the fault on a Linux VM requires the **stress-ng** utility to be installed. You can install it using the package manager for your Linux distro, </br> APT Command to install stress-ng: *sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng* </br> YUM Command to install stress-ng: *sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng* |
| | **Windows:** None. |
| Urn | urn:csci:microsoft:agent:cpuPressure/1.0 |
| Parameters (key, value) | |
| pressureLevel | An integer between 1 and 99 that indicates how much CPU pressure (%) will be applied to the VM. |
-| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |
+| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a Virtual Machine Scale Set. Required for Virtual Machine Scale Sets. |
### Sample JSON
Known issues on Linux:
| Capability Name | PhysicalMemoryPressure-1.0 |
| Target type | Microsoft-Agent |
| Supported OS Types | Windows, Linux |
-| Description | Add physical memory pressure up to the specified value on the VM where this fault is injected for the duration of the fault action. The artificial physical memory pressure is removed at the end of the duration or if the experiment is canceled. |
+| Description | Add physical memory pressure up to the specified value on the VM where this fault is injected during the fault action. The artificial physical memory pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | **Linux:** Running the fault on a Linux VM requires the **stress-ng** utility to be installed. You can install it using the package manager for your Linux distro, </br> APT Command to install stress-ng: *sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng* </br> YUM Command to install stress-ng: *sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng* |
| | **Windows:** None. |
| Urn | urn:csci:microsoft:agent:physicalMemoryPressure/1.0 |
| Parameters (key, value) | |
| pressureLevel | An integer between 1 and 99 that indicates how much physical memory pressure (%) will be applied to the VM. |
-| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |
+| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a Virtual Machine Scale Set. Required for Virtual Machine Scale Sets. |
### Sample JSON
Known issues on Linux:
| Capability Name | VirtualMemoryPressure-1.0 |
| Target type | Microsoft-Agent |
| Supported OS Types | Windows |
-| Description | Add virtual memory pressure up to the specified value on the VM where this fault is injected for the duration of the fault action. The artificial virtual memory pressure is removed at the end of the duration or if the experiment is canceled. |
+| Description | Add virtual memory pressure up to the specified value on the VM where this fault is injected during the fault action. The artificial virtual memory pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:virtualMemoryPressure/1.0 |
| Parameters (key, value) | |
Known issues on Linux:
| Capability Name | DiskIOPressure-1.0 |
| Target type | Microsoft-Agent |
| Supported OS Types | Windows |
-| Description | Uses the [diskspd utility](https://github.com/Microsoft/diskspd/wiki) to add disk pressure to the primary storage of the VM where it is injected for the duration of the fault action. This fault has five different modes of execution. The artificial disk pressure is removed at the end of the duration or if the experiment is canceled. |
+| Description | Uses the [diskspd utility](https://github.com/Microsoft/diskspd/wiki) to add disk pressure to the primary storage of the VM where it's injected during the fault action. This fault has five different modes of execution. The artificial disk pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:diskIOPressure/1.0 |
| Parameters (key, value) | |
Known issues on Linux:
| Prerequisites | Running the fault on a Linux VM requires the **stress-ng** utility to be installed. You can install it using the package manager for your Linux distro, </br> APT Command to install stress-ng: *sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng* </br> YUM Command to install stress-ng: *sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng* |
| Urn | urn:csci:microsoft:agent:linuxDiskIOPressure/1.0 |
| Parameters (key, value) | |
-| workerCount | Number of worker processes to run. Setting this to 0 will generate as many worker processes as there are number of processors. |
+| workerCount | Number of worker processes to run. Setting `workerCount` to 0 will generate as many worker processes as there are number of processors. |
| fileSizePerWorker | Size of the temporary file a worker will perform I/O operations against. Integer plus a unit in bytes (b), kilobytes (k), megabytes (m), or gigabytes (g) (for example, 4m for 4 megabytes, 256g for 256 gigabytes) |
| blockSize | Block size to be used for disk I/O operations, capped at 4 megabytes. Integer plus a unit in bytes (b), kilobytes (k), or megabytes (m) (for example, 512k for 512 kilobytes) |
| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |
Known issues on Linux:
| Capability Name | StopService-1.0 |
| Target type | Microsoft-Agent |
| Supported OS Types | Windows |
-| Description | Uses the Windows Service Controller APIs to stop a Windows service for the duration of the fault, restarting it at the end of the duration or if the experiment is canceled. |
+| Description | Uses the Windows Service Controller APIs to stop a Windows service during the fault, restarting it at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:stopService/1.0 |
| Parameters (key, value) | |
-| serviceName | The name of the Windows service you want to stop. You can run `sc.exe query` in command prompt to explore service names, Windows service friendly names are not supported. |
+| serviceName | The name of the Windows service you want to stop. You can run `sc.exe query` in command prompt to explore service names, Windows service friendly names aren't supported. |
| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |

### Sample JSON
Known issues on Linux:
| Capability Name | TimeChange-1.0 |
| Target type | Microsoft-Agent |
| Supported OS Types | Windows |
-| Description | Changes the system time for the VM where it is injected and resets it at the end of the duration or if the experiment is canceled. |
+| Description | Changes the system time of the VM where it's injected, and resets the time at the end of the experiment or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:timeChange/1.0 |
| Parameters (key, value) | |
-| dateTime | A DateTime string in [ISO8601 format](https://www.cryptosys.net/pki/manpki/pki_iso8601datetime.html). If YYYY-MM-DD values are missing, they are defaulted to the current day when the experiment runs. If Thh:mm:ss values are missing, the default value is 12:00:00 AM. If a 2-digit year is provided (YY), it is converted to a 4-digit year (YYYY) based on the current century. If \<Z\> is missing, it is defaulted to the offset of the local timezone. \<Z\> must always include a sign symbol (negative or positive). |
+| dateTime | A DateTime string in [ISO8601 format](https://www.cryptosys.net/pki/manpki/pki_iso8601datetime.html). If YYYY-MM-DD values are missing, they're defaulted to the current day when the experiment runs. If Thh:mm:ss values are missing, the default value is 12:00:00 AM. If a 2-digit year is provided (YY), it's converted to a 4-digit year (YYYY) based on the current century. If \<Z\> is missing, it's defaulted to the offset of the local timezone. \<Z\> must always include a sign symbol (negative or positive). |
| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |

### Sample JSON
Known issues on Linux:
| Capability Name | DnsFailure-1.0 |
| Target type | Microsoft-Agent |
| Supported OS Types | Windows |
-| Description | Substitutes the response of a DNS lookup request with a specified error code. |
+| Description | Substitutes DNS lookup request responses with a specified error code. DNS lookup requests that will be substituted must:<ul><li>Originate from the VM</li><li>Match the defined fault parameters</li></ul>**Note**: DNS lookups that aren't made by the Windows DNS client won't be affected by this fault. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:dnsFailure/1.0 |
| Parameters (key, value) | |
-| hosts | Delimited JSON array of host names to fail DNS lookup request for. |
+| hosts | Delimited JSON array of host names to fail DNS lookup request for.<br><br>This property accepts wildcards (`*`), but only for the first subdomain in an address and only applies to the subdomain for which they're specified. For example:<ul><li>\*.microsoft.com is supported</li><li>subdomain.\*.microsoft isn't supported</li><li>\*.microsoft.com won't account for multiple subdomains in an address such as subdomain1.subdomain2.microsoft.com.</li></ul> |
| dnsFailureReturnCode | DNS error code to be returned to the client for the lookup failure (FormErr, ServFail, NXDomain, NotImp, Refused, XDomain, YXRRSet, NXRRSet, NotAuth, NotZone). For more details on DNS return codes, visit [the IANA website](https://www.iana.org/assignments/dns-parameters/dns-parameters.xml#dns-parameters-6) |
| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a virtual machine scale set. Required for virtual machine scale sets. |
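For example, assuming the documented key/value parameter shape, a `hosts` value that targets an exact host plus a single-level wildcard subdomain might be written as the following JSON-escaped array (the host names are illustrative):

```json
{
  "key": "hosts",
  "value": "[\"www.bing.com\", \"*.microsoft.com\"]"
}
```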
Known issues on Linux:
| Target type | Microsoft-Agent |
| Supported OS Types | Windows |
| Description | Increases network latency for a specified port range and network block. |
-| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it is run as administrator by default. |
+| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkLatency/1.0 |
| Parameters (key, value) | |
| latencyInMilliseconds | Amount of latency to be applied in milliseconds. |
-| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 3. |
+| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 16. |
| address | IP address indicating the start of the IP range. |
| subnetMask | Subnet mask for the IP address range. |
| portLow | (Optional) Port number of the start of the port range. |
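As an illustration of how these fields combine, here's a sketch of a single `destinationFilters` entry in the documented JSON-escaped key/value form; the address, mask, and ports are placeholders:

```json
{
  "key": "destinationFilters",
  "value": "[{\"address\": \"10.1.1.128\", \"subnetMask\": \"255.255.255.240\", \"portLow\": 5000, \"portHigh\": 5200}]"
}
```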
Known issues on Linux:
| Target type | Microsoft-Agent |
| Supported OS Types | Windows |
| Description | Blocks outbound network traffic for specified port range and network block. |
-| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it is run as administrator by default. |
+| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkDisconnect/1.0 |
| Parameters (key, value) | |
-| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 3. |
+| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 16. |
| address | IP address indicating the start of the IP range. |
| subnetMask | Subnet mask for the IP address range. |
| portLow | (Optional) Port number of the start of the port range. |
Known issues on Linux:
| Target type | Microsoft-Agent |
| Supported OS Types | Windows |
| Description | Applies a Windows firewall rule to block outbound traffic for specified port range and network block. |
-| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it is run as administrator by default. |
+| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkDisconnectViaFirewall/1.0 |
| Parameters (key, value) | |
| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 3. |
Known issues on Linux:
| Capability Name | Shutdown-1.0 |
| Target type | Microsoft-VirtualMachine |
| Supported OS Types | Windows, Linux |
-| Description | Shuts down a VM for the duration of the fault and restarts the VM at the end of the fault duration or if the experiment is canceled. Only Azure Resource Manager VMs are supported. |
+| Description | Shuts down a VM for the duration of the fault, and restarts it at the end of the experiment or if the experiment is canceled. Only Azure Resource Manager VMs are supported. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:virtualMachine:shutdown/1.0 |
| Parameters (key, value) | |
Known issues on Linux:
| Capability Name | Shutdown-1.0 |
| Target type | Microsoft-VirtualMachineScaleSet |
| Supported OS Types | Windows, Linux |
-| Description | Shuts down or kill a virtual machine scale set instance for the duration of the fault and restarts the VM at the end of the fault duration or if the experiment is canceled. |
+| Description | Shuts down or kills a virtual machine scale set instance during the fault, and restarts the VM at the end of the fault duration or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:virtualMachineScaleSet:shutdown/1.0 |
| Parameters (key, value) | |
Known issues on Linux:
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). |
| Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1 |
| Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [NetworkChaos kind](https://chaos-mesh.org/docs/simulate-network-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [NetworkChaos kind](https://chaos-mesh.org/docs/simulate-network-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [PodChaos kind](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [PodChaos kind](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:stressChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [StressChaos kind](https://chaos-mesh.org/docs/simulate-heavy-stress-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [StressChaos kind](https://chaos-mesh.org/docs/simulate-heavy-stress-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:IOChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [IOChaos kind](https://chaos-mesh.org/docs/simulate-io-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [IOChaos kind](https://chaos-mesh.org/docs/simulate-io-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:timeChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [TimeChaos kind](https://chaos-mesh.org/docs/simulate-time-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [TimeChaos kind](https://chaos-mesh.org/docs/simulate-time-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:kernelChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [KernelChaos kind](https://chaos-mesh.org/docs/simulate-kernel-chaos-on-kubernetes/#configuration-file). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [KernelChaos kind](https://chaos-mesh.org/docs/simulate-kernel-chaos-on-kubernetes/#configuration-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:httpChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [HTTPChaos kind](https://chaos-mesh.org/docs/simulate-http-chaos-on-kubernetes/#create-experiments). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [HTTPChaos kind](https://chaos-mesh.org/docs/simulate-http-chaos-on-kubernetes/#create-experiments). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md) and the [DNS service must be installed](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#deploy-chaos-dns-service). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:dnsChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [DNSChaos kind](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it, and use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property (do not include metadata, kind, etc.). |
+| jsonSpec | A JSON-formatted and, if created via ARM template, REST API, or Azure CLI, JSON-escaped Chaos Mesh spec that uses the [DNSChaos kind](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the "jsonSpec" property, don't include metadata, kind, etc. |
### Sample JSON
|-|-| | Capability Name | SecurityRule-1.0 | | Target type | Microsoft-NetworkSecurityGroup |
-| Description | Enables manipulation or creation of a rule in an existing Azure Network Security Group or set of Azure Network Security Groups (assuming the rule definition is applicable cross security groups). Useful for simulating an outage of a downstream or cross-region dependency/non-dependency, simulating an event that is expected to trigger a logic to force a service failover, simulating an event that is expected to trigger an action from a monitoring or state management service, or as an alternative for blocking, or allowing, network traffic where Chaos Agent cannot be deployed. |
+| Description | Enables manipulation or rule creation in an existing Azure Network Security Group or set of Azure Network Security Groups, assuming the rule definition is applicable across security groups. Useful for simulating an outage of a downstream or cross-region dependency/non-dependency, simulating an event that's expected to trigger logic that forces a service failover, simulating an event that's expected to trigger an action from a monitoring or state management service, or as an alternative for blocking or allowing network traffic where Chaos Agent can't be deployed. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:networkSecurityGroup:securityRule/1.0 | | Parameters (key, value) | |
### Limitations

* The fault can only be applied to an existing Network Security Group.
-* When an NSG rule that is intended to deny traffic is applied existing connections will not be broken until they have been **idle** for 4 minutes. One workaround is to add another branch in the same step that uses a fault that would cause existing connections to break when the NSG fault is applied. For example, killing the process, temporarily stopping the service, or restarting the VM would cause connections to reset.
+* When an NSG rule that is intended to deny traffic is applied, existing connections won't be broken until they've been **idle** for 4 minutes. One workaround is to add another branch in the same step that uses a fault that causes existing connections to break when the NSG fault is applied. For example, killing the process, temporarily stopping the service, or restarting the VM would cause connections to reset.
* Rules are applied at the start of the action. Any external changes to the rule while the action is running cause the experiment to fail.
-* Creating or modifying Application Security Group rules is not supported.
+* Creating or modifying Application Security Group rules isn't supported.
* Priority values must be unique on each NSG targeted. Attempting to create a new rule that has the same priority value as another will cause the experiment to fail.

## Azure Cache for Redis reboot
| Capability Name | Reboot-1.0 | | Target type | Microsoft-AzureClusteredCacheForRedis | | Description | Causes a forced reboot operation to occur on the target to simulate a brief outage. |
-| Prerequisites | The target Azure Cache for Redis resource must be a Redis Cluster, which requires that the cache must be a Premium Tier cache. Standard and Basic Tiers are not supported. |
+| Prerequisites | The target Azure Cache for Redis resource must be a Redis Cluster, which requires the cache to be a Premium Tier cache. Standard and Basic Tiers aren't supported. |
| Urn | urn:csci:microsoft:azureClusteredCacheForRedis:reboot/1.0 | | Fault type | Discrete | | Parameters (key, value) | |
### Limitations

* The reboot fault causes a forced reboot to better simulate an outage event, which means data loss can occur.
-* The reboot fault is a **discrete** fault type. Unlike continuous faults, it is a one-time action and therefore has no duration.
+* The reboot fault is a **discrete** fault type. Unlike continuous faults, it's a one-time action and therefore has no duration.
## Cloud Services (Classic) shutdown
|-|-| | Capability Name | Shutdown-1.0 | | Target type | Microsoft-DomainName |
-| Description | Stops a deployment for the duration of the fault and restarts the deployment at the end of the fault duration or if the experiment is canceled. |
+| Description | Stops a deployment during the fault and restarts the deployment at the end of the fault duration or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:domainName:shutdown/1.0 | | Fault type | Continuous |
| Capability Name | DenyAccess-1.0 | | Target type | Microsoft-KeyVault | | Description | Blocks all network access to a Key Vault by temporarily modifying the Key Vault network rules, preventing an application dependent on the Key Vault from accessing secrets, keys, and/or certificates. If the Key Vault allows access to all networks, this is changed to only allow access from selected networks with no virtual networks in the allowed list at the start of the fault and returned to allowing access to all networks at the end of the fault duration. If the Key Vault is set to only allow access from selected networks, any virtual networks in the allowed list are removed at the start of the fault and restored at the end of the fault duration. |
-| Prerequisites | The target Key Vault cannot have any firewall rules and must not be set to allow Azure services to bypass the firewall. If the target Key Vault is set to only allow access from selected networks, there must be at least one virtual network rule. The Key Vault cannot be in recover mode. |
+| Prerequisites | The target Key Vault can't have any firewall rules and must not be set to allow Azure services to bypass the firewall. If the target Key Vault is set to only allow access from selected networks, there must be at least one virtual network rule. The Key Vault can't be in recover mode. |
| Urn | urn:csci:microsoft:keyVault:denyAccess/1.0 | | Fault type | Continuous | | Parameters (key, value) | None. |
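To make the `DenyAccess-1.0` behavior concrete, the following Azure CLI commands approximate what the fault does at the start and end of its duration for a vault that allows access from all networks. This is an illustration only; Chaos Studio performs and reverts these changes for you, and `myKeyVault` is a placeholder name.

```bash
# Start of fault: deny traffic from all networks
# (no virtual network rules in the allowed list).
az keyvault update --name myKeyVault --default-action Deny

# End of fault: restore access from all networks.
az keyvault update --name myKeyVault --default-action Allow
```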
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
VNet is the fundamental building block for your private network in Azure. VNet e
## How VNet Injection works in Chaos Studio

VNet injection allows the Chaos resource provider to inject containerized workloads into your VNet. This means that resources without public endpoints can be accessed via a private IP address on the VNet. Below are the steps you can follow for VNet injection:
-1. Register the Microsoft.ContainerInstance resource provider with your subscription (if applicable).
-2. Re-register the Microsoft.Chaos resource provider with your subscription.
-3. Create a subnet named ChaosStudioSubnet in the VNet you want to inject into.
-4. Set the properties.subnetId property when you create or update the Target resource. The value should be the resource ID of the subnet created in step 1.
+
+1. Register the `Microsoft.ContainerInstance` resource provider with your subscription (if applicable).
+
+ ```bash
+ az provider register --namespace 'Microsoft.ContainerInstance' --wait
+ ```
+
+ Verify the registration by running the following command:
+
+ ```bash
+ az provider show --namespace 'Microsoft.ContainerInstance' | grep registrationState
+ ```
+
+ You should see output similar to the following:
+
+ ```bash
+ "registrationState": "Registered",
+ ```
+
+2. Re-register the `Microsoft.Chaos` resource provider with your subscription.
+
+ ```bash
+ az provider register --namespace 'Microsoft.Chaos' --wait
+ ```
+
+ Verify the registration by running the following command:
+
+ ```bash
+ az provider show --namespace 'Microsoft.Chaos' | grep registrationState
+ ```
+
+ You should see output similar to the following:
+
+ ```bash
+ "registrationState": "Registered",
+ ```
+
+3. Create a subnet named `ChaosStudioSubnet` in the VNet you want to inject into, and delegate the subnet to the `Microsoft.ContainerInstance/containerGroups` service, as sketched below.
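    One possible way to create and delegate the subnet with the Azure CLI, using the same variable names as step 4. The address prefix here is an assumption; pick a range that's free in your VNet.

    ```bash
    az network vnet subnet create \
      --resource-group $AKS_INFRA_RESOURCE_GROUP \
      --vnet-name $AKS_VNET \
      --name ChaosStudioSubnet \
      --address-prefixes 10.0.4.0/24 \
      --delegations Microsoft.ContainerInstance/containerGroups
    ```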
+
+4. Set the `properties.subnetId` property when you create or update the Target resource. The value should be the resource ID of the subnet created in step 3.
+
+    Replace `$SUBSCRIPTION_ID` with your Azure subscription ID, and `$RESOURCE_GROUP` and `$AKS_CLUSTER` with the resource group name and your AKS cluster resource name. Also, replace `$AKS_INFRA_RESOURCE_GROUP` and `$AKS_VNET` with your AKS cluster's infrastructure resource group name and VNet name.
+
+ ```bash
+ URL=https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER/providers/Microsoft.Chaos/targets/microsoft-azurekubernetesservicechaosmesh?api-version=2022-10-01-preview
+ SUBNET_ID=/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$AKS_INFRA_RESOURCE_GROUP/providers/Microsoft.Network/virtualNetworks/$AKS_VNET/subnets/ChaosStudioSubnet
+ BODY="{ \"properties\": { \"subnetId\": \"$SUBNET_ID\" } }"
+ az rest --method put --url $URL --body "$BODY"
+ ```
+
5. Start the experiment.

## Limitations
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Title: Create and run a chaos experiment using Azure Chaos Studio description: Understand the steps to create and run a Chaos Studio experiment in 10mins -+ Last updated 11/10/2021
If this is your first time using Chaos Studio, you must first register the Chaos
5. In the list of resource providers that appears, search for **Microsoft.Chaos**. 6. Click on the Microsoft.Chaos provider, and click the **Register** button.
+## Create an Azure resource supported by Chaos Studio
+
+Create an Azure resource and ensure that it's one of the supported [fault providers](chaos-studio-fault-providers.md). Also validate that the resource is created in a [region](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio) where Chaos Studio is available. In this experiment we choose an Azure Virtual Machine, which is one of the supported fault providers for Chaos Studio. A minimal sketch of creating such a VM with the Azure CLI follows.
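The resource names below are placeholders, and the image alias may change over time:

```bash
# Create a resource group, then a supported target VM in it.
az group create --name chaos-quickstart-rg --location eastus

az vm create \
  --resource-group chaos-quickstart-rg \
  --name chaos-demo-vm \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```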
## Enable Chaos Studio on the Virtual Machine you created

1. Open the [Azure portal](https://portal.azure.com).
2. Search for **Chaos Studio (preview)** in the search bar.
chaos-studio Sample Template Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-experiment.md
In this sample, we create a chaos experiment with a single target resource and a
"value": "eastus" }, "chaosTargetResourceId": {
- "value": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
+ "value": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>/providers/Microsoft.Chaos/targets/microsoft-cosmosdb"
} } }
cloud-shell Cloud Shell Windows Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-windows-users.md
- Title: Azure Cloud Shell for Windows users | Microsoft Docs
-description: Guide for users who are not familiar with Linux systems
---
-tags: azure-resource-manager
---- Previously updated : 08/16/2022---
-# PowerShell in Azure Cloud Shell for Windows users
-
-In May 2018, changes were [announced](https://azure.microsoft.com/blog/pscloudshellrefresh/) to PowerShell in Azure Cloud Shell.
-The PowerShell experience in Azure Cloud Shell now runs [PowerShell Core 6](https://github.com/powershell/powershell) in a Linux environment.
-With this change, there may be some differences in the PowerShell experience in Cloud Shell compared to what is expected in a Windows PowerShell experience.
-
-## File system case sensitivity
-
-The file system is case-insensitive in Windows, whereas on Linux, the file system is case-sensitive.
-Previously `file.txt` and `FILE.txt` were considered to be the same file, but now they are considered to be different files.
-Proper casing must be used while `tab-completing` in the file system.
-PowerShell specific experiences, such as `tab-completing` cmdlet names, parameters, and values, are not case-sensitive.
-
-## Windows PowerShell aliases vs Linux utilities
-
-Some existing PowerShell aliases have the same names as built-in Linux commands, such as `cat`,`ls`, `sort`, `sleep`, etc.
-In PowerShell Core 6, aliases that collide with built-in Linux commands have been removed.
-Below are the common aliases that have been removed as well as their equivalent commands:
-
-|Removed Alias |Equivalent Command |
-|||
-|`cat` | `Get-Content` |
-|`curl` | `Invoke-WebRequest` |
-|`diff` | `Compare-Object` |
-|`ls` | `dir` <br> `Get-ChildItem` |
-|`mv` | `Move-Item` |
-|`rm` | `Remove-Item` |
-|`sleep` | `Start-Sleep` |
-|`sort` | `Sort-Object` |
-|`wget` | `Invoke-WebRequest` |
-
-## Persisting $HOME
-
-Earlier users could only persist scripts and other files in their Cloud Drive.
-Now, the user's $HOME directory is also persisted across sessions.
-
-## PowerShell profile
-
-By default, a user's PowerShell profile is not created.
-To create your profile, create a `PowerShell` directory under `$HOME/.config`.
-
-```azurepowershell-interactive
-mkdir (Split-Path $profile.CurrentUserAllHosts)
-```
-
-Under `$HOME/.config/PowerShell`, you can create your profile files - `profile.ps1` and/or `Microsoft.PowerShell_profile.ps1`.
-
-## What's new in PowerShell
-
-For more information about what is new in PowerShell, reference the
-[PowerShell What's New](/powershell/scripting/whats-new/overview) and
-[Discover PowerShell](/powershell/scripting/discover-powershell).
cloud-shell Embed Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/embed-cloud-shell.md
Title: Embed Azure Cloud Shell | Microsoft Docs+ description: Learn to embed Azure Cloud Shell.---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 12/11/2017-++
+tags: azure-resource-manager
+ Title: Embed Azure Cloud Shell
# Embed Azure Cloud Shell
-Embedding Cloud Shell enables developers and content writers to directly open Cloud Shell from a dedicated URL, [shell.azure.com](https://shell.azure.com). This immediately brings the full power of Cloud Shell's authentication, tooling, and up-to-date Azure CLI/Azure PowerShell tools to your users.
+Embedding Cloud Shell enables developers and content writers to directly open Cloud Shell from a
+dedicated URL, `shell.azure.com`. This link brings the full power of Cloud Shell's authentication,
+tooling, and up-to-date Azure CLI and Azure PowerShell tools to your users.
+
+You can use the following images in your own webpages and app as buttons to start a Cloud Shell
+session.
Regular sized button
-[![Regular launch](https://shell.azure.com/images/launchcloudshell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
+![Regular launch](media/embed-cloud-shell/launch-cloud-shell-1.png "Launch Azure Cloud Shell")
Large sized button
-[![Large launch](https://shell.azure.com/images/launchcloudshell@2x.png "Launch Azure Cloud Shell")](https://shell.azure.com)
+![Large launch](media/embed-cloud-shell/launch-cloud-shell-2.png "Launch Azure Cloud Shell")
## How-to
-Integrate Cloud Shell's launch button into markdown files by copying the following:
+To integrate Cloud Shell's launch button into markdown files, copy the following code:
+
+Regular sized button
```markdown
-[![Launch Cloud Shell](https://shell.azure.com/images/launchcloudshell.png "Launch Cloud Shell")](https://shell.azure.com)
+[![Launch Cloud Shell](https://learn.microsoft.com/azure/cloud-shell/media/embed-cloud-shell/launch-cloud-shell-1.png)](https://shell.azure.com)
```
-The HTML to embed a pop-up Cloud Shell is below:
-```html
-<a style="cursor:pointer" onclick='javascript:window.open("https://shell.azure.com", "_blank", "toolbar=no,scrollbars=yes,resizable=yes,menubar=no,location=no,status=no")'><img alt="Launch Azure Cloud Shell" src="https://shell.azure.com/images/launchcloudshell.png"></a>
+Large sized button
+
+```markdown
+[![Launch Cloud Shell](https://learn.microsoft.com/azure/cloud-shell/media/embed-cloud-shell/launch-cloud-shell-2.png)](https://shell.azure.com)
```
+The location of these image files is subject to change. We recommend that you download the files for
+use in your applications.
## Customize experience

Set a specific shell experience by augmenting your URL.
-|Experience |URL |
-|||
-|Most recently used shell |[shell.azure.com](https://shell.azure.com) |
-|Bash |[shell.azure.com/bash](https://shell.azure.com/bash) |
-|PowerShell |[shell.azure.com/powershell](https://shell.azure.com/powershell) |
+| Experience | URL |
+| | |
+| Most recently used shell | `https://shell.azure.com` |
+| Bash | `https://shell.azure.com/bash` |
+| PowerShell | `https://shell.azure.com/powershell` |
## Next steps
-[Bash in Cloud Shell quickstart](quickstart.md)<br>
-[PowerShell in Cloud Shell quickstart](quickstart-powershell.md)
+
+- [Bash in Cloud Shell quickstart][07]
+- [PowerShell in Cloud Shell quickstart][06]
+
+<!-- updated link references -->
+[01]: https://shell.azure.com
+[06]: quickstart-powershell.md
+[07]: quickstart.md
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
Title: Azure Cloud Shell features | Microsoft Docs+ description: Overview of features in Azure Cloud Shell---
-tags: azure-resource-manager
-++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 09/20/2022-++
+tags: azure-resource-manager
+ Title: Azure Cloud Shell features
- # Features & tools for Azure Cloud Shell
+Azure Cloud Shell is a browser-based shell experience to manage and develop Azure resources.
+
+Cloud Shell offers a browser-accessible, pre-configured shell experience for managing Azure
+resources without the overhead of installing, versioning, and maintaining a machine yourself.
-Azure Cloud Shell runs on **Common Base Linux - Mariner** (CBL-Mariner),
-Microsoft's Linux distribution for cloud-infrastructure-edge products and services.
+Cloud Shell allocates machines on a per-request basis and as a result machine state doesn't
+persist across sessions. Since Cloud Shell is built for interactive sessions, shells automatically
+terminate after 20 minutes of shell inactivity.
+
+<!--
+TODO:
+- need to verify Distro - showing Ubuntu currently
+- need to verify all experiences described here eg. cd Azure: - I have different results
+-->
+Azure Cloud Shell runs on **Common Base Linux - Mariner** (CBL-Mariner), Microsoft's Linux
+distribution for cloud-infrastructure-edge products and services.
Microsoft internally compiles all the packages included in the **CBL-Mariner** repository to help guard against supply chain attacks. Tooling has been updated to reflect the new base image CBL-Mariner. You can get a full list of installed package versions using the following command:
-`tdnf list installed`. If these changes affected your Cloud Shell environment, please contact
-Azuresupport or create an issue in the
-[Cloud Shell repository](https://github.com/Azure/CloudShell/issues).
+`tdnf list installed`. If these changes affected your Cloud Shell environment, contact Azure Support
+or create an issue in the [Cloud Shell repository][12].
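For example, to check whether a particular tool made it into the new image, you can filter the package list; the package name here is only an example:

```bash
# List installed packages and filter for one you care about.
tdnf list installed | grep -i azure-cli
```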
## Features

### Secure automatic authentication
-Cloud Shell securely and automatically authenticates account access for the Azure CLI and Azure PowerShell.
+Cloud Shell securely and automatically authenticates account access for the Azure CLI and Azure
+PowerShell.
### $HOME persistence across sessions
-To persist files across sessions, Cloud Shell walks you through attaching an Azure file share on first launch.
-Once completed, Cloud Shell will automatically attach your storage (mounted as `$HOME\clouddrive`) for all future sessions.
-Additionally, your `$HOME` directory is persisted as an .img in your Azure File share.
-Files outside of `$HOME` and machine state are not persisted across sessions. Use best practices when storing secrets such as SSH keys. Services like [Azure Key Vault have tutorials for setup](../key-vault/general/manage-with-cli2.md#prerequisites).
+To persist files across sessions, Cloud Shell walks you through attaching an Azure file share on
+first launch. Once completed, Cloud Shell will automatically attach your storage (mounted as
+`$HOME\clouddrive`) for all future sessions. Additionally, your `$HOME` directory is persisted as an
+.img in your Azure File share. Files outside of `$HOME` and machine state aren't persisted across
+sessions. Use best practices when storing secrets such as SSH keys. Services like
+Azure Key Vault have [tutorials for setup][02].
-[Learn more about persisting files in Cloud Shell.](persisting-shell-storage.md)
+[Learn more about persisting files in Cloud Shell.][29]
### Azure drive (Azure:)
-PowerShell in Cloud Shell provides the Azure drive (`Azure:`). You can switch to the Azure drive with `cd Azure:` and back to your home directory with `cd ~`.
-The Azure drive enables easy discovery and navigation of Azure resources such as Compute, Network, Storage etc. similar to filesystem navigation.
-You can continue to use the familiar [Azure PowerShell cmdlets](/powershell/azure) to manage these resources regardless of the drive you are in.
-Any changes made to the Azure resources, either made directly in Azure portal or through Azure PowerShell cmdlets, are reflected in the Azure drive. You can run `dir -Force` to refresh your resources.
+PowerShell in Cloud Shell provides the Azure drive (`Azure:`). You can switch to the Azure drive
+with `cd Azure:` and back to your home directory with `cd ~`. The Azure drive enables easy
+discovery and navigation of Azure resources such as Compute, Network, and Storage, similar to
+filesystem navigation. You can continue to use the familiar [Azure PowerShell cmdlets][07] to manage
+these resources regardless of the drive you are in. Any changes made to the Azure resources, either
+made directly in Azure portal or through Azure PowerShell cmdlets, are reflected in the Azure drive.
+You can run `dir -Force` to refresh your resources.
-![Screenshot of an Azure Cloud Shell being initialized and a list of directory resources.](media/features-powershell/azure-drive.png)
+![Screenshot of an Azure Cloud Shell being initialized and a list of directory resources.][26]
### Manage Exchange Online
-PowerShell in Cloud Shell contains a private build of the Exchange Online module. Run `Connect-EXOPSSession` to get your Exchange cmdlets.
+PowerShell in Cloud Shell contains a private build of the Exchange Online module. Run
+`Connect-EXOPSSession` to get your Exchange cmdlets.
-![Screenshot of an Azure Cloud Shell running the commands Connect-EXOPSSession and Get-User.](media/features-powershell/exchangeonline.png)
+![Screenshot of an Azure Cloud Shell running the commands Connect-EXOPSSession and Get-User.][27]
Run `Get-Command -Module tmp_*`

> [!NOTE]
-> The module name should begin with `tmp_`, if you have installed modules with the same prefix, their cmdlets will also be surfaced.
+> The module name should begin with `tmp_`. If you have installed modules with the same prefix,
+> their cmdlets will also be surfaced.
+
+![Screenshot of an Azure Cloud Shell running the command Get-Command -Module tmp_*.][28]
+
+### Deep integration with open source tooling
+
+Cloud Shell includes pre-configured authentication for open source tools such as Terraform, Ansible,
+and Chef InSpec. Try it out from the example walkthroughs.
-![Screenshot of an Azure Cloud Shell running the command Get-Command -Module tmp_*.](medilets.png)
+### Pre-installed tools
-### Deep integration with open-source tooling
+<!--
+TODO:
+- remove obsolete tools
+- separate by bash vs. pwsh
+- link to docs rather than github
+-->
-Cloud Shell includes pre-configured authentication for open-source tools such as Terraform, Ansible, and Chef InSpec. Try it out from the example walkthroughs.
+Linux tools
-## Tools
+- bash
+- zsh
+- sh
+- tmux
+- dig
-|Category |Name |
-|||
-|Linux tools |bash<br> zsh<br> sh<br> tmux<br> dig<br> |
-|Azure tools |[Azure CLI](https://github.com/Azure/azure-cli) and [Azure classic CLI](https://github.com/Azure/azure-xplat-cli)<br> [AzCopy](../storage/common/storage-use-azcopy-v10.md)<br> [Azure Functions CLI](https://github.com/Azure/azure-functions-core-tools)<br> [Service Fabric CLI](../service-fabric/service-fabric-cli.md)<br> [Batch Shipyard](https://github.com/Azure/batch-shipyard)<br> [blobxfer](https://github.com/Azure/blobxfer)|
-|Text editors |code (Cloud Shell editor)<br> vim<br> nano<br> emacs |
-|Source control |git |
-|Build tools |make<br> maven<br> npm<br> pip |
-|Containers |[Docker Machine](https://github.com/docker/machine)<br> [Kubectl](https://kubernetes.io/docs/user-guide/kubectl-overview/)<br> [Helm](https://github.com/kubernetes/helm)<br> [DC/OS CLI](https://github.com/dcos/dcos-cli) |
-|Databases |MySQL client<br> PostgreSql client<br> [sqlcmd Utility](/sql/tools/sqlcmd-utility)<br> [mssql-scripter](https://github.com/Microsoft/sql-xplat-cli) |
-|Other |iPython Client<br> [Cloud Foundry CLI](https://github.com/cloudfoundry/cli)<br> [Terraform](https://www.terraform.io/docs/providers/azurerm/)<br> [Ansible](https://www.ansible.com/microsoft-azure)<br> [Chef InSpec](https://www.chef.io/inspec/)<br> [Puppet Bolt](https://puppet.com/docs/bolt/latest/bolt.html)<br> [HashiCorp Packer](https://www.packer.io/)<br> [Office 365 CLI](https://pnp.github.io/office365-cli/)|
+Azure tools
-## Language support
- [Azure CLI][06]
+- [AzCopy][04]
+- [Azure Functions CLI][05]
+- [Service Fabric CLI][03]
+- [Batch Shipyard][10]
+- [blobxfer][11]
-|Language |Version |
-|||
-|.NET Core |[3.1.302](https://github.com/dotnet/core/blob/master/release-notes/3.1/3.1.6/3.1.302-download.md) |
-|Go |1.9 |
-|Java |1.8 |
-|Node.js |8.16.0 |
-|PowerShell |[7.0.0](https://github.com/PowerShell/powershell/releases) |
-|Python |2.7 and 3.7 (default)|
+Text editors
+
+- code (Cloud Shell editor)
+- vim
+- nano
+- emacs
+
+Source control
+
+- git
+
+Build tools
+
+- make
+- maven
+- npm
+- pip
+
+Containers
+
+- [Docker Desktop][15]
+- [Kubectl][19]
+- [Helm][17]
+- [DC/OS CLI][14]
+
+Databases
+
+- MySQL client
+- PostgreSql client
+- [sqlcmd Utility][09]
+- [mssql-scripter][18]
+
+Other
+
+- iPython Client
+- [Cloud Foundry CLI][13]
+- [Terraform][25]
+- [Ansible][22]
+- [Chef InSpec][23]
+- [Puppet Bolt][21]
+- [HashiCorp Packer][24]
+- [Office 365 CLI][20]
+
+### Language support
+
+| Language | Version |
+| - | |
+| .NET Core | [6.0.402][16] |
+| Go | 1.9 |
+| Java | 1.8 |
+| Node.js | 8.16.0 |
+| PowerShell | [7.2][08] |
+| Python | 2.7 and 3.7 (default) |
## Next steps
-[Bash in Cloud Shell Quickstart](quickstart.md) <br>
-[PowerShell in Cloud Shell Quickstart](quickstart-powershell.md) <br>
-[Learn about Azure CLI](/cli/azure/) <br>
-[Learn about Azure PowerShell](/powershell/azure/) <br>
+
+- [Bash in Cloud Shell Quickstart][31]
+- [PowerShell in Cloud Shell Quickstart][30]
+- [Learn about Azure CLI][06]
+- [Learn about Azure PowerShell][07]
+
+<!-- link references -->
+[02]: ../key-vault/general/manage-with-cli2.md#prerequisites
+[03]: ../service-fabric/service-fabric-cli.md
+[04]: ../storage/common/storage-use-azcopy-v10.md
+[05]: /azure/azure-functions/functions-run-local
+[06]: /cli/azure/
+[07]: /powershell/azure
+[08]: /powershell/scripting/whats-new/what-s-new-in-powershell-72
+[09]: /sql/tools/sqlcmd-utility
+[10]: https://batch-shipyard.readthedocs.io/en/latest/
+[11]: https://blobxfer.readthedocs.io/en/latest/
+[12]: https://github.com/Azure/CloudShell/issues
+[13]: https://docs.cloudfoundry.org/cf-cli/
+[14]: https://docs.d2iq.com/dkp/2.3/azure-quick-start
+[15]: https://docs.docker.com/desktop/
+[16]: https://dotnet.microsoft.com/download/dotnet/6.0
+[17]: https://helm.sh/docs/
+[18]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md
+[19]: https://kubernetes.io/docs/user-guide/kubectl-overview/
+[20]: https://pnp.github.io/office365-cli/
+[21]: https://puppet.com/docs/bolt/latest/bolt.html
+[22]: https://www.ansible.com/microsoft-azure
+[23]: https://docs.chef.io/
+[24]: https://developer.hashicorp.com/packer/docs
+[25]: https://www.terraform.io/docs/providers/azurerm/
+[26]: media/features/azure-drive.png
+[27]: media/features/exchangeonline.png
+[28]: medilets.png
+[29]: persisting-shell-storage.md
+[30]: quickstart-powershell.md
+[31]: quickstart.md
cloud-shell Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/limitations.md
Title: Azure Cloud Shell limitations | Microsoft Docs+ description: Overview of limitations of Azure Cloud Shell---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 02/15/2018-++
+tags: azure-resource-manager
+ Title: Azure Cloud Shell limitations
- # Limitations of Azure Cloud Shell
Azure Cloud Shell has the following known limitations:
## General limitations

### System state and persistence
-The machine that provides your Cloud Shell session is temporary, and it is recycled after your session is inactive for 20 minutes. Cloud Shell requires an Azure file share to be mounted. As a result, your subscription must be able to set up storage resources to access Cloud Shell. Other considerations include:
-
-* With mounted storage, only modifications within the `$Home` directory are persisted.
-* Azure file shares can be mounted only from within your [assigned region](persisting-shell-storage.md#mount-a-new-clouddrive).
- * In Bash, run `env` to find your region set as `ACC_LOCATION`.
+<!--
+TODO:
+- verify the regions
+-->
+The machine that provides your Cloud Shell session is temporary, and it's recycled after your
+session is inactive for 20 minutes. Cloud Shell requires an Azure file share to be mounted. As a
+result, your subscription must be able to set up storage resources to access Cloud Shell. Other
+considerations include:
+
+- With mounted storage, only modifications within the `$HOME` directory are persisted.
+- Azure file shares can be mounted only from within your [assigned region][02].
+ - In Bash, run `env` to find your region set as `ACC_LOCATION`.
### Browser support
-Cloud Shell supports the latest versions of Microsoft Edge, Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, and Apple Safari. Safari in private mode is not supported.
+<!--
+TODO:
+- Do we still support Microsoft Internet Explorer?
+-->
+Cloud Shell supports the latest versions of Microsoft Edge, Microsoft Internet Explorer, Google
+Chrome, Mozilla Firefox, and Apple Safari. Safari in private mode isn't supported.
### Copy and paste
+- Windows: <kbd>Ctrl</kbd>-<kbd>C</kbd> to copy is supported but use
+ <kbd>Shift</kbd>-<kbd>Insert</kbd> to paste.
  - Firefox/IE may not support clipboard permissions properly.
+- macOS: <kbd>Cmd</kbd>-<kbd>C</kbd> to copy and <kbd>Cmd</kbd>-<kbd>V</kbd> to paste.
-### For a given user, only one shell can be active
+### Only one shell can be active for a given user
-Users can only launch one type of shell at a time, either **Bash** or **PowerShell**. However, you may have multiple instances of Bash or PowerShell running at one time. Swapping between Bash or PowerShell by using the menu causes Cloud Shell to restart, which terminates existing sessions. Alternatively, you can run bash inside PowerShell by typing `bash`, and you can run PowerShell inside bash by typing `pwsh`.
+Users can only launch one Cloud Shell session at a time. However, you may have multiple instances of
+Bash or PowerShell running within that session. Switching between Bash and PowerShell using the menu
+restarts the Cloud Shell session and terminates the existing session. To avoid losing your current
+session, you can run `bash` inside PowerShell and `pwsh` inside Bash.
### Usage limits
-Cloud Shell is intended for interactive use cases. As a result, any long-running non-interactive sessions are ended without warning.
+Cloud Shell is intended for interactive use cases. As a result, any long-running non-interactive
+sessions are ended without warning.
## Bash limitations

### User permissions
-Permissions are set as regular users without sudo access. Any installation outside your `$Home` directory is not persisted.
-
-### Editing .bashrc or $PROFILE
-
-Take caution when editing .bashrc or PowerShell's $PROFILE file, doing so can cause unexpected errors in Cloud Shell.
+Permissions are set as regular users without sudo access. Any installation outside your `$HOME`
+directory isn't persisted.
## PowerShell limitations
+<!--
+TODO:
+- outdated info about AzureAD and SQL
+- Not running on Windows so the GUI comment not valid
+-->
### `AzureAD` module name
-The `AzureAD` module name is currently `AzureAD.Standard.Preview`, the module provides the same functionality.
+The `AzureAD` module name is currently `AzureAD.Standard.Preview`; the module provides the same
+functionality.
### `SqlServer` module functionality
-The `SqlServer` module included in Cloud Shell has only prerelease support for PowerShell Core. In particular, `Invoke-SqlCmd` is not available yet.
+The `SqlServer` module included in Cloud Shell has only prerelease support for PowerShell Core. In
+particular, `Invoke-SqlCmd` isn't available yet.
+
+### Default file location when created from Azure drive
-### Default file location when created from Azure drive:
+You can't create files under the `Azure:` drive. When users create new files using other tools, such
+as vim or nano, the files are saved to `$HOME` by default.
-Using PowerShell cmdlets, users can not create files under the Azure: drive. When users create new files using other tools, such as vim or nano, the files are saved to the `$HOME` by default.
+### GUI applications aren't supported
-### GUI applications are not supported
+If you run a command that would create a dialog box, you see an error message such
+as:
+
+> Unable to load DLL 'IEFRAME.dll': The specified module couldn't be found.
-If the user runs a command that would create a Windows dialog box, one sees an error message such as: `Unable to load DLL 'IEFRAME.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)`.
### Large gap after displaying progress bar
-If the user performs an action that displays a progress bar, such as a tab completing while in the `Azure:` drive, then it is possible that the cursor is not set properly and a gap appears where the progress bar was previously.
+When the user performs an action that displays a progress bar, such as a tab completing while in the
+`Azure:` drive, it's possible that the cursor isn't set properly and a gap appears where the
+progress bar was previously.
## Next steps
-[Troubleshooting Cloud Shell](troubleshooting.md) <br>
-[Quickstart for Bash](quickstart.md) <br>
-[Quickstart for PowerShell](quickstart-powershell.md)
+- [Troubleshooting Cloud Shell][05]
+- [Quickstart for Bash][04]
+- [Quickstart for PowerShell][03]
+
+<!-- link references -->
+[02]: persisting-shell-storage.md#mount-a-new-clouddrive
+[03]: quickstart-powershell.md
+[04]: quickstart.md
+[05]: troubleshooting.md
cloud-shell Msi Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/msi-authorization.md
Title: Acquiring a user token in Azure Cloud Shell
-description: How to acquire a token for the authenticated user in Azure Cloud Shell
---
-tags: azure-resource-manager
+
+description: How to acquire a token for the authenticated user in Azure Cloud Shell
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 09/29/2021++
+tags: azure-resource-manager
+ Title: Acquiring a user token in Azure Cloud Shell
- # Acquire a token in Azure Cloud Shell
-Azure Cloud Shell provides an endpoint that will automatically authenticate the user logged into the Azure portal. Use this endpoint to acquire access tokens to interact with Azure services.
+<!--
+TODO:
+- MSI is never mentioned in this article - what is it?
+- Need powershell example - there are examples in other articles - be consistent
+-->
+Azure Cloud Shell provides an endpoint that automatically authenticates the user logged into the
+Azure portal. Use this endpoint to acquire access tokens to interact with Azure services.
## Authenticating in the Cloud Shell
-The Azure Cloud Shell has its own endpoint that interacts with your browser to automatically log you in. When this endpoint receives a request, it sends the request back to your browser, which forwards it to the parent Portal frame. The Portal window makes a request to Azure Active Directory, and the resulting token is returned.
-If you want to authenticate with different credentials, you can do so using `az login` or `Connect-AzAccount`
+The Azure Cloud Shell has its own endpoint that interacts with your browser to automatically log you
+in. When this endpoint receives a request, it sends the request back to your browser, which forwards
+it to the parent Portal frame. The Portal window makes a request to Azure Active Directory, and the
+resulting token is returned.
+
+If you want to authenticate with different credentials, you can do so using `az login` or
+`Connect-AzAccount`.
## Acquire and use access token in Cloud Shell

### Acquire token
-Execute the following commands to set your user access token as an environment variable, `access_token`.
-```
+Execute the following commands to set your user access token as an environment variable,
+`access_token`.
+
+```bash
response=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s)
access_token=$(echo $response | python -c 'import sys, json; print (json.load(sys.stdin)["access_token"])')
echo The access token is $access_token
```
### Use token
-Execute the following command to get a list of all Virtual Machines in your account, using the token you acquired in the previous step.
+Execute the following command to get a list of all Virtual Machines in your account, using the token
+you acquired in the previous step.
-```
+```bash
curl https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachines?api-version=2021-07-01 -H "Authorization: Bearer $access_token" -H "x-ms-version: 2019-02-02"
```

## Handling token expiration
-The local authentication endpoint caches tokens. You can call it as often as you like, and an authentication call to Azure Active Directory will only happen if there's no token stored in the cache, or the token is expired.
+The local authentication endpoint caches tokens. You can call it as often as you like. Cloud Shell
+only calls Azure Active Directory when there's no token stored in the cache or the token
+has expired.
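If you want to see when the cached token expires, you can read the expiration from the same response. This is a sketch reusing the `$response` variable from the Acquire token step, and it assumes the response includes an `expires_on` field containing epoch seconds, as Azure managed-identity-style token endpoints typically return:

```bash
# Extract the expiration timestamp and print it in human-readable form (GNU date).
expires_on=$(echo $response | python -c 'import sys, json; print (json.load(sys.stdin)["expires_on"])')
echo "Token expires at: $(date -d @$expires_on)"
```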
## Limitations

-- There's an allowlist of resources that Cloud Shell tokens can be provided for. If you run a command and receive a message similar to `"error":{"code":"AudienceNotSupported","message":"Audience https://newservice.azure.com/ is not a supported MSI token audience...."}`, you've come across this limitation. You can file an issue on [GitHub](https://github.com/Azure/CloudShell/issues) to request that this service is added to the allowlist.
-- If you log in explicitly using the `az login` command, any Conditional Access rules your company may have in place will be evaluated based on the Cloud Shell container rather than the machine where your browser runs. The Cloud Shell container doesn't count as a managed device for these policies so rights may be limited by the policy.
-- Azure Managed Identities aren't available in the Azure Cloud Shell. [Read more about Azure Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).
+
+- There's an allowlist of resources that Cloud Shell tokens can be provided for. When you try to use
+  a token with a service that isn't listed, you may see the following error message:
+
+ ```
+ "error":{"code":"AudienceNotSupported","message":"Audience https://newservice.azure.com/
+ isn't a supported MSI token audience...."}
+ ```
+
+  You can open an issue on [GitHub][02] to request that the service be added to the allowlist.
+
+- If you sign in explicitly using the `az login` command, any Conditional Access rules your company
+ may have in place are evaluated based on the Cloud Shell container rather than the machine where
+ your browser runs. The Cloud Shell container doesn't count as a managed device for these policies
+ so rights may be limited by the policy.
+
+- Azure Managed Identities aren't available in the Azure Cloud Shell. Read more about
+ [Azure Managed Identities][01].
+
+<!-- link references -->
+[01]: ../active-directory/managed-identities-azure-resources/overview.md
+[02]: https://github.com/Azure/CloudShell/issues
cloud-shell Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/overview.md
Title: Azure Cloud Shell overview | Microsoft Docs+ description: Overview of the Azure Cloud Shell.---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 06/4/2021-+
+tags: azure-resource-manager
+ Title: Azure Cloud Shell overview
- # Overview of Azure Cloud Shell
-Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work, either Bash or PowerShell.
+Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure
+resources. It provides the flexibility of choosing the shell experience that best suits the way you
+work, either Bash or PowerShell.
-You can access the Cloud Shell in three ways:
+You can access Cloud Shell in three ways:
-- **Direct link**: Open a browser to [https://shell.azure.com](https://shell.azure.com).
+- **Direct link**: Open a browser to [https://shell.azure.com][11].
-- **Azure portal**: Select the Cloud Shell icon on the [Azure portal](https://portal.azure.com):
+- **Azure portal**: Select the Cloud Shell icon on the [Azure portal][10]:
- ![Icon to launch the Cloud Shell from the Azure portal](media/overview/portal-launch-icon.png)
+ ![Icon to launch Cloud Shell from the Azure portal][14]
-- **Code snippets**: In Microsoft [technical documentation](/) and [training resources](/training), select the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:
+- **Code samples**: In Microsoft [technical documentation][02] and [training resources][05], select
+ the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:
```azurecli-interactive az account show
You can access the Cloud Shell in three ways:
Get-AzSubscription ```
- The **Try It** button opens the Cloud Shell directly alongside the documentation using Bash (for Azure CLI snippets) or PowerShell (for Azure PowerShell snippets).
+ The **Try It** button opens Cloud Shell directly alongside the documentation using Bash (for
+ Azure CLI snippets) or PowerShell (for Azure PowerShell snippets).
- To run the command, use **Copy** in the code snippet, use **Ctrl**+**Shift**+**V** (Windows/Linux) or **Cmd**+**Shift**+**V** (macOS) to paste the command, and then press **Enter**.
+ To run the command, use **Copy** in the code snippet, use
+ <kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>V</kbd> (Windows/Linux) or
+ <kbd>Cmd</kbd>+<kbd>Shift</kbd>+<kbd>V</kbd> (macOS) to paste the command, and then press
+ <kbd>Enter</kbd>.
## Features

### Browser-based shell experience
-Cloud Shell enables access to a browser-based command-line experience built with Azure management tasks in mind. Leverage Cloud Shell to work untethered from a local machine in a way only the cloud can provide.
+Cloud Shell enables access to a browser-based command-line experience built with Azure management
+tasks in mind. Use Cloud Shell to work untethered from a local machine in a way only the cloud
+can provide.
### Choice of preferred shell experience
Users can choose between Bash or PowerShell.
1. Select **Cloud Shell**.
- ![Cloud Shell icon](media/overview/overview-cloudshell-icon.png)
+ ![Cloud Shell icon][13]
-2. Select **Bash** or **PowerShell**.
+1. Select **Bash** or **PowerShell**.
- ![Choose either Bash or PowerShell](media/overview/overview-choices.png)
+ ![Choose either Bash or PowerShell][12]
- After first launch, you can use the shell type drop-down control to switch between Bash and PowerShell:
+ After first launch, you can use the shell type drop-down control to switch between Bash and
+ PowerShell:
- ![Drop-down control to select Bash or PowerShell](media/overview/select-shell-drop-down.png)
+ ![Drop-down control to select Bash or PowerShell][15]
### Authenticated and configured Azure workstation
-Cloud Shell is managed by Microsoft so it comes with popular command-line tools and language support. Cloud Shell also securely authenticates automatically for instant access to your resources through the Azure CLI or Azure PowerShell cmdlets.
+Cloud Shell is managed by Microsoft so it comes with popular command-line tools and language
+support. Cloud Shell also securely authenticates automatically for instant access to your resources
+through the Azure CLI or Azure PowerShell cmdlets.
-View the full [list of tools installed in Cloud Shell.](features.md#tools)
+View the full [list of tools installed in Cloud Shell.][07]
### Integrated Cloud Shell editor
-Cloud Shell offers an integrated graphical text editor based on the open-source Monaco Editor. Simply create and edit configuration files by running `code .` for seamless deployment through Azure CLI or Azure PowerShell.
+Cloud Shell offers an integrated graphical text editor based on the open source Monaco Editor.
+Create and edit configuration files by running `code .` for seamless deployment through Azure CLI or
+Azure PowerShell.
-[Learn more about the Cloud Shell editor](using-cloud-shell-editor.md).
+[Learn more about the Cloud Shell editor][20].
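As a small illustration (the script name below is hypothetical, not part of the original article), you might edit a file in the integrated editor and then run it in the same session:

```powershell
# Hypothetical script name; open it in the integrated editor, then run it
Set-Location $HOME/clouddrive
code ./new-storage.ps1    # Save with Ctrl+S, close the editor with Ctrl+Q
./new-storage.ps1
```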
### Multiple access points

Cloud Shell is a flexible tool that can be used from:
-* [portal.azure.com](https://portal.azure.com)
-* [shell.azure.com](https://shell.azure.com)
-* [Azure CLI documentation](/cli/azure)
-* [Azure PowerShell documentation](/powershell/azure/)
-* [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/)
-* [Visual Studio Code Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account)
+- [portal.azure.com][10]
+- [shell.azure.com][11]
+- [Azure CLI documentation][03]
+- [Azure PowerShell documentation][04]
+- [Azure mobile app][08]
+- [Visual Studio Code Azure Account extension][09]
### Connect your Microsoft Azure Files storage
-Cloud Shell machines are temporary, but your files are persisted in two ways: through a disk image, and through a mounted file share named `clouddrive`. On first launch, Cloud Shell prompts to create a resource group, storage account, and Azure Files share on your behalf. This is a one-time step and will be automatically attached for all sessions. A single file share can be mapped and will be used by both Bash and PowerShell in Cloud Shell.
+Cloud Shell machines are temporary, but your files are persisted in two ways: through a disk image,
+and through a mounted file share named `clouddrive`. On first launch, Cloud Shell prompts to create
+a resource group, storage account, and Azure Files share on your behalf. This is a one-time step and
+the resources created are automatically attached for all future sessions. A single file share can be
+mapped and is used by both Bash and PowerShell in Cloud Shell.
-Read more to learn how to mount a [new or existing storage account](persisting-shell-storage.md) or to learn about the [persistence mechanisms used in Cloud Shell](persisting-shell-storage.md#how-cloud-shell-storage-works).
+Read more to learn how to mount a [new or existing storage account][16] or to learn about the
+[persistence mechanisms used in Cloud Shell][17].
> [!NOTE]
-> Azure storage firewall is not supported for cloud shell storage accounts.
+> Azure storage firewall isn't supported for cloud shell storage accounts.
## Concepts
-* Cloud Shell runs on a temporary host provided on a per-session, per-user basis
-* Cloud Shell times out after 20 minutes without interactive activity
-* Cloud Shell requires an Azure file share to be mounted
-* Cloud Shell uses the same Azure file share for both Bash and PowerShell
-* Cloud Shell is assigned one machine per user account
-* Cloud Shell persists $HOME using a 5-GB image held in your file share
-* Permissions are set as a regular Linux user in Bash
+- Cloud Shell runs on a temporary host provided on a per-session, per-user basis
+- Cloud Shell times out after 20 minutes without interactive activity
+- Cloud Shell requires an Azure file share to be mounted
+- Cloud Shell uses the same Azure file share for both Bash and PowerShell
+- Cloud Shell is assigned one machine per user account
+- Cloud Shell persists $HOME using a 5-GB image held in your file share
+- Permissions are set as a regular Linux user in Bash
-Learn more about features in [Bash in Cloud Shell](features.md) and [PowerShell in Cloud Shell](./features.md).
+Learn more about features in [Bash in Cloud Shell][06] and [PowerShell in Cloud Shell][01].
## Compliance+ ### Encryption at rest
-All Cloud Shell infrastructure is compliant with double encryption at rest by default. No action is required by users.
+
+All Cloud Shell infrastructure is compliant with double encryption at rest by default. No action is
+required by users.
## Pricing
-The machine hosting Cloud Shell is free, with a pre-requisite of a mounted Azure Files share. Regular storage costs apply.
+The machine hosting Cloud Shell is free, with a prerequisite of a mounted Azure Files share.
+Regular storage costs apply.
## Next steps
-[Bash in Cloud Shell quickstart](quickstart.md) <br>
-[PowerShell in Cloud Shell quickstart](quickstart-powershell.md)
+- [Bash in Cloud Shell quickstart][19]
+- [PowerShell in Cloud Shell quickstart][18]
+
+<!-- link references -->
+[01]: ./features.md
+[02]: /samples/browse
+[03]: /cli/azure
+[04]: /powershell/azure
+[05]: /training
+[06]: features.md
+[07]: features.md#pre-installed-tools
+[08]: https://azure.microsoft.com/features/azure-portal/mobile-app/
+[09]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account
+[10]: https://portal.azure.com
+[11]: https://shell.azure.com
+[12]: media/overview/overview-choices.png
+[13]: media/overview/overview-cloudshell-icon.png
+[14]: media/overview/portal-launch-icon.png
+[15]: media/overview/select-shell-drop-down.png
+[16]: persisting-shell-storage.md
+[17]: persisting-shell-storage.md#how-cloud-shell-storage-works
+[18]: quickstart-powershell.md
+[19]: quickstart.md
+[20]: using-cloud-shell-editor.md
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
Title: Persist files in Azure Cloud Shell | Microsoft Docs+ description: Walkthrough of how Azure Cloud Shell persists files.---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 02/24/2020-++
+tags: azure-resource-manager
+ Title: Persist files in Azure Cloud Shell
- # Persist files in Azure Cloud Shell
-Cloud Shell utilizes Azure Files to persist files across sessions. On initial start, Cloud Shell prompts you to associate a new or existing file share to persist files across sessions.
-> [!NOTE]
-> Bash and PowerShell share the same file share. Only one file share can be associated with automatic mounting in Cloud Shell.
+Cloud Shell uses Azure Files to persist files across sessions. On initial start, Cloud Shell prompts
+you to associate a new or existing fileshare to persist files across sessions.
> [!NOTE]
-> Azure storage firewall is not supported for cloud shell storage accounts.
+> Bash and PowerShell share the same fileshare. Only one fileshare can be associated with
+> automatic mounting in Cloud Shell.
+>
+> Azure storage firewall isn't supported for cloud shell storage accounts.
## Create new storage
-When you use basic settings and select only a subscription, Cloud Shell creates three resources on your behalf in the supported region that's nearest to you:
-* Resource group: `cloud-shell-storage-<region>`
-* Storage account: `cs<uniqueGuid>`
-* File share: `cs-<user>-<domain>-com-<uniqueGuid>`
+When you use basic settings and select only a subscription, Cloud Shell creates three resources on
+your behalf in the supported region that's nearest to you:
+
+- Resource group: `cloud-shell-storage-<region>`
+- Storage account: `cs<uniqueGuid>`
+- Fileshare: `cs-<user>-<domain>-com-<uniqueGuid>`
-![The Subscription setting](media/persisting-shell-storage/basic-storage.png)
+![Screenshot of choosing the subscription for your storage account][09]
-The file share mounts as `clouddrive` in your `$Home` directory. This is a one-time action, and the file share mounts automatically in subsequent sessions.
+The fileshare mounts as `clouddrive` in your `$HOME` directory. This is a one-time action, and the
+fileshare mounts automatically in subsequent sessions.
-The file share also contains a 5-GB image that is created for you which automatically persists data in your `$Home` directory. This applies for both Bash and PowerShell.
+The fileshare also contains a 5-GB image that automatically persists data in your `$HOME` directory.
+This fileshare is used for both Bash and PowerShell.
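If you'd rather create the backing resources yourself and attach them later, a minimal sketch with Azure PowerShell might look like the following. The names are hypothetical examples that mirror the patterns above:

```powershell
# Hypothetical names; choose a supported Cloud Shell storage region
New-AzResourceGroup -Name cloud-shell-storage-westus -Location westus
New-AzStorageAccount -ResourceGroupName cloud-shell-storage-westus -Name csmyshell123 `
    -Location westus -SkuName Standard_LRS
New-AzRmStorageShare -ResourceGroupName cloud-shell-storage-westus `
    -StorageAccountName csmyshell123 -Name my-cloud-shell-share -QuotaGiB 6
```

You can then select these resources using the advanced options described in the next section.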
## Use existing resources
-By using the advanced option, you can associate existing resources. When selecting a Cloud Shell region you must select a backing storage account co-located in the same region. For example, if your assigned region is West US then you must associate a file share that resides within West US as well.
+Using the advanced option, you can associate existing resources. When selecting a Cloud Shell region,
+you must select a backing storage account co-located in the same region. For example, if your
+assigned region is West US then you must associate a fileshare that resides within West US as well.
-When the storage setup prompt appears, select **Show advanced settings** to view additional options. The populated storage options filter for locally redundant storage (LRS), geo-redundant storage (GRS), and zone-redundant storage (ZRS) accounts.
+When the storage setup prompt appears, select **Show advanced settings** to view more options. The
+populated storage options filter for locally redundant storage (LRS), geo-redundant storage (GRS),
+and zone-redundant storage (ZRS) accounts.
> [!NOTE]
-> Using GRS or ZRS storage accounts are recommended for additional resiliency for your backing file share. Which type of redundancy depends on your goals and price preference. [Learn more about replication options for Azure Storage accounts](../storage/common/storage-redundancy.md).
+> GRS or ZRS storage accounts are recommended for additional resiliency for your backing fileshare.
+> The type of redundancy you choose depends on your goals and price preference.
+> [Learn more about replication options for Azure Storage accounts][04].
-![The Resource group setting](media/persisting-shell-storage/advanced-storage.png)
+![Screenshot of configuring your storage account][08]
## Securing storage access
-For security, each user should provision their own storage account. For Azure role-based access control (Azure RBAC), users must have contributor access or above at the storage account level.
-Cloud Shell uses an Azure File Share in a storage account, inside a specified subscription. Due to inherited permissions, users with sufficient access rights to the subscription will be able to access all the storage accounts, and file shares contained in the subscription.
+For security, each user should create their own storage account. For Azure role-based access control
+(Azure RBAC), users must have contributor access or above at the storage account level.
-Users should lock down access to their files by setting the permissions at the storage account or the subscription level.
+Cloud Shell uses an Azure fileshare in a storage account, inside a specified subscription. Due to
+inherited permissions, users with sufficient access rights to the subscription can access all the
+storage accounts, and file shares contained in the subscription.
-The Cloud Shell storage account will contain files created by the Cloud Shell user in their home directory, which may include sensitive information including access tokens or credentials.
+Users should lock down access to their files by setting the permissions at the storage account or
+the subscription level.
+
+The Cloud Shell storage account contains files created by the Cloud Shell user in their home
+directory, which may include sensitive information including access tokens or credentials.
## Supported storage regions
-To find your current region you may run `env` in Bash and locate the variable `ACC_LOCATION`, or from PowerShell run `$env:ACC_LOCATION`. File shares receive a 5-GB image created for you to persist your `$Home` directory.
+
+To find your current region you may run `env` in Bash and locate the variable `ACC_LOCATION`, or
+from PowerShell run `$env:ACC_LOCATION`. File shares receive a 5-GB image created for you to persist
+your `$HOME` directory.
Cloud Shell machines exist in the following regions:
-|Area|Region|
-|||
-|Americas|East US, South Central US, West US|
-|Europe|North Europe, West Europe|
-|Asia Pacific|India Central, Southeast Asia|
+| Area | Region |
+| | - |
+| Americas | East US, South Central US, West US |
+| Europe | North Europe, West Europe |
+| Asia Pacific | India Central, Southeast Asia |
-Customers should choose a primary region, unless they have a requirement that their data at rest be stored in a particular region. If they have such a requirement, a secondary storage region should be used.
+Customers should choose a primary region, unless they have a requirement that their data at rest be
+stored in a particular region. If they have such a requirement, a secondary storage region should be
+used.
### Secondary storage regions
-If a secondary storage region is used, the associated Azure storage account resides in a different region as the Cloud Shell machine that you're mounting them to. For example, Jane can set her storage account to be located in Canada East, a secondary region, but the machine she is mounted to is still located in a primary region. Her data at rest is located in Canada, but it is processed in the United States.
+
+If a secondary storage region is used, the associated Azure storage account resides in a different
+region from the Cloud Shell machine that it's mounted to. For example, you can set your
+storage account to be located in Canada East, a secondary region, but your Cloud Shell machine is
+still located in a primary region. Your data at rest is located in Canada, but it's processed in the
+United States.
> [!NOTE] > If a secondary region is used, file access and startup time for Cloud Shell may be slower.
-A user can run `(Get-CloudDrive | Get-AzStorageAccount).Location` in PowerShell to see the location of their File Share.
+A user can run `(Get-CloudDrive | Get-AzStorageAccount).Location` in PowerShell to see the location
+of their fileshare.
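For example, this short PowerShell session (output values vary by account) checks both the Cloud Shell region and the storage region:

```powershell
# Region assigned to the Cloud Shell machine
$env:ACC_LOCATION

# Region of the storage account backing the mounted fileshare
(Get-CloudDrive | Get-AzStorageAccount).Location
```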
## Restrict resource creation with an Azure resource policy
-Storage accounts that you create in Cloud Shell are tagged with `ms-resource-usage:azure-cloud-shell`. If you want to disallow users from creating storage accounts in Cloud Shell, create an [Azure resource policy for tags](../governance/policy/samples/index.md) that are triggered by this specific tag.
-## How Cloud Shell storage works
-Cloud Shell persists files through both of the following methods:
-* Creating a disk image of your `$Home` directory to persist all contents within the directory. The disk image is saved in your specified file share as `acc_<User>.img` at `fileshare.storage.windows.net/fileshare/.cloudconsole/acc_<User>.img`, and it automatically syncs changes.
-* Mounting your specified file share as `clouddrive` in your `$Home` directory for direct file-share interaction. `/Home/<User>/clouddrive` is mapped to `fileshare.storage.windows.net/fileshare`.
-
+Storage accounts that you create in Cloud Shell are tagged with
+`ms-resource-usage:azure-cloud-shell`. If you want to disallow users from creating storage accounts
+in Cloud Shell, create an [Azure resource policy for tags][03] that is triggered by this specific
+tag.
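As a sketch of one way to build such a policy with Azure PowerShell (the definition name and scope are hypothetical, and you should validate the rule against your own requirements):

```powershell
# Hypothetical policy that denies resources carrying the Cloud Shell tag
$rule = @'
{
  "if": {
    "field": "tags['ms-resource-usage']",
    "equals": "azure-cloud-shell"
  },
  "then": { "effect": "deny" }
}
'@
$definition = New-AzPolicyDefinition -Name 'deny-cloud-shell-storage' -Policy $rule
New-AzPolicyAssignment -Name 'deny-cloud-shell-storage' -PolicyDefinition $definition `
    -Scope '/subscriptions/<subscription-id>'
```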
+
+## How Cloud Shell storage works
+
+Cloud Shell persists files through both of the following methods:
+
+- Creating a disk image of your `$HOME` directory to persist all contents within the directory. The
+ disk image is saved in your specified fileshare as `acc_<User>.img` at
+ `fileshare.storage.windows.net/fileshare/.cloudconsole/acc_<User>.img`, and it automatically syncs
+ changes.
+- Mounting your specified fileshare as `clouddrive` in your `$HOME` directory for direct file-share
+  interaction. `/home/<user>/clouddrive` is mapped to `fileshare.storage.windows.net/fileshare`.
+ > [!NOTE]
-> All files in your `$Home` directory, such as SSH keys, are persisted in your user disk image, which is stored in your mounted file share. Apply best practices when you persist information in your `$Home` directory and mounted file share.
+> All files in your `$HOME` directory, such as SSH keys, are persisted in your user disk image,
+> which is stored in your mounted fileshare. Apply best practices when you persist information in
+> your `$HOME` directory and mounted fileshare.
## clouddrive commands ### Use the `clouddrive` command
-In Cloud Shell, you can run a command called `clouddrive`, which enables you to manually update the file share that is mounted to Cloud Shell.
-![Running the "clouddrive" command](media/persisting-shell-storage/clouddrive-h.png)
+In Cloud Shell, you can run a command called `clouddrive`, which enables you to manually update the
+fileshare that's mounted to Cloud Shell.
+
+![Screenshot of running the clouddrive command in bash][10]
### List `clouddrive`
-To discover which file share is mounted as `clouddrive`, run the `df` command.
-The file path to clouddrive shows your storage account name and file share in the URL. For example, `//storageaccountname.file.core.windows.net/filesharename`
+To discover which fileshare is mounted as `clouddrive`, run the `df` command.
-```
+The file path to clouddrive shows your storage account name and fileshare in the URL. For example,
+`//storageaccountname.file.core.windows.net/filesharename`
+
+```bash
justin@Azure:~$ df
Filesystem                                           1K-blocks   Used     Available   Use%  Mounted on
overlay                                              29711408    5577940  24117084    19%   /
tmpfs                                                986716      0        986716      0%    /dev
/dev/sda1                                            29711408    5577940  24117084    19%   /etc/hosts
shm                                                  65536       0        65536       0%    /dev/shm
//mystoragename.file.core.windows.net/fileshareName  5368709120  64       5368709056  1%    /home/justin/clouddrive
```

### Mount a new clouddrive

#### Prerequisites for manual mounting
-You can update the file share that's associated with Cloud Shell by using the `clouddrive mount` command.
-If you mount an existing file share, the storage accounts must be located in your select Cloud Shell region. Retrieve the location by running `env` and checking the `ACC_LOCATION`.
+You can update the fileshare that's associated with Cloud Shell using the `clouddrive mount`
+command.
+
+If you mount an existing fileshare, the storage account must be located in your selected Cloud
+Shell region. Retrieve the location by running `env` and checking the `ACC_LOCATION` variable.
#### The `clouddrive mount` command

> [!NOTE]
-> If you're mounting a new file share, a new user image is created for your `$Home` directory. Your previous `$Home` image is kept in your previous file share.
+> If you're mounting a new fileshare, a new user image is created for your `$HOME` directory. Your
+> previous `$HOME` image is kept in your previous fileshare.
Run the `clouddrive mount` command with the following parameters:
-```
+```bash
clouddrive mount -s mySubscription -g myRG -n storageAccountName -f fileShareName
```

To view more details, run `clouddrive mount -h`, as shown here:
-![Running the `clouddrive mount`command](media/persisting-shell-storage/mount-h.png)
+![Screenshot of running the clouddrive mount command in bash][11]
### Unmount clouddrive
-You can unmount a file share that's mounted to Cloud Shell at any time. Since Cloud Shell requires a mounted file share to be used, you will be prompted to create and mount another file share on the next session.
+
+You can unmount a fileshare that's mounted to Cloud Shell at any time. Since Cloud Shell requires a
+mounted fileshare to be used, Cloud Shell prompts you to create and mount another fileshare on the
+next session.
1. Run `clouddrive unmount`.
-2. Acknowledge and confirm prompts.
+1. Acknowledge and confirm prompts.
-Your file share will continue to exist unless you delete it manually. Cloud Shell will no longer search for this file share on subsequent sessions. To view more details, run `clouddrive unmount -h`, as shown here:
+The unmounted fileshare continues to exist until you manually delete it. After unmounting, Cloud
+Shell no longer searches for this fileshare in subsequent sessions. To view more details, run
+`clouddrive unmount -h`, as shown here:
-![Running the `clouddrive unmount`command](media/persisting-shell-storage/unmount-h.png)
+![Screenshot of running the clouddrive unmount command in bash][12]
> [!WARNING]
-> Although running this command will not delete any resources, manually deleting a resource group, storage account, or file share that's mapped to Cloud Shell erases your `$Home` directory disk image and any files in your file share. This action cannot be undone.
+> Although running this command doesn't delete any resources, manually deleting a resource group,
+> storage account, or fileshare that's mapped to Cloud Shell erases your `$HOME` directory disk
+> image and any files in your fileshare. This action can't be undone.
+ ## PowerShell-specific commands ### List `clouddrive` Azure file shares
-The `Get-CloudDrive` cmdlet retrieves the Azure file share information currently mounted by the `clouddrive` in the Cloud Shell. <br>
-![Running Get-CloudDrive](media/persisting-shell-storage-powershell/Get-Clouddrive.png)
+
+The `Get-CloudDrive` cmdlet retrieves the Azure fileshare information currently mounted by the
+`clouddrive` in Cloud Shell.
+
+![Screenshot of running the Get-CloudDrive command in PowerShell][07]
### Unmount `clouddrive`
-You can unmount an Azure file share that's mounted to Cloud Shell at any time. If the Azure file share has been removed, you will be prompted to create and mount a new Azure file share at the next session.
-The `Dismount-CloudDrive` cmdlet unmounts an Azure file share from the current storage account. Dismounting the `clouddrive` terminates the current session. The user will be prompted to create and mount a new Azure file share during the next session.
-![Running Dismount-CloudDrive](media/persisting-shell-storage-powershell/Dismount-Clouddrive.png)
+You can unmount an Azure fileshare that's mounted to Cloud Shell at any time. The
+`Dismount-CloudDrive` cmdlet unmounts an Azure fileshare from the current storage account.
+Dismounting the `clouddrive` terminates the current session.
+
+If the Azure fileshare has been removed, you'll be prompted to create and mount a new Azure
+fileshare in the next session.
+
+![Screenshot of running the Dismount-CloudDrive command in PowerShell][06]
+## Transfer local files to Cloud Shell
-Note: If you need to define a function in a file and call it from the PowerShell cmdlets, then the dot operator must be included.
-For example: . .\MyFunctions.ps1
+The `clouddrive` directory syncs with the Azure portal storage blade. Use this blade to transfer
+local files to or from your file share. Updating files from within Cloud Shell is reflected in the
+file storage GUI when you refresh the blade.
+
+### Download files
+
+![Screenshot listing local files in the Azure portal][13]
+1. In the Azure portal, go to the mounted fileshare.
+1. Select the target file.
+1. Select the **Download** button.
+
+### Upload files
+
+![Screenshot showing how to upload files in the Azure portal][14]
+1. Go to your mounted fileshare.
+1. Select the **Upload** button.
+1. Select the file or files that you want to upload.
+1. Confirm the upload.
+
+You should now see the files that are accessible in your `clouddrive` directory in Cloud Shell.
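As a quick check (assuming the default mount path), you can list the share's contents from a Cloud Shell PowerShell session:

```powershell
# Uploaded files should appear in the mounted fileshare
Get-ChildItem -Path $HOME/clouddrive
```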
+
+> [!NOTE]
+> If you need to define a function in a file and call it from the PowerShell cmdlets, then the
+> dot operator must be included. For example: `. .\MyFunctions.ps1`
## Next steps
-[Cloud Shell Quickstart](quickstart.md) <br>
-[Learn about Microsoft Azure Files storage](../storage/files/storage-files-introduction.md) <br>
-[Learn about storage tags](../azure-resource-manager/management/tag-resources.md) <br>
+
+- [Cloud Shell Quickstart][15]
+- [Learn about Microsoft Azure Files storage][05]
+- [Learn about storage tags][02]
+
+<!-- link references -->
+[01]: includes/cloud-shell-persisting-shell-storage-endblock.md
+[02]: ../azure-resource-manager/management/tag-resources.md
+[03]: ../governance/policy/samples/index.md
+[04]: ../storage/common/storage-redundancy.md
+[05]: ../storage/files/storage-files-introduction.md
+[06]: media/persisting-shell-storage/dismount-clouddrive.png
+[07]: media/persisting-shell-storage/get-clouddrive.png
+[08]: media/persisting-shell-storage/advanced-storage.png
+[09]: media/persisting-shell-storage/basic-storage.png
+[10]: media/persisting-shell-storage/clouddrive-h.png
+[11]: media/persisting-shell-storage/mount-h.png
+[12]: media/persisting-shell-storage/unmount-h.png
+[13]: media/persisting-shell-storage/download.png
+[14]: media/persisting-shell-storage/upload.png
+[15]: quickstart.md
cloud-shell Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/pricing.md
Title: Azure Cloud Shell pricing | Microsoft Docs+ description: Overview of pricing of Azure Cloud Shell---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 09/25/2017-+
+tags: azure-resource-manager
+ Title: Azure Cloud Shell pricing
# Pricing

Bash in Cloud Shell and PowerShell in Cloud Shell are subject to the pricing information below.
-## Compute Cost
-Azure Cloud Shell runs on a machine provided for free by Azure, but requires an Azure file share to use.
+## Compute cost
+
+Azure Cloud Shell runs on a machine provided for free by Azure, but requires an Azure file share to
+use.
+
+## Storage cost
+
+Cloud Shell requires a new or existing Azure Files share to be mounted to persist files across
+sessions. Storage incurs regular costs.
-## Storage Cost
-Cloud Shell requires a new or existing Azure Files share to be mounted to persist files across sessions. Storage incurs regular costs.
+Check [here for details on Azure Files costs][01].
-Check [here for details on Azure Files costs](https://azure.microsoft.com/pricing/details/storage/files/).
+<!-- link references -->
+[01]: https://azure.microsoft.com/pricing/details/storage/files/
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md
Title: Cloud Shell in an Azure Virtual Network+ description: Deploy Cloud Shell into an Azure virtual network---
-tags: azure-resource-manager
-++
+ms.contributor: jahelmic
+ Last updated : 11/14/2022 - vm-linux Previously updated : 04/27/2022--++
+tags: azure-resource-manager
+ Title: Cloud Shell in an Azure virtual network
# Deploy Cloud Shell into an Azure virtual network
-A regular Cloud Shell session runs in a container in a Microsoft network separate from your resources. This means that commands running inside the container cannot access resources that can only be accessed from a specific virtual network. For example, you cannot use SSH to connect from Cloud Shell to a virtual machine that only has a private IP address, or use kubectl to connect to a Kubernetes cluster which has locked down access.
+A regular Cloud Shell session runs in a container in a Microsoft network separate from your
+resources. Commands running inside the container can't access resources that can only be accessed
+from a specific virtual network. For example, you can't use SSH to connect from Cloud Shell to a
+virtual machine that only has a private IP address, or use `kubectl` to connect to a Kubernetes
+cluster that has locked down access.
-This optional feature addresses these limitations and allows you to deploy Cloud Shell into an Azure virtual network that you control. From there, the container is able to interact with resources within the virtual network you select.
+This optional feature addresses these limitations and allows you to deploy Cloud Shell into an Azure
+virtual network that you control. From there, the container is able to interact with resources
+within the virtual network you select.
-Below you can see the resource architecture that will be deployed and used in this scenario.
+The following diagram shows the resource architecture that's deployed and used in this scenario.
-![Illustrates the Cloud Shell isolated VNET architecture.](media/private-vnet/data-diagram.png)
+![Illustrates the Cloud Shell isolated VNET architecture.][06]
-Before you can use Cloud Shell in your own Azure Virtual Network, you will need to create several resources to support this functionality. This article shows how to set up the required resources using an ARM template.
+Before you can use Cloud Shell in your own Azure Virtual Network, you need to create several
+resources. This article shows how to set up the required resources using an ARM template.
> [!NOTE]
-> These resources only need to be set up once for the virtual network. They can then be shared by all administrators with access to the virtual network.
+> These resources only need to be set up once for the virtual network. They can then be shared by
+> all administrators with access to the virtual network.
## Required network resources ### Virtual network+ A virtual network defines the address space in which one or more subnets are created.
-The desired virtual network to be used for Cloud Shell needs to be identified. This will usually be an existing virtual network that contains resources you would like to manage or a network that peers with networks that contain your resources.
+You need to identify the virtual network to be used for Cloud Shell. Usually, you want to use an
+existing virtual network that contains resources you want to manage or a network that peers with
+networks that contain your resources.
### Subnet
-Within the selected virtual network, a dedicated subnet must be used for Cloud Shell containers. This subnet is delegated to the Azure Container Instances (ACI) service. When a user requests a Cloud Shell container in a virtual network, Cloud Shell uses ACI to create a container that is in this delegated subnet. No other resources can be created in this subnet.
+
+Within the selected virtual network, a dedicated subnet must be used for Cloud Shell containers.
+This subnet is delegated to the Azure Container Instances (ACI) service. When a user requests a
+Cloud Shell container in a virtual network, Cloud Shell uses ACI to create a container that's in
+this delegated subnet. No other resources can be created in this subnet.
### Network profile
-A network profile is a network configuration template for Azure resources that specifies certain network properties for the resource.
+
+A network profile is a network configuration template for Azure resources that specifies certain
+network properties for the resource.
### Azure Relay
-An [Azure Relay](../azure-relay/relay-what-is-it.md) allows two endpoints that are not directly reachable to communicate. In this case, it is used to allow the administrator's browser to communicate with the container in the private network.
-The Azure Relay instance used for Cloud Shell can be configured to control which networks can access container resources:
-- Accessible from the public internet: In this configuration, Cloud Shell provides a way to reach otherwise internal resources from outside. -- Accessible from specified networks: In this configuration, administrators will have to access the Azure portal from a computer running in the appropriate network to be able to use Cloud Shell.
+An [Azure Relay][01] allows two endpoints that aren't directly reachable to communicate. In this
+case, it's used to allow the administrator's browser to communicate with the container in the
+private network.
+
+The Azure Relay instance used for Cloud Shell can be configured to control which networks can access
+container resources:
+
+- Accessible from the public internet: In this configuration, Cloud Shell provides a way to reach
+ the internal resources from outside.
+- Accessible from specified networks: In this configuration, administrators must access the Azure
+ portal from a computer running in the appropriate network to be able to use Cloud Shell.
## Storage requirements
-As in standard Cloud Shell, a storage account is required while using Cloud Shell in a virtual network. Each administrator needs a file share to store their files. The storage account needs to be accessible from the virtual network that is used by Cloud Shell.
+
+As in standard Cloud Shell, a storage account is required while using Cloud Shell in a virtual
+network. Each administrator needs a fileshare to store their files. The storage account needs to be
+accessible from the virtual network that's used by Cloud Shell.
> [!NOTE]
> Secondary storage regions are currently not supported in Cloud Shell VNET scenarios.

## Virtual network deployment limitations
-* Due to the additional networking resources involved, starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
-* All Cloud Shell primary regions apart from Central India are currently supported.
-
-* [Azure Relay](../azure-relay/relay-what-is-it.md) is not a free service, please view their [pricing](https://azure.microsoft.com/pricing/details/service-bus/). In the Cloud Shell scenario, one hybrid connection is used for each administrator while they are using Cloud Shell. The connection will automatically be shut down after the Cloud Shell session is complete.
+- Starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
+- All Cloud Shell primary regions, except Central India, are supported.
+- [Azure Relay][01] is a paid service. See the [pricing][04] guide. In the Cloud Shell scenario, one
+ hybrid connection is used for each administrator while they're using Cloud Shell. The connection
+ is automatically shut down after the Cloud Shell session is ended.
## Register the resource provider
-The Microsoft.ContainerInstances resource provider needs to be registered in the subscription that holds the virtual network you want to use. Select the appropriate subscription with `Set-AzContext -Subscription {subscriptionName}`, and then run:
+The Microsoft.ContainerInstances resource provider needs to be registered in the subscription that
+holds the virtual network you want to use. Select the appropriate subscription with
+`Set-AzContext -Subscription {subscriptionName}`, and then run:
```powershell PS> Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance | select ResourceTypes,RegistrationState
ResourceTypes RegistrationState
... ```
-If **RegistrationState** is `Registered`, no action is required. If it is `NotRegistered`, run `Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance`.
+If **RegistrationState** is `Registered`, no action is required. If it's `NotRegistered`, run
+`Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance`.
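A compact sketch of the register-and-verify sequence (output shape varies by Az module version):

```powershell
# Register the provider, then confirm it reports 'Registered'
Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance |
    Select-Object ResourceTypes, RegistrationState
```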
## Deploy network resources
-
### Create a resource group and virtual network

If you already have a desired VNET that you would like to connect to, skip this section.
-In the Azure portal, or using Azure CLI, Azure PowerShell, etc. create a resource group and a virtual network in the new resource group, **the resource group and virtual network need to be in the same region**.
+Using the Azure portal, Azure CLI, or Azure PowerShell, create a resource group and a virtual
+network in the new resource group. **The resource group and virtual network need to be in the same
+region.**
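With Azure PowerShell, that might look like the following sketch (the names and address space are hypothetical):

```powershell
# Hypothetical names; the resource group and virtual network share one region
New-AzResourceGroup -Name MyCloudShellRG -Location westus
New-AzVirtualNetwork -Name MyCloudShellVNet -ResourceGroupName MyCloudShellRG `
    -Location westus -AddressPrefix '10.0.0.0/16'
```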
### ARM templates
-Utilize the [Azure Quickstart Template](https://aka.ms/cloudshell/docs/vnet/template) for creating Cloud Shell resources in a virtual network, and the [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/) for creating necessary storage. Take note of your resource names, primarily your file share name.
+
+Use the [Azure Quickstart Template][03] for creating Cloud Shell resources in a virtual network,
+and the [Azure Quickstart Template][05] for creating necessary storage. Take note of your resource
+names, primarily your fileshare name.
### Open relay firewall
-Navigate to the relay created using the above template, select "Networking" in settings, allow access from your browser network to the relay. By default the relay is only accessible from the virtual network it has been created in.
+
+By default, the relay is only accessible from the virtual network where it was created. To open
+access, navigate to the relay created using the previous template, select **Networking** under
+settings, and allow access from your browser network to the relay.
### Configuring Cloud Shell to use a virtual network

> [!NOTE]
-> This step must be completed for each administrator will use Cloud Shell.
+> This step must be completed for each administrator that uses Cloud Shell.
-After deploying completing the above steps, navigate to Cloud Shell in the Azure portal or on https://shell.azure.com. One of these experiences must be used each time you want to connect to an isolated Cloud Shell experience.
+After deploying and completing the previous steps, open Cloud Shell in the Azure portal or at
+https://shell.azure.com. One of these two entry points must be used each time you want to connect
+to an isolated Cloud Shell experience.
> [!NOTE]
-> If Cloud Shell has been used in the past, the existing clouddrive must be unmounted. To do this run `clouddrive unmount` from an active Cloud Shell session, refresh your page.
+> If Cloud Shell has been used in the past, the existing clouddrive must be unmounted. To do this,
+> run `clouddrive unmount` from an active Cloud Shell session, then refresh your page.
-Connect to Cloud Shell, you will be prompted with the first run experience. Select your preferred shell experience, select "Show advanced settings" and select the "Show VNET isolation settings" box. Fill in the fields in the pop-up. Most fields will autofill to the available resources that can be associated with Cloud Shell in a virtual network. The File Share name will have to be filled in by the user.
+Connect to Cloud Shell. You'll be prompted with the first run experience. Select your preferred
+shell experience, select **Show advanced settings**, and select the **Show VNET isolation settings**
+box. Fill in the fields in the form. Most fields autofill with the available resources that
+can be associated with Cloud Shell in a virtual network. You must provide a name for the fileshare.
-
-![Illustrates the Cloud Shell isolated VNET first experience settings.](media/private-vnet/vnet-settings.png)
+![Illustrates the Cloud Shell isolated VNET first experience settings.][07]
## Next steps
-[Learn about Azure Virtual Networks](../virtual-network/virtual-networks-overview.md)
+
+[Learn about Azure Virtual Networks][02]
+
+<!-- link references -->
+[01]: ../azure-relay/relay-what-is-it.md
+[02]: ../virtual-network/virtual-networks-overview.md
+[03]: https://aka.ms/cloudshell/docs/vnet/template
+[04]: https://azure.microsoft.com/pricing/details/service-bus/
+[05]: https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/
+[06]: media/private-vnet/data-diagram.png
+[07]: media/private-vnet/vnet-settings.png
cloud-shell Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-powershell.md
Title: Azure Cloud Shell Quickstart - PowerShell+ description: Learn how to use the PowerShell in your browser with Azure Cloud Shell.--
-tags: azure-resource-manager
++
+ms.contributor: jahelmic
+ Last updated : 11/14/2022 - vm-linux Previously updated : 10/18/2018-+
+tags: azure-resource-manager
+ Title: Quickstart for PowerShell in Azure Cloud Shell
- # Quickstart for PowerShell in Azure Cloud Shell
-This document details how to use the PowerShell in Cloud Shell in the [Azure portal](https://portal.azure.com/).
+This document details how to use the PowerShell in Cloud Shell in the [Azure portal][06].
-> [!NOTE]
-> A [Bash in Azure Cloud Shell](quickstart.md) Quickstart is also available.
+The PowerShell experience in Azure Cloud Shell now runs [PowerShell 7.2][02] in a Linux environment.
+There are differences in the PowerShell experience in Cloud Shell compared to Windows PowerShell.
+
+The filesystem in Linux is case-sensitive. Windows considers `file.txt` and `FILE.txt` to be the
+same file. In Linux, they're considered to be different files. Proper casing must be used while
+tab-completing in the filesystem. PowerShell-specific experiences, such as tab-completing cmdlet
+names, parameters, and values, aren't case-sensitive.
+
+For a detailed list of differences, see [PowerShell differences on non-Windows platforms][01].
## Start Cloud Shell
-1. Click on **Cloud Shell** button from the top navigation bar of the Azure portal
+1. Select the **Cloud Shell** button from the top navigation bar of the Azure portal
- ![Screenshot showing how to start Azure Cloud Shell from the Azure portal.](media/quickstart-powershell/shell-icon.png)
+ ![Screenshot showing how to start Azure Cloud Shell from the Azure portal.][09]
-2. Select the PowerShell environment from the drop-down and you will be in Azure drive `(Azure:)`
+1. Select the PowerShell environment from the drop-down and you'll be in Azure drive `(Azure:)`
- ![Screenshot showing how to select the PowerShell environment for the Azure Cloud Shell.](media/quickstart-powershell/environment-ps.png)
+ ![Screenshot showing how to select the PowerShell environment for the Azure Cloud Shell.][08]
## Run PowerShell commands
MyResourceGroup MyVM2 eastus Standard_DS2_v2_Promo Windows S
### Interact with virtual machines
-You can find all your virtual machines under the current subscription via `VirtualMachines` directory.
+You can find all your virtual machines under the current subscription via the `VirtualMachines`
+directory.
```azurepowershell-interactive PS Azure:\MySubscriptionName\VirtualMachines> dir
TestVm10 MyResourceGroup2 eastus Standard_DS1_v2 Windows mytest
#### Invoke PowerShell script across remote VMs > [!WARNING]
- > Please refer to [Troubleshooting remote management of Azure VMs](troubleshooting.md#troubleshooting-remote-management-of-azure-vms).
+ > Please refer to [Troubleshooting remote management of Azure VMs][11].
- Assuming you have a VM, MyVM1, let's use `Invoke-AzVMCommand` to invoke a PowerShell script block on the remote machine.
+Assuming you have a VM, MyVM1, let's use `Invoke-AzVMCommand` to invoke a PowerShell script block on
+the remote machine.
- ```azurepowershell-interactive
- Enable-AzVMPSRemoting -Name MyVM1 -ResourceGroupname MyResourceGroup
- Invoke-AzVMCommand -Name MyVM1 -ResourceGroupName MyResourceGroup -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential)
- ```
+```azurepowershell-interactive
+Enable-AzVMPSRemoting -Name MyVM1 -ResourceGroupname MyResourceGroup
+Invoke-AzVMCommand -Name MyVM1 -ResourceGroupName MyResourceGroup -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential)
+```
- You can also navigate to the VirtualMachines directory first and run `Invoke-AzVMCommand` as follows.
+You can also navigate to the VirtualMachines directory first and run `Invoke-AzVMCommand` as follows.
- ```azurepowershell-interactive
- PS Azure:\> cd MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines
- PS Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines> Get-Item MyVM1 | Invoke-AzVMCommand -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential)
- ```
+```azurepowershell-interactive
+PS Azure:\> cd MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines
+PS Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines> Get-Item MyVM1 | Invoke-AzVMCommand -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential)
+```
- ```output
- # You will see output similar to the following:
+```output
+# You will see output similar to the following:
- PSComputerName : 65.52.28.207
- RunspaceId : 2c2b60da-f9b9-4f42-a282-93316cb06fe1
- WindowsBuildLabEx : 14393.1066.amd64fre.rs1_release_sec.170327-1835
- WindowsCurrentVersion : 6.3
- WindowsEditionId : ServerDatacenter
- WindowsInstallationType : Server
- WindowsInstallDateFromRegistry : 5/18/2017 11:26:08 PM
- WindowsProductId : 00376-40000-00000-AA947
- WindowsProductName : Windows Server 2016 Datacenter
- WindowsRegisteredOrganization :
- ...
- ```
+PSComputerName : 65.52.28.207
+RunspaceId : 2c2b60da-f9b9-4f42-a282-93316cb06fe1
+WindowsBuildLabEx : 14393.1066.amd64fre.rs1_release_sec.170327-1835
+WindowsCurrentVersion : 6.3
+WindowsEditionId : ServerDatacenter
+WindowsInstallationType : Server
+WindowsInstallDateFromRegistry : 5/18/2017 11:26:08 PM
+WindowsProductId : 00376-40000-00000-AA947
+WindowsProductName : Windows Server 2016 Datacenter
+WindowsRegisteredOrganization :
+...
+```
-#### Interactively log on to a remote VM
+#### Interactively sign in to a remote VM

You can use `Enter-AzVM` to interactively sign in to a VM running in Azure.
- ```azurepowershell-interactive
- PS Azure:\> Enter-AzVM -Name MyVM1 -ResourceGroupName MyResourceGroup -Credential (Get-Credential)
- ```
+```azurepowershell-interactive
+Enter-AzVM -Name MyVM1 -ResourceGroupName MyResourceGroup -Credential (Get-Credential)
+```
You can also navigate to the `VirtualMachines` directory first and run `Enter-AzVM` as follows:

```azurepowershell-interactive
-PS Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines> Get-Item MyVM1 | Enter-AzVM -Credential (Get-Credential)
+Get-Item MyVM1 | Enter-AzVM -Credential (Get-Credential)
```

### Discover WebApps
By entering the `WebApps` directory, you can easily navigate your web app resources.

```azurepowershell-interactive
-PS Azure:\MySubscriptionName> dir .\WebApps\
+dir .\WebApps\
```

```output
mywebapp3 Running MyResourceGroup3 {mywebapp3.azurewebsites.net... So
```

## SSH

To authenticate to servers or VMs using SSH, generate the public-private key pair in Cloud Shell and
-publish the public key to `authorized_keys` on the remote machine, such as `/home/user/.ssh/authorized_keys`.
+publish the public key to `authorized_keys` on the remote machine, such as
+`/home/user/.ssh/authorized_keys`.
> [!NOTE]
-> You can create SSH private-public keys using `ssh-keygen` and publish them to `$env:USERPROFILE\.ssh` in Cloud Shell.
+> You can create SSH private-public keys using `ssh-keygen` and publish them to
+> `$env:USERPROFILE\.ssh` in Cloud Shell.
### Using SSH
-Follow instructions [here](../virtual-machines/linux/quick-create-powershell.md) to create a new VM configuration using Azure PowerShell cmdlets.
-Before calling into `New-AzVM` to kick off the deployment, add SSH public key to the VM configuration.
-The newly created VM will contain the public key in the `~\.ssh\authorized_keys` location, thereby enabling credential-free SSH session to the VM.
+Follow instructions [here][03] to create a new VM configuration using Azure PowerShell cmdlets.
+Before calling into `New-AzVM` to kick off the deployment, add the SSH public key to the VM
+configuration. The newly created VM will contain the public key in the `~\.ssh\authorized_keys`
+location, thereby enabling a credential-free SSH session to the VM.
```azurepowershell-interactive
# Create VM config object - $vmConfig using instructions on linked page above

# Generate SSH keys in Cloud Shell
-ssh-keygen -t rsa -b 2048 -f $HOME\.ssh\id_rsa
+ssh-keygen -t rsa -b 2048 -f $HOME\.ssh\id_rsa
# Ensure VM config is updated with SSH keys
$sshPublicKey = Get-Content "$HOME\.ssh\id_rsa.pub"
ssh azureuser@MyVM.Domain.Com
```
Under `Azure` drive, type `Get-AzCommand` to get context-specific Azure commands.
-Alternatively, you can always use `Get-Command *az* -Module Az.*` to find out the available Azure commands.
+Alternatively, you can always use `Get-Command *az* -Module Az.*` to find out the available Azure
+commands.
## Install custom modules
-You can run `Install-Module` to install modules from the [PowerShell Gallery][gallery].
+You can run `Install-Module` to install modules from the [PowerShell Gallery][07].
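For example, to install a module for your own account (the module name is just an illustration):

```powershell
# Installs from the PowerShell Gallery into the current user's scope
Install-Module -Name PSScriptAnalyzer -Scope CurrentUser
```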
## Get-Help
Get-Help Get-AzVM
## Use Azure Files to store your data
-You can create a script, say `helloworld.ps1`, and save it to your `clouddrive` to use it across shell sessions.
+You can create a script, say `helloworld.ps1`, and save it to your `clouddrive` to use it across
+shell sessions.
```azurepowershell-interactive
cd $HOME\clouddrive
code .\helloworld.ps1
# After adding content in the editor, save the file and run the script
.\helloworld.ps1
```

```output
Hello World!
```
-Next time when you use PowerShell in Cloud Shell, the `helloworld.ps1` file will exist under the `$HOME\clouddrive` directory that mounts your Azure Files share.
+Next time when you use PowerShell in Cloud Shell, the `helloworld.ps1` file will exist under the
+`$HOME\clouddrive` directory that mounts your Azure Files share.
## Use custom profile
-You can customize your PowerShell environment, by creating PowerShell profile(s) - `profile.ps1` (or `Microsoft.PowerShell_profile.ps1`).
-Save it under `$profile.CurrentUserAllHosts` (or `$profile.CurrentUserCurrentHost`), so that it can be loaded in every PowerShell in Cloud Shell session.
+You can customize your PowerShell environment by creating PowerShell profiles - `profile.ps1` (or
+`Microsoft.PowerShell_profile.ps1`). Save it under `$profile.CurrentUserAllHosts` (or
+`$profile.CurrentUserCurrentHost`), so that it can be loaded in every PowerShell in Cloud Shell
+session.
-For how to create a profile, refer to [About Profiles][profile].
+For how to create a profile, refer to [About Profiles][04].
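As a simple illustration (both settings below are arbitrary examples, not required configuration), a profile might contain:

```powershell
# Example profile.ps1 contents, loaded at the start of each session
Set-Alias -Name ll -Value Get-ChildItem
function prompt { "PS [$env:USER] $(Get-Location)> " }
```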
## Use Git
-To clone a Git repo in the Cloud Shell, you need to create a [personal access token][githubtoken] and use it as the username. Once you have your token, clone the repository as follows:
+To clone a Git repo in Cloud Shell, you need to create a [personal access token][05] and use it as
+the username. Once you have your token, clone the repository as follows:
```azurepowershell-interactive
- git clone https://<your-access-token>@github.com/username/repo.git
+git clone https://<your-access-token>@github.com/username/repo.git
``` ## Exit the shell Type `exit` to terminate the session.
-[bashqs]: quickstart.md
-[gallery]: https://www.powershellgallery.com/
-[customex]: ../virtual-machines/extensions/custom-script-windows.md
-[profile]: /powershell/module/microsoft.powershell.core/about/about_profiles
-[azmount]: ../storage/files/storage-how-to-use-files-windows.md
-[githubtoken]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
+<!-- link references -->
+[01]: /powershell/scripting/whats-new/unix-support
+[02]: /powershell/scripting/whats-new/what-s-new-in-powershell-72
+[03]: ../virtual-machines/linux/quick-create-powershell.md
+[04]: /powershell/module/microsoft.powershell.core/about/about_profiles
+[05]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
+[06]: https://portal.azure.com/
+[07]: https://www.powershellgallery.com/
+[08]: media/quickstart-powershell/environment-ps.png
+[09]: media/quickstart-powershell/shell-icon.png
+[11]: troubleshooting.md#troubleshooting-remote-management-of-azure-vms
cloud-shell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart.md
Title: Azure Cloud Shell Quickstart - Bash+ description: Learn how to use the Bash command line in your browser with Azure Cloud Shell.--
-tags: azure-resource-manager
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 03/12/2018-+
+tags: azure-resource-manager
+ Title: Quickstart for Bash in Azure Cloud Shell
- # Quickstart for Bash in Azure Cloud Shell
-This document details how to use Bash in Azure Cloud Shell in the [Azure portal](https://portal.azure.com/).
+This document details how to use Bash in Azure Cloud Shell in the [Azure portal][03].
> [!NOTE]
-> A [PowerShell in Azure Cloud Shell](quickstart-powershell.md) Quickstart is also available.
+> A [PowerShell in Azure Cloud Shell][09] Quickstart is also available.
## Start Cloud Shell
-1. Launch **Cloud Shell** from the top navigation of the Azure portal. <br>
-![Screenshot showing how to start Azure Cloud Shell in the Azure portal.](media/quickstart/shell-icon.png)
-2. Select a subscription to create a storage account and Microsoft Azure Files share.
-3. Select "Create storage"
+1. Launch **Cloud Shell** from the top navigation of the Azure portal.
+
+ ![Screenshot showing how to start Azure Cloud Shell in the Azure portal.][05]
+
+1. Select a subscription to create a storage account and Microsoft Azure Files share.
+1. Select "Create storage"
> [!TIP]
> You are automatically authenticated for Azure CLI in every session.

### Select the Bash environment
-Check that the environment drop-down from the left-hand side of shell window says `Bash`. <br>
-![Screenshot showing how to select the Bash environment for the Azure Cloud Shell.](media/quickstart/env-selector.png)
+
+Check that the environment drop-down on the left-hand side of the shell window says `Bash`.
+
+![Screenshot showing how to select the Bash environment for the Azure Cloud Shell.][04]
### Set your subscription

1. List subscriptions you have access to.

   ```azurecli-interactive
   az account list
   ```
-2. Set your preferred subscription:
+1. Set your preferred subscription:
```azurecli-interactive az account set --subscription 'my-subscription-name' ``` > [!TIP]
-> Your subscription will be remembered for future sessions using `/home/<user>/.azure/azureProfile.json`.
+> Your subscription is remembered for future sessions using `/home/<user>/.azure/azureProfile.json`.
### Create a resource group

Create a new resource group in WestUS named "MyRG".

```azurecli-interactive
az group create --location westus --name MyRG
```

### Create a Linux VM
-Create an Ubuntu VM in your new resource group. The Azure CLI will create SSH keys and set up the VM with them. <br>
+
+Create an Ubuntu VM in your new resource group. The Azure CLI will create SSH keys and set up the VM
+with them.
```azurecli-interactive
az vm create -n myVM -g MyRG --image UbuntuLTS --generate-ssh-keys
```

> [!NOTE]
-> Using `--generate-ssh-keys` instructs Azure CLI to create and set up public and private keys in your VM and `$Home` directory. By default keys are placed in Cloud Shell at `/home/<user>/.ssh/id_rsa` and `/home/<user>/.ssh/id_rsa.pub`. Your `.ssh` folder is persisted in your attached file share's 5-GB image used to persist `$Home`.
+> Using `--generate-ssh-keys` instructs Azure CLI to create and set up public and private keys in
+> your VM and `$Home` directory. By default keys are placed in Cloud Shell at
+> `/home/<user>/.ssh/id_rsa` and `/home/<user>/.ssh/id_rsa.pub`. Your `.ssh` folder is persisted in
+> your attached file share's 5-GB image used to persist `$Home`.
Your username on this VM will be your username used in Cloud Shell (`$User@Azure:`).

### SSH into your Linux VM

1. Search for your VM name in the Azure portal search bar.
-2. Click "Connect" to get your VM name and public IP address. <br>
- ![Screenshot showing how to connect to a Linux V M using S S H.](medi-copy.png)
+1. Select **Connect** to get your VM name and public IP address.
-3. SSH into your VM with the `ssh` cmd.
- ```
+ ![Screenshot showing how to connect to a Linux VM using SSH.][06]
+
+1. SSH into your VM with the `ssh` cmd.
+
+ ```bash
   ssh username@ipaddress
   ```
-Upon establishing the SSH connection, you should see the Ubuntu welcome prompt. <br>
-![Screenshot showing the Ubuntu initialization and welcome prompt after you establish an S S H connection.](media/quickstart/ubuntu-welcome.png)
+Upon establishing the SSH connection, you should see the Ubuntu welcome prompt.
+
+![Screenshot showing the Ubuntu initialization and welcome prompt after you establish an SSH connection.][07]
+
+## Cleaning up
-## Cleaning up
1. Exit your ssh session.

   ```
   exit
   ```
-2. Delete your resource group and any resources within it.
+1. Delete your resource group and any resources within it.
   ```azurecli-interactive
   az group delete -n MyRG
   ```

## Next steps
-[Learn about persisting files for Bash in Cloud Shell](persisting-shell-storage.md) <br>
-[Learn about Azure CLI](/cli/azure/) <br>
-[Learn about Azure Files storage](../storage/files/storage-files-introduction.md) <br>
+
+- [Learn about persisting files for Bash in Cloud Shell][08]
+- [Learn about Azure CLI][02]
+- [Learn about Azure Files storage][01]
+
+<!-- link references -->
+[01]: ../storage/files/storage-files-introduction.md
+[02]: /cli/azure/
+[03]: https://portal.azure.com/
+[04]: media/quickstart/env-selector.png
+[05]: media/quickstart/shell-icon.png
+[06]: medi-copy.png
+[07]: media/quickstart/ubuntu-welcome.png
+[08]: persisting-shell-storage.md
+[09]: quickstart-powershell.md
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
Title: Azure Cloud Shell troubleshooting | Microsoft Docs
-description: Troubleshooting Azure Cloud Shell
---
-tags: azure-resource-manager
-
+
+description: This article covers troubleshooting Cloud Shell common scenarios.
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 01/28/2022-++
+tags: azure-resource-manager
+ Title: Azure Cloud Shell troubleshooting
- # Troubleshooting & Limitations of Azure Cloud Shell
-Known resolutions for troubleshooting issues in Azure Cloud Shell include:
-
+This article covers troubleshooting Cloud Shell common scenarios.
## General troubleshooting ### Error running AzureAD cmdlets in PowerShell -- **Details**: When you run AzureAD cmdlets like `Get-AzureADUser` in Cloud Shell, you might see an error: `You must call the Connect-AzureAD cmdlet before calling any other cmdlets`. -- **Resolution**: Run the `Connect-AzureAD` cmdlet. Previously, Cloud Shell ran this cmdlet automatically during PowerShell startup. To speed up start time, the cmdlet no longer runs automatically. You can choose to restore the previous behavior by adding `Connect-AzureAD` to the $PROFILE file in PowerShell.
+- **Details**: When you run AzureAD cmdlets like `Get-AzureADUser` in Cloud Shell, you might see an
+ error: `You must call the Connect-AzureAD cmdlet before calling any other cmdlets`.
+- **Resolution**: Run the `Connect-AzureAD` cmdlet. Previously, Cloud Shell ran this cmdlet
+ automatically during PowerShell startup. To speed up start time, the cmdlet no longer runs
+ automatically. You can choose to restore the previous behavior by adding `Connect-AzureAD` to the
+ $PROFILE file in PowerShell.
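+
+  As a minimal sketch, you could append the cmdlet from Bash; the profile path below is an
+  assumption for a typical Cloud Shell environment, so run `$PROFILE` inside PowerShell to confirm
+  yours:
+
+  ```bash
+  # Ensure the profile directory exists, then add Connect-AzureAD to PowerShell startup.
+  # The path is assumed; verify it by running $PROFILE in a PowerShell session.
+  mkdir -p ~/.config/PowerShell
+  echo 'Connect-AzureAD' >> ~/.config/PowerShell/Microsoft.PowerShell_profile.ps1
+  ```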
### Early timeouts in FireFox -- **Details**: Cloud Shell utilizes an open websocket to pass input/output to your browser. FireFox has preset policies that can close the websocket prematurely causing early timeouts in Cloud Shell.-- **Resolution**: Open FireFox and navigate to "about:config" in the URL box. Search for "network.websocket.timeout.ping.request" and change the value from 0 to 10.
+- **Details**: Cloud Shell uses an open websocket to pass input/output to your browser. Firefox has
+  preset policies that can close the websocket prematurely, causing early timeouts in Cloud Shell.
+- **Resolution**: Open Firefox and navigate to "about:config" in the URL box. Search for
+  "network.websocket.timeout.ping.request" and change the value from 0 to 10.
### Disabling Cloud Shell in a locked down network environment -- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell utilizes access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entrypoints including `portal.azure.com`, `shell.azure.com`, Visual Studio Code Azure Account extension, and `learn.microsoft.com`. In the US Government cloud, the entrypoint is `ux.console.azure.us`; there is no corresponding `shell.azure.us`.-- **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` via network settings to your environment. The Cloud Shell icon will still exist in the Azure portal, but will not successfully connect to the service.
+- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell
+ depends on access to the `ux.console.azure.com` domain, which can be denied, stopping any access
+ to Cloud Shell's entry points including `portal.azure.com`, `shell.azure.com`, Visual Studio Code
+ Azure Account extension, and `learn.microsoft.com`. In the US Government cloud, the entry point is
+ `ux.console.azure.us`; there's no corresponding `shell.azure.us`.
+- **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` via network
+ settings to your environment. The Cloud Shell icon will still exist in the Azure portal, but can't
+ connect to the service.
### Storage Dialog - Error: 403 RequestDisallowedByPolicy -- **Details**: When creating a storage account through Cloud Shell, it is unsuccessful due to an Azure Policy assignment placed by your admin. Error message will include: `The resource action 'Microsoft.Storage/storageAccounts/write' is disallowed by one or more policies.`-- **Resolution**: Contact your Azure administrator to remove or update the Azure Policy assignment denying storage creation.
+- **Details**: Creating a storage account through Cloud Shell fails because of an Azure Policy
+  assignment placed by your admin. The error message includes:
+
+ > The resource action 'Microsoft.Storage/storageAccounts/write' is disallowed by
+ > one or more policies.
+
+- **Resolution**: Contact your Azure administrator to remove or update the Azure Policy assignment
+ denying storage creation.
### Storage Dialog - Error: 400 DisallowedOperation -- **Details**: When using an Azure Active Directory subscription, you cannot create storage.-- **Resolution**: Use an Azure subscription capable of creating storage resources. Azure AD subscriptions are not able to create Azure resources.
+- **Details**: When using an Azure Active Directory subscription, you can't create storage.
+- **Resolution**: Use an Azure subscription capable of creating storage resources. Azure AD
+ subscriptions aren't able to create Azure resources.
+
+### Terminal output - Error: Failed to connect terminal: websocket can't be established
-### Terminal output - Error: Failed to connect terminal: websocket cannot be established. Press `Enter` to reconnect.
-- **Details**: Cloud Shell requires the ability to establish a websocket connection to Cloud Shell infrastructure.-- **Resolution**: Check you have configured your network settings to enable sending https requests and websocket requests to domains at *.console.azure.com.
+- **Details**: Cloud Shell requires the ability to establish a websocket connection to Cloud Shell
+ infrastructure.
+- **Resolution**: Confirm that your network settings allow sending HTTPS and websocket requests
+  to domains at `*.console.azure.com`.
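+
+  As a quick spot-check (HTTPS reachability only; this doesn't validate the websocket upgrade
+  itself), you can probe the console endpoint from any shell:
+
+  ```bash
+  # Expect an HTTP status line back if the domain isn't blocked by your network.
+  curl -sI https://ux.console.azure.com | head -n 1
+  ```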
### Set your Cloud Shell connection to support using TLS 1.2+
+ - **Details**: To define the version of TLS for your connection to Cloud Shell, you must set
+ browser-specific settings.
+ - **Resolution**: Navigate to the security settings of your browser and select the checkbox next to
+ **Use TLS 1.2**.
## Bash troubleshooting
-### Cannot run the docker daemon
+### You can't run the docker daemon
-- **Details**: Cloud Shell utilizes a container to host your shell environment, as a result running the daemon is disallowed.-- **Resolution**: Utilize [docker-machine](https://docs.docker.com/machine/overview/), which is installed by default, to manage docker containers from a remote Docker host.
+- **Details**: Cloud Shell uses a container to host your shell environment; as a result, running
+  the daemon is disallowed.
+- **Resolution**: Use [docker-machine][04], which is installed by default, to manage docker
+  containers from a remote Docker host.
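+
+  For illustration only, a minimal docker-machine workflow might look like the following sketch;
+  the driver flags and host name are placeholders, so check the docker-machine documentation for
+  the options your environment needs:
+
+  ```bash
+  # Create a remote Docker host (flags are illustrative placeholders),
+  # point the local docker CLI at it, then run a container there.
+  docker-machine create --driver azure --azure-subscription-id <subscription-id> my-docker-host
+  eval "$(docker-machine env my-docker-host)"
+  docker run hello-world
+  ```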
## PowerShell troubleshooting
-### GUI applications are not supported
+### GUI applications aren't supported
-- **Details**: If a user launches a GUI application, the prompt does not return. For example, when one clone a private GitHub repo that has two factor authentication enabled, a dialog box is displayed for completing the two factor authentication.
+- **Details**: If a user launches a GUI application, the prompt doesn't return. For example, when
+  you clone a private GitHub repo that has two-factor authentication enabled, a dialog box is
+  displayed for completing the two-factor authentication.
- **Resolution**: Close and reopen the shell. ### Troubleshooting remote management of Azure VMs+ > [!NOTE] > Azure VMs must have a Public facing IP address. -- **Details**: Due to the default Windows Firewall settings for WinRM the user may see the following error:
- `Ensure the WinRM service is running. Remote Desktop into the VM for the first time and ensure it can be discovered.`
-- **Resolution**: Run `Enable-AzVMPSRemoting` to enable all aspects of PowerShell remoting on the target machine.
+- **Details**: Due to the default Windows Firewall settings for WinRM, the user may see the
+  following error:
+
+ > Ensure the WinRM service is running. Remote Desktop into the VM for the first time and ensure
+ > it can be discovered.
-### `dir` does not update the result in Azure drive
+- **Resolution**: Run `Enable-AzVMPSRemoting` to enable all aspects of PowerShell remoting on the
+ target machine.
-- **Details**: By default, to optimize for user experience, the results of `dir` is cached in Azure drive.-- **Resolution**: After you create, update or remove an Azure resource, run `dir -force` to update the results in the Azure drive.
+### `dir` doesn't update the result in Azure drive
+
+- **Details**: By default, to optimize for user experience, the results of `dir` are cached in
+  Azure drive.
+- **Resolution**: After you create, update, or remove an Azure resource, run `dir -force` to update
+  the results in the Azure drive.
## General limitations
Azure Cloud Shell has the following known limitations:
### Quota limitations
-Azure Cloud Shell has a limit of 20 concurrent users per tenant per region. If you try to open more simultaneous sessions than the limit, you will see an "Tenant User Over Quota" error. If you have a legitimate need to have more sessions open than this (for example for training sessions), contact support in advance of your anticipated usage to request a quota increase.
+Azure Cloud Shell has a limit of 20 concurrent users per tenant per region. Opening more than 20
+simultaneous sessions produces a "Tenant User Over Quota" error. If you have a legitimate need to
+have more than 20 sessions open, such as for training sessions, contact Support to request a quota
+increase before your anticipated usage.
-Cloud Shell is provided as a free service and is designed to be used to configure your Azure environment, not as a general purpose computing platform. Excessive automated usage may be considered in breach to the Azure Terms of Service and could lead to Cloud Shell access being blocked.
+Cloud Shell is provided as a free service for managing your Azure environment. It's not intended
+as a general-purpose computing platform. Excessive automated usage may be considered in breach of
+the Azure Terms of Service and could lead to Cloud Shell access being blocked.
### System state and persistence
-The machine that provides your Cloud Shell session is temporary, and it is recycled after your session is inactive for 20 minutes. Cloud Shell requires an Azure file share to be mounted. As a result, your subscription must be able to set up storage resources to access Cloud Shell. Other considerations include:
+The machine that provides your Cloud Shell session is temporary, and it's recycled after your
+session is inactive for 20 minutes. Cloud Shell requires an Azure fileshare to be mounted. As a
+result, your subscription must be able to set up storage resources to access Cloud Shell. Other
+considerations include:
-- With mounted storage, only modifications within the `clouddrive` directory are persisted. In Bash, your `$HOME` directory is also persisted.-- Azure file shares can be mounted only from within your [assigned region](persisting-shell-storage.md#mount-a-new-clouddrive).
+- With mounted storage, only modifications within the `clouddrive` directory are persisted. In Bash,
+ your `$HOME` directory is also persisted.
+- Azure fileshares can be mounted only from within your [assigned region][05].
- In Bash, run `env` to find your region set as `ACC_LOCATION` (see the example after this list).
- Azure Files supports only locally redundant storage and geo-redundant storage accounts.
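
For example, to check your assigned region from a Bash session:

```bash
# Print the region Cloud Shell assigned to your session; ACC_LOCATION is set by the service.
env | grep ACC_LOCATION
```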
Cloud Shell supports the latest versions of the following browsers:
- Google Chrome - Mozilla Firefox - Apple Safari
- - Safari in private mode is not supported.
+ - Safari in private mode isn't supported.
### Copy and paste
+- Windows: <kbd>Ctrl</kbd>-<kbd>C</kbd> to copy is supported but use
+ <kbd>Shift</kbd>-<kbd>Insert</kbd> to paste.
+  - Firefox/IE may not support clipboard permissions properly.
+- macOS: <kbd>Cmd</kbd>-<kbd>C</kbd> to copy and <kbd>Cmd</kbd>-<kbd>V</kbd> to paste.
### Usage limits
-Cloud Shell is intended for interactive use cases. As a result, any long-running non-interactive sessions are ended without warning.
+Cloud Shell is intended for interactive use cases. As a result, any long-running non-interactive
+sessions are ended without warning.
### User permissions
-Permissions are set as regular users without sudo access. Any installation outside your `$Home` directory is not persisted.
+Permissions are set as regular users without sudo access. Any installation outside your `$Home`
+directory isn't persisted.
### Supported entry point limitations
-Cloud Shell entry points beside the Azure portal, such as Visual Studio Code & Windows Terminal, do not support various Cloud Shell functionalities:
+Cloud Shell entry points besides the Azure portal, such as Visual Studio Code & Windows Terminal,
+don't support various Cloud Shell functionalities:
+ - Use of commands that modify UX components in Cloud Shell, such as `Code`
 - Fetching non-ARM access tokens
Take caution when editing .bashrc; doing so can cause unexpected errors in Cloud Shell.
### Preview version of AzureAD module
-Currently, `AzureAD.Standard.Preview`, a preview version of .NET Standard-based, module is available. This module provides the same functionality as `AzureAD`.
+Currently, `AzureAD.Standard.Preview`, a preview version of the .NET Standard-based module, is
+available. This module provides the same functionality as `AzureAD`.
## Personal data in Cloud Shell
-Azure Cloud Shell takes your personal data seriously, the data captured and stored by the Azure Cloud Shell service are used to provide defaults for your experience such as your most recently used shell, preferred font size, preferred font type, and file share details that back cloud drive. Should you wish to export or delete this data, use the following instructions.
-
+Azure Cloud Shell takes your personal data seriously. The Azure Cloud Shell service stores your
+preferences, such as your most recently used shell, font size, font type, and details of the
+fileshare that backs cloud drive. You can export or delete this data using the following
+instructions.
+<!--
+TODO:
+- Are there cmdlets or CLI to do this now, instead of REST API?
+-->
### Export
-In order to **export** the user settings Cloud Shell saves for you such as preferred shell, font size, and font type run the following commands.
-
-1. Launch Cloud Shell.
-
-2. Run the following commands in Bash or PowerShell:
-
-Bash:
- ```
- token="Bearer $(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".accessToken")"
- curl https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token" -s | jq
- ```
+Use the following commands to **export** the Cloud Shell user settings, such as preferred shell,
+font size, and font type.
-PowerShell:
+1. Launch Cloud Shell.
- ```powershell
- $token= ((Invoke-WebRequest -Uri "$env:MSI_ENDPOINT`?resource=https://management.core.windows.net/" -Headers @{Metadata='true'}).content | ConvertFrom-Json).access_token
- ((Invoke-WebRequest -Uri https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -Headers @{Authorization = "Bearer $token"}).Content | ConvertFrom-Json).properties | Format-List
-```
+1. Run the following commands in Bash or PowerShell:
+
+ Bash:
+<!--
+TODO:
+- Is there a way to wrap the lines for bash?
+- Why are we getting the token this way? The next example uses az cli.
+- The URLs used are not consistent across all the examples
+- Should we be using a newer API version?
+-->
+ ```bash
+ token=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".accessToken")
+ curl https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token" -s | jq
+ ```
+
+ PowerShell:
+
+ ```powershell
+ $parameters = @{
+ Uri = "$env:MSI_ENDPOINT`?resource=https://management.core.windows.net/"
+ Headers = @{Metadata='true'}
+ }
+ $token= ((Invoke-WebRequest @parameters ).content | ConvertFrom-Json).accessToken
+ $parameters = @{
+ Uri = 'https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview'
+ Headers = @{Authorization = "Bearer $token"}
+ }
+ ((Invoke-WebRequest @parameters ).Content | ConvertFrom-Json).properties | Format-List
+ ```
### Delete
-In order to **delete** your user settings Cloud Shell saves for you such as preferred shell, font size, and font type run the following commands. The next time you start Cloud Shell you will be asked to onboard a file share again.
->[!Note]
-> If you delete your user settings, the actual Azure Files share will not be deleted. Go to your Azure Files to complete that action.
+Run the following commands to **delete** Cloud Shell user settings, such as preferred shell, font
+size, and font type. The next time you start Cloud Shell, you'll be asked to onboard a fileshare
+again.
+
+> [!NOTE]
+> If you delete your user settings, the actual Azure fileshare is not deleted. Go to your Azure
+> Files to complete that action.
1. Launch Cloud Shell or a local shell with either Azure PowerShell or Azure CLI installed.
-2. Run the following commands in Bash or PowerShell:
+1. Run the following commands in Bash or PowerShell:
+
+ Bash:
-Bash:
+ ```bash
+ token=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")
+ curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token"
+ ```
- ```
- token="Bearer $(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")"
- curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"$token"
- ```
+ PowerShell:
-PowerShell:
+ ```powershell
+ $token= (Get-AzAccessToken -Resource https://management.azure.com/).Token
+ $parameters = @{
+ Method = 'Delete'
+ Uri = 'https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview'
+ Headers = @{Authorization = "Bearer $token"}
+ }
+ Invoke-WebRequest @parameters
+ ```
- ```powershell
- $token= (Get-AzAccessToken -Resource https://management.azure.com/).Token
- Invoke-WebRequest -Method Delete -Uri https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -Headers @{Authorization = "Bearer $token"}
- ```
## Azure Government limitations+ Azure Cloud Shell in Azure Government is only accessible through the Azure portal.
->[!Note]
+> [!NOTE]
> Connecting to GCC-High or Government DoD Clouds for Exchange Online is currently not supported.+
+<!-- link references -->
+[04]: https://docs.docker.com/machine/overview/
+[05]: persisting-shell-storage.md#mount-a-new-clouddrive
cloud-shell Using Cloud Shell Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/using-cloud-shell-editor.md
Title: Using the Azure Cloud Shell editor | Microsoft Docs+ description: Overview of how to use the Azure Cloud Shell editor.---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 07/24/2018-++
+tags: azure-resource-manager
+ Title: Using the Azure Cloud Shell editor
# Using the Azure Cloud Shell editor
-Azure Cloud Shell includes an integrated file editor built from the open-source [Monaco Editor](https://github.com/Microsoft/monaco-editor). The Cloud Shell editor supports features such as language highlighting, the command palette, and a file explorer.
+Azure Cloud Shell includes an integrated file editor built from the open-source
+[Monaco Editor][02]. The Cloud Shell editor supports features such as language highlighting, the
+command palette, and a file explorer.
-![Cloud Shell editor](media/using-cloud-shell-editor/open-editor.png)
+![Cloud Shell editor][06]
## Opening the editor
-For simple file creation and editing, launch the editor by running `code .` in the Cloud Shell terminal. This action opens the editor with your active working directory set in the terminal.
+For simple file creation and editing, launch the editor by running `code .` in the Cloud Shell
+terminal. This action opens the editor with your active working directory set in the terminal.
-To directly open a file for quick editing, run `code <filename>` to open the editor without the file explorer.
+To directly open a file for quick editing, run `code <filename>` to open the editor without the file
+explorer.
-To open the editor via UI button, click the `{}` editor icon from the toolbar. This will open the editor and default the file explorer to the `/home/<user>` directory.
+Select the `{}` icon from the toolbar to open the editor and default the file explorer to the
+`/home/<user>` directory.
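+
+For example (the file name below is just a placeholder):
+
+```bash
+# Open the editor rooted at the current working directory:
+code .
+
+# Open a single file directly, without the file explorer:
+code myscript.sh
+```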
## Closing the editor
-To close the editor, open the `...` action panel in the top right of the editor and select `Close editor`.
+To close the editor, open the `...` action panel in the top right of the editor and select
+`Close editor`.
-![Close editor](media/using-cloud-shell-editor/close-editor.png)
+![Close editor][04]
## Command palette
-To launch the command palette, use the `F1` key when focus is set on the editor. Opening the command palette can also be done through the action panel.
-
-![Cmd palette](medi-palette.png)
+To launch the command palette, use the `F1` key when focus is set on the editor. Opening the command
+palette can also be done through the action panel.
-## Contributing to the Monaco Editor
+![Cmd palette][05]
-Language highlight support in the Cloud Shell editor is supported through upstream functionality in the [Monaco Editor](https://github.com/Microsoft/monaco-editor)'s use of Monarch syntax definitions. To learn how to make contributions, read the [Monaco contributor guide](https://github.com/Microsoft/monaco-editor/blob/master/CONTRIBUTING.md).
+<!--
+TODO:
+- Why are we talking about contributions here?
+- Need to document how to use the editor and the quirks
+-->
## Next steps -- [Try the quickstart for Bash in Cloud Shell](quickstart.md)-- [View the full list of integrated Cloud Shell tools](features.md)
+- [Try the quickstart for Bash in Cloud Shell][07]
+- [View the full list of integrated Cloud Shell tools][01]
+
+<!-- link references -->
+[01]: features.md
+[02]: https://github.com/Microsoft/monaco-editor
+[03]: https://github.com/Microsoft/monaco-editor/blob/master/CONTRIBUTING.md
+[04]: media/using-cloud-shell-editor/close-editor.png
+[05]: medi-palette.png
+[06]: media/using-cloud-shell-editor/open-editor.png
+[07]: quickstart.md
cloud-shell Using The Shell Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/using-the-shell-window.md
Title: Using the Azure Cloud Shell window | Microsoft Docs+ description: Overview of how to use the Azure Cloud Shell window.---
-tags: azure-resource-manager
-
++
+ms.contributor: jahelmic
Last updated : 11/14/2022 - vm-linux Previously updated : 04/15/2019-++
+tags: azure-resource-manager
+ Title: Using the Azure Cloud Shell window
# Using the Azure Cloud Shell window
This document explains how to use the Cloud Shell window.
## Swap between Bash and PowerShell environments
-Use the environment selector in the Cloud Shell toolbar to swap between Bash and PowerShell environments.
-![Select environment](media/using-the-shell-window/env-selector.png)
+Use the environment selector in the Cloud Shell toolbar to switch between Bash and PowerShell
+environments.
+
+![Select environment][02]
## Restart Cloud Shell
-Click the restart icon in the Cloud Shell toolbar to reset machine state.
-![Restart Cloud Shell](media/using-the-shell-window/restart.png)
+
+Select the restart icon in the Cloud Shell toolbar to reset machine state.
+
+![Restart Cloud Shell][08]
+ > [!WARNING]
-> Restarting Cloud Shell will reset machine state and any files not persisted by your Azure file share will be lost.
+> Restarting Cloud Shell resets machine state and any files not persisted in your Azure fileshare
+> are lost.
## Change the text size
-Click the settings icon on the top left of the window, then hover over the "Text size" option and select your desired text size. Your selection will be persisted across sessions.
-![Text size](media/using-the-shell-window/text-size.png)
+
+Select the settings icon on the top left of the window, then hover over the **Text size** option and
+select your desired text size. Your selection is persisted across sessions.
+
+![Text size][10]
## Change the font
-Click the settings icon on the top left of the window, then hover over the "Font" option and select your desired font. Your selection will be persisted across sessions.
-![Font](media/using-the-shell-window/text-font.png)
+
+Select the settings icon on the top left of the window, then hover over the **Font** option and select
+your desired font. Your selection is persisted across sessions.
+
+![Font][09]
## Upload and download files
-Click the upload/download files icon on the top left of the window, then select upload or download.
-![Upload/download files](media/using-the-shell-window/uploaddownload.png)
-* For uploading files, use the pop-up to browse to the file on your local computer, select the desired file, and click the "Open" button. The file will be uploaded into
-the `/home/user` directory.
-* For downloading file, enter the fully qualified file path into the pop-up window (i.e., basically a path under the `/home/user` directory which shows up by default), and select the "Download" button.
-> [!NOTE]
-> Files and file paths are case sensitive in Cloud Shell. Double check your casing in your file path.
+
+Select the upload/download files icon on the top left of the window, then select **Upload** or
+**Download**.
+
+![Upload/download files][11]
+
+- For uploading files, use the pop-up to browse to the file on your local computer, select the
+ desired file, and select the **Open** button. The file is uploaded into the `/home/user`
+ directory.
+- For downloading files, enter the fully qualified file path into the pop-up window (for example,
+  a path under the `/home/user` directory, which shows up by default), and then select the
+  **Download** button.
+
+> [!NOTE]
+> File and path names are case sensitive in Cloud Shell. Double check your casing in your file
+> path.
## Open another Cloud Shell window
-Cloud Shell enables multiple concurrent sessions across browser tabs by allowing each session to exist as a separate process.
-If exiting a session, be sure to exit from each session window as each process runs independently although they run on the same machine.
-Click the open new session icon on the top left of the window. A new tab will open with another session connected to the existing container.
-![Open new session](media/using-the-shell-window/newsession.png)
+
+Cloud Shell enables multiple concurrent sessions across browser tabs by allowing each session to
+exist as a separate process. When exiting, be sure to exit from each session window, because each
+process runs independently even though they run on the same machine. Select the open new session
+icon on the top left of the window. A new tab opens with another session connected to the existing
+container.
+
+![Open new session][04]
## Cloud Shell editor
-* Refer to the [Using the Azure Cloud Shell editor](using-cloud-shell-editor.md) page.
+
+Refer to the [Using the Azure Cloud Shell editor][14] page.
## Web preview
-Click the web preview icon on the top left of the window, select "Configure", specify the desired port to open. Select either "Open port" to only open the port, or "Open and browse" to open the port and preview the port in a new tab.
-![Web preview](media/using-the-shell-window/preview.png)
-<br>
-![Configure port](media/using-the-shell-window/preview-configure.png)
-Click the web preview icon on the top left of the window, select "Preview port ..." to preview an open port in a new tab.
-Click the web preview icon on the top left of the window, select "Close port ..." to close the open port.
-![Preview/close port](media/using-the-shell-window/preview-options.png)
+
+Select the web preview icon on the top left of the window, select **Configure**, and then specify
+the desired port to open.
+
+![Web preview][07]
+
+Select either **Open port** to only open the port, or **Open and browse** to open the
+port and preview the port in a new tab.
+
+![Configure port][05]
+
+To preview an open port in a new tab, select the web preview icon on the top left of the window then
+select **Preview port**.
+
+To close the open port, select the web preview icon on the top left of the window, then select
+**Close port**.
+
+![Preview/close port][06]
## Minimize & maximize Cloud Shell window
-Click the minimize icon on the top right of the window to hide it. Click the Cloud Shell icon again to unhide.
-Click the maximize icon to set window to max height. To restore window to previous size, click restore.
-![Minimize or maximize the window](media/using-the-shell-window/minmax.png)
+
+Select the minimize icon on the top right of the window to hide it. Select the Cloud Shell icon
+again to unhide it. Select the maximize icon to set the window to maximum height. To restore the
+window to its previous size, select restore.
+
+![Minimize or maximize the window][03]
## Copy and paste+
+- Windows: <kbd>Ctrl</kbd>-<kbd>C</kbd> to copy is supported but use
+ <kbd>Shift</kbd>-<kbd>Insert</kbd> to paste.
+  - Firefox/IE may not support clipboard permissions properly.
+- macOS: <kbd>Cmd</kbd>-<kbd>C</kbd> to copy and <kbd>Cmd</kbd>-<kbd>V</kbd> to paste.
## Resize Cloud Shell window
-Click and drag the top edge of the toolbar up or down to resize the Cloud Shell window.
+
+Drag the top edge of the toolbar up or down to resize the Cloud Shell window.
## Scrolling text display+ Scroll with your mouse or touchpad to move terminal text. ## Exit command
-Running `exit` terminates the active session. This behavior occurs by default after 20 minutes without interaction.
+
+The `exit` command terminates the active session. Cloud Shell also terminates your session after 20
+minutes without interaction.
## Next steps
-[Bash in Cloud Shell Quickstart](quickstart.md) <br>
-[PowerShell in Cloud Shell Quickstart](quickstart-powershell.md)
+- [Bash in Cloud Shell Quickstart][13]
+- [PowerShell in Cloud Shell Quickstart][12]
+
+<!-- link references -->
+[02]: media/using-the-shell-window/env-selector.png
+[03]: media/using-the-shell-window/minmax.png
+[04]: media/using-the-shell-window/newsession.png
+[05]: media/using-the-shell-window/preview-configure.png
+[06]: media/using-the-shell-window/preview-options.png
+[07]: media/using-the-shell-window/preview.png
+[08]: media/using-the-shell-window/restart.png
+[09]: media/using-the-shell-window/text-font.png
+[10]: media/using-the-shell-window/text-size.png
+[11]: media/using-the-shell-window/uploaddownload.png
+[12]: quickstart-powershell.md
+[13]: quickstart.md
+[14]: using-cloud-shell-editor.md
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/role-based-access-control.md
A user that should only be validating and reviewing LUIS applications, typically
* [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f) All the APIs under:
- * [LUIS Endpoint APIs v2.0](/azure/cognitive-services/LUIS/luis-migration-api-v1-to-v2)
+ * [LUIS Endpoint APIs v2.0](./luis-migration-api-v1-to-v2.md)
* [LUIS Endpoint APIs v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) * [LUIS Endpoint APIs v3.0-preview](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5cb0a9459a1fe8fa44c28dd8)
These users are the gatekeepers for LUIS applications in a production environmen
## Next steps
-* [Managing Azure resources](./luis-how-to-azure-subscription.md?branch=pr-en-us-171715&tabs=portal#authoring-resource)
+* [Managing Azure resources](./luis-how-to-azure-subscription.md?branch=pr-en-us-171715&tabs=portal#authoring-resource)
cognitive-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-audio-data.md
Follow these steps to restrict access to the storage account.
1. Select **Disabled** for **Allow storage account key access** 1. Select **Save**.
-For more information, see [Prevent anonymous public read access to containers and blobs](/azure/storage/blobs/anonymous-read-access-prevent) and [Prevent Shared Key authorization for an Azure Storage account](/azure/storage/common/shared-key-authorization-prevent).
+For more information, see [Prevent anonymous public read access to containers and blobs](../../storage/blobs/anonymous-read-access-prevent.md) and [Prevent Shared Key authorization for an Azure Storage account](../../storage/common/shared-key-authorization-prevent.md).
### Configure Azure Storage firewall
You could otherwise specify individual files in the container. You must generate
- [Batch transcription overview](batch-transcription.md) - [Create a batch transcription](batch-transcription-create.md)-- [Get batch transcription results](batch-transcription-get.md)
+- [Get batch transcription results](batch-transcription-get.md)
cognitive-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md
zone_pivot_groups: programming-languages-set-thirteen
# Embedded Speech (preview)
-Embedded Speech is designed for on-device [speech-to-text](speech-to-text.md) and [text-to-speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in medical equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](/azure/cognitive-services/containers/disconnected-containers).
+Embedded Speech is designed for on-device [speech-to-text](speech-to-text.md) and [text-to-speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in medical equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../containers/disconnected-containers.md).
> [!IMPORTANT] > Microsoft limits access to embedded speech. You can apply for access through the Azure Cognitive Services [embedded speech limited access review](https://aka.ms/csgate-embedded-speech). For more information, see [Limited access for embedded speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context).
For cloud speech, you use the `SpeechConfig` object, as shown in the [speech-to-
## Next steps - [Quickstart: Recognize and convert speech to text](get-started-speech-to-text.md)-- [Quickstart: Convert text to speech](get-started-text-to-speech.md)
+- [Quickstart: Convert text to speech](get-started-text-to-speech.md)
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version
``` > [!div class="nextstepaction"]
-> [Azure Government: Translator text reference](/azure/azure-government/documentation-government-cognitiveservices#translator)
+> [Azure Government: Translator text reference](../../azure-government/documentation-government-cognitiveservices.md#translator)
### [Azure China 21 Vianet](#tab/china)
https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.cn/translator/text/batch
## Next steps > [!div class="nextstepaction"]
-> [Learn more about Translator](index.yml)
+> [Learn more about Translator](index.yml)
cognitive-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md
Yes, you can disable the autoscale feature through Azure portal or CLI and retur
Autoscale feature is available for the following
-* [Cognitive Services multi-key](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Canomaly-detector%2Clanguage-service%2Ccomputer-vision%2Cwindows)
+* [Cognitive Services multi-key](./cognitive-services-apis-create-account.md?tabs=multiservice%2canomaly-detector%2clanguage-service%2ccomputer-vision%2cwindows)
* [Computer Vision](computer-vision/index.yml) * [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
-* [Anomaly Detector](/azure/cognitive-services/anomaly-detector/overview)
-* [Content Moderator](/azure/cognitive-services/content-moderator/overview)
-* [Custom Vision (Prediction)](/azure/cognitive-services/custom-vision-service/overview)
-* [Immersive Reader](/azure/applied-ai-services/immersive-reader/overview)
-* [LUIS](/azure/cognitive-services/luis/what-is-luis)
-* [Metrics Advisor](/azure/applied-ai-services/metrics-advisor/overview)
-* [Personalizer](/azure/cognitive-services/personalizer/what-is-personalizer)
-* [QnAMaker](/azure/cognitive-services/qnamaker/overview/overview)
+* [Anomaly Detector](./anomaly-detector/overview.md)
+* [Content Moderator](./content-moderator/overview.md)
+* [Custom Vision (Prediction)](./custom-vision-service/overview.md)
+* [Immersive Reader](../applied-ai-services/immersive-reader/overview.md)
+* [LUIS](./luis/what-is-luis.md)
+* [Metrics Advisor](../applied-ai-services/metrics-advisor/overview.md)
+* [Personalizer](./personalizer/what-is-personalizer.md)
+* [QnAMaker](./qnamaker/overview/overview.md)
* [Form Recognizer](../applied-ai-services/form-recognizer/overview.md?tabs=v3-0) ### Can I test this feature using a free subscription?
No, the autoscale feature is not available to free tier subscriptions.
- [Plan and Manage costs for Azure Cognitive Services](./plan-manage-costs.md). - [Optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
Detailed information about supported regions for Custom Neural Voice and Speaker
If you're an existing customer and your application for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
+### How long will the registration process take?
+
+You'll receive communication from us about your application within 10 business days. In some cases, reviews can take longer. You'll receive an email as soon as your application is reviewed.
+ ## Help and support Report abuse of Limited Access services [here](https://aka.ms/reportabuse).
cognitive-services Power Virtual Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/power-virtual-agents.md
In this tutorial, you learn how to:
> * Test Power Virtual Agents, and receive an answer from your Question Answering project > [!NOTE]
-> The QnA Maker service is being retired on the 31st of March, 2025. A newer version of the question and answering capability is now available as part of [Azure Cognitive Service for Language](/azure/cognitive-services/language-service/). For question answering capabilities within the Language Service, see [question answering](../overview.md). Starting 1st October, 2022 you won't be able to create new QnA Maker resources. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../how-to/migrate-qnamaker.md).
+> The QnA Maker service is being retired on the 31st of March, 2025. A newer version of the question and answering capability is now available as part of [Azure Cognitive Service for Language](../../index.yml). For question answering capabilities within the Language Service, see [question answering](../overview.md). Starting 1st October, 2022 you won't be able to create new QnA Maker resources. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../how-to/migrate-qnamaker.md).
## Create and publish a project 1. Follow the [quickstart](../quickstart/sdk.md?pivots=studio) to create a Question Answering project. Once you have deployed your project.
cognitive-services Rotate Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/rotate-keys.md
+
+ Title: Rotate keys in Azure Cognitive Services
+
+description: "Learn how to rotate API keys for better security, without interrupting service"
+++++ Last updated : 11/08/2022+++
+# Rotate subscription keys in Cognitive Services
+
+Each Cognitive Services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your resource if a key gets leaked.
+
+## How to rotate keys
+
+Keys can be rotated using the following procedure (a scripted Azure CLI sketch follows the steps):
+
+1. If you're using both keys in production, change your code so that only one key is in use. In this guide, assume it's key 1.
+
+ This is a necessary step because once a key is regenerated, the older version of that key will stop working immediately. This would cause clients using the older key to get 401 access denied errors.
+1. Once you have only key 1 in use, you can regenerate key 2. Go to your resource's page on the Azure portal, select the **Keys and Endpoint** tab, and select the **Regenerate Key 2** button at the top of the page.
+1. Next, update your code to use the newly generated key 2.
+
+   It helps to have logs or telemetry available to confirm that users of the key have successfully
+   swapped from using key 1 to key 2 before you proceed.
+1. Now you can regenerate key 1 using the same process.
+1. Finally, update your code to use the new key 1.
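+
+The same rotation can be scripted. The following is a minimal sketch using the Azure CLI; the
+resource name and resource group are placeholders, and you should still stage the client
+changeover between the two regenerations as described in the steps above.
+
+```azurecli
+# Regenerate key 2 first (clients should already be using only key 1).
+az cognitiveservices account keys regenerate \
+    --name myResource \
+    --resource-group myResourceGroup \
+    --key-name Key2
+
+# After clients have switched to the new key 2, regenerate key 1.
+az cognitiveservices account keys regenerate \
+    --name myResource \
+    --resource-group myResourceGroup \
+    --key-name Key1
+```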
+
+## See also
+
+* [What is Cognitive Services?](./what-are-cognitive-services.md)
+* [Cognitive Services security features](./security-features.md)
cognitive-services Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-features.md
For a comprehensive list of Azure service security recommendations see the [Cogn
|:|:| | [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Cognitive Services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call need to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). | | [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](./authentication.md). |
+| [Key rotation](./rotate-keys.md)| Each Cognitive Services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your service in the event that a key gets leaked. To learn how to regenerate keys without interrupting service, see [Rotate keys](./rotate-keys.md). |
| [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). | | [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.| | [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. |
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
Development of Calling and Chat applications can be accelerated by the [Azure C
| Email | [REST](/rest/api/communication/Email) | Service|Send and get status on Email messages| | Chat | [REST](/rest/api/communication/) with proprietary signaling | Client & Service | Add real-time text chat to your applications | | Calling | Proprietary transport | Client | Voice, video, screen-sharing, and other real-time communication |
+| Call Automation | [REST](/rest/api/communication/callautomation/server-calling) | Service| Build customized calling workflows for PSTN and VoIP calls|
| Network Traversal | [REST](./network-traversal.md)| Service| Access TURN servers for low-level data transport | | UI Library | N/A | Client | Production-ready UI components for chat and calling apps |
Publishing locations for individual SDK packages are detailed below.
| SMS| [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Sms)| [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | -| -| -| | Email| [npm](https://www.npmjs.com/package/@azure/communication-email) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Email)| [PyPi](https://pypi.org/project/azure-communication-email/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-email) | -| -| -| | Calling| [npm](https://www.npmjs.com/package/@azure/communication-calling) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Calling) | -| - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/)| -|
-|Call Automation||[NuGet](https://www.NuGet.org/packages/Azure.Communication.CallingServer/)||[Maven](https://search.maven.org/artifact/com.azure/azure-communication-callingserver)
+|Call Automation||[NuGet](https://www.NuGet.org/packages/Azure.Communication.CallAutomation/)||[Maven](https://search.maven.org/artifact/com.azure/azure-communication-callautomation)
|Network Traversal| [npm](https://www.npmjs.com/package/@azure/communication-network-traversal)|[NuGet](https://www.NuGet.org/packages/Azure.Communication.NetworkTraversal/) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | -|- | - | | UI Library| [npm](https://www.npmjs.com/package/@azure/communication-react) | - | - | - | [GitHub](https://github.com/Azure/communication-ui-library-ios) | [GitHub](https://github.com/Azure/communication-ui-library-android) | [GitHub](https://github.com/Azure/communication-ui-library), [Storybook](https://azure.github.io/communication-ui-library/?path=/story/overview--page) | | Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html)| -| [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/)| [docs](/java/api/com.azure.android.communication.calling)| -|
You may be required to update to the v2.05 version of the Calling SDK within 12
For more information, see the following SDK overviews: - [Calling SDK Overview](../concepts/voice-video-calling/calling-sdk-features.md)
+- [Call Automation SDK Overview](../concepts/call-automation/call-automation.md)
- [Chat SDK Overview](../concepts/chat/sdk-features.md) - [SMS SDK Overview](../concepts/sms/sdk-features.md) - [Email SDK Overview](../concepts/email/sdk-features.md)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Call Recording supports multiple media outputs and content types to address your
### Audio
-> [!NOTE]
-> **Unmixed audio** is in **Private Preview**.
- | Channel Type | Content Format | Sampling Rate | Output | Description | | :-- | :- | :-- | :- | :- | | mixed | mp3 & wav | 16 kHz | single file, single channel | mixed audio of all participants |
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The maximum call duration is 30 hours, participants that reach the maximum call
The following table represents the set of supported browsers which are currently available. **We support the most recent three major versions of the browser (most recent three minor versions for Safari)** unless otherwise indicated.
-| Platform | Chrome | Safari | Edge (Chromium) |
-| | | | -- |
-| Android | ✔️ | ❌ | ❌ |
-| iOS | ❌ | ✔️ | ❌ |
-| macOS | ✔️ | ✔️ | ✔️ |
-| Windows | ✔️ | ❌ | ✔️ |
-| Ubuntu/Linux | ✔️ | ❌ | ❌ |
+| Platform | Chrome | Safari | Edge (Chromium) | Firefox |
+| | | | -- | - |
+| Android | ✔️ | ❌ | ❌ | ❌ |
+| iOS | ❌ | ✔️ | ❌ | ❌ |
+| macOS | ✔️ | ✔️ | ✔️ | ✔️ |
+| Windows | ✔️ | ❌ | ✔️ | ✔️ |
+| Ubuntu/Linux | ✔️ | ❌ | ❌ | ❌ |
* Outgoing Screen Sharing is not supported on iOS or Android.
+* Firefox support is in public preview.
* [An iOS app on Safari can't enumerate/select mic and speaker devices](../known-issues.md#enumerating-devices-isnt-possible-in-safari-when-the-application-runs-on-ios-or-ipados) (for example, Bluetooth); this is a limitation of the OS, and there's always only one device, OS controls default device selection. ## Android Calling SDK support
communication-services Media Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-streaming.md
Audio streams can be used in many ways, below are some examples of how developer
## Supported formats ### Mixed format
-Contains mixed audio of all participants on the call.
+Contains mixed audio of all participants on the call. Because the audio is mixed, the `participantRawID` is null.
### Unmixed Contains audio per participant per channel, with support for up to four channels for four dominant speakers. You will also get a participantRawID that you can use to determine the speaker.
communication-services Raise Hand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/raise-hand.md
Last updated 09/09/2022
+zone_pivot_groups: acs-web-android
#Customer intent: As a developer, I want to learn how to send and receive Raise Hand state using SDK.
During an active call, you may want to send or receive states from other users.
- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) [!INCLUDE [Raise Hand Client-side JavaScript](./includes/raise-hand/raise-hand-web.md)]+ ## Next steps - [Learn how to manage calls](./manage-calls.md)
communication-services Handle Email Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/handle-email-events.md
+
+ Title: Quickstart - Handle Email and delivery report events
+
+description: "In this quickstart, you'll learn how to handle Azure Communication Services events. See how to create, receive, and subscribe to Email delivery report and Email engagement tracking events."
++++ Last updated : 07/09/2022++++
+# Quickstart: Handle Email events
+
+Get started with Azure Communication Services by using Azure Event Grid to handle Communication Services Email events. After subscribing to Email events such as delivery reports and engagement reports, you generate and receive these events. Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A Communication Services resource. For detailed information, see [Create an Azure Communication Services resource](../create-communication-resource.md).
+- An Email resource with a provisioned domain. [Create an Email Resource](../email/create-email-communication-resource.md).
+
+## About Event Grid
+
+[Event Grid](../../../event-grid/overview.md) is a cloud-based eventing service. In this article, you'll learn how to subscribe to [communication service events](../../../event-grid/event-schema-communication-services.md), and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. In this article, we'll send the events to a web app that collects and displays the messages.
+
+## Set up the environment
+
+To set up the environment that we'll use to generate and receive events, take the steps in the following sections.
+
+### Register an Event Grid resource provider
+
+If you haven't previously used Event Grid in your Azure subscription, you might need to register your Event Grid resource provider. To register the provider, follow these steps:
+
+1. Go to the Azure portal.
+1. On the left menu, select **Subscriptions**.
+1. Select the subscription that you use for Event Grid.
+1. On the left menu, under **Settings**, select **Resource providers**.
+1. Find **Microsoft.EventGrid**.
+1. If your resource provider isn't registered, select **Register**.
+
+It might take a moment for the registration to finish. Select **Refresh** to update the status. When **Registered** appears under **Status**, you're ready to continue.
+
+### Deploy the Event Grid viewer
+
+For this quickstart, we'll use an Event Grid viewer to view events in near-real time. The viewer provides a real-time feed of incoming events and lets you inspect the payload of each one.
+
+To set up the viewer, follow the steps in [Azure Event Grid Viewer](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/).
+
+## Subscribe to Email events by using web hooks
+
+You can subscribe to specific events to provide Event Grid with information about where to send the events that you want to track.
+
+1. In the portal, go to the Communication Services resource that you created.
+
+1. Inside the Communication Services resource, on the left menu of the **Communication Services** page, select **Events**.
+
+1. Select **Add Event Subscription**.
+
+ :::image type="content" source="./media/handle-email-events/select-events.png" alt-text="Screenshot that shows the Events page of an Azure Communication Services resource. The Event Subscription button is called out.":::
+
+1. On the **Create Event Subscription** page, enter a **name** for the event subscription.
+
+1. Under **Event Types**, select the events that you'd like to subscribe to. For Email, you can choose `Email Delivery Report Received` and `Email Engagement Tracking Report Received`.
+
+1. If you're prompted to provide a **System Topic Name**, feel free to provide a unique string. This field has no impact on your experience and is used for internal telemetry purposes.
+
+ :::image type="content" source="./media/handle-email-events/select-events-create-eventsub.png" alt-text="Screenshot that shows the Create Event Subscription dialog. Under Event Types, Email Delivery Report Received and Email Engagement Tracking Report Received are selected.":::
+
+1. For **Endpoint type**, select **Web Hook**.
+
+ :::image type="content" source="./media/handle-email-events/select-events-create-linkwebhook.png" alt-text="Screenshot that shows a detail of the Create Event Subscription dialog. In the Endpoint Type list, Web Hook is selected.":::
+
+1. For **Endpoint**, select **Select an endpoint**, and then enter the URL of your web app.
+
+ In this case, we'll use the URL from the [Event Grid viewer](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) that we set up earlier in the quickstart. The URL for the sample has this format: `https://{{site-name}}.azurewebsites.net/api/updates`
+
+1. Select **Confirm Selection**.
+
+ :::image type="content" source="./media/handle-email-events/select-events-create-selectwebhook-epadd.png" alt-text="Screenshot that shows the Select Web Hook dialog. The Subscriber Endpoint box contains a URL, and a Confirm Selection button is visible.":::
+
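If you later replace the Event Grid viewer with your own endpoint, that web app must complete Event Grid's subscription validation handshake before it can receive events. Below is a minimal sketch using Python and Flask (an assumption; any web framework works), with the `/api/updates` route and the printed payload fields chosen for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/updates", methods=["POST"])
def handle_events():
    for event in request.get_json():
        event_type = event.get("eventType")
        if event_type == "Microsoft.EventGrid.SubscriptionValidationEvent":
            # Echo the validation code back to complete the handshake.
            return jsonify({"validationResponse": event["data"]["validationCode"]})
        if event_type == "Microsoft.Communication.EmailDeliveryReportReceived":
            print("Delivery report:", event["data"].get("status"))
        elif event_type == "Microsoft.Communication.EmailEngagementTrackingReportReceived":
            print("Engagement report:", event["data"].get("engagementType"))
    return "", 200
```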
+## View Email events
+
+To generate and receive Email events, take the steps in the following sections.
+
+### Trigger Email events
+
+To view event triggers, we need to generate some events. To trigger an event, [send email](../email/send-email.md) using the Email domain resource attached to the Communication Services resource.
+
+- `Email Delivery Report Received` events are generated when the email status reaches a terminal state: Delivered, Failed, FilteredSpam, or Quarantined.
+- `Email Engagement Tracking Report Received` events are generated when the email sent is either opened or a link within the email is clicked. To trigger an event, you need to turn on the `User Interaction Tracking` option on the Email domain resource.
+
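As a rough illustration, a send like the following sketch produces a delivery report event once the message reaches a terminal state. It uses the `azure-communication-email` Python package; the connection string variable, sender address, and recipient are placeholders, and the client API may vary by SDK version:

```python
import os

from azure.communication.email import EmailClient  # pip install azure-communication-email

client = EmailClient.from_connection_string(os.environ["COMMUNICATION_SERVICES_CONNECTION_STRING"])

message = {
    "senderAddress": "donotreply@<your-provisioned-domain>",  # placeholder sender
    "recipients": {"to": [{"address": "recipient@example.com"}]},  # placeholder recipient
    "content": {"subject": "Event Grid test", "plainText": "Hello from ACS!"},
}

# Once the message reaches a terminal state (Delivered, Failed, ...),
# an Email Delivery Report Received event is raised.
poller = client.begin_send(message)
print(poller.result())
```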
+Check out the full list of [events that Communication Services supports](../../../event-grid/event-schema-communication-services.md).
+
+### Receive Email events
+
+After you generate an event, you'll notice that `Email Delivery Report Received` and `Email Engagement Tracking Report Received` events are sent to your endpoint. These events show up in the [Event Grid viewer](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) that we set up at the beginning of this quickstart. Select the eye icon next to the event to see the entire payload. Events should look similar to the following data:
+++
+Learn more about the [event schemas and other eventing concepts](../../../event-grid/event-schema-communication-services.md).
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quickstart, you learned how to consume Email events. You can receive Email events by creating an Event Grid subscription.
+
+> [!div class="nextstepaction"]
+> [Send Email](../email/send-email.md)
+
+You might also want to:
+
+ - [Learn about event handling concepts](../../../event-grid/event-schema-communication-services.md)
+ - [Learn about Event Grid](../../../event-grid/overview.md)
communication-services Get Started Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-call-recording.md
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles:
+- Download our [Java](https://github.com/Azure-Samples/communication-services-java-quickstarts/tree/main/ServerRecording) and [.NET](https://github.com/Azure-Samples/communication-services-dotnet-quickstarts/tree/main/ServerRecording) call recording sample apps
- Learn more about [Call Recording](../../concepts/voice-video-calling/call-recording.md) - Learn more about [Call Automation](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/call-automation)-- Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Learn about [Calling SDK capabilities](./getting-started-with-calling.md)-- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)+
communication-services Media Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/media-streaming.md
Get started with using audio streams through Azure Communication Services Media
When ACS has received the URL for your WebSocket server, it will create a connection to it. Once ACS has successfully connected to your WebSocket server, it will send through the first data packet which contains metadata regarding the incoming media packets. ``` code
-/**
- * The first message upon WebSocket connection will be the metadata packet
- * which contains the subscriptionId and audio format
- */
-public class AudioMetadataSample {
- public string kind; // What kind of data this is, e.g. AudioMetadata, AudioData.
- public AudioMetadata audioMetadata;
-}
-
-public class AudioMetadata {
- public string subscriptionId // unique identifier for a subscription request
- public string encoding; // PCM only supported
- public int sampleRate; // 16000 default
- public int channels; // 1 default
- public int length; // 640 default
+{
+ "kind": <string> // What kind of data this is, e.g. AudioMetadata, AudioData.
+ "audioMetadata": {
+ "subscriptionId": <string>, // unique identifier for a subscription request
+ "encoding":<string>, // PCM only supported
+ "sampleRate": <int>, // 16000 default
+ "channels": <int>, // 1 default
+ "length": <int> // 640 default
+ }
} ```
public class AudioMetadata {
After sending through the metadata packet, ACS will start streaming audio media to your WebSocket server. Below is an example of what the media object your server will receive looks like. ``` code
-/**
- * The audio buffer object which is then serialized to JSON format
- */
-public class AudioDataSample {
- public string kind; // What kind of data this is, e.g. AudioMetadata, AudioData.
- public AudioData audioData;
+{
+ "kind": <string>, // What kind of data this is, e.g. AudioMetadata, AudioData.
+ "audioData":{
+ "data": <string>, // Base64 Encoded audio buffer data
+ "timestamp": <string>, // In ISO 8601 format (yyyy-mm-ddThh:mm:ssZ)
+ "participantRawID": <string>,
+ "silent": <boolean> // Indicates if the received audio buffer contains only silence.
+ }
}-
-public class AudioData {
- public string data; // Base64 Encoded audio buffer data
- public string timestamp; // In ISO 8601 format (yyyy-mm-ddThh:mm:ssZ)
- public string participantRawID;
- public boolean silent; // Indicates if the received audio buffer contains only silence.
-}
``` Example of audio data being streamed
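To make the packet flow concrete, here is a minimal sketch of a WebSocket server that handles both packet kinds shown above. It assumes the third-party Python `websockets` package; the host, port, and processing logic are placeholders:

```python
import asyncio
import base64
import json

import websockets  # pip install websockets

async def handle_media(websocket, path=None):  # older package versions also pass a path
    async for message in websocket:
        packet = json.loads(message)
        if packet["kind"] == "AudioMetadata":
            meta = packet["audioMetadata"]
            print(f"Subscription {meta['subscriptionId']}: {meta['encoding']}, "
                  f"{meta['sampleRate']} Hz, {meta['channels']} channel(s)")
        elif packet["kind"] == "AudioData":
            audio = packet["audioData"]
            pcm = base64.b64decode(audio["data"])  # raw PCM bytes
            # participantRawID is null for mixed audio and set per speaker for unmixed.
            if not audio["silent"]:
                print(f"{len(pcm)} bytes from {audio['participantRawID']}")

async def main():
    async with websockets.serve(handle_media, "0.0.0.0", 8080):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())
```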
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
# Virtual appointments
-This tutorial describes concepts for virtual visit applications. After completing this tutorial and the associated [Sample Builder](https://aka.ms/acs-sample-builder), you will understand common use cases that a virtual appointments application delivers, the Microsoft technologies that can help you build those use cases, and will have built a sample application integrating Microsoft 365 and Azure that you can use to demo and explore further.
+This tutorial describes concepts for virtual appointment applications. After completing this tutorial and the associated [Sample Builder](https://aka.ms/acs-sample-builder), you will understand common use cases that a virtual appointments application delivers, the Microsoft technologies that can help you build those use cases, and will have built a sample application integrating Microsoft 365 and Azure that you can use to demo and explore further.
Virtual appointments are a communication pattern where a **consumer** and a **business** assemble for a scheduled appointment. The **organizational boundary** between consumer and business, and **scheduled** nature of the interaction, are key attributes of most virtual appointments. Many industries operate virtual appointments: meetings with a healthcare provider, a loan officer, or a product support technician.
-No matter the industry, there are at least three personas involved in a virtual visit and certain tasks they accomplish:
+No matter the industry, there are at least three personas involved in a virtual appointment and certain tasks they accomplish:
- **Office Manager.** The office manager configures the business' availability and booking rules for providers and consumers.-- **Provider.** The provider gets on the call with the consumer. They must be able to view upcoming virtual appointments and join the virtual visit and engage in communication.-- **Consumer**. The consumer who schedules and motivates the visit. They must schedule a visit, enjoy reminders of the visit, typically through SMS or email, and join the virtual visit and engage in communication.
+- **Provider.** The provider gets on the call with the consumer. They must be able to view upcoming virtual appointments and join the virtual appointment and engage in communication.
+- **Consumer**. The consumer who schedules and motivates the appointment. They must schedule an appointment, enjoy reminders of the appointment, typically through SMS or email, and join the virtual appointment and engage in communication.
Azure and Teams are interoperable. This interoperability gives organizations choice in how they deliver virtual appointments using Microsoft's cloud. Three examples include: - **Microsoft 365** provides a zero-code suite for virtual appointments using Microsoft [Teams](https://www.microsoft.com/microsoft-teams/group-chat-software/) and [Bookings](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app). This is the easiest option but customization is limited. [Check out this video for an introduction.](https://www.youtube.com/watch?v=zqfGrwW2lEw)-- **Microsoft 365 + Azure hybrid.** Combine Microsoft 365 Teams and Bookings with a custom Azure application for the consumer experience. Organizations take advantage of Microsoft 365's employee familiarity but customize and embed the consumer visit experience in their own application.
+- **Microsoft 365 + Azure hybrid.** Combine Microsoft 365 Teams and Bookings with a custom Azure application for the consumer experience. Organizations take advantage of Microsoft 365's employee familiarity but customize and embed the consumer appointment experience in their own application.
- **Azure custom.** Build the entire solution on Azure primitives: the business experience, the consumer experience, and scheduling systems.
-![Diagram of virtual visit implementation options](./media/virtual-visits/virtual-visit-options.svg)
+![Diagram of virtual appointment implementation options](./media/virtual-visits/virtual-visit-options.svg)
These three **implementation options** are columns in the table below, while each row provides a **use case** and the **enabling technologies**. |*Persona* | **Use Case** | **Microsoft 365** | **Microsoft 365 + Azure hybrid** | **Azure Custom** | |--||--||| | *Manager* | Configure Business Availability | Bookings | Bookings | Custom |
-| *Provider* | Managing upcoming visits | Outlook & Teams | Outlook & Teams | Custom |
-| *Provider* | Join the visit | Teams | Teams | ACS Calling & Chat |
-| *Consumer* | Schedule a visit | Bookings | Bookings | ACS Rooms |
-| *Consumer*| Be reminded of a visit | Bookings | Bookings | ACS SMS |
-| *Consumer*| Join the visit | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat |
+| *Provider* | Managing upcoming appointments | Outlook & Teams | Outlook & Teams | Custom |
+| *Provider* | Join the appointment | Teams | Teams | ACS Calling & Chat |
+| *Consumer* | Schedule an appointment | Bookings | Bookings | ACS Rooms |
+| *Consumer* | Be reminded of an appointment | Bookings | Bookings | ACS SMS |
+| *Consumer* | Join the appointment | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat |
There are other ways to customize and combine Microsoft tools to deliver a virtual appointments experience: - **Replace Bookings with a custom scheduling experience with Graph.** You can build your own consumer-facing scheduling experience that controls Microsoft 365 meetings with Graph APIs.-- **Replace Teams' provider experience with Azure.** You can still use Microsoft 365 and Bookings to manage meetings but have the business user launch a custom Azure application to join the Teams meeting. This might be useful where you want to split or customize virtual visit interactions from day-to-day employee Teams activity.
+- **Replace Teams' provider experience with Azure.** You can still use Microsoft 365 and Bookings to manage meetings but have the business user launch a custom Azure application to join the Teams meeting. This might be useful where you want to split or customize virtual appointment interactions from day-to-day employee Teams activity.
## Extend Microsoft 365 with Azure
-The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid configurations are popular because they combine employee familiarity of Microsoft 365 with the ability to customize the consumer experience. They're also a good launching point to understanding more complex and customized architectures. The diagram below shows user steps for a virtual visit:
+The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid configurations are popular because they combine employee familiarity of Microsoft 365 with the ability to customize the consumer experience. They're also a good launching point to understanding more complex and customized architectures. The diagram below shows user steps for a virtual appointment:
![High-level architecture of a hybrid virtual appointments solution](./media/virtual-visits/virtual-visit-arch.svg)
-1. Consumer schedules the visit using Microsoft 365 Bookings.
-2. Consumer gets a visit reminder through SMS and Email.
-3. Provider joins the visit using Microsoft Teams.
+1. Consumer schedules the appointment using Microsoft 365 Bookings.
+2. Consumer gets an appointment reminder through SMS and Email.
+3. Provider joins the appointment using Microsoft Teams.
4. Consumer uses a link from the Bookings reminders to launch the Contoso consumer app and join the underlying Teams meeting. 5. The users communicate with each other using voice, video, and text chat in a meeting.
-## Building a virtual visit sample
+## Building a virtual appointment sample
In this section we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application will be a desktop and mobile-friendly browser experience, with code that you can use to explore and productionize. ### Step 1 - Configure bookings
Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` all
Enter the application url followed by "/visit" in the "Deployed App URL" field in https://outlook.office.com/bookings/businessinformation. ## Going to production
-The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual visit: consumer scheduling via Bookings, consumer joins via custom app, and the provider joins via Teams. However, there are several things to consider as you take this scenario to production.
+The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual appointment: consumer scheduling via Bookings, consumer joins via custom app, and the provider joins via Teams. However, there are several things to consider as you take this scenario to production.
### Launching patterns
-Consumers want to jump directly to the virtual visit from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that will be used in reminders. If your prefix is `https://<YOUR URL>/VISIT`, Bookings will point users to `https://<YOUR URL>/VISIT?MEETINGURL=<MEETING URL>.`
+Consumers want to jump directly to the virtual appointment from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that will be used in reminders. If your prefix is `https://<YOUR URL>/VISIT`, Bookings will point users to `https://<YOUR URL>/VISIT?MEETINGURL=<MEETING URL>.`
### Integrate into your existing app The app service generated by the Sample Builder is a stand-alone artifact, designed for desktop and mobile browsers. However you may have a website or mobile application already and need to migrate these experiences to that existing codebase. The code generated by the Sample Builder should help, but you can also use:
confidential-ledger Verify Write Transaction Receipts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/verify-write-transaction-receipts.md
The third step is to verify that the cryptographic signature produced over the r
1. Decode the base64 string `signature` into an array of bytes. 2. Extract the ECDSA public key from the signing node certificate `cert`.
-3. Verify that the signature over the root of the Merkle Tree (computed using the instructions in the previous subsection) is authentic using the extracted public key from the previous step. This step effectively corresponds to a standard [digital signature](https://wikipedia.org/wiki/Digital_signature) verification process using ECDSA. There are many libraries in the most popular programming languages that allow verifying an ECDSA signature using a public key certificate over some data (for example, [ecdsa](https://pypi.org/project/ecdsa/) for Python).
+3. Verify that the signature over the root of the Merkle Tree (computed using the instructions in the previous subsection) is authentic using the extracted public key from the previous step. This step effectively corresponds to a standard [digital signature](https://wikipedia.org/wiki/Digital_signature) verification process using ECDSA. There are many libraries in the most popular programming languages that allow verifying an ECDSA signature using a public key certificate over some data (for example, the [cryptography library](https://cryptography.io/en/latest/) for Python).
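For instance, a minimal Python sketch of this step with the cryptography library might look like the following. It assumes the signing node certificate is PEM-encoded and that the computed Merkle Tree root is a raw SHA-256 digest, so the signature is verified over the pre-hashed value; adjust these details to match your receipt's actual encoding.

```python
import base64

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils
from cryptography.x509 import load_pem_x509_certificate

def verify_root_signature(cert_pem: bytes, signature_b64: str, root_digest: bytes) -> None:
    # Extract the ECDSA public key from the signing node certificate.
    public_key = load_pem_x509_certificate(cert_pem).public_key()
    # Decode the base64 signature into raw bytes.
    signature = base64.b64decode(signature_b64)
    # Verify the signature over the (already hashed) Merkle Tree root.
    # Raises cryptography.exceptions.InvalidSignature if verification fails.
    public_key.verify(
        signature,
        root_digest,
        ec.ECDSA(utils.Prehashed(hashes.SHA256())),
    )
```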
### Verify signing node certificate endorsement
container-apps Application Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/application-lifecycle-management.md
Previously updated : 11/02/2021 Last updated : 10/25/2022
The Azure Container Apps application lifecycle revolves around [revisions](revis
When you deploy a container app, the first revision is automatically created. [More revisions are created](revisions.md) as [containers](containers.md) change, or any adjustments are made to the `template` section of the configuration.
-A container app flows through three phases: deployment, update, and deactivation.
+A container app flows through four phases: deployment, update, deactivation, and shutdown.
## Deployment
As a container app is deployed, the first revision is automatically created.
## Update
-As a container app is updated with a [revision scope-change](revisions.md#revision-scope-changes), a new revision is created. You can choose whether to [automatically deactivate old revisions, or allow them to remain available](revisions.md).
+As a container app is updated with a [revision scope-change](revisions.md#revision-scope-changes), a new revision is created. You can [choose](revisions.md#revision-modes) whether to automatically deactivate old revisions (single revision mode), or allow them to remain available (multiple revision mode).
:::image type="content" source="media/application-lifecycle-management/azure-container-apps-lifecycle-update.png" alt-text="Azure Container Apps: Update phase":::
+### Zero downtime deployment
+
+In single revision mode, Container Apps automatically ensures your app does not experience downtime when creating a new revision. The existing active revision is not deactivated until the new revision is ready. If ingress is enabled, the existing revision will continue to receive 100% of the traffic until the new revision is ready.
+
+> [!NOTE]
+> A new revision is considered ready when one of its replicas starts and becomes ready. A replica is ready when all of its containers start and pass their [startup and readiness probes](./health-probes.md).
+
+In multiple revision mode, you control when revisions are activated or deactivated and which revisions receive ingress traffic. If a [traffic splitting rule](./revisions-manage.md#traffic-splitting) is configured with `latestRevision` set to `true`, traffic does not switch to the latest revision until it is ready.
+ ## Deactivate Once a revision is no longer needed, you can deactivate a revision with the option to reactivate later. During deactivation, containers in the revision are [shut down](#shutdown).
container-apps Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-pipelines.md
+
+ Title: Publish revisions with Azure Pipelines in Azure Container Apps
+description: Learn to automatically create new revisions in Azure Container Apps using an Azure DevOps pipeline
++++ Last updated : 11/09/2022+++
+# Deploy to Azure Container Apps from Azure Pipelines (preview)
+
+Azure Container Apps allows you to use Azure Pipelines to publish [revisions](revisions.md) to your container app. As commits are pushed to your [Azure DevOps repository](/azure/devops/repos/), a pipeline is triggered which updates the container image in the container registry. Azure Container Apps creates a new revision based on the updated container image.
+
+The pipeline is triggered by commits to a specific branch in your repository. When creating the pipeline, you decide which branch is the trigger.
+
+## Container Apps Azure Pipelines task
+
+To build and deploy your container app, add the [`AzureContainerAppsRC`](https://marketplace.visualstudio.com/items?itemName=microsoft-oryx.AzureContainerAppsRC) (preview) Azure Pipelines task to your pipeline.
+
+The task supports the following scenarios:
+
+* Build from a Dockerfile and deploy to Container Apps
+* Build from source code without a Dockerfile and deploy to Container Apps. Supported languages include .NET, Node.js, PHP, Python, and Ruby
+* Deploy an existing container image to Container Apps
+
+### Usage examples
+
+Here are some common scenarios for using the task. For more information, see the [task's documentation](https://github.com/Azure/container-apps-deploy-pipelines-task/blob/main/README.md).
+
+#### Build and deploy to Container Apps
+
+The following snippet shows how to build a container image from source code and deploy it to Container Apps.
+
+```yaml
+steps:
+- task: AzureContainerAppsRC@0
+ inputs:
+ appSourcePath: '$(Build.SourcesDirectory)/src'
+ azureSubscription: 'my-subscription-service-connection'
+ acrName: 'myregistry'
+ containerAppName: 'my-container-app'
+ resourceGroup: 'my-container-app-rg'
+```
+
+The task uses the Dockerfile in `appSourcePath` to build the container image. If no Dockerfile is found, the task attempts to build the container image from source code in `appSourcePath`.
+
+#### Deploy an existing container image to Container Apps
+
+The following snippet shows how to deploy an existing container image to Container Apps.
+
+```yaml
+steps:
+ - task: AzureContainerAppsRC@0
+ inputs:
+ azureSubscription: 'my-subscription-service-connection'
+ acrName: 'myregistry'
+ containerAppName: 'my-container-app'
+ resourceGroup: 'my-container-app-rg'
+ imageToDeploy: 'myregistry.azurecr.io/my-container-app:$(Build.BuildId)'
+```
+
+> [!IMPORTANT]
+> If you're building a container image in a separate step, make sure you use a unique tag such as the build ID instead of a stable tag like `latest`. For more information, see [Image tag best practices](../container-registry/container-registry-image-tag-version.md).
+
+### Authenticate with Azure Container Registry
+
+The Azure Container Apps task needs to authenticate with your Azure Container Registry to push the container image. The container app also needs to authenticate with your Azure Container Registry to pull the container image.
+
+To push images, the task automatically authenticates with the container registry specified in `acrName` using the service connection provided in `azureSubscription`.
+
+To pull images, Azure Container Apps uses either managed identity (recommended) or admin credentials to authenticate with the Azure Container Registry. To use managed identity, the target container app for the task must be [configured to use managed identity](managed-identity-image-pull.md). To authenticate with the registry's admin credentials, set the task's `acrUsername` and `acrPassword` inputs.
+
+## Configuration
+
+Take the following steps to configure an Azure DevOps pipeline to deploy to Azure Container Apps.
+
+> [!div class="checklist"]
+> * Create an Azure DevOps repository for your app
+> * Create a container app with managed identity enabled
+> * Assign the `AcrPull` role for the Azure Container Registry to the container app's managed identity
+> * Install the Azure Container Apps task from the Azure DevOps Marketplace
+> * Configure an Azure DevOps service connection for your Azure subscription
+> * Create an Azure DevOps pipeline
+
+### Prerequisites
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| Azure DevOps project | Go to [Azure DevOps](https://azure.microsoft.com/services/devops/) and select *Start free*. Then create a new project. |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+
+### Create an Azure DevOps repository and clone the source code
+
+Before creating a pipeline, the source code for your app must be in a repository.
+
+1. Log in to [Azure DevOps](https://dev.azure.com/) and navigate to your project.
+
+1. Open the **Repos** page.
+
+1. In the top navigation bar, select the repositories dropdown and select **Import repository**.
+
+1. Enter the following information and select **Import**:
+
+ | Field | Value |
+ |--|--|
+ | **Repository type** | Git |
+ | **Clone URL** | `https://github.com/Azure-Samples/containerapps-albumapi-csharp.git` |
+ | **Name** | `my-container-app` |
+
+1. Select **Clone** to view the repository URL and copy it.
+
+1. Open a terminal and run the following command to clone the repository:
+
+ ```bash
+ git clone <REPOSITORY_URL> my-container-app
+ ```
+
+ Replace `<REPOSITORY_URL>` with the URL you copied.
+
+### Create a container app and configure managed identity
+
+Create your container app using the `az containerapp up` command with the following steps. This command creates Azure resources, builds the container image, stores the image in a registry, and deploys to a container app.
+
+After your app is created, you can add a managed identity to your app and assign the identity the `AcrPull` role to allow the identity to pull images from the registry.
++
+### Install the Azure Container Apps task
+
+The Azure Container Apps Azure Pipelines task is currently in preview. Before you use the task, you must install it from the Azure DevOps Marketplace.
+
+1. Open the [Azure Container Apps task](https://marketplace.visualstudio.com/items?itemName=microsoft-oryx.AzureContainerAppsRC) in the Azure DevOps Marketplace.
+
+1. Select **Get it free**.
+
+1. Select your Azure DevOps organization and select **Install**.
+
+### Create an Azure DevOps service connection
+
+To deploy to Azure Container Apps, you need to create an Azure DevOps service connection for your Azure subscription.
+
+1. In Azure DevOps, select **Project settings**.
+
+1. Select **Service connections**.
+
+1. Select **New service connection**.
+
+1. Select **Azure Resource Manager**.
+
+1. Select **Service principal (automatic)** and select **Next**.
+
+1. Enter the following information and select **Save**:
+
+ | Field | Value |
+ |--|--|
+ | **Subscription** | Select your Azure subscription. |
+ | **Resource group** | Select the resource group (`my-container-app-rg`) that contains your container app and container registry. |
+ | **Service connection name** | `my-subscription-service-connection` |
+
+To learn more about service connections, see [Connect to Microsoft Azure](/azure/devops/pipelines/library/connect-to-azure).
+
+### Create an Azure DevOps YAML pipeline
+
+1. In your Azure DevOps project, select **Pipelines**.
+
+1. Select **New pipeline**.
+
+1. Select **Azure Repos Git**.
+
+1. Select the repo that contains your source code (`my-container-app`).
+
+1. Select **Starter pipeline**.
+
+1. In the editor, replace the contents of the file with the following YAML:
+
+ ```yaml
+ trigger:
+ branches:
+ include:
+ - main
+
+ pool:
+ vmImage: ubuntu-latest
+
+ steps:
+ - task: AzureContainerAppsRC@0
+ inputs:
+ appSourcePath: '$(Build.SourcesDirectory)/src'
+ azureSubscription: '<AZURE_SUBSCRIPTION_SERVICE_CONNECTION>'
+ acrName: '<ACR_NAME>'
+ containerAppName: 'my-container-app'
+ resourceGroup: 'my-container-app-rg'
+ ```
+
+ Replace `<AZURE_SUBSCRIPTION_SERVICE_CONNECTION>` with the name of the Azure DevOps service connection (`my-subscription-service-connection`) you created in the previous step and `<ACR_NAME>` with the name of your Azure Container Registry.
+
+1. Select **Save and run**.
+
+An Azure Pipelines run starts to build and deploy your container app. To check its progress, navigate to *Pipelines* and select the run. During the first pipeline run, you may be prompted to authorize the pipeline to use your service connection.
+
+To deploy a new revision of your app, push a new commit to the *main* branch.
container-apps Github Actions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions-cli.md
Title: Publish revisions with GitHub Actions in Azure Container Apps
-description: Learn to automatically create new revisions using GitHub Actions in Azure Container Apps
+ Title: Generate GitHub Actions workflow with Azure CLI in Azure Container Apps
+description: Learn to automatically create GitHub Actions workflow in Azure Container Apps
Previously updated : 12/30/2021 Last updated : 11/09/2022
-# Publish revisions with GitHub Actions in Azure Container Apps
+# Set up GitHub Actions with Azure CLI in Azure Container Apps
-Azure Container Apps allows you to use GitHub Actions to publish [revisions](revisions.md) to your container app. As commits are pushed to your GitHub repository, a GitHub Actions is triggered which updates the [container](containers.md) image in the container registry. Once the container is updated in the registry, Azure Container Apps creates a new revision based on the updated container image.
+Azure Container Apps allows you to use GitHub Actions to publish [revisions](revisions.md) to your container app. As commits are pushed to your GitHub repository, a GitHub Actions workflow is triggered which updates the [container](containers.md) image in the container registry. Once the container is updated in the registry, Azure Container Apps creates a new revision based on the updated container image.
:::image type="content" source="media/github-actions/azure-container-apps-github-actions.png" alt-text="Changes to a GitHub repo trigger an action to create a new revision.":::
-The GitHub Actions is triggered by commits to a specific branch in your repository. When creating the integration link, you decide which branch triggers the action.
+The GitHub Actions workflow is triggered by commits to a specific branch in your repository. When creating the workflow, you decide which branch triggers the action.
+
+This article shows you how to generate a starter GitHub Actions workflow with Azure CLI. To create your own workflow that you can fully customize, see [Deploy to Azure Container Apps with GitHub Actions](github-actions.md).
## Authentication
az ad sp create-for-rbac `
As you interact with this example, replace the placeholders surrounded by `<>` with your values.
-The return values from this command includes the service principal's `appId`, `password` and `tenant`. You need to pass these values to the `az containerapp github-action add` command.
+The return values from this command include the service principal's `appId`, `password`, and `tenant`. You need to pass these values to the `az containerapp github-action add` command.
The following example shows you how to add an integration while using a personal access token.
container-apps Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions.md
+
+ Title: Publish revisions with GitHub Actions in Azure Container Apps
+description: Learn to automatically create new revisions in Azure Container Apps using a GitHub Actions workflow
++++ Last updated : 11/09/2022+++
+# Deploy to Azure Container Apps with GitHub Actions (preview)
+
+Azure Container Apps allows you to use GitHub Actions to publish [revisions](revisions.md) to your container app. As commits are pushed to your GitHub repository, a workflow is triggered which updates the container image in the container registry. Azure Container Apps creates a new revision based on the updated container image.
++
+The GitHub Actions workflow is triggered by commits to a specific branch in your repository. When creating the workflow, you decide which branch triggers the workflow.
+
+This article shows you how to create a fully customizable workflow. To generate a starter GitHub Actions workflow with Azure CLI, see [Generate GitHub Actions workflow with Azure CLI](github-actions-cli.md).
+
+## Azure Container Apps GitHub action
+
+To build and deploy your container app, you add the [`azure/container-apps-deploy-action`](https://github.com/marketplace/actions/azure-container-apps-build-and-deploy) action to your GitHub Actions workflow.
+
+The action supports the following scenarios:
+
+* Build from a Dockerfile and deploy to Container Apps
+* Build from source code without a Dockerfile and deploy to Container Apps. Supported languages include .NET, Node.js, PHP, Python, and Ruby
+* Deploy an existing container image to Container Apps
+
+### Usage examples
+
+Here are some common scenarios for using the action. For more information, see the [action's GitHub Marketplace page](https://github.com/marketplace/actions/azure-container-apps-build-and-deploy).
+
+#### Build and deploy to Container Apps
+
+The following snippet shows how to build a container image from source code and deploy it to Container Apps.
+
+```yaml
+steps:
+
+ - name: Log in to Azure
+ uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Build and deploy Container App
+ uses: azure/container-apps-deploy-action@v0
+ with:
+ appSourcePath: ${{ github.workspace }}/src
+ acrName: myregistry
+ containerAppName: my-container-app
+ resourceGroup: my-rg
+```
+
+The action uses the Dockerfile in `appSourcePath` to build the container image. If no Dockerfile is found, the action attempts to build the container image from source code in `appSourcePath`.
+
+#### Deploy an existing container image to Container Apps
+
+The following snippet shows how to deploy an existing container image to Container Apps.
+
+```yaml
+steps:
+
+ - name: Log in to Azure
+ uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Build and deploy Container App
+ uses: azure/container-apps-deploy-action@v0
+ with:
+ acrName: myregistry
+ containerAppName: my-container-app
+ resourceGroup: my-rg
+ imageToDeploy: myregistry.azurecr.io/app:${{ github.sha }}
+```
+
+> [!IMPORTANT]
+> If you're building a container image in a separate step, make sure you use a unique tag such as the commit SHA instead of a stable tag like `latest`. For more information, see [Image tag best practices](../container-registry/container-registry-image-tag-version.md).
+
+### Authenticate with Azure Container Registry
+
+The Azure Container Apps action needs to authenticate with your Azure Container Registry to push the container image. The container app also needs to authenticate with your Azure Container Registry to pull the container image.
+
+To push images, the action automatically authenticates with the container registry specified in `acrName` using the credentials provided to the `azure/login` action.
+
+To pull images, Azure Container Apps uses either managed identity (recommended) or admin credentials to authenticate with the Azure Container Registry. To use managed identity, the container app the action is deploying must be [configured to use managed identity](managed-identity-image-pull.md). To authenticate with the registry's admin credentials, set the action's `acrUsername` and `acrPassword` inputs.
+
+## Configuration
+
+Take the following steps to configure a GitHub Actions workflow to deploy to Azure Container Apps.
+
+> [!div class="checklist"]
+> * Create a GitHub repository for your app
+> * Create a container app with managed identity enabled
+> * Assign the `AcrPull` role for the Azure Container Registry to the container app's managed identity
+> * Configure secrets in your GitHub repository
+> * Create a GitHub Actions workflow
+
+### Prerequisites
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Sign up for [free](https://github.com/join). |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+
+### Create a GitHub repository and clone source code
+
+Before creating a workflow, the source code for your app must be in a GitHub repository.
+
+1. Log in to Azure with the Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+
+1. Next, install the latest Azure Container Apps extension for the CLI.
+
+ ```azurecli
+ az extension add --name containerapp --upgrade
+ ```
+
+1. If you do not have your own GitHub repository, create one from a sample.
+ 1. Navigate to the following location to create a new repository:
+ - [https://github.com/Azure-Samples/containerapps-albumapi-csharp/generate](https://github.com/login?return_to=%2FAzure-Samples%2Fcontainerapps-albumapi-csharp%2Fgenerate)
+ 1. Name your repository `my-container-app`.
+
+1. Clone the repository to your local machine.
+
+ ```bash
+ git clone https://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/my-container-app.git
+ ```
+
+### Create a container app with managed identity enabled
+
+Create your container app using the `az containerapp up` command in the following steps. This command will create Azure resources, build the container image, store the image in a registry, and deploy to a container app.
+
+After you create your app, you can add a managed identity to the app and assign the identity the `AcrPull` role to allow the identity to pull images from the registry.
++
+### Configure secrets in your GitHub repository
+
+The GitHub workflow requires a secret named `AZURE_CREDENTIALS` to authenticate with Azure. The secret contains the credentials for a service principal with the *Contributor* role on the resource group containing the container app and container registry.
+
+1. Create a service principal with the *Contributor* role on the resource group that contains the container app and container registry.
+
+ ```azurecli
+ az ad sp create-for-rbac \
+ --name my-container-app \
+ --role contributor \
+ --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-container-app-rg \
+ --sdk-auth \
+ --output json
+ ```
+
+ Replace `<SUBSCRIPTION_ID>` with the ID of your Azure subscription. If your container registry is in a different resource group, specify both resource groups in the `--scopes` parameter.
+
+1. Copy the JSON output from the command.
+
+1. In the GitHub repository, navigate to *Settings* > *Secrets* > *Actions* and select **New repository secret**.
+
+1. Enter `AZURE_CREDENTIALS` as the name and paste the contents of the JSON output as the value.
+
+1. Select **Add secret**.
+
+### Create a GitHub Actions workflow
+
+1. In the GitHub repository, navigate to *Actions* and select **New workflow**.
+
+1. Select **Set up a workflow yourself**.
+
+1. Paste the following YAML into the editor.
+
+ ```yaml
+ name: Azure Container Apps Deploy
+
+ on:
+ push:
+ branches:
+ - main
+
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v3
+
+ - name: Log in to Azure
+ uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Build and deploy Container App
+ uses: azure/container-apps-deploy-action@v0
+ with:
+ appSourcePath: ${{ github.workspace }}/src
+ acrName: <ACR_NAME>
+ containerAppName: my-container-app
+ resourceGroup: my-container-app-rg
+ ```
+
+ Replace `<ACR_NAME>` with the name of your Azure Container Registry. Confirm that the branch name under `branches` and values for `appSourcePath`, `containerAppName`, and `resourceGroup` match the values for your repository and Azure resources.
+
+1. Commit the changes to the *main* branch.
+
+A GitHub Actions workflow run should start to build and deploy your container app. To check its progress, navigate to *Actions*.
+
+To deploy a new revision of your app, push a new commit to the *main* branch.
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
Previously updated : 03/30/2022 Last updated : 10/28/2022
The optional `failureThreshold` setting defines the number of attempts Container
## Default configuration
-Container Apps offers default probe settings if no probes are defined. If your app takes an extended amount of time to start, which is very common in Java, you often need to customize the probes so your container won't crash.
+If ingress is enabled, the following default probes are automatically added to the main app container for each probe type that isn't already defined.
+
+| Probe type | Default values |
+| -- | -- |
+| Startup | Protocol: TCP<br>Port: ingress target port<br>Timeout: 1 second<br>Period: 1 second<br>Initial delay: 1 second<br>Success threshold: 1<br>Failure threshold: `timeoutSeconds` |
+| Readiness | Protocol: TCP<br>Port: ingress target port<br>Timeout: 5 seconds<br>Period: 5 seconds<br>Initial delay: 3 seconds<br>Success threshold: 1<br>Failure threshold: `timeoutSeconds / 5` |
+| Liveness | Protocol: TCP<br>Port: ingress target port |
+
+If your app takes an extended amount of time to start, which is very common in Java, you often need to customize the probes so your container won't crash.
The following example demonstrates how to configure the liveness and readiness probes in order to extend the startup times.
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
az containerapp revision set-mode `
By assigning percentage values, you can decide how to balance traffic among different revisions. Traffic splitting rules are assigned by setting weights to different revisions.
+To create a traffic rule that always routes traffic to the latest revision, set its `latestRevision` property to `true` and don't set `revisionName`.
+ The following example shows how to split traffic between three revisions. ```json
container-apps Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md
The revision mode controls whether only a single revision or multiple revisions
### Single revision mode
-By default, a container app is in *single revision mode*. In this mode, only one revision is active at a time. When a new revision is created, the latest revision replaces the active revision.
+By default, a container app is in *single revision mode*. In this mode, when a new revision is created, the latest revision replaces the active revision. For more information, see [Zero downtime deployment](./application-lifecycle-management.md#zero-downtime-deployment).
### Multiple revision mode
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Last updated 06/17/2022
This article provides background about virtual network scenarios, limitations, and resources. For deployment examples using the Azure CLI, see [Deploy container instances into an Azure virtual network](container-instances-vnet.md). > [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For more details on which regions have virtual network capabilities, see [Regions and resource availability](container-instances-region-availability.md).
+> Container group deployment to a virtual network is generally available for Linux containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-region-availability.md).
## Scenarios
Container groups deployed into an Azure virtual network enable scenarios like:
## Other limitations
+* Currently, only Linux containers are supported in a container group deployed to a virtual network.
* To deploy container groups to a subnet, the subnet can't contain other resource types. Remove all existing resources from an existing subnet prior to deploying container groups to it, or create a new subnet. * To deploy container groups to a subnet, the subnet and the container group must be on the same Azure subscription. * You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network.
cosmos-db Access Secrets From Keyvault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-secrets-from-keyvault.md
Last updated 11/07/2022
If you're using Azure Cosmos DB as your database, you connect to databases, containers, and items by using an SDK, the API endpoint, and either the primary or secondary key.
-It's not a good practice to store the endpoint URI and sensitive read-write keys directly within application code or a configuration file. Ideally, this data is read from environment variables within the host. In Azure App Service, [app settings](/azure/app-service/configure-common#configure-app-settings) allow you to inject runtime credentials for your Azure Cosmos DB account without the need for developers to store these credentials in an insecure clear text manner.
+It's not a good practice to store the endpoint URI and sensitive read-write keys directly within application code or a configuration file. Ideally, this data is read from environment variables within the host. In Azure App Service, [app settings](../app-service/configure-common.md#configure-app-settings) allow you to inject runtime credentials for your Azure Cosmos DB account without the need for developers to store these credentials in an insecure clear text manner.
Azure Key Vault iterates on this best practice further by allowing you to store these credentials securely while giving services like Azure App Service managed access to the credentials. Azure App Service will securely read your credentials from Azure Key Vault and inject those credentials into your running application.
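Once injected, the application reads the credentials like ordinary environment variables. Here's a minimal Python sketch, assuming app settings named `COSMOS_ENDPOINT` and `COSMOS_KEY` (placeholder names) and the `azure-cosmos` package:

```python
import os

from azure.cosmos import CosmosClient  # pip install azure-cosmos

# The endpoint and key are injected as app settings (for example, as Key Vault
# references) rather than hard-coded in source or configuration files.
client = CosmosClient(
    url=os.environ["COSMOS_ENDPOINT"],
    credential=os.environ["COSMOS_KEY"],
)
```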
Finally, inject the secrets stored in your key vault as app settings within the
## Next steps - To configure a firewall for Azure Cosmos DB, see [firewall support](how-to-configure-firewall.md) article.-- To configure virtual network service endpoint, see [secure access by using VNet service endpoint](how-to-configure-vnet-service-endpoint.md) article.
+- To configure virtual network service endpoint, see [secure access by using VNet service endpoint](how-to-configure-vnet-service-endpoint.md) article.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Because a system-assigned managed identity can only be retrieved after the creat
}, // ... "properties": {
- "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>"
+ "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>",
"keyVaultKeyUri": "<key-vault-key-uri>" // ... }
When you create a new Azure Cosmos DB account through an Azure Resource Manager
// ... "properties": { "backupPolicy": { "type": "Continuous" },
- "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>"
+ "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>",
"keyVaultKeyUri": "<key-vault-key-uri>" // ... }
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
Capabilities are features that can be added or removed to your API for MongoDB a
| `EnableMongoRoleBasedAccessControl` | Enable support for creating Users/Roles for native MongoDB role-based access control | No | | `EnableMongoRetryableWrites` | Enables support for retryable writes on the account | Yes | | `EnableMongo16MBDocumentSupport` | Enables support for inserting documents up to 16 MB in size | No |
-| `EnableUniqueCompoundNestedDocs` | Enables support for compound and unique indexes on nested fields, as long as the nested field is not an array. | No |
## Enable a capability
cosmos-db How To Javascript Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-databases.md
The preceding code snippet displays the following example console output:
## See also - [Get started with Azure Cosmos DB for MongoDB and JavaScript](how-to-javascript-get-started.md)-- Work with a collection](how-to-javascript-manage-collections.md)
+- [Work with a collection](how-to-javascript-manage-collections.md)
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
but won't work in this case since there's an array in the path:
{ "people": { "tom": [ { "age": "25" } ], "mark": [ { "age": "30" } ] } } ```
-This feature can be enabled for your database account by [enabling the 'EnableUniqueCompoundNestedDocs' capability](how-to-configure-capabilities.md).
+This feature can be enabled for your database account by filing a support ticket.
> [!NOTE]
but won't work in this case since there's an array in the path:
{ "people": { "tom": [ { "age": "25" } ], "mark": [ { "age": "30" } ] } } ```
-This feature can be enabled for your database account by [enabling the 'EnableUniqueCompoundNestedDocs' capability](how-to-configure-capabilities.md).
+This feature can be enabled for your database account by filing a support ticket.
### TTL indexes
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md
Get started with the PyMongo package to create databases, collections, and docum
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) are available on GitHub as a Python project.
-In this quickstart, you'll communicate with the Azure Cosmos DBΓÇÖs API for MongoDB by using one of the open-source MongoDB client drivers for Python, [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/). Also, you'll use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands), which are designed to help you create and obtain database resources that are specific to the [Azure Cosmos DB capacity model](/azure/cosmos-db/account-databases-containers-items).
+In this quickstart, you'll communicate with the Azure Cosmos DBΓÇÖs API for MongoDB by using one of the open-source MongoDB client drivers for Python, [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/). Also, you'll use the [MongoDB extension commands](./custom-commands.md), which are designed to help you create and obtain database resources that are specific to the [Azure Cosmos DB capacity model](../account-databases-containers-items.md).
## Prerequisites
Use the [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo
### Get database
-Check if the database exists with [list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method. If the database doesn't exist, use the [create database extension command](/azure/cosmos-db/mongodb/custom-commands#create-database) to create it with a specified provisioned throughput.
+Check if the database exists with the [list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method. If the database doesn't exist, use the [create database extension command](./custom-commands.md#create-database) to create it with a specified provisioned throughput.
:::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="new_database"::: ### Get collection
-Check if the collection exists with the [list_collection_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.list_collection_names) method. If the collection doesn't exist, use the [create collection extension command](/azure/cosmos-db/mongodb/custom-commands#create-collection) to create it.
+Check if the collection exists with the [list_collection_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.list_collection_names) method. If the collection doesn't exist, use the [create collection extension command](./custom-commands.md#create-collection) to create it.
:::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="create_collection"::: ### Create an index
-Create an index using the [update collection extension command](/azure/cosmos-db/mongodb/custom-commands#update-collection). You can also set the index in the create collection extension command. Set the index to `name` property in this example so that you can later sort with the cursor class [sort](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort) method on product name.
+Create an index using the [update collection extension command](./custom-commands.md#update-collection). You can also set the index in the create collection extension command. Set the index on the `name` property in this example so that you can later sort with the cursor class [sort](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort) method on product name.
:::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="create_index":::
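Putting the preceding steps together, here's a minimal PyMongo sketch. The connection string, database, collection, and throughput values are placeholders, and the `customAction` documents follow the extension command reference linked above:
```python
from pymongo import MongoClient

client = MongoClient("<your-connection-string>")  # placeholder
db = client["adventureworks"]                     # hypothetical names

# Create the database with provisioned throughput if it doesn't exist yet.
if "adventureworks" not in client.list_database_names():
    db.command({"customAction": "CreateDatabase", "offerThroughput": 400})

# Create the collection if it doesn't exist yet.
if "products" not in db.list_collection_names():
    db.command({"customAction": "CreateCollection", "collection": "products"})

# Index the 'name' field so the cursor sort() on product name can run.
db.command({
    "customAction": "UpdateCollection",
    "collection": "products",
    "indexes": [{"key": {"name": 1}, "name": "name_1"}],
})
```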
Remove-AzResourceGroup @parameters
In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB account, create a database, and create a collection using the PyMongo driver. You can now dive deeper into the Azure Cosmos DB for MongoDB to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources. > [!div class="nextstepaction"]
-> [Options to migrate your on-premises or cloud data to Azure Cosmos DB](/azure/cosmos-db/migration-choices)
+> [Options to migrate your on-premises or cloud data to Azure Cosmos DB](../migration-choices.md)
cosmos-db Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor.md
You can monitor your data with client-side and server-side metrics. When using s
* **Monitor with metrics in Azure monitor:** You can monitor the metrics of your Azure Cosmos DB account and create dashboards from the Azure Monitor. Azure Monitor collects the Azure Cosmos DB metrics by default; you don't need to explicitly configure anything. These metrics are collected with one-minute granularity; the granularity may vary based on the metric you choose. By default, these metrics have a retention period of 30 days. Most of the metrics that are available from the previous options are also available in these metrics. The dimension values for the metrics, such as container name, are case-insensitive, so use case-insensitive comparison when doing string comparisons on these dimension values. To learn more, see the [Analyze metric data](#analyzing-metrics) section of this article.
-* **Monitor with diagnostic logs in Azure Monitor:** You can monitor the logs of your Azure Cosmos DB account and create dashboards from the Azure Monitor. Data such as events and traces that occur at a second granularity are stored as logs. For example, if the throughput of a container is changes, the properties of an Azure Cosmos DB account are changed these events are captures within the logs. You can analyze these logs by running queries on the gathered data. To learn more, see the [Analyze log data](#analyzing-logs) section of this article.
+* **Monitor with diagnostic logs in Azure Monitor:** You can monitor the logs of your Azure Cosmos DB account and create dashboards from the Azure Monitor. Data such as events and traces that occur at a second granularity are stored as logs. For example, if the throughput of a container changes or the properties of an Azure Cosmos DB account are changed, these events are captured within the logs. You can analyze these logs by running queries on the gathered data. To learn more, see the [Analyze log data](#analyzing-logs) section of this article.
* **Monitor programmatically with SDKs:** You can monitor your Azure Cosmos DB account programmatically by using the .NET, Java, Python, Node.js SDKs, and the headers in REST API. To learn more, see the [Monitoring Azure Cosmos DB programmatically](#monitor-azure-cosmos-db-programmatically) section of this article.
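For the SDK option, here's a minimal sketch with the `azure-cosmos` Python package. The account URI, key, and names are placeholders; each operation's request charge is surfaced through the standard `x-ms-request-charge` response header:
```python
from azure.cosmos import CosmosClient

client = CosmosClient("<account-uri>", credential="<account-key>")  # placeholders
container = client.get_database_client("mydb").get_container_client("mycoll")

item = container.read_item(item="item-id", partition_key="pk-value")

# The SDK keeps the headers of the most recent response, including the RU charge.
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"Request charge: {charge} RUs")
```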
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4
## [Node.js](#tab/nodejs)
-Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](nosql/sdk-nodejs.md) is available from version *3.15.0* onwards. You can download it from the [npm Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0)
+Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](nosql/sdk-nodejs.md) is available from version *3.15.0* onwards. You can download it from the [npm Registry](https://www.npmjs.com/package/@azure/cosmos)
> [!NOTE] > A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the JavaScript SDK
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
Watch the [How to schedule exports to storage with Cost Management](https://www.
>[!VIDEO https://www.youtube.com/embed/rWa_xI1aRzo]
-The examples in this tutorial walk you though exporting your cost management data and then verify that the data was successfully exported.
+The examples in this tutorial walk you through exporting your cost management data and then verifying that the data was successfully exported.
In this tutorial, you learn how to:
To create or view a data export or to schedule an export, choose a scope in the
:::image type="content" source="./media/tutorial-export-acm-data/basics_exports.png" alt-text="New export example" lightbox="./media/tutorial-export-acm-data/basics_exports.png"::: 1. Review your export details and select **Create**.
-Your new export appears in the list of exports. By default, new exports are enabled. If you want to disable or delete a scheduled export, select any item in the list and then select either **Disable** or **Delete**.
+Your new export appears in the list of exports. By default, new exports are enabled. If you want to disable or delete a scheduled export, select any item in the list, and then select either **Disable** or **Delete**.
Initially, it can take 12-24 hours before the export runs. However, it can take longer before data is shown in exported files.
Start by preparing your environment for the Azure CLI:
--schedule-status Active --storage-directory demodirectory ```
- For the **--type** parameter, you can choose `ActualCost`, `AmortizedCost`, or `Usage`.
+ For the `--type` parameter, you can choose `ActualCost`, `AmortizedCost`, or `Usage`.
This example uses `MonthToDate`. The export creates an export file daily for your month-to-date costs. The latest data is aggregated from previous daily exports this month.
Remove-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/00000000-00
### Export schedule
-Scheduled exports are affected by the time and day of week of when you initially create the export. When you create a scheduled export, the export runs at the same frequency for each export that runs later. For example, for a daily export of month-to-date costs export set at a daily frequency, the export runs daily. Similarly for a weekly export, the export runs every week on the same day as it is scheduled. The exact delivery time of the export isn't guaranteed and the exported data is available within four hours of run time.
+Scheduled exports are affected by the time and day of week when you initially create the export. When you create a scheduled export, the export runs at the same frequency for each export that runs later. For example, for a month-to-date costs export set at a daily frequency, the export runs once each UTC day. Similarly, for a weekly export, the export runs every week on the same UTC day as it's scheduled. Individual export runs can occur at different times throughout the day, so avoid taking a firm dependency on the exact time of the export runs. Run timing depends on the active load present in Azure during a given UTC day. When an export run begins, your data should be available within 4 hours.
Exports are scheduled using Coordinated Universal Time (UTC). The Exports API always uses and displays UTC.
- When you create an export using the [Exports API](/rest/api/cost-management/exports/create-or-update?tabs=HTTP), specify the `recurrencePeriod` in UTC time. The API doesn't convert your local time to UTC.
  - Example - A weekly export is scheduled on Friday, August 19 with `recurrencePeriod` set to 2:00 PM. The API receives the input as 2:00 PM UTC, Friday, August 19. The weekly export will be scheduled to run every Friday.
- When you create an export in the Azure portal, its start date time is automatically converted to the equivalent UTC time.
- - Example - A weekly export is scheduled on Friday, August 19 with the local time of 2:00 AM IST (UTC+5:30) from the Azure portal. The API receives the input as 8:30 PM, Thursday, August 18th. The weekly export will be scheduled to run every Thursday.
+ - Example - A weekly export is scheduled on Friday, August 19 with the local time of 2:00 AM IST (UTC+5:30) from the Azure portal. The API receives the input as 8:30 PM, Thursday, August 18. The weekly export will be scheduled to run every Thursday.
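To double-check a conversion like the preceding example, here's a minimal Python sketch using only the standard library (the dates mirror the portal example above):
```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Local start time chosen in the Azure portal: 2:00 AM IST, Friday, August 19.
local_start = datetime(2022, 8, 19, 2, 0, tzinfo=ZoneInfo("Asia/Kolkata"))

# The portal converts the local time to UTC before calling the Exports API.
utc_start = local_start.astimezone(ZoneInfo("UTC"))
print(utc_start)  # 2022-08-18 20:30:00+00:00 -> 8:30 PM UTC, Thursday, August 18
```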
Each export creates a new file, so older exports aren't overwritten. #### Create an export for multiple subscriptions
-If you have an Enterprise Agreement, then you can use a management group to aggregate subscription cost information in a single container. Then you can export cost management data for the management group. Exports for management groups only support actual costs.
+If you have an Enterprise Agreement, then you can use a management group to aggregate subscription cost information in a single container. Then you can export cost management data for the management group. When you create an export in the Azure portal, select the **Actual Costs** option. When you create a management group export using the API, create a *usage export*. Currently, exports at the management group scope only support usage charges. Purchases including reservations and savings plans aren't present in your exports file.
Exports for management groups of other subscription types aren't supported.
When you create a scheduled export in the Azure portal or with the API, it alway
If you want to use the latest data and fields available, we recommend that you create a new export in the Azure portal. To update an existing export to the latest version, update it in the Azure portal or with the latest Export API version. Updating an existing export might cause you to see minor differences in the fields and charges in files that are produced afterward.

## Verify that data is collected

You can easily verify that your Cost Management data is being collected and view the exported CSV file using Azure Storage Explorer.
Select an export to view its run history.
If you've created a daily export, you'll have two runs per day for the first five days of each month. One run executes and creates a file with the current month's cost data. It's the run that's available for you to see in the run history. A second run also executes to create a file with all the costs from the prior month. The second run isn't currently visible in the run history. Azure executes the second run to ensure that your latest file for the past month contains all charges exactly as seen on your invoice. It runs because there are cases where latent usage and charges are included in the invoice up to 72 hours after the calendar month has closed. To learn more about Cost Management usage data updates, see [Cost and usage data updates and retention](understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention).

## Access exported data from other systems

One of the purposes of exporting your Cost Management data is to access the data from external systems. You might use a dashboard system or other financial system. Such systems vary widely so showing an example wouldn't be practical. However, you can get started with accessing your data from your applications at [Introduction to Azure Storage](../../storage/common/storage-introduction.md).
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
Title: Change your credit card for Azure
-description: Describes how to change the credit card used to pay for an Azure subscription.
+ Title: Add, update, or delete a payment method
+description: This article describes how to add, update, or delete a payment method used to pay for an Azure subscription.
tags: billing Previously updated : 09/20/2022 Last updated : 11/15/2022
-# Add or update a credit card
+# Add, update, or delete a payment method
This document applies to customers who signed up for Azure online with a credit card.
-In the Azure portal, you can change your default payment method to a new credit card and update your credit card details.
+In the Azure portal, you can change your default payment method to a new credit card and update your credit card details. You can also delete a payment method used to pay for an Azure subscription.
- For a Microsoft Online Service Program (pay-as-you-go) account, you must be an [Account Administrator](add-change-subscription-administrator.md#whoisaa). - For a Microsoft Customer Agreement, you must have the correct [MCA permissions](understand-mca-roles.md) to make these changes.
-If you want to a delete credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
- The supported payment methods for Microsoft Azure are credit cards, debit cards, and check wire transfer. To get approved to pay by check wire transfer, see [Pay for your Azure subscription by check or wire transfer](pay-by-invoice.md). >[!NOTE]
To edit a credit card, follow these steps:
:::image type="content" source="./media/change-credit-card/edit-delete-credit-card-mca.png" alt-text="Screenshot showing the ellipsis." lightbox="./media/change-credit-card/edit-delete-credit-card-mca.png" ::: 1. To edit your credit card details, select **Edit** from the context menu.
+## Delete an Azure billing payment method
+
+The following information helps you delete a payment method, like a credit card, from different types of Azure subscriptions. You can delete a payment method for:
+
+- Microsoft Customer Agreement (MCA)
+- Microsoft Online Services Program (MOSP) also referred to as pay-as-you-go
+
+Regardless of your Azure subscription type, you must cancel the subscription before you can delete its associated payment method.
+
+Removing a payment method for other Azure subscription types like Microsoft Partner Agreement and Enterprise Agreement isn't supported.
+
+### Delete an MCA payment method
+
+Only the user who created the Microsoft Customer Agreement account can delete a payment method.
+
+To delete a payment method for a Microsoft Customer Agreement, do the following steps.
+
+1. Sign in to the Azure portal at https://portal.azure.com/.
+1. Navigate to **Cost Management + Billing**.
+1. If necessary, select a billing scope.
+1. In the left menu list under **Billing**, select **Billing profiles**.
+ :::image type="content" source="./media/change-credit-card/billing-profiles.png" alt-text="Example screenshot showing Billing profiles in the Azure portal." lightbox="./media/change-credit-card/billing-profiles.png" :::
+1. In the list of billing profiles, select the one where the payment method is being used.
+ :::image type="content" source="./media/change-credit-card/select-billing-profile.png" alt-text="Example screenshot showing the list of billing profiles." :::
+1. In the left menu list, under **Settings**, select **Payment methods**.
+1. On the payment methods page for your billing profile, a table of payment methods is shown under the **Your credit cards** section. Find the credit card that you want to delete, select the ellipsis (**…**), and then select **Delete**.
+ :::image type="content" source="./media/change-credit-card/delete-credit-card.png" alt-text="Example screenshot showing where to delete a credit card." :::
+1. The Delete a payment method page appears. Azure checks if the payment method is in use.
+ - When the payment method isn't being used, the **Delete** option is enabled. Select it to delete the credit card information.
+ - If the payment method is being used, it must be replaced or detached. Continue reading the following sections. They explain how to **detach** the payment method that's in use by your subscription.
+
+### Detach payment method used by an MCA billing profile
+
+If your payment method is being used by an MCA billing profile, you'll see a message similar to the following example.
++
+To detach a payment method, a list of conditions must be met. If any conditions aren't met, instructions appear explaining how to meet the condition. A link also appears that takes you to the location where you can resolve the condition.
+
+When all the conditions are satisfied, you can detach the payment method from the billing profile.
+
+> [!NOTE]
+> When the default payment method is detached, the billing profile is put into an _inactive_ state. Anything deleted in this process can't be recovered. After a billing profile is set to inactive, you must sign up for a new Azure subscription to create new resources.
+
+#### To detach a payment method
+
+1. In the Delete a payment method area, select the **Detach the current payment method** link.
+1. If all conditions are met, select **Detach**. Otherwise, continue to the next step.
+1. If Detach is unavailable, a list of conditions is shown. Take the actions listed. Select the link shown in the Detach the default payment method area. Here's an example of a corrective action that explains the actions you need to take.
+ :::image type="content" source="./media/change-credit-card/azure-subscriptions.png" alt-text="Example screenshot showing a corrective action needed to detach a payment method for MCA." :::
+1. When you select the corrective action link, you're redirected to the Azure page where you take action. Take whatever corrective action is needed.
+1. If necessary, complete all other corrective actions.
+1. Navigate back to **Cost Management + Billing** > **Billing profiles** > **Payment methods**. Select **Detach**. At the bottom of the Detach the default payment method page, select **Detach**.
+
+> [!NOTE]
+> - After you cancel a subscription, it can take up to 90 days for the subscription to be deleted.
+> - You can only delete a payment method after all previous charges for a billing profile are settled. If you are in an active billing period, you must wait until the end of the billing period to delete your payment method. **Ensure all other detach conditions are met while waiting for your billing period to end**.
+
+### Delete a pay-as-you-go (MOSP) payment method
+
+You must be an account administrator to delete a payment method.
+
+If your payment method is in use by a subscription, do the following steps.
+
+1. Sign in to the Azure portal at https://portal.azure.com/.
+1. Navigate to **Cost Management + Billing**.
+1. If necessary, select a billing scope.
+1. In the left menu list under **Billing**, select **Payment methods**.
+1. In the Payment methods area, on the row that the payment method is on, select the ellipsis (**...**) symbol and then select **Delete**.
+ :::image type="content" source="./media/change-credit-card/delete-mosp-payment-method.png" alt-text="Example screenshot showing a corrective action needed to detach a payment method for MOSP." :::
+1. In the Delete a payment method area, select **Delete** if all conditions are met. If Delete is unavailable, continue to the next step.
+1. A list of conditions is shown. Take the actions listed. Select the link shown in the Delete a payment method area.
+ :::image type="content" source="./media/change-credit-card/payment-method-in-use-mosp.png" alt-text="Example screenshot showing that a payment method is in use by a pay-as-you-go subscription." :::
+1. When you select the corrective action link, you're redirected to the Azure page where you take action. Take whatever corrective action is needed.
+1. If necessary, complete all other corrective actions.
+1. Navigate back to **Cost Management + Billing** > **Payment methods** and delete the payment method.
+
+> [!NOTE]
+> After you cancel a subscription, it can take up to 90 days for the subscription to be deleted.
+ ## Troubleshooting Azure doesn't support virtual or prepaid cards. If you're getting errors when adding or updating a valid credit card, try opening your browser in private mode.
If you have questions or need help, [create a support request](https://go.micro
## Next steps - Learn about [Azure reservations](../reservations/save-compute-costs-reservations.md) to see if they can save you money.-- If you want to a delete credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
cost-management-billing Delete Azure Payment Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/delete-azure-payment-method.md
- Title: Delete an Azure billing payment method
-description: Describes how to delete a payment method used by an Azure subscription.
--
-tags: billing
--- Previously updated : 12/06/2021---
-# Delete an Azure billing payment method
-
-This document provides instructions to help you delete a payment method, like a credit card, from different types of Azure subscriptions. You can delete a payment method for:
--- Microsoft Customer Agreement (MCA)-- Microsoft Online Services Program (MOSP) also referred to as pay-as-you-go-
-Whatever your Azure subscription type, you must cancel it so that you can delete its associated payment method.
-
-Removing a payment method for other Azure subscription types like Microsoft Partner Agreement and Enterprise Agreement isn't supported.
-
-## Delete an MCA payment method
-
-Only the user who created the Microsoft Customer Agreement account can delete a payment method.
-
-To delete a payment method for a Microsoft Customer Agreement, do the following steps.
-
-1. Sign in to the Azure portal at https://portal.azure.com/.
-1. Navigate to **Cost Management + Billing**.
-1. If necessary, select a billing scope.
-1. In the left menu list under **Billing**, select **Billing profiles**.
- :::image type="content" source="./media/delete-azure-payment-method/billing-profiles.png" alt-text="Example screenshot showing Billing profiles in the Azure portal" lightbox="./media/delete-azure-payment-method/billing-profiles.png" :::
-1. In the list of billing profiles, select the one where the payment method is being used.
- :::image type="content" source="./media/delete-azure-payment-method/select-billing-profile.png" alt-text="Example image showing the list of billing profiles" :::
-1. In the left menu list, under **Settings**, select **Payment methods**.
-1. On the payment methods page for your billing profile, a table of payment methods is shown under the **Your credit cards** section. Find the credit card that you want to delete, select the ellipsis (**…**), and then select **Delete**.
- :::image type="content" source="./media/delete-azure-payment-method/delete-credit-card.png" alt-text="Example showing where to delete a credit card" :::
-1. The Delete a payment method page appears. Azure checks if the payment method is in use.
- - When the payment method isn't being used, the **Delete** option is enabled. Select it to delete the credit card information.
- - If the payment method is being used, it must be replaced or detached. Continue reading the following sections. They explain how to **detach** the payment method that's in use by your subscription.
-
-### Detach payment method used by an MCA billing profile
-
-If your payment method is being used by an MCA billing profile, you'll see a message similar to the following example.
--
-To detach a payment method, a list of conditions must be met. If any conditions aren't met, instructions appear explaining how to meet the condition. A link also appears that takes you to the location where you can resolve the condition.
-
-When all the conditions are all satisfied, you can detach the payment method from the billing profile.
-
-> [!NOTE]
-> When the default payment method is detached, the billing profile is put into an _inactive_ state. Anything deleted in this process will not be able to be recovered. After a billing profile is set to inactive, you must sign up for a new Azure subscription to create new resources.
-
-#### To detach a payment method
-
-1. In the Delete a payment method area, select the **Detach the current payment method** link.
-1. If all conditions are met, select **Detach**. Otherwise, continue to the next step.
-1. If Detach is unavailable, a list of conditions is shown. Take the actions listed. Select the link shown in the Detach the default payment method area. Here's an example of a corrective action that explains the actions you need to take.
- :::image type="content" source="./media/delete-azure-payment-method/azure-subscriptions.png" alt-text="Example showing a corrective action needed to detach a payment method for MCA" :::
-1. When you select the corrective action link, you're redirected to the Azure page where you take action. Take whatever correction action is needed.
-1. If necessary, complete all other corrective actions.
-1. Navigate back to **Cost Management + Billing** > **Billing profiles** > **Payment methods**. Select **Detach**. At the bottom of the Detach the default payment method page, select **Detach**.
-
-> [!NOTE]
-> - After you cancel a subscription, it can take up to 90 days for the subscription to be deleted.
-> - You can only delete a payment method after all previous charges for a billing profile are settled. If you are in an active billing period, you must wait until the end of the billing period to delete your payment method. **Ensure all other detach conditions are met while waiting for your billing period to end**.
-
-## Delete a pay-as-you-go (MOSP) payment method
-
-You must be an account administrator to delete a payment method.
-
-If your payment method is in use by a subscription, do the following steps.
-
-1. Sign in to the Azure portal at https://portal.azure.com/.
-1. Navigate to **Cost Management + Billing**.
-1. If necessary, select a billing scope.
-1. In the left menu list under **Billing**, select **Payment methods**.
-1. In the Payment methods area, on the row that the payment method is on, select the ellipsis (**...**) symbol and then select **Delete**.
- :::image type="content" source="./media/delete-azure-payment-method/delete-mosp-payment-method.png" alt-text="Example showing a corrective action needed to detach a payment method for MOSP" :::
-1. In the Delete a payment method area, select **Delete** if all conditions are met. If Delete is unavailable, continue to the next step.
-1. A list of conditions is shown. Take the actions listed. Select the link shown in the Delete a payment method area.
- :::image type="content" source="./media/delete-azure-payment-method/payment-method-in-use-mosp.png" alt-text="Example image showing that a payment method is in use by a pay-as-you-go subscription" :::
-1. When you select the corrective action link, you're redirected to the Azure page where you take action. Take whatever correction action is needed.
-1. If necessary, complete all other corrective actions.
-1. Navigate back to **Cost Management + Billing** > **Billing profiles** > **Payment methods** and delete the payment method.
-
-> [!NOTE]
-> After you cancel a subscription, it can take up to 90 days for the subscription to be deleted.
-
-## Next steps
--- If you need more information about canceling your Azure subscription, see [Cancel you Azure subscription](cancel-azure-subscription.md).-- For more information about adding or updating a credit card, see [Add or update a credit card for Azure](change-credit-card.md).
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for direct EA
description: This article explains how enterprise administrators of direct Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 08/29/2022 Last updated : 11/14/2022
Your invoice displays Azure usage charges with costs associated to them first, f
### Download your Azure invoices (.pdf)
-For most subscriptions, you can download your invoice in the Azure portal.
+For EA enrollments, you can download your invoice in the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for **Cost Management + Billing** and select it. 1. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with. 1. In the navigation menu, select **Invoices**. The Invoices page shows all the invoices and credit memos generated for the last 12 months. :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/invoices-page.png" alt-text="Screenshot showing the Invoices page." lightbox="./media/direct-ea-azure-usage-charges-invoices/invoices-page.png" :::
-
-1. On the invoice page, find the row of the invoice that you want to download. To the right of the row, select the ellipsis (**…**) symbol.
+1. On the Invoices page, find the row of the invoice that you want to download. To the right of the row, select the ellipsis (**…**) symbol.
1. In the context menu, select **Download**. :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/download-context-menu.png" alt-text="Screenshot showing the Download context menu." :::
+1. Select **Prepare document** to prepare the document that you want to download.
+ :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/prepare-document.png" alt-text="Screenshot showing the Prepare document page when you prepare the invoice." lightbox="./media/direct-ea-azure-usage-charges-invoices/prepare-document.png" :::
+1. When the document is prepared, select **Download**.
You can select a Timespan to view up to the last three years of invoice details.
The following table lists the terms and descriptions shown on the Invoices page:
| PO number | PO number for the invoice or credit memo. | | Total Amount | Total amount of the invoice or credit. |
+## Updated direct EA billing invoice documents
+
+Azure is enhancing its invoicing experience. The enhanced experience includes an improved invoice PDF file, a summary PDF, and a transactions file.
+
+There are no changes to invoices generated before November 18, 2022.
+
+The invoice notification email address is changing from `msftinv@microsoft.com` to `no-reply@microsoft.com` for customers and partners under the enhanced invoicing experience.
+
+We recommend that you add the new email address to your address book or safe sender list to ensure that you receive the emails.
+
+For more information about invoice documents, see [Direct EA billing invoice documents](direct-ea-billing-invoice-documents.md).
+ ## Update a PO number for an upcoming overage invoice In the Azure portal, a direct enterprise administrator can update the purchase order (PO) for upcoming invoices. The PO number can get updated anytime before the invoice is created during the current billing period.
To update the PO number for a billing account:
1. Select **Update PO number**. 1. Enter a PO number and then select **Update**.
-Or you can update the PO number from Invoice blade for the upcoming invoice:
+Or you can update the PO number in the Invoice area for the upcoming invoice:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for **Cost Management + Billing** and then select **Billing scopes**.
The following table lists the terms and descriptions shown on the Reservation tr
| Amount (USD) | Reservation cost | > [!NOTE]
-> The newly added column Purchase Month will help identify in which month the refunds are updated and helps to reconcile the RI refunds.
+> The newly added Purchase Month column helps identify the month in which refunds are updated and helps reconcile reservation refunds.
## CSV report formatting issues
cost-management-billing Direct Ea Billing Invoice Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-billing-invoice-documents.md
+
+ Title: Direct EA billing invoice documents
+description: Learn how to understand the invoice files associated with your direct enterprise agreement.
++
+tags: billing
+++ Last updated : 11/14/2021+++
+# Direct EA billing invoice documents
+
+This article helps you understand the invoice files associated with your direct enterprise agreement. For information about downloading the files, see [View your Azure usage summary details and download reports for direct EA enrollments](direct-ea-azure-usage-charges-invoices.md).
+
+All documents are available between the 12th and 15th day of each month. However, when an invoice is unusually large, availability might be delayed.
+
+## Understand the invoice file
+
+Your invoice is a PDF file that contains at least two pages.
+
+The first page is the billing summary. It contains general information about the invoice, amount due, and payment instructions, if applicable. It also contains address information for your organization and high-level details about your order.
+
+The second page lists the individual products in your order.
+
+### Invoice terms
+
+At the top of page one, you'll find the summary with the following fields:
+
+**Invoice** – A unique number generated by Microsoft that identifies your spending for the corresponding billing period.
+
+**Invoice Date** - The date Microsoft created the invoice.
+
+**PO Number** - The purchase order (PO) number that you specify. The PO number can't be updated on an invoice that's already generated.
+
+**PO Date** – Generally, the date when the order was entered into Microsoft systems.
+
+**Billing Period** - The date range covered by the invoice.
+
+**Payment Terms** – The arrangement for when the invoice payment is due.
+ > [!IMPORTANT]
 + > Pay by wire transfer or ACH using the instructions provided on the invoice. Don't mail a physical check to a Microsoft address shown on your invoice.
+
+**Due Date** - The date when the invoice payment is due to Microsoft.
+
+#### Addresses
+
+The following addresses differ, depending on the size and configuration of your organization.
+
+- The Sold-To address is the name and address of the organization that bought the subscription.
+- The Bill-To address is the address of your billing department.
+- The Ship-To address is the location to which the products are shipped, or the address used for tax exemption (for India).
+- The End Customer Address is the address where the service is used.
+
+#### Billing summary
+
+The billing summary gives the breakdown of the total amount due: Total = Charges − Commitment usage (if applicable) + Sales Tax.
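+As a made-up illustration of the formula (all numbers are hypothetical):
+
+```python
+charges = 1000.00          # total product charges (hypothetical)
+commitment_usage = 250.00  # commitment applied (hypothetical)
+tax_rate = 0.08            # tax rate for the billing country (hypothetical)
+
+net = charges - commitment_usage  # 750.00
+sales_tax = net * tax_rate        # 60.00
+total = net + sales_tax           # 810.00
+print(f"Total due: {total:.2f}")
+```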
+
+#### Billing details by product
+
+Page two lists billing details by product, including unit price, quantity, commitment usage (if applicable), net charge, tax rate, tax amount, and total corresponding to each usage charge.
+
+Here's an example of the first page of the invoice file.
++
+Here's an example of the last page of the invoice file.
++
+_Prices shown on the invoice are for informational purposes only._
+
+## Understand the summary invoice file
+
+The summary invoice file is a concise version of the detailed PDF file with a broad categorization of purchases at the product family level. The first page is the same as the invoice PDF. The second page contains a billing summary by product family covering usage charges and the corresponding amounts incurred.
+
+Here's an example of the first page of the summary invoice file.
++
+Here's an example of the last page of the summary invoice file.
+++
+## Understand the transactions file
+
+The transactions file is a CSV file that includes the same information as the invoice in a format that helps you quickly reconcile charges. It contains the following line-item details:
+
+| Invoice item | Description |
+| | |
+| Invoice Number | A Microsoft generated unique number that identifies your spending for the corresponding billing period. |
+| Invoice Date | The date Microsoft created the invoice. |
+| Document Type | Indicates whether it's an invoice or credit note. |
+| Agreement Number | The Licensing ID # / enrollment # / contract number. |
+| Bill To | Customer Number, Bill To Customer Name, and Bill To Customer Country are details of the billing department. |
+| Sold To | Customer Number, Sold To Customer Name, and Sold To Customer Country are details of the organization that bought the subscription. |
+| Ship To | Customer Number, Ship To Customer Name, and Ship To Customer Country are details of the location where the products are shipped or used for tax exemption (for India). |
+| End Customer Name and End Customer Country | Details of the final consumer where the service is used. |
+| Purchase Order Number | The purchase order (PO) number that you specify. |
+| Billing Currency | The currency chosen by the end customer for payment. |
+| Transaction Type | Reflects whether it's a debit invoice or a credit memo. |
+| Line Item Number | The line ID for internal reference. |
+| Usage Country | The location where the product is used. |
+| Delivery | Shows how the invoice is being delivered. |
+| MS Part Number | A reference number for the product. |
+| Item Name | The description of the purchased product. |
+| Product Family | The logical categorization of products. |
+| License Type | Reflects the terms of buying the product. |
+| Price Level | The price categorization of product. |
+| Billing Option | The mode of payment. |
+| Taxable | Indicates whether the product is taxable. |
+| Pool | The classification of the product into a system, server, or application. |
+| Service Period Start Date, Service Period End Date | Indicates the eligible service period. |
+| Reason Code | Used at the time of return. |
+| Description | The explanation of the reason code. |
+| Quantity | The number of units bought or used. |
+| Unit Price | The price per unit of the product. |
+| Extended Amount | The quantity multiplied by the unit price. |
+| Commitment Usage | The amount of monetary commitment that has been used. |
+| Net Amount | The extended amount minus the commitment usage. |
+| Tax Rate | The tax rate applicable to the product based on the country of billing. |
+| Tax Amount | The net amount multiplied by tax rate. |
+| Total | The sum of the net amount and tax amount. |
+| Is Third Party | Indicates whether the product or service is a third-party product. |
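+As an illustration, here's a minimal Python sketch that reconciles the transactions file line by line. It assumes the CSV headers match the names in the preceding table; check your downloaded file for the exact headers:
+
+```python
+import csv
+
+with open("transactions.csv", newline="", encoding="utf-8") as f:
+    for row in csv.DictReader(f):
+        net = float(row["Net Amount"])
+        tax = float(row["Tax Amount"])
+        total = float(row["Total"])
+        # Per the table above, Total should equal Net Amount + Tax Amount.
+        if abs(net + tax - total) >= 0.01:
+            print(f"Mismatch on line item {row['Line Item Number']}")
+```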
+
+## Next steps
+
+- Learn how to download your Direct EA billing invoice documents at [View your Azure usage summary details and download reports for direct EA enrollments](direct-ea-azure-usage-charges-invoices.md).
cost-management-billing Ea Portal Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-rest-apis.md
Title: Azure Enterprise REST APIs
description: This article describes the REST APIs for use with your Azure enterprise enrollment. Previously updated : 12/10/2021 Last updated : 11/16/2022 -+ # Azure Enterprise REST APIs
Microsoft Enterprise Azure customers can get usage and billing information throu
Role owners can perform the following steps in the Azure EA portal. Navigate to **Reports** > **Download Usage** > **API Access Key**. Then they can: -- Generate primary and secondary access keys.-- Disable access keys.
+- Generate and regenerate primary and secondary access keys.
+- Revoke access keys.
- View start and end dates of access keys. ### Generate or retrieve the API Key
Role owners can perform the following steps in the Azure EA portal. Navigate to
1. Sign in as an enterprise administrator. 2. Select **Reports** on the left navigation window and then select the **Download Usage** tab. 3. Select **API Access Key**.
-4. Under **Enrollment Access Keys**, select the generate key symbol to generate either a primary or secondary key.
+4. Under **Enrollment Access Keys**, select **regenerate** to generate either a primary or secondary key.
5. Select **Expand Key** to view the entire generated API access key. 6. Select **Copy** to get the API access key for immediate use.
-![Example showing API Access Key page](./media/ea-portal-rest-apis/ea-create-generate-or-retrieve-the-api-key.png)
If you want to give the API access keys to people who aren't enterprise administrators in your enrollment, perform the following steps:
If you want to give the API access keys to people who aren't enterprise administ
4. Select the pencil symbol next to **AO view charges** (Account Owner view charges). 5. Select **Enable** and then select **Save**.
-![Example showing DA and AO view charges enabled](./media/ea-portal-rest-apis/create-ea-generate-or-retrieve-api-key-enable-ao-do-view.png)
+![Screenshot showing DA and AO view charges enabled.](./media/ea-portal-rest-apis/create-ea-generate-or-retrieve-api-key-enable-ao-do-view.png)
+ The preceding steps give API access key holders access to cost and pricing information in usage reports. ### Pass keys in the API
Pass the API key for each call for authentication and authorization. Pass the fo
| Request header key | Value | | | | | Authorization | Specify the value in this format: **bearer {API\_KEY}**
-Example: bearer \&lt;APIKey\&gt; |
+Example: bearer \<APIKey\> |
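As an illustration, here's a minimal Python sketch that passes the key this way. The enrollment number is a placeholder, and the usage details route shown is an assumption based on the Azure Enterprise reporting APIs; confirm the exact endpoint and version for your scenario:
```python
import requests

ENROLLMENT = "123456"  # placeholder enrollment number
API_KEY = "<APIKey>"   # key generated on the API Access Key page

# Assumed route; verify against the Swagger documentation described below.
url = f"https://consumption.azure.com/v3/enrollments/{ENROLLMENT}/usagedetails"
headers = {"Authorization": f"bearer {API_KEY}"}

response = requests.get(url, headers=headers)
response.raise_for_status()
print(response.json())
```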
### Swagger
JSON format is generated from the CSV report. As a result, the format is same as
| Account Name | AccountName | AccountName | | | ServiceAdministratorId | ServiceAdministratorLiveId | ServiceAdministratorLiveId | | | SubscriptionId | SubscriptionId | SubscriptionId | |
-| SubscriptionGuid | MOCPSubscriptionGuid | SubscriptionGuid | |
+| SubscriptionGuid | MOSPSubscriptionGuid | SubscriptionGuid | |
| Subscription Name | SubscriptionName | SubscriptionName | | | Date | Date | Date | Shows the date that the service catalog report ran. The format is a date string without a time stamp. | | Month | Month | Month | |
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 11/08/2022 Last updated : 11/16/2022
A billing account is created when you sign up to use Azure. You use your billing
Azure portal supports the following type of billing accounts: -- **Microsoft Online Services Program**: A billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or as a [Visual studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/).
+- **Microsoft Online Services Program**: A billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or as a [Visual Studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/).
+ - The ability to create other Microsoft Online Services Program subscriptions is determined on an individual basis according to your history with Azure.
-- **Enterprise Agreement**: A billing account for an Enterprise Agreement (EA) is created when your organization signs an [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. An EA enrollment can contain an unlimited number of EA accounts. However, an EA account has a subscription limit of 5000. *Regardless of a subscription's state, its included in the limit. So, deleted and disabled subscriptions are included in the limit*. If you need more subscriptions than the limit, create more EA accounts. Generally speaking, a subscription is a billing container. We recommend that you avoid creating multiple subscriptions to implement access boundaries. To separate resources with an access boundary, consider using a resource group. For more information about resource groups, see [Manage Azure resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md).
+- **Enterprise Agreement**: A billing account for an Enterprise Agreement (EA) is created when your organization signs an [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. An EA enrollment can contain an unlimited number of EA accounts.
+ - An EA account has a subscription limit of 5000. *Regardless of a subscription's state, it's included in the limit. So, deleted and disabled subscriptions are included in the limit*. If you need more subscriptions than the limit, create more EA accounts. Generally speaking, a subscription is a billing container.
+ - We recommend that you avoid creating multiple subscriptions to implement access boundaries. To separate resources with an access boundary, consider using a resource group. For more information about resource groups, see [Manage Azure resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md).
-- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 5 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise can have up to 5000 subscriptions under it.
+- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well.
+ - You can have a maximum of 5 subscriptions in a Microsoft Customer Agreement for an individual. The ability to create additional subscriptions is determined on an individual basis according to your history with Azure.
+ - A Microsoft Customer Agreement for an enterprise can have up to 5000 subscriptions under it.
- **Microsoft Partner Agreement**: A billing account for a Microsoft Partner Agreement is created for Cloud Solution Provider (CSP) partners to manage their customers in the new commerce experience. Partners need to have at least one customer with an [Azure plan](/partner-center/purchase-azure-plan) to manage their billing account in the Azure portal. For more information, see [Get started with your billing account for Microsoft Partner Agreement](../understand/mpa-overview.md). To determine the type of your billing account, see [Check the type of your billing account](#check-the-type-of-your-account). ## Scopes for billing accounts
-A scope is a node within a billing account that you use to view and manage billing. It is where you manage billing data, payments, invoices, and conduct general account management.
+A scope is a node within a billing account that you use to view and manage billing. It's where you manage billing data, payments, and invoices, and conduct general account management.
If you don't have access to view or manage billing accounts, you probably don't have permission to access. You can ask your billing account administrator to grant you access. For more information, see the following articles:
cost-management-billing Buy Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md
Previously updated : 10/12/2022 Last updated : 11/16/2022 # Buy an Azure savings plan
-Azure savings plans help you save money by committing to an hourly spend for one-year or three-years plans for Azure compute resources. Saving plans discounts apply to usage from virtual machines, Dedicated Hosts, Container Instances, App Services and Azure Premium Functions. The hourly commitment is priced in USD for Microsoft Customer Agreement customers and local currency for Enterprise customers. Before you enter a commitment to buy a savings plan, be sure to review the following sections to prepare for your purchase.
+Azure savings plans help you save money by committing to an hourly spend for one-year or three-year plans for Azure compute resources. Savings plan discounts apply to usage from virtual machines, Dedicated Hosts, Container Instances, App Services, and Azure Premium Functions. The hourly commitment is priced in USD for Microsoft Customer Agreement customers and local currency for Enterprise customers.
+
+Before you enter a commitment to buy a savings plan, review the following sections to prepare for your purchase.
## Who can buy a savings plan You can buy a savings plan for an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreement (MCA) or Microsoft Partner Agreement.
+To determine if you're eligible to buy a plan, [check your billing type](../manage/view-all-accounts.md#check-the-type-of-your-account).
+ Savings plan discounts only apply to resources associated with subscriptions purchased through an Enterprise Agreement, Microsoft Customer Agreement, or Microsoft Partner Agreement (MPA). ### Enterprise Agreement customers
cost-management-billing Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md
Previously updated : 11/04/2022 Last updated : 11/16/2022 # Azure savings plan recommendations
The following steps define how recommendations are calculated:
The savings plan purchase experience shows up to 10 commitment amounts. All recommendations are based on the last 30 days of usage. For each amount, we include the percentage (off your current pay-as-you-go costs) that the amount could save you. The percentage of your total compute usage that would be covered with the commitment amount is also included.
-By default, the recommendations are for the entire billing scope (billing account or billing profile for MCA and enrollment for EA). You can view subscription and resource group-level recommendations by restricting benefit application to one of those levels. We don't currently support management group-level recommendations.
+By default, the recommendations are for the entire billing scope (billing account or billing profile for MCA and enrollment for EA). You can view subscription and resource group-level recommendations by restricting benefit application to one of those levels.
+
+Recommendations are based on terms, so you'll see the 1-year or 3-year recommendations at each level by toggling the term options. We don't currently support management group-level recommendations.
The first recommendation value is the one that is projected to result in the highest percent savings. The other values allow you to see how increasing or decreasing your commitment could affect both your savings and compute coverage. When the commitment amount is increased, your savings could be reduced because you could end up with reduced utilization. In other words, you'd pay for an hourly commitment that isn't fully used. If you lower the commitment, your savings could also be reduced. Although you'll have increased utilization, there will likely be periods when your savings plan won't fully cover your use. Usage beyond your hourly commitment will be charged at the more expensive pay-as-you-go rates.
The minimum value doesn't necessarily represent the hourly commitment necessary
1. Download your price list. 2. For each reservation order you're returning, find the product in the price sheet and determine its unit price under either a 1-year or 3-year savings plan (filter by term and price type).
-3. Multiple the rate by the number of instances that are being returned.
+3. Multiply the rate by the number of instances that are being returned.
4. Repeat for each reservation order to be returned. 5. Sum the values and enter the total as the hourly commitment, as shown in the sketch below.
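Here's a minimal sketch of steps 2 through 5, with hypothetical unit prices and instance counts:
```python
# Savings plan unit prices come from your price sheet (hypothetical values).
returned_orders = [
    {"savings_plan_unit_price": 0.052, "instances": 10},
    {"savings_plan_unit_price": 0.031, "instances": 4},
]

hourly_commitment = sum(
    order["savings_plan_unit_price"] * order["instances"]
    for order in returned_orders
)
print(f"Hourly commitment: {hourly_commitment:.3f}")  # 0.644
```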
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-using-azure-monitor.md
Data Factory stores pipeline-run data for only 45 days. Use Azure Monitor if you
* **Log Analytics**: Analyze the logs with Log Analytics. The Data Factory integration with Azure Monitor is useful in the following scenarios: * You want to write complex queries on a rich set of metrics that are published by Data Factory to Monitor. You can create custom alerts on these queries via Monitor. - You want to monitor across data factories. You can route data from multiple data factories to a single Monitor workspace.
-* **Partner Solution:** Diagnostic logs could be sent to Partner solutions through integration. For potential partner integrations, [click to learn more about partner integration.](/azure/partner-solutions/overview)
+* **Partner Solution:** Diagnostic logs could be sent to Partner solutions through integration. For potential partner integrations, [click to learn more about partner integration.](../partner-solutions/overview.md)
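As an illustration of routing logs from multiple factories to a single workspace, here's a minimal Python sketch against the Azure Monitor diagnostic settings REST API. The resource IDs, setting name, and bearer token are placeholders, and the API version is an assumption; verify it for your environment:
```python
import requests

factory_id = ("/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
              "Microsoft.DataFactory/factories/<factory-name>")
workspace_id = ("/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
                "Microsoft.OperationalInsights/workspaces/<workspace-name>")

url = (f"https://management.azure.com{factory_id}/providers/Microsoft.Insights/"
       "diagnosticSettings/send-to-log-analytics?api-version=2021-05-01-preview")
body = {
    "properties": {
        "workspaceId": workspace_id,
        # Route pipeline-run logs and all metrics to the workspace.
        "logs": [{"category": "PipelineRuns", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    }
}
response = requests.put(url, json=body,
                        headers={"Authorization": "Bearer <token>"})
response.raise_for_status()
```
Repeating this PUT once per factory aggregates several factories into the same workspace.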
You can also use a storage account or event-hub namespace that isn't in the subscription of the resource that emits logs. The user who configures the setting must have appropriate Azure role-based access control (Azure RBAC) access to both subscriptions. ## Next steps - [Azure Data Factory metrics and alerts](monitor-metrics-alerts.md)-- [Monitor and manage pipelines programmatically](monitor-programmatically.md)-
+- [Monitor and manage pipelines programmatically](monitor-programmatically.md)
data-factory Tutorial Incremental Copy Change Tracking Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md
In this tutorial, you create two pipelines that perform the following operations
1. For **Region**, select the region for the data factory. The dropdown list displays only locations that are supported. The data stores (for example, Azure Storage and Azure SQL Database) and computes (for example, Azure HDInsight) that a data factory uses can be in other regions.
-1. Select **Next: Git configuration**. Set up the repository by following the instructions in [Configuration method 4: During factory creation](/azure/data-factory/source-control#configuration-method-4-during-factory-creation), or select the **Configure Git later** checkbox.
+1. Select **Next: Git configuration**. Set up the repository by following the instructions in [Configuration method 4: During factory creation](./source-control.md#configuration-method-4-during-factory-creation), or select the **Configure Git later** checkbox.
![Screenshot that shows options for Git configuration in creating a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-azure-data-factory-menu-git-configuration.png) 1. Select **Review + create**. 1. Select **Create**.
PersonID Name Age SYS_CHANGE_VERSION SYS_CHANGE_OPERATION
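The columns above come from joining the source table to SQL Server's `CHANGETABLE` function. As a rough sketch of the same incremental read in Python with `pyodbc` (the `data_source_table` and `PersonID` names follow the tutorial; the connection string and stored `last_sync_version` are placeholders):

```python
import pyodbc

# Placeholder connection string; fill in your server, database, and credentials.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};Server=<server>;Database=<db>;UID=<user>;PWD=<pwd>"
)
cursor = conn.cursor()

last_sync_version = 0  # the change tracking version saved by the previous run

# CHANGETABLE returns the primary key plus change metadata; join back to the
# source table to pick up the non-key columns (Name, Age).
cursor.execute(
    """
    SELECT s.PersonID, s.Name, s.Age, ct.SYS_CHANGE_VERSION, ct.SYS_CHANGE_OPERATION
    FROM CHANGETABLE(CHANGES data_source_table, ?) AS ct
    LEFT OUTER JOIN data_source_table AS s ON s.PersonID = ct.PersonID
    """,
    last_sync_version,
)
for row in cursor.fetchall():
    print(row)
```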
Advance to the following tutorial to learn about copying only new and changed files, based on `LastModifiedDate`: > [!div class="nextstepaction"]
-> [Incrementally copy new and changed files based on LastModifiedDate by using the Copy Data tool](tutorial-incremental-copy-lastmodified-copy-data-tool.md)
+> [Incrementally copy new and changed files based on LastModifiedDate by using the Copy Data tool](tutorial-incremental-copy-lastmodified-copy-data-tool.md)
data-lake-analytics Data Lake Analytics Data Lake Tools Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md
Title: Query Azure Data Lake Analytics - Visual Studio description: Learn how to install Data Lake Tools for Visual Studio, and how to develop and test U-SQL scripts. -+ Previously updated : 08/30/2019 Last updated : 11/15/2022 # Develop U-SQL scripts by using Data Lake Tools for Visual Studio
After the job submission, the **Job view** tab opens to show the job progress.
* **MetaData Operations** shows all the actions that were taken on the U-SQL catalog.
* **Data** shows all the inputs and outputs.
* **State History** shows the timeline and state details.
-* **AU Analysis** shows how many AUs were used in the job and explore simulations of different AU allocation strategies.
+* **AU Analysis** shows how many AUs (analytics units) were used in the job and lets you explore simulations of different AU allocation strategies.
* **Diagnostics** provides an advanced analysis for job execution and performance optimization. ![U-SQL Visual Studio Data Lake Analytics job performance graph](./media/data-lake-analytics-data-lake-tools-get-started/data-lake-analytics-data-lake-tools-performance-graph.png)
To see the latest job status and refresh the screen, select **Refresh**.
## Check job status
-1. In **Server Explorer**, select **Azure** > **Data Lake Analytics**.
+1. In **Data Lake Analytics Explorer**, select **Data Lake Analytics**.
1. Expand the Data Lake Analytics account name.
To see the latest job status and refresh the screen, select **Refresh**.
## See the job output
-1. In **Server Explorer**, browse to the job you submitted.
+1. In **Data Lake Analytics Explorer**, browse to the job you submitted.
-1. Click the **Data** tab.
+1. Select the **Data** tab in your job.
1. In the **Job Outputs** tab, select the `"/data.csv"` file.
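If you'd rather fetch the output outside Visual Studio, a hedged sketch using the `azure-datalake-store` package (ADLS Gen1) can read the same `/data.csv`; the store name is a placeholder, and `lib.auth()` opens an interactive sign-in:

```python
from azure.datalake.store import core, lib

token = lib.auth()  # interactive Azure AD sign-in
adls = core.AzureDLFileSystem(token, store_name="<your-adls-account>")  # placeholder

with adls.open("/data.csv", "rb") as f:
    print(f.read().decode("utf-8"))
```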
data-lake-analytics Data Lake Analytics Data Lake Tools Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-install.md
Title: Install Azure Data Lake Tools for Visual Studio description: This article describes how to install Azure Data Lake Tools for Visual Studio. -+ Previously updated : 08/30/2019 Last updated : 11/15/2022 # Install Data Lake Tools for Visual Studio
information about Data Lake Analytics, see [Azure Data Lake Analytics overview](
* Visual Studio 2015
* Visual Studio 2013
-* **Microsoft Azure SDK for .NET** version 2.7.1 or later. Install it by using the [Web platform installer](https://www.microsoft.com/web/downloads/platform.aspx).
+* **Microsoft Azure SDK for .NET** [version 2.7.1 or later](https://azure.microsoft.com/downloads/).
* A **Data Lake Analytics** account. To create an account, see [Get Started with Azure Data Lake Analytics using Azure portal](data-lake-analytics-get-started-portal.md). ## Install Azure Data Lake Tools for Visual Studio 2017 or Visual Studio 2019
data-lake-analytics Data Lake Analytics Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-diagnostic-logs.md
Title: Enable and view diagnostic logs for Azure Data Lake Analytics description: Understand how to set up and access diagnostic logs for Azure Data Lake Analytics -- Previously updated : 10/14/2022 Last updated : 11/15/2022 # Accessing diagnostic logs for Azure Data Lake Analytics
data-lake-analytics Data Lake Analytics Manage Use Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-portal.md
Title: Manage Azure Data Lake Analytics by using the Azure portal description: This article describes how to use the Azure portal to manage Data Lake Analytics accounts, data sources, users, & jobs. -+ Previously updated : 12/05/2016 Last updated : 11/15/2022 # Manage Azure Data Lake Analytics using the Azure portal [!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)]
-This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs by using the Azure portal.
-
-<!-- ################################ -->
-<!-- ################################ -->
+This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs by using the Azure portal.
## Manage Data Lake Analytics accounts ### Create an account 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Click **Create a resource** > **Intelligence + analytics** > **Data Lake Analytics**.
+2. Select **Create a resource** and search for **Data Lake Analytics**.
3. Select values for the following items:
   1. **Name**: The name of the Data Lake Analytics account.
   2. **Subscription**: The Azure subscription used for the account.
   3. **Resource Group**: The Azure resource group in which to create the account.
   4. **Location**: The Azure datacenter for the Data Lake Analytics account.
   5. **Data Lake Store**: The default store to be used for the Data Lake Analytics account. The Azure Data Lake Store account and the Data Lake Analytics account must be in the same location.
-4. Click **Create**.
+4. Select **Create**.
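For scripted setups, the same values map to the Azure CLI's `az dla` command group, shown here via Python's `subprocess` for consistency with the other examples. The account, group, and store names are placeholders, and flag spellings may vary by CLI version, so treat this as a sketch rather than a definitive invocation:

```python
import subprocess

# Create a Data Lake Analytics account with the same values as the portal steps.
subprocess.run(
    [
        "az", "dla", "account", "create",
        "--account", "myadlaaccount",                # Name
        "--resource-group", "myresourcegroup",       # Resource Group
        "--location", "eastus2",                     # Location
        "--default-data-lake-store", "myadlsstore",  # Data Lake Store
    ],
    check=True,
)
```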
### Delete a Data Lake Analytics account Before you delete a Data Lake Analytics account, delete its default Data Lake Store account. 1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **Delete**.
+2. Select **Delete**.
3. Type the account name.
-4. Click **Delete**.
+4. Select **Delete**.
-<!-- ################################ -->
-<!-- ################################ -->
## Manage data sources
You can use Data Explorer to browse data sources and perform basic file manageme
### Add a data source 1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **Data Sources**.
-3. Click **Add Data Source**.
+2. Select **Data explorer**.
+3. Select **Add Data Source**.
* To add a Data Lake Store account, you need the account name and access to the account to be able to query it.
- * To add Azure Blob storage, you need the storage account and the account key. To find them, go to the storage account in the portal.
+ * To add Azure Blob storage, you need the storage account and the account key. To find them, go to the storage account in the portal and select **Access keys**.
## Set up firewall rules
If other Azure services, like Azure Data Factory or VMs, connect to the Data Lak
### Set up a firewall rule 1. In the Azure portal, go to your Data Lake Analytics account.
-2. On the menu on the left, click **Firewall**.
+2. On the menu on the left, select **Firewall**.
## Add a new user You can use the **Add User Wizard** to easily provision new Data Lake users. 1. In the Azure portal, go to your Data Lake Analytics account.
-2. On the left, under **Getting Started**, click **Add User Wizard**.
-3. Select a user, and then click **Select**.
-4. Select a role, and then click **Select**. To set up a new developer to use Azure Data Lake, select the **Data Lake Analytics Developer** role.
-5. Select the access control lists (ACLs) for the U-SQL databases. When you're satisfied with your choices, click **Select**.
-6. Select the ACLs for files. For the default store, don't change the ACLs for the root folder "/" and for the /system folder. Click **Select**.
-7. Review all your selected changes, and then click **Run**.
-8. When the wizard is finished, click **Done**.
+2. On the left, under **Getting Started**, select **Add User Wizard**.
+3. Select a user, and then select **Select**.
+4. Select a role, and then select **Select**. To set up a new developer to use Azure Data Lake, select the **Data Lake Analytics Developer** role.
+5. Select the access control lists (ACLs) for the U-SQL databases. When you're satisfied with your choices, select **Select**.
+6. Select the ACLs for files. For the default store, don't change the ACLs for the root folder "/" and for the /system folder. Select **Select**.
+7. Review all your selected changes, and then select **Run**.
+8. When the wizard is finished, select **Done**.
## Manage Azure role-based access control
The standard Azure roles have the following capabilities:
* **Reader**: Can monitor jobs.

Use the Data Lake Analytics Developer role to enable U-SQL developers to use the Data Lake Analytics service. You can use the Data Lake Analytics Developer role to:

* Submit jobs.
* Monitor job status and the progress of jobs submitted by any user.
* See the U-SQL scripts from jobs submitted by any user.
Use the Data Lake Analytics Developer role to enable U-SQL developers to use the
>If a user or a security group needs to submit jobs, they also need permission on the store account. For more information, see [Secure data stored in Data Lake Store](../data-lake-store/data-lake-store-secure-data.md). >
-<!-- ################################ -->
-<!-- ################################ -->
- ## Manage jobs ### Submit a job 1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **New Job**. For each job, configure:
+2. Select **New Job**. For each job, configure:
1. **Job Name**: The name of the job.
- 2. **Priority**: Lower numbers have higher priority. If two jobs are queued, the one with lower priority value runs first.
- 3. **Parallelism**: The maximum number of compute processes to reserve for this job.
+ 2. **Priority**: This is under **More options**. Lower numbers have higher priority. If two jobs are queued, the one with lower priority value runs first.
+ 3. **AUs**: The maximum number of Analytics Units, or compute processes, to reserve for this job.
+ 4. **Runtime**: Also under **More options**. Select the Default runtime unless you've received a custom runtime.
+
+3. Add your script.
-3. Click **Submit Job**.
+4. Select **Submit Job**.
### Monitor jobs 1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **View All Jobs**. A list of all the active and recently finished jobs in the account is shown.
-3. Optionally, click **Filter** to help you find the jobs by **Time Range**, **Job Name**, and **Author** values.
+2. Select **View All Jobs** at the top of the page. A list of all the active and recently finished jobs in the account is shown.
+3. Optionally, select **Filter** to help you find the jobs by **Time Range**, **Status**, **Job Name**, **Job ID**, **Pipeline name** or **Pipeline ID**, **Recurrence name** or **Recurrence ID**, and **Author** values.
### Monitoring pipeline jobs+ Jobs that are part of a pipeline work together, usually sequentially, to accomplish a specific scenario. For example, you can have a pipeline that cleans, extracts, transforms, and aggregates usage data for customer insights. Pipeline jobs are identified using the "Pipeline" property when the job was submitted. Jobs scheduled using ADF V2 will automatically have this property populated. To view a list of U-SQL jobs that are part of pipelines: 1. In the Azure portal, go to your Data Lake Analytics accounts.
-2. Click **Job Insights**. The "All Jobs" tab will be defaulted, showing a list of running, queued, and ended jobs.
-3. Click the **Pipeline Jobs** tab. A list of pipeline jobs will be shown along with aggregated statistics for each pipeline.
+2. Select **Job Insights**. The "All Jobs" tab will be defaulted, showing a list of running, queued, and ended jobs.
+3. Select the **Pipeline Jobs** tab. A list of pipeline jobs will be shown along with aggregated statistics for each pipeline.
### Monitoring recurring jobs+ A recurring job is one that has the same business logic but uses different input data every time it runs. Ideally, recurring jobs should always succeed, and have relatively stable execution time; monitoring these behaviors will help ensure the job is healthy. Recurring jobs are identified using the "Recurrence" property. Jobs scheduled using ADF V2 will automatically have this property populated. To view a list of U-SQL jobs that are recurring: 1. In the Azure portal, go to your Data Lake Analytics accounts.
-2. Click **Job Insights**. The "All Jobs" tab will be defaulted, showing a list of running, queued, and ended jobs.
-3. Click the **Recurring Jobs** tab. A list of recurring jobs will be shown along with aggregated statistics for each recurring job.
+2. Select **Job Insights**. The "All Jobs" tab will be defaulted, showing a list of running, queued, and ended jobs.
+3. Select the **Recurring Jobs** tab. A list of recurring jobs will be shown along with aggregated statistics for each recurring job.
## Next steps
data-lake-analytics Data Lake Analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-whats-new.md
Previously updated : 07/31/2020 Last updated : 11/16/2022 # What's new in Data Lake Analytics? + Azure Data Lake Analytics is updated on an aperiodic basis for certain components. To help you stay current with the most recent updates, this article provides information about:
Azure Data Lake Analytics is updated on an aperiodic basis for certain component
## Notification of key component beta preview
-No key component beta version available for preview.
+No key component beta version available for preview.
## U-SQL runtime
The runtime version will be updated aperiodically. And the previous runtime will
> - Choosing a runtime that is different from the default has the potential to break your U-SQL jobs. It is highly recommended not to use these non-default versions for production, but for testing only. > - The non-default runtime version has a fixed lifecycle. It will be automatically expired.
-The following version is the current default runtime version.
--- **release_20200707_scope_2b8d563_usql**- To learn how to troubleshoot U-SQL runtime failures, see [Troubleshoot U-SQL runtime failures](runtime-troubleshoot.md). ## .NET Framework Azure Data Lake Analytics now uses the **.NET Framework v4.7.2**.
-If your Azure Data Lake Analytics U-SQL script code uses custom assemblies, and those custom assemblies use .NET libraries, validate your code to check if there is any breakings.
+If your Azure Data Lake Analytics U-SQL script code uses custom assemblies, and those custom assemblies use .NET libraries, validate your code to check for any breaking changes.
To learn how to troubleshoot a .NET upgrade, see [Troubleshoot a .NET upgrade](runtime-troubleshoot.md).
data-lake-analytics Migrate Azure Data Lake Analytics To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md
Previously updated : 08/25/2021 Last updated : 11/15/2022 # Migrate Azure Data Lake Analytics to Azure Synapse Analytics
-Microsoft launched the Azure Synapse Analytics which aims at bringing both data lakes and data warehouse together for a unique big data analytics experience. It will help customers gather and analyze all the varying data, to solve data inefficiency, and work together. Moreover, Synapse's integration with Azure Machine Learning and Power BI will allow the improved ability for organizations to get insights from its data as well as execute machine learning to all its smart apps.
+Microsoft launched Azure Synapse Analytics, which brings data lakes and data warehouses together for a unified big data analytics experience. It helps customers gather and analyze all their varied data, solve data inefficiencies, and collaborate. Moreover, Synapse's integration with Azure Machine Learning and Power BI gives organizations an improved ability to get insights from their data and apply machine learning to all their smart apps.
The document shows you how to do the migration from Azure Data Lake Analytics to Azure Synapse Analytics.
The document shows you how to do the migration from Azure Data Lake Analytics to
1. Identify jobs and data that you'll migrate. - Take this opportunity to clean up those jobs that you no longer use. Unless you plan to migrate all your jobs at one time, take this time to identify logical groups of jobs that you can migrate in phases.
- - Evaluate the size of the data and understand Apache Spark data format. Review your U-SQL scripts and evaluate the scripts re-writing efforts and understand the Apache Spark code concept.
+ - Evaluate the size of the data and understand the Apache Spark data format. Review your U-SQL scripts, evaluate the effort of rewriting them, and understand the Apache Spark code concepts.
2. Determine the impact that a migration will have on your business. For example, whether you can afford any downtime while migration takes place.
databox-online Azure Stack Edge Gpu Virtual Machine Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
Previously updated : 09/21/2022 Last updated : 11/15/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs.
defender-for-cloud Defender For Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md
This glossary provides a brief description of important terms and concepts for t
|**ANH** | Adaptive network hardening| [Improve your network security posture with adaptive network hardening](adaptive-network-hardening.md) |**APT** | Advanced Persistent Threats | [Video: Understanding APTs](/events/teched-2012/sia303)| | **Arc-enabled Kubernetes**| Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center.|[What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md)
-|**ARM**| Azure Resource Manager-the deployment and management service for Azure.| [Azure Resource Manager Overview](/azure/azure-resource-manager/management/overview)|
+|**ARM**| Azure Resource Manager-the deployment and management service for Azure.| [Azure Resource Manager Overview](../azure-resource-manager/management/overview.md)|
|**ASB**| Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure.| [Azure Security Benchmark](/azure/baselines/security-center-security-baseline) | |**Auto-provisioning**| To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can use auto provisioning to quietly deploy the Azure Monitor Agent on your servers.| [Configure auto provision](../iot-dps/quick-setup-auto-provision.md)| ## B | Term | Description | Learn more | |--|--|--|
-|**Blob storage**| Azure Blob Storage is the high scale object storage service for Azure and a key building block for data storage in Azure.| [what is Azure blob storage?](/azure/storage/blobs/storage-blobs-introduction)|
+|**Blob storage**| Azure Blob Storage is the high scale object storage service for Azure and a key building block for data storage in Azure.| [What is Azure Blob storage?](../storage/blobs/storage-blobs-introduction.md)|
## C | Term | Description | Learn more | |--|--|--| |**Cacls** | Change access control list, Microsoft Windows native command-line utility often used for modifying the security permission on folders and files.| [access-control-lists](/windows/win32/secauthz/access-control-lists) |
-|**CIS Benchmark** | (Kubernetes) Center for Internet Security benchmark| [CIS](/azure/aks/cis-kubernetes)|
+|**CIS Benchmark** | (Kubernetes) Center for Internet Security benchmark| [CIS](../aks/cis-kubernetes.md)|
|**CORS**| Cross origin resource sharing, an HTTP feature that enables a web application running under one domain to access resources in another domain.| [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services)| |**CNCF**|Cloud Native Computing Foundation|[Build CNCF projects by using Azure Kubernetes service](/azure/architecture/example-scenario/apps/build-cncf-incubated-graduated-projects-aks)| |**CSPM**|Cloud Security Posture Management| [Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md)|
-|**CWPP** | Cloud Workload Protection Platform | [CWPP](/azure/defender-for-cloud/overview-page)|
+|**CWPP** | Cloud Workload Protection Platform | [CWPP](./overview-page.md)|
## D | Term | Description | Learn more | |--|--|--|
-| **DDOS Attack** | Distributed denial-of-service, a type of attack where an attacker sends more requests to an application than the application is capable of handling.| [DDOS FAQs](/azure/ddos-protection/ddos-faq)|
+| **DDOS Attack** | Distributed denial-of-service, a type of attack where an attacker sends more requests to an application than the application is capable of handling.| [DDOS FAQs](../ddos-protection/ddos-faq.yml)|
## E | Term | Description | Learn more |
This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--| |**FIM**| File Integrity Monitoring | ([File Integrity Monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md)|
-**FTP** | File Transfer Protocol | [Deploy content using FTP](/azure/app-service/deploy-ftp?tabs=portal)|
+**FTP** | File Transfer Protocol | [Deploy content using FTP](../app-service/deploy-ftp.md?tabs=portal)|
## G | Term | Description | Learn more | |--|--|--|
-|**GCP**| Google Cloud Platform | [Onboard a GPC Project](/azure/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp)|
+|**GCP**| Google Cloud Platform | [Onboard a GCP project](../active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md)|
|**GKE**| Google Kubernetes Engine, Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.|[Deploy a Kubernetes workload using GPU sharing on your Azure Stack Edge Pro](../databox-online/azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md)| ## J
This glossary provides a brief description of important terms and concepts for t
|--|--|--| |**MDC**| Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. | [What is Microsoft Defender for Cloud?](defender-for-cloud-introduction.md)| |**MDE**| Microsoft Defender for Endpoint is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.|[Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)|
-|**MFA**|multi factor authentication, a process in which users are prompted during the sign-in process for an additional form of identification, such as a code on their cellphone or a fingerprint scan.|[How it works: Azure Multi Factor Authentication](/azure/active-directory/authentication/concept-mfa-howitworks)|
+|**MFA**|multi factor authentication, a process in which users are prompted during the sign-in process for an additional form of identification, such as a code on their cellphone or a fingerprint scan.|[How it works: Azure Multi Factor Authentication](../active-directory/authentication/concept-mfa-howitworks.md)|
|**MITRE ATT&CK**| a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations.|[MITRE ATT&CK](https://attack.mitre.org/)|
-|**MMA**| Microsoft Monitoring Agent, also known as Log Analytics Agent|[Log Analytics Agent Overview](/azure/azure-monitor/agents/log-analytics-agent)|
+|**MMA**| Microsoft Monitoring Agent, also known as Log Analytics Agent|[Log Analytics Agent Overview](../azure-monitor/agents/log-analytics-agent.md)|
## N | Term | Description | Learn more |
This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--| |**RaMP**| Rapid Modernization Plan, guidance based on initiatives, giving you a set of deployment paths to more quickly implement key layers of protection.|[Zero Trust Rapid Modernization Plan](../security/fundamentals/zero-trust.md)|
-|**RBAC**| Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. | [RBAC Overview](/azure/role-based-access-control/overview)|
-|**RDP** | Remote Desktop Protocol (RDP) is a sophisticated technology that uses various techniques to perfect the server's remote graphics' delivery to the client device.| [RDP Bandwidth Requirements](/azure/virtual-desktop/rdp-bandwidth)|
+|**RBAC**| Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. | [RBAC Overview](../role-based-access-control/overview.md)|
+|**RDP** | Remote Desktop Protocol (RDP) is a sophisticated technology that uses various techniques to perfect the server's remote graphics' delivery to the client device.| [RDP Bandwidth Requirements](../virtual-desktop/rdp-bandwidth.md)|
|**Recommendations**|Recommendations secure your workloads with step-by-step actions that protect your workloads from known security risks.| [What are security policies, initiatives, and recommendations?](security-policy-concept.md)| **Regulatory Compliance** | Regulatory compliance refers to the discipline and process of ensuring that a company follows the laws enforced by governing bodies in their geography or rules required | [Regulatory Compliance Overview](/azure/cloud-adoption-framework/govern/policy-compliance/regulatory-compliance) |
This glossary provides a brief description of important terms and concepts for t
|**Secure Score**|Defender for Cloud continually assesses your cross-cloud resources for security issues. It then aggregates all the findings into a single score that represents your current security situation: the higher the score, the lower the identified risk level.|[Security posture for Microsoft Defender for Cloud](secure-score-security-controls.md)| |**Security Initiative** | A collection of Azure Policy Definitions, or rules, that are grouped together towards a specific goal or purpose. | [What are security policies, initiatives, and recommendations?](security-policy-concept.md) |**Security Policy**| An Azure rule about specific security conditions that you want controlled.|[Understanding Security Policies](security-policy-concept.md)|
-|**SOAR**| Security Orchestration Automated Response, a collection of software tools designed to collect data about security threats from multiple sources and respond to low-level security events without human assistance.| [SOAR](/azure/sentinel/automation)|
+|**SOAR**| Security Orchestration Automated Response, a collection of software tools designed to collect data about security threats from multiple sources and respond to low-level security events without human assistance.| [SOAR](../sentinel/automation.md)|
## T | Term | Description | Learn more |
This glossary provides a brief description of important terms and concepts for t
## Z | Term | Description | Learn more | |--|--|--|
-|**Zero-Trust**|A new security model that assumes breach and verifies each request as though it originated from an uncontrolled network.|[Zero-Trust Security](/azure/security/fundamentals/zero-trust)|
+|**Zero-Trust**|A new security model that assumes breach and verifies each request as though it originated from an uncontrolled network.|[Zero-Trust Security](../security/fundamentals/zero-trust.md)|
## Next Steps
-[Microsoft Defender for Cloud-overview](overview-page.md)
-
+[Microsoft Defender for Cloud-overview](overview-page.md)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
For more information about migrating servers from Defender for Endpoint to Defen
|-|:--| | Release state: | General availability (GA) | | Pricing: | Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#defender-for-servers-plans) |
-| Supported environments: | :::image type="icon" source="./medi) (formerly Windows Virtual Desktop), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 11 or Windows 10 (except if running Azure Virtual Desktop or Windows 10 Enterprise multi-session) |
+| Supported environments: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines running Windows/Linux<br>:::image type="icon" source="./media/icons/yes-icon.png":::Azure VMs running Linux ([supported versions](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux))<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure VMs running Windows Server 2022, 2019, 2016, 2012 R2, 2008 R2 SP1, [Windows 10/11 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 10 or Windows 11 (except if running Windows 10/11 Enterprise multi-session) |
| Required roles and permissions: | - To enable/disable the integration: **Security admin** or **Owner**<br>- To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
You can configure the Microsoft Security DevOps tools on Azure Pipelines and Git
| Name | Language | License | |--|--|--|
-| [Bandit](https://github.com/PyCQA/bandit) | python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/main/LICENSE) |
+| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/main/LICENSE) |
| [BinSkim](https://github.com/Microsoft/binskim) | Binary – Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) | | [ESLint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/eslint/eslint/blob/main/LICENSE) | | [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps Only) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files; common types include default passwords, SQL connection strings, and certificates with private keys| Not Open Source |
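As a quick way to preview what one of these scanners reports before wiring it into a pipeline, you can run Bandit locally. A minimal sketch follows; the `-r`, `-f`, and `-o` flags are standard Bandit CLI options, and `src/` is a placeholder path:

```python
import subprocess

# Scan a source tree recursively with Bandit and write JSON findings to a file.
result = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json", "-o", "bandit-report.json"],
    capture_output=True, text=True,
)
# Bandit exits non-zero when issues are found, so inspect output either way.
print(result.stdout or result.stderr)
```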
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
For information about when recommendations are generated for each of these solut
| McAfee v10+ | Windows Server (all) | No | | McAfee v10+ | Linux (GA) | No | | Microsoft Defender for Endpoint for Linux<sup>[1](#footnote1)</sup> | Linux (GA) | Via extension |
+| Microsoft Defender for Endpoint Unified Solution<sup>[2](#footnote2)</sup> | Windows Server 2012 R2 and Windows Server 2016 | Via extension |
| Sophos V9+ | Linux (GA) | No | <sup><a name="footnote1"></a>1</sup> It's not enough to have Microsoft Defender for Endpoint on the Linux machine: the machine will only appear as healthy if the always-on scanning feature (also known as real-time protection (RTP)) is active. By default, the RTP feature is **disabled** to avoid clashes with other AV software.
+<sup><a name="footnote2"></a>2</sup> The MDE unified solution on Windows Server 2012 R2 automatically installs Microsoft Defender Antivirus in Active mode. On Windows Server 2016, Microsoft Defender Antivirus is built into the OS.
+
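Per footnote 1 above, Defender for Endpoint on Linux only reports as healthy when real-time protection (RTP) is active. As a hedged sketch, the documented `mdatp` command line can enable and verify RTP; it's wrapped in Python here only for consistency with the other examples, and it requires root:

```python
import subprocess

# Enable real-time protection (RTP) via the mdatp CLI; requires root.
subprocess.run(
    ["sudo", "mdatp", "config", "real-time-protection", "--value", "enabled"],
    check=True,
)

# Confirm RTP is now active.
status = subprocess.run(
    ["mdatp", "health", "--field", "real_time_protection_enabled"],
    capture_output=True, text=True, check=True,
)
print(status.stdout.strip())  # expected output: "true"
```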
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Title: The regulatory compliance dashboard in Microsoft Defender for Cloud
description: Learn how to add and remove regulatory standards from the regulatory compliance dashboard in Defender for Cloud Previously updated : 09/18/2022 Last updated : 11/15/2022 # Customize the set of standards in your regulatory compliance dashboard
By default, every Azure subscription has the **Microsoft cloud security benchmar
Available regulatory standards:
- PCI-DSS v3.2.1
+- PCI DSS v4
- SOC TSP
- ISO 27001:2013
- Azure CIS 1.1.0
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Last updated 03/24/2022
The Microsoft Defender for IoT system is built to provide broad coverage and visibility from diverse data sources.
-The following image shows how data can stream into Defender for IoT from network sensors, Microsoft Defender for Endpoint, and partner sources to provide a unified view of IoT/OT security. Defender for IoT in the Azure portal provides asset inventories, vulnerability assessments, and continuous threat monitoring.
+The following image shows how data can stream into Defender for IoT from network sensors and partner sources to provide a unified view of IoT/OT security. Defender for IoT in the Azure portal provides asset inventories, vulnerability assessments, and continuous threat monitoring.
Defender for IoT connects to both cloud and on-premises components, and is built for scalability in large and geographically distributed environments. Defender for IoT systems include the following components: -- The Azure portal, for cloud management and integration to other Microsoft services, such as Microsoft Sentinel.-- Network sensors, deployed on either a virtual machine or a physical appliance. You can configure your OT sensors as cloud-connected sensors, or fully on-premises sensors.-- An on-premises management console for cloud-connected or local, air-gapped site management.-- An embedded security agent (optional).
+- **The Azure portal**, for cloud management and integration to other Microsoft services, such as Microsoft Sentinel.
+- **Network sensors**, deployed on either a virtual machine or a physical appliance. You can configure your OT sensors as cloud-connected sensors, or fully on-premises sensors.
+- **An on-premises management console** for cloud-connected or local, air-gapped site management.
+- **An embedded security agent** (optional).
-## Network sensors
+## OT network sensors
-Defender for IoT network sensors discover and continuously monitor network traffic on IoT and OT devices.
+OT network sensors discover and continuously monitor network traffic across your OT devices.
-- The sensors are purpose-built for IoT and OT networks. They connect to a SPAN port or network TAP and can provide visibility into IoT and OT risks within minutes of connecting to the network.
+- Network sensors are purpose-built for OT networks. They connect to a SPAN port or network TAP and can provide visibility into risks within minutes of connecting to the network.
-- The sensors use IoT and OT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect IoT and OT threats, such as fileless malware, based on anomalous or unauthorized activity.
+- Network sensors use OT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect threats, such as fileless malware, based on anomalous or unauthorized activity.
Data collection, processing, analysis, and alerting take place directly on the sensor. Running processes directly on the sensor can be ideal for locations with low bandwidth or high-latency connectivity, because only the metadata is transferred onward for management, either to the Azure portal or an on-premises management console.
When you have a cloud connected sensor:
- All data that the sensor detects is displayed in the sensor console, but alert information is also delivered to Azure, where it can be analyzed and shared with other Azure services. -- Microsoft threat intelligence packages can also be automatically pushed to cloud-connected sensors.
+- Microsoft threat intelligence packages can be automatically pushed to cloud-connected sensors.
- The sensor name defined during onboarding is the name displayed in the sensor, and is read-only from the sensor console.
In contrast, when working with locally managed sensors:
- View any data for a specific sensor from the sensor console. For a unified view of all information detected by several sensors, use an on-premises management console. For more information, see [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md). -- You must manually upload any threat intelligence packages.
+- You must manually upload any threat intelligence packages to locally managed sensors.
- Sensor names can be updated in the sensor console.
Defender for IoT sensors apply analytics engines on ingested data, triggering al
Analytics engines provide machine learning and profile analytics, risk analysis, a device database and set of insights, threat intelligence, and behavioral analytics.
-For example, for OT networks, the **policy violation detection** engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. The policy violation engine models industry control system (ICS) networks as deterministic sequences of states and transitions - using a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine creates a baseline for industrial control system (ICS) networks. Since many detection algorithms were built for IT, rather than OT, networks, an extra baseline for ICS networks helps to shorten the systems learning curve for new detections.
+For example, the **policy violation detection** engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. The policy violation engine models industrial control system (ICS) networks as deterministic sequences of states and transitions - using a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine creates a baseline for ICS networks. Since many detection algorithms were built for IT, rather than OT, networks, an extra baseline for ICS networks helps to shorten the system's learning curve for new detections. A toy sketch of this kind of state-transition checking appears below.
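The following is a loose illustration of state-transition baselining in Python, not Microsoft's patented IFSM implementation; the states, events, and allowed transitions are invented for the example:

```python
# Toy policy-violation check: model allowed (state, event) -> next-state
# transitions and flag any event that deviates from the learned baseline.
allowed_transitions = {
    ("idle", "read_coils"): "reading",
    ("reading", "read_done"): "idle",
    ("idle", "write_register"): "writing",
    ("writing", "write_done"): "idle",
}

def check_sequence(events, state="idle"):
    alerts = []
    for event in events:
        next_state = allowed_transitions.get((state, event))
        if next_state is None:
            alerts.append(f"Policy violation: '{event}' not allowed in state '{state}'")
        else:
            state = next_state
    return alerts

print(check_sequence(["read_coils", "read_done", "firmware_update"]))
# -> one policy-violation alert for 'firmware_update'
```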
-Specifically for OT networks, OT network sensors also provide the following analytics engines:
+OT network sensors include the following analytics engines:
- **Protocol violation detection engine**: Identifies the use of packet structures and field values that violate ICS protocol specifications, for example: Modbus exception, and initiation of an obsolete function code alerts.
defender-for-iot Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md
+
+ Title: Subscription billing
+description: Learn how you're billed for the Microsoft Defender for IoT service on your Azure subscription.
+ Last updated : 10/30/2022++
+# Defender for IoT subscription billing
+
+As you plan your Microsoft Defender for IoT deployment, you typically want to understand the Defender for IoT pricing plans and billing models so you can optimize your costs.
+
+## Free trial
+
+If you would like to evaluate Defender for IoT, you can use a trial commitment.
+
+The trial is valid for 30 days and supports 1,000 [committed devices](#defender-for-iot-committed-devices), which are the number of devices you want to monitor in your network.
+
+- **For OT networks**, use a trial to deploy one or more Defender for IoT sensors on your network to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more.
+
+- **For Enterprise IoT networks**, use a trial to view alerts, recommendations, and vulnerabilities in Microsoft 365.
+
+## Defender for IoT committed devices
+
+When onboarding or editing a monthly or annual Defender for IoT plan, we recommend that you have a sense of how many devices you would like to monitor.
+
+You're billed based on the number of committed devices associated with each subscription.
++
+## Billing cycles and changes in your plans
+
+Billing cycles for Microsoft Defender for IoT follow the calendar month. Changes you make to Defender for IoT plans take effect one hour after you confirm the update, and are reflected in your monthly bill.
+
+Canceling a Defender for IoT plan from your Azure subscription also takes effect one hour after canceling the plan.
+
+Your enterprise may have more than one paying entity. If so, you can onboard, edit, or cancel a plan for more than one subscription.
+
+## Next steps
+
+For more information, see:
+
+- The [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/)
+- [Manage Defender for IoT plans for OT monitoring](how-to-manage-subscriptions.md)
+- [Manage Defender for IoT plans for Enterprise IoT monitoring](manage-subscriptions-enterprise.md)
defender-for-iot Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md
+
+ Title: Securing IoT devices in the enterprise with Microsoft Defender for Endpoint
+description: Learn how integrating Microsoft Defender for Endpoint and Microsoft Defender for IoT enhances your IoT network security.
+ Last updated : 10/19/2022++
+# Securing IoT devices in the enterprise
+
+The number of IoT devices continues to grow exponentially across enterprise networks, such as the printers, Voice over Internet Protocol (VoIP) devices, smart TVs, and conferencing systems scattered around many office buildings.
+
+While the number of IoT devices continues to grow, they often lack the security safeguards that are common on managed endpoints like laptops and mobile phones. Bad actors can use these unmanaged devices as a point of entry for lateral movement or evasion, and too often, such tactics lead to the exfiltration of sensitive information.
+
+[Microsoft Defender for IoT](/azure/defender-for-iot/organizations/) seamlessly integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data.
+
+> [!IMPORTANT]
+> The Enterprise IoT Network sensor is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## IoT security across Microsoft 365 Defender and Azure
+
+Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and Azure portals using the following methods:
+
+|Method |Description and requirements | Configure in ... |
+||||
+|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) only** | Add an Enterprise IoT plan in Microsoft 365 Defender to view IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender. <br><br>The extra security value is provided for IoT devices detected by Defender for Endpoint. <br><br>**Requires**: <br> - A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator)<br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) | Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. |
+|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) plus an [Enterprise IoT sensor](#device-visibility-with-enterprise-iot-sensors-public-preview)** | Add an Enterprise IoT plan in Microsoft 365 Defender to add IoT-specific alerts, recommendations, and vulnerability data Microsoft 365 Defender, for IoT devices detected by Defender for Endpoint. <br><br>Register an Enterprise IoT sensor in Defender for IoT for more device visibility in both Microsoft 365 Defender and the Azure portal.<br><br>**Requires**: <br>- A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator)<br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner)<br>- A physical or VM appliance to use as a sensor |Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. <br><br>Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
+|**[An Enterprise IoT sensor only](#device-visibility-with-enterprise-iot-sensors-only)** | Register an Enterprise IoT sensor in Defender for IoT for Enterprise IoT device visibility in the Azure portal only. <br><br>Alerts, recommendations, and vulnerability data aren't currently available. <br><br>**Requires**: <br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) <br>- A physical or VM appliance to use as a sensor | Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
+
+## Security value in Microsoft 365 Defender
+
+Defender for IoT's Enterprise IoT plan adds purpose-built alerts, recommendations, and vulnerability data for the IoT devices discovered by Defender for Endpoint agents. The added security value is available in Microsoft 365 Defender only, which is Microsoft's central portal for combined enterprise IT and IoT device security.
+
+For example, use the added security recommendations to open a single IT ticket to patch vulnerable applications on both servers and printers. Or, use a recommendation to request that the network team adds firewall rules that apply for both workstations and cameras communicating with a suspicious IP address.
+
+The following image shows the architecture and extra features added with an Enterprise IoT plan in Microsoft 365 Defender:
+++
+> [!NOTE]
+> Defender for Endpoint doesn't issue IoT-specific alerts, recommendations, and vulnerability data without an Enterprise IoT plan in Microsoft 365 Defender. Use our [quickstart](eiot-defender-for-endpoint.md) to start seeing this extra security value across your network.
+>
+
+For more information, see:
+
+- [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md)
+- [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response)
+- [Security recommendations](/microsoft-365/security/defender-vulnerability-management/tvm-security-recommendation)
+- [Vulnerabilities in my organization](/microsoft-365/security/defender-vulnerability-management/tvm-weaknesses)
+
+## Device visibility with Enterprise IoT sensors (Public preview)
+
+IT networks can be complex, and Defender for Endpoint agents may not give you full visibility for all IoT devices. For example, if you have a VLAN dedicated to VoIP devices with no other endpoints, Defender for Endpoint may not be able to discover devices on that VLAN.
+
+To discover devices not covered by Defender for Endpoint, register an Enterprise IoT network sensor and gain full visibility over your network devices.
+
+The following image shows the architecture of an Enterprise IoT network sensor connected to Defender for IoT, in addition to an Enterprise IoT plan added in Microsoft 365 Defender:
++
+View discovered devices in both Microsoft 365 Defender and Defender for IoT, whether they've been discovered by Defender for Endpoint or discovered by your network sensor.
+
+The Enterprise IoT network sensor is a low-touch appliance, with automatic updates and transparent maintenance for customers.
+
+> [!NOTE]
+> Deploying a network sensor is optional and is *not* a prerequisite for integrating Defender for Endpoint and Defender for IoT.
+
+Add an Enterprise IoT sensor from Defender for IoT in the Azure portal. For more information, see [Enhance IoT security monitoring with an Enterprise IoT network sensor](eiot-sensor.md) and [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
+
+### Device visibility with Enterprise IoT sensors only
+
+You can also register an Enterprise IoT network sensor *without* using Defender for Endpoint, and view IoT devices in Defender for IoT in the Azure portal only. This view is especially helpful when you're also managing Operational Technology (OT) devices, monitored by OT network sensors with Defender for IoT.
+
+The following image shows the architecture of an Enterprise IoT network sensor connected to Defender for IoT, without an Enterprise IoT plan:
++
+## Next steps
+
+Start securing your Enterprise IoT network resources by [onboarding to Defender for IoT from Microsoft 365 Defender](eiot-defender-for-endpoint.md). Then, add even more device visibility by [adding an Enterprise IoT network sensor](eiot-sensor.md) to Defender for IoT.
+
+For more information, see [Enterprise IoT networks frequently asked questions](faqs-eiot.md).
defender-for-iot Eiot Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-defender-for-endpoint.md
+
+ Title: Enable Enterprise IoT security in Microsoft 365 with Defender for Endpoint - Microsoft Defender for IoT
+description: Learn how to start integrating between Microsoft Defender for IoT and Microsoft Defender for Endpoint in Microsoft 365 Defender.
+ Last updated : 10/19/2022++
+# Enable Enterprise IoT security with Defender for Endpoint
+
+This article describes how [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) customers can add an Enterprise IoT plan in Microsoft 365 Defender, providing extra security value for IoT devices.
+
+While IoT device inventory is already available for Defender for Endpoint P2 customers, adding an Enterprise IoT plan adds alerts, recommendations, and vulnerability data, purpose-built for IoT devices in your enterprise network.
+
+IoT devices include printers, cameras, VOIP phones, smart TVs, and more. Adding an Enterprise IoT plan means, for example, that you can use a recommendation in Microsoft 365 Defender to open a single IT ticket for patching vulnerable applications across both servers and printers.
+
+## Prerequisites
+
+Before you start the procedures in this article, read through [Secure IoT devices in the enterprise](concept-enterprise.md) to understand more about the integration between Defender for Endpoint and Defender for IoT.
+
+Make sure that you have:
+
+- A Microsoft Defender for Endpoint P2 license
+
+- IoT devices in your network, visible in the Microsoft 365 Defender **Device inventory**
+
+- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/).
+
+- The following user roles:
+
+ |Identity management |Roles required |
+ |||
+ |**In Azure Active Directory** | [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator) for your Microsoft 365 tenant |
+ |**In Azure RBAC** | [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) for the Azure subscription that you'll be using for the integration |
+
+## Onboard a Defender for IoT plan
+
+1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**.
+
+1. Select the following options for your plan:
+
+ - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+
+ - **Price plan**: For the sake of this tutorial, select a **Trial** pricing plan. Microsoft Defender for IoT provides a [30-day free trial](billing.md#free-trial) for the first 1,000 committed devices for evaluation purposes. For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
+
+1. Select the **I accept the terms and conditions** option and then select **Save**.
+
+For example:
++
+## View added security value in Microsoft 365 Defender
+
+This procedure describes how to view related alerts, recommendations, and vulnerabilities for a specific device in Microsoft 365 Defender. Alerts, recommendations, and vulnerabilities are shown for IoT devices only after you've added an Enterprise IoT plan.
+
+**To view added security value**:
+
+1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Assets** \> **Devices** to open the **Device inventory** page.
+
+1. Select the **IoT devices** tab and select a specific device **IP** to drill down for more details. For example:
+
+ :::image type="content" source="media/enterprise-iot/select-a-device.png" alt-text="Screenshot of the IoT devices tab in Microsoft 365 Defender." lightbox="media/enterprise-iot/select-a-device.png":::
+
+1. On the device details page, explore the following tabs to view data added by the Enterprise IoT plan for your device:
+
+ - On the **Alerts** tab, check for any alerts triggered by the device.
+
+ - On the **Security recommendations** tab, check for any recommendations available for the device to reduce risk and maintain a smaller attack surface.
+
+ - On the **Discovered vulnerabilities** tab, check for any known CVEs associated with the device. Known CVEs can help decide whether to patch, remove, or contain the device and mitigate risk to your network.
+## Next steps
+
+Learn how to set up an Enterprise IoT network sensor (Public preview) and gain visibility into additional IoT segments of your corporate network that aren't otherwise covered by Defender for Endpoint.
+
+Customers that have set up an Enterprise IoT network sensor will be able to see all discovered devices in the **Device inventory** in either Microsoft 365 Defender, or Defender for IoT in the Azure portal.
+
+> [!div class="nextstepaction"]
+> [Enhance device discovery with an Enterprise IoT network sensor](eiot-sensor.md)
defender-for-iot Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md
+
+ Title: Enhance device discovery with a Microsoft Defender for IoT Enterprise IoT network sensor
+description: Learn how to register an Enterprise IoT network sensor in Defender for IoT for extra device visibility not covered by Defender for Endpoint.
+ Last updated : 10/19/2022
+# Discover Enterprise IoT devices with an Enterprise IoT network sensor (Public preview)
+
+This article describes how to register an Enterprise IoT network sensor in Microsoft Defender for IoT.
+
+**If you're a Defender for Endpoint customer** with an Enterprise IoT plan for Defender for IoT, adding an Enterprise IoT network sensor extends your network visibility to IoT segments in your corporate network not otherwise covered by Microsoft Defender for Endpoint. For example, if you have a VLAN dedicated to VoIP devices with no other endpoints, Defender for Endpoint may not be able to discover devices on that VLAN.
+
+Customers that have set up an Enterprise IoT network sensor can see all discovered devices in the **Device inventory** in either Microsoft 365 Defender or Defender for IoT. You'll also get extra security value from more alerts, vulnerabilities, and recommendations in Microsoft 365 Defender for the newly discovered devices.
+
+**If you're a Defender for IoT customer** working solely in the Azure portal, an Enterprise IoT network sensor provides extra device visibility to Enterprise IoT devices, such as Voice over Internet Protocol (VoIP) devices, printers, and cameras, which may not be covered by your OT network sensors.
+
+For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md).
+
+> [!IMPORTANT]
+> The Enterprise IoT Network sensor is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+Before you start registering an Enterprise IoT sensor:
+
+- To view Defender for IoT data in Microsoft 365 Defender, including devices, alerts, recommendations, and vulnerabilities, you must have an Enterprise IoT plan, [onboarded from Microsoft 365 Defender](eiot-defender-for-endpoint.md).
+
+ If you only want to view data in the Azure portal, an Enterprise IoT plan isn't required. You can also onboard your Enterprise IoT plan from Microsoft 365 Defender after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#security-value-in-microsoft-365-defender) to your organization.
+
+- Make sure you can access the Azure portal as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
+- Allocate a physical appliance or a virtual machine (VM) to use as your network sensor. Make sure that your machine has the following specifications:
+
+ | Tier | Requirements |
+ |--|--|
+  | **Minimum** | To support up to 1 Gbps of data: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16 GB of DDR4 RAM or better<br>- 250 GB HDD |
+  | **Recommended** | To support up to 15 Gbps of data: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32 GB of DDR4 RAM or better<br>- 500 GB HDD |
+
+ Your machine must also have:
+
+  - The [Ubuntu 18.04 Server](https://releases.ubuntu.com/18.04/) operating system. If you don't yet have Ubuntu installed, download the installation files to an external storage device, such as a DVD or USB drive, and then install it on your appliance or VM. For more information, see the Ubuntu [Image Burning Guide](https://help.ubuntu.com/community/BurningIsoHowto).
+
+  - At least two network adapters: one for your switch monitoring (SPAN) port, and one for your management port to access the sensor's user interface.
+
+## Prepare a physical appliance or VM
+
+This procedure describes how to prepare your physical appliance or VM to install the Enterprise IoT network sensor software.
+
+**To prepare your appliance**:
+
+1. Connect a network interface (NIC) from your physical appliance or VM to a switch as follows:
+
+   - **Physical appliance** - Connect a monitoring NIC to a SPAN port directly with a copper or fiber cable.
+
+   - **VM** - Connect a vNIC to a vSwitch, and configure your vSwitch security settings to accept *Promiscuous mode*. For an example, see [Configure a SPAN monitoring interface for a virtual appliance](extra-deploy-enterprise-iot.md#configure-a-span-monitoring-interface-for-a-virtual-appliance).
+
+1. <a name="sign-in"></a>Sign in to your physical appliance or VM and run the following command to validate incoming traffic to the monitoring port:
+
+ ```bash
+ ifconfig
+ ```
+
+ The system displays a list of all monitored interfaces.
+
+ Identify the interfaces that you want to monitor, which are usually the interfaces with no IP address listed. Interfaces with incoming traffic will show an increasing number of RX packets.
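+
+   For example, with a hypothetical monitoring interface named `eth1` (your interface names will differ), you can sample the RX counter twice, a few seconds apart, to confirm that mirrored traffic is arriving:
+
+   ```bash
+   # An increasing RX packet count between the two samples indicates incoming traffic
+   ifconfig eth1 | grep 'RX packets'
+   sleep 10
+   ifconfig eth1 | grep 'RX packets'
+   ```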
+
+1. For each interface you want to monitor, run the following command to enable *Promiscuous mode* in the network adapter:
+
+ ```bash
+ ifconfig <monitoring port> up promisc
+ ```
+
+ Where `<monitoring port>` is an interface you want to monitor. Repeat this step for each interface you want to monitor.
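+
+   To confirm the change took effect (again using `eth1` as an example name), check that the `PROMISC` flag now appears for the interface. Note that a flag set this way doesn't persist across reboots:
+
+   ```bash
+   # Prints PROMISC if promiscuous mode is enabled on the interface
+   ip link show eth1 | grep -o PROMISC
+   ```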
+
+1. Ensure network connectivity by opening the following ports in your firewall:
+
+ | Protocol | Transport | In/Out | Port | Purpose |
+ |--|--|--|--|--|
+ | HTTPS | TCP | In/Out | 443 | Cloud connection |
+ | DNS | TCP/UDP | In/Out | 53 | Address resolution |
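+
+   How you open these ports depends on your firewall. As an illustrative sketch only, if the sensor machine itself runs `ufw` (Ubuntu's bundled firewall front end), the equivalent outbound rules might look like the following; your corporate firewall will have its own configuration:
+
+   ```bash
+   # Allow outbound HTTPS (cloud connection)
+   sudo ufw allow out 443/tcp
+   # Allow outbound DNS over both TCP and UDP (address resolution)
+   sudo ufw allow out 53
+   ```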
+1. Make sure that your physical appliance or VM can access the cloud using HTTPS on port 443 to the following Microsoft endpoints:
+
+ - **EventHub**: `*.servicebus.windows.net`
+ - **Storage**: `*.blob.core.windows.net`
+ - **Download Center**: `download.microsoft.com`
+ - **IoT Hub**: `*.azure-devices.net`
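+
+   For example, you can spot-check outbound HTTPS connectivity to one of these endpoints (here, the Download Center) directly from the sensor machine:
+
+   ```bash
+   # Expect "outbound 443 OK" if the endpoint is reachable over HTTPS
+   curl -sI --connect-timeout 10 https://download.microsoft.com >/dev/null && echo "outbound 443 OK" || echo "connection failed"
+   ```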
+
+ > [!TIP]
+ > You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure endpoints that are specified above, along with their region.
+ >
+   > The Azure public IP ranges are updated weekly. New ranges appearing in the file won't be used in Azure for at least one week. To use this option, download the new JSON file every week and make the necessary changes at your site to correctly identify services running in Azure.
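+   >
+   > For example, after downloading the JSON file (saved locally here under the hypothetical name `service_tags.json`), you could list the address ranges for one of the services above with `jq`. The `EventHub.WestEurope` service tag name is an illustrative example; use the tags and regions relevant to your deployment:
+   >
+   > ```bash
+   > # List IP ranges for the EventHub service tag in West Europe
+   > jq -r '.values[] | select(.name == "EventHub.WestEurope") | .properties.addressPrefixes[]' service_tags.json
+   > ```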
+
+## Register an Enterprise IoT sensor in Defender for IoT
+
+This section describes how to register an Enterprise IoT sensor in Defender for IoT. You can start directly in the Azure portal, or, if you're a Defender for Endpoint customer with an Enterprise IoT plan, you can start in Microsoft 365 Defender.
+
+When you're done registering your sensor, you'll continue on with installing the Enterprise IoT monitoring software on your sensor machine.
+
+### Access sensor setup from Microsoft 365 Defender
+
+In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal:
+
+1. Select **Settings** \> **Device discovery** \> **Enterprise IoT**.
+
+1. Under **Set up an Enterprise IoT Security sensor**, select the **Microsoft Defender for IoT** link. For example:
+
+ :::image type="content" source="media/enterprise-iot/defender-for-endpoint-setup-sensor.png" alt-text="Screenshot of the Defender for IoT link in Microsoft 365 Defender.":::
+
+The **Microsoft Defender for IoT** link brings you to the sensor setup process in the Azure portal.
+> [!NOTE]
+> You can also access the sensor setup directly from Defender for IoT. In the Azure portal > Defender for IoT, select **Getting started** > **Set up Enterprise IoT Security**.
+
+### Register a sensor in the Azure portal
+
+1. On the **Set up Enterprise IoT Security** page, enter the following details, and then select **Register**:
+
+ - In the **Sensor name** field, enter a meaningful name for your sensor.
+ - From the **Subscription** drop-down menu, select the subscription where you want to add your sensor.
+
+ A **Sensor registration successful** screen shows your next steps and the command you'll need to start the sensor installation.
+
+ For example:
+
+ :::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor.":::
+
+1. Copy the command to a safe location, where you'll be able to access it from your physical appliance or VM in order to [install sensor software](#install-enterprise-iot-sensor-software).
+
+## Install Enterprise IoT sensor software
+
+This procedure describes how to install Enterprise IoT monitoring software on [your sensor machine](#prepare-a-physical-appliance-or-vm), either a physical appliance or VM.
+
+> [!NOTE]
+> While this procedure describes how to install sensor software on a VM using ESXi, Enterprise IoT sensors are also supported using Hyper-V.
+>
+
+**To install sensor software**:
+
+1. On your sensor machine, sign in to the sensor's CLI using a terminal, such as PuTTY or MobaXterm.
+
+1. Run the command that you'd copied from the [sensor registration](#register-a-sensor-in-the-azure-portal) step. For example:
+
+ :::image type="content" source="media/tutorial-get-started-eiot/enter-command.png" alt-text="Screenshot of running the command to install the Enterprise IoT sensor monitoring software.":::
+
+   The process checks to see if the required Docker version is already installed. If it's not, the sensor installation also installs the latest Docker version.
+
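+   If you want to check whether a suitable Docker version is already present before running the installation, you can query it yourself:
+
+   ```bash
+   # Prints the installed Docker version, or an error if Docker isn't installed
+   docker --version
+   ```
+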
+ When the command process completes, the Ubuntu **Configure microsoft-eiot-sensor** wizard appears. In this wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
+
+1. In the **Configure microsoft-eiot-sensor** wizard, in the **What is the name of the monitored interface?** screen, select one or more interfaces that you want to monitor with your sensor, and then select **OK**.
+
+ For example:
+
+ :::image type="content" source="media/tutorial-get-started-eiot/install-monitored-interface.png" alt-text="Screenshot of the Configuring microsoft-eiot-sensor screen.":::
+
+1. In the **Set up proxy server?** screen, select whether to set up a proxy server for your sensor. For example:
+
+ :::image type="content" source="media/tutorial-get-started-eiot/proxy.png" alt-text="Screenshot of the Set up a proxy server? screen.":::
+
+   If you're setting up a proxy server, select **Yes**, and then define the proxy server host, port, username, and password, selecting **OK** after each option.
+
+ The installation takes a few minutes to complete.
+
+1. In the Azure portal, check that the **Sites and sensors** page now lists your new sensor.
+
+ For example:
+
+ :::image type="content" source="media/tutorial-get-started-eiot/view-sensor-listed.png" alt-text="Screenshot of your new Enterprise IoT sensor listed in the Sites and sensors page.":::
+
+In the **Sites and sensors** page, Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**. For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
+
+> [!TIP]
+> If you don't see your Enterprise IoT data in Defender for IoT as expected, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](../../azure-portal/set-preferences.md).
+>
+> If you still don't see your data as expected, [validate your sensor setup](extra-deploy-enterprise-iot.md#validate-your-enterprise-iot-sensor-setup) from the CLI.
+
+## View newly detected Enterprise IoT devices
+
+Once you've validated your setup, the Defender for IoT **Device inventory** page will start to populate with new devices detected by your sensor after about 15 minutes.
+
+If you're a Defender for Endpoint customer with an Enterprise IoT plan, you'll be able to view all detected devices in the **Device inventory** pages, in both Defender for IoT and Microsoft 365 Defender. Detected devices include both devices detected by Defender for Endpoint and devices detected by the Enterprise IoT sensor.
+
+For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) and [Microsoft 365 Defender device discovery](/microsoft-365/security/defender-endpoint/machines-view-overview).
+
+After detecting new devices with the Enterprise IoT network sensor, you may need to edit the number of committed devices in your Enterprise IoT plan.
+
+You can only edit the number of committed devices on a monthly or annual commitment, as trial commitments automatically include 1,000 devices for 30 days. For more information, see:
+
+- [Calculate committed devices for Enterprise IoT monitoring](manage-subscriptions-enterprise.md#calculate-committed-devices-for-enterprise-iot-monitoring)
+- [Defender for IoT subscription billing](billing.md)
+- The [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/)
+
+## Delete an Enterprise IoT network sensor
+
+Delete a sensor if it's no longer in use with Defender for IoT.
+
+1. From the **Sites and sensors** page on the Azure portal, locate your sensor in the grid.
+
+1. In the row for your sensor, select the **...** options menu on the right > **Delete sensor**.
+
+For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
+
+> [!TIP]
+> You can also remove your sensor manually from the CLI. For more information, see [Extra steps and samples for Enterprise IoT deployment](extra-deploy-enterprise-iot.md#remove-an-enterprise-iot-network-sensor-optional).
+
+If you want to cancel your Enterprise IoT plan and stop the integration with Defender for Endpoint, use one of the following methods carefully:
+
+- **To cancel your plan for Enterprise IoT networks only**, do so from [Microsoft 365 Defender](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
+- **To cancel a plan for both OT and Enterprise IoT networks together**, use the [**Pricing** page in Defender for IoT](how-to-manage-subscriptions.md#cancel-a-defender-for-iot-plan) in the Azure portal.
+
+## Move existing sensors to a different subscription
+
+If you've registered an Enterprise IoT network sensor, you may need to apply it to a different subscription than the one you're currently using.
+
+**To apply an existing sensor to a different subscription**:
+
+1. Onboard a new plan to the new subscription
+1. Register the sensors under the new subscription
+1. Remove the sensors from the previous subscription
+
+Billing changes will take effect one hour after cancellation of the previous subscription, and will be reflected on the next month's bill. Devices will be synchronized from the sensor to the new subscription automatically.
+
+**To switch to a new subscription**:
+
+1. In Defender for Endpoint, onboard a new Enterprise IoT plan to the new subscription you want to use. For more information, see [Onboard a Defender for IoT plan](eiot-defender-for-endpoint.md#onboard-a-defender-for-iot-plan).
+
+1. In the Azure portal, register your Enterprise IoT sensor under the new subscription and run the activation command. For more information, see [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
+
+1. Delete the legacy sensor from the previous subscription. In Defender for IoT, go to the **Sites and sensors** page and locate the legacy sensor on the previous subscription.
+
+1. In the row for your sensor, from the options (**...**) menu on the right, select **Delete** to delete the sensor from the previous subscription.
+
+1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
+
+## Next steps
+
+For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal) and [Extra steps and samples for Enterprise IoT deployment](extra-deploy-enterprise-iot.md).
defender-for-iot Extra Deploy Enterprise Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/extra-deploy-enterprise-iot.md
Title: Extra deployment steps and samples for Enterprise IoT deployment - Microsoft Defender for IoT
-description: Describes additional deployment and validation procedures to use when deploying an Enterprise IoT network sensor.
+description: Describes extra deployment and validation procedures to use when deploying an Enterprise IoT network sensor.
Last updated 08/08/2022
This article provides extra steps for deploying an Enterprise IoT sensor, including a sample SPAN port configuration procedure, and CLI steps to validate your deployment or delete a sensor.
-For more information, see [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md).
+For more information, see [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
## Configure a SPAN monitoring interface for a virtual appliance
This procedure describes an example of how to configure a SPAN port on your vSwi
1. Connect to the sensor, and verify that mirroring works.
-If you've jumped to this procedure from the tutorial procedure for [Prepare a physical appliance or VM](tutorial-getting-started-eiot-sensor.md#prepare-a-physical-appliance-or-vm), continue with [step 2](tutorial-getting-started-eiot-sensor.md#sign-in) to continue preparing your appliance.
+If you've jumped to this procedure from [Prepare a physical appliance or VM](eiot-sensor.md#prepare-a-physical-appliance-or-vm), continue with the [sign-in step](eiot-sensor.md#sign-in) to finish preparing your appliance.
## Validate your Enterprise IoT sensor setup
-If, after completing the Enterprise IoT sensor installation and setup, you don't see your sensor showing on the **Sites and sensors** page in the Azure portal, this procedure can help validate your installation directly on the sensor.
+If after completing the Enterprise IoT sensor installation and setup, you don't see your sensor showing on the **Sites and sensors** page in the Azure portal, this procedure can help validate your installation directly on the sensor.
Wait 1 minute after your sensor installation has completed before starting this procedure.
sudo apt purge -y microsoft-eiot-sensor
## Next steps
-For more information, see [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md) and [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
+For more information, see [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md) and [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
defender-for-iot Faqs Eiot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-eiot.md
Last updated 07/07/2022
-# Enterprise IoT networks frequently asked questions
+# Enterprise IoT network security frequently asked questions
-This article provides a list of frequently asked questions and answers about Enterprise IoT networks in Defender for IoT.
+This article provides a list of frequently asked questions about securing Enterprise IoT networks with Microsoft Defender for IoT.
-## What is the difference between OT and Enterprise IoT?
+## What is the difference between OT and Enterprise IoT?
### Operational Technology (OT)

OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for deep visibility into Operational Technology (OT) / Industrial Control System (ICS) risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency.
-### Enterprise IoT
+### Enterprise IoT
-Enterprise IoT provides visibility and security for IoT devices in the corporate environment.
+Enterprise IoT provides visibility and security for IoT devices in the corporate environment.
Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary devices.

## What additional security value can Enterprise IoT provide Microsoft Defender for Endpoint customers?
-Enterprise IoT is designed to help customers secure unmanaged devices throughout the organization and extend IT security to also cover IoT devices. The solution leverages multiple means in order to ensure optimal coverage.
+Enterprise IoT is designed to help customers secure unmanaged devices throughout the organization and extend IT security to also cover IoT devices. The solution uses multiple methods to ensure optimal coverage.
- **In the Microsoft Defender for Endpoint portal**: This is the GA offering for Enterprise IoT. Microsoft 365 P2 customers already have visibility for discovered IoT devices in the **Device inventory** page in Defender for Endpoint. Customers can onboard an Enterprise IoT plan in the same portal and gain security value by viewing alerts, recommendations and vulnerabilities for their discovered IoT devices.

-- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in Defender for IoT in the Azure portal. To view Enterprise IoT devices in the Azure portal, you'll need to set up a network sensor (currently in Public Preview). For more information, see [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md).
+ For more information, see [Onboard with Microsoft Defender for IoT](eiot-defender-for-endpoint.md).
+
+- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in Defender for IoT in the Azure portal. Register an Enterprise IoT network sensor, currently in **Public preview**, to gain visibility into additional devices that aren't covered by Defender for Endpoint.
+
+ For more information, see [Enhance device discovery with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
## How can I start using Enterprise IoT?
-To get started, Microsoft 365 P2 customers need to [add a Defender for IoT plan with Enterprise IoT](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#onboard-a-defender-for-iot-plan) to an Azure subscription from the Microsoft Defender for Endpoint portal.
-
-**Public Preview**: Defender for Endpoint customers can also install a network sensor to gain more visibility into additional IoT segments of the corporate network that weren't previously covered by Defender for Endpoint. Deploying a network sensor is not a prerequisite for onboarding Enterprise IoT.
-For more information, see [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md)
+To get started, Microsoft 365 P2 customers need to [add a Defender for IoT plan with Enterprise IoT](eiot-defender-for-endpoint.md) to an Azure subscription from the Microsoft Defender for Endpoint portal.
-If you're a Defender for Endpoint customer, when adding your Defender for IoT plan, take care to exclude any devices already managed by Defender for Endpoint from your count of committed devices.
+**Public Preview**: Defender for Endpoint customers can also install a network sensor to gain more visibility into additional IoT segments of the corporate network that weren't previously covered by Defender for Endpoint. Deploying a network sensor is not a prerequisite for onboarding Enterprise IoT. For more information, see [Enhance device discovery with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
+
+If you're a Defender for Endpoint customer, when adding your Defender for IoT plan, take care to exclude any devices already [managed by Defender for Endpoint](/microsoft-365/security/defender-endpoint/device-discovery) from your count of committed devices.
## How can I use the Enterprise IoT network sensor?

The Enterprise IoT network sensor is currently in Public Preview and can be used by all customers without additional charge. Onboard a Defender for IoT plan with Enterprise IoT, and then set up your Enterprise IoT network sensor.
-For more information, see [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md).
+For more information, see [Onboard with Microsoft Defender for IoT](eiot-defender-for-endpoint.md) and [Enhance device discovery with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
## What permissions do I need to add a Defender for IoT plan? Can I use any Azure subscription?
-For information on required permissions, see [Prerequisites](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+For information on required permissions, see [Prerequisites](eiot-defender-for-endpoint.md#prerequisites).
## Which devices are billable?
-For more information about billable devices, see [Defender for IoT committed devices](how-to-manage-subscriptions.md#defender-for-iot-committed-devices).
+For more information about billable devices, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device).
## How should I estimate the number of committed devices?
In the **Device inventory** in Defender for Endpoint:
Add the total number of discovered network devices to the total number of discovered IoT devices. Round that up to a multiple of 100, and that is the number of committed devices to use.
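
For example: with 473 discovered network devices and 1,206 discovered IoT devices, the total is 1,679 devices; rounded up to a multiple of 100, that's 1,700 committed devices.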
- For more information, see [Defender for IoT committed devices](how-to-manage-subscriptions.md#defender-for-iot-committed-devices).
+For more information, see [Defender for IoT committed devices](billing.md#defender-for-iot-committed-devices).
+
## How does the integration between Microsoft Defender for Endpoint and Microsoft Defender for IoT work?
To make any changes to an existing plan, you'll need to cancel your existing pla
To remove only Enterprise IoT from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see [Cancel your Defender for IoT plan](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan).
-To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in Defender for IoT in the Azure portal. For more information, see [Cancel a Defender for IoT plan from a subscription](how-to-manage-subscriptions.md#cancel-a-defender-for-iot-plan-from-a-subscription).
+To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in Defender for IoT in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
## What happens when the 30-day trial ends?
-If you haven't changed your plan from a trial to a monthly commitment by the time your trial ends, your plan is automatically canceled, and you'll lose access to Defender for IoT security features.
+If you haven't changed your plan from a trial to a monthly commitment by the time your trial ends, your plan is automatically canceled, and you'll lose access to Defender for IoT security features.
To change your plan from a trial to a monthly commitment before the end of the trial, you'll need to cancel your trial plan and onboard a new plan in Defender for Endpoint. For more information, see [Defender for IoT integration](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
For any billing or technical issues, create a support request in the Azure porta
For more information on getting started with Enterprise IoT, see:
-
-- [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md)
-- [Manage Defender for IoT plans](how-to-manage-subscriptions.md)
-- [Defender for IoT integration](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration)
+- [Securing IoT devices in the enterprise](concept-enterprise.md)
+- [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md)
+- [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md)
+- [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md)
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Last updated 03/24/2022
This quickstart takes you through the initial steps of setting up Defender for IoT, including:
+- Identify and plan OT monitoring system architecture
- Add Defender for IoT to an Azure subscription
-- Identify and plan solution architecture

You can use this procedure to set up a Defender for IoT trial. The trial provides 30-day support for 1000 devices and a virtual sensor, which you can use to monitor traffic, analyze data, generate alerts, understand network risks and vulnerabilities, and more.
If you're using a legacy experience of Defender for IoT and are connecting throu
## Identify and plan your OT solution architecture
-If you're working with an OT network, we recommend that you identify system requirements and plan your system architecture before you start, even if you plan to start with a trial subscription.
-
-> [!NOTE]
-> If you're setting up network monitoring for Enterprise IoT systems, you can skip directly to [Add a Defender for IoT plan for Enterprise IoT networks to an Azure subscription](#add-a-defender-for-iot-plan-for-enterprise-iot-networks-to-an-azure-subscription).
-
-**When working with an OT network**:
+We recommend that you identify system requirements and plan your OT network monitoring architecture before you start, even if you plan to start with a trial subscription.
- To deploy Defender for IoT, you'll need network switches that support traffic monitoring via a SPAN port and hardware appliances for NTA sensors.
For more information, see:
- [Predeployment checklist](pre-deployment-checklist.md)
- [Identify required appliances](how-to-identify-required-appliances.md)
-## Add a Defender for IoT plan for OT networks to an Azure subscription
+## Add a Defender for IoT plan for OT networks
This procedure describes how to add a Defender for IoT plan for OT networks to an Azure subscription.
This procedure describes how to add a Defender for IoT plan for OT networks to a
1. In the **Purchase** pane, define the plan:
- - **Purchase method**. Select a monthly or annual commitment, or a [trial](how-to-manage-subscriptions.md#about-defender-for-iot-trials). Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes.
-
+ - **Purchase method**. Select a monthly or annual commitment, or a [trial](billing.md#free-trial). Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes.
+ For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).

 - **Subscription**. Select the subscription where you would like to add a plan.
This procedure describes how to add a Defender for IoT plan for OT networks to a
- **Committed devices**. If you selected a monthly or annual commitment, enter the number of assets you'll want to monitor. If you selected a trial, this section doesn't appear as you have a default of 1000 devices.
- For example:
+ For example:
:::image type="content" source="media/how-to-manage-subscriptions/onboard-plan-2.png" alt-text="Screenshot of adding a plan for OT networks to your subscription.":::
This procedure describes how to add a Defender for IoT plan for OT networks to a
Your OT networks plan will be shown under the associated subscription in the **Plans** grid. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md).
-
-## Add a Defender for IoT plan for Enterprise IoT networks to an Azure subscription
-
-Onboard your Defender for IoT plan for Enterprise IoT networks in the Defender for Endpoint portal. For more information, see [Onboard Microsoft Defender for IoT](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration) in the Defender for Endpoint documentation.
-
-Once you've onboarded a plan for Enterprise IoT networks from Defender for Endpoint, you'll see the plan in Defender for IoT in the Azure portal, under the associated subscription in the **Plans** grid, on the **Defender for IoT** > **Pricing** page.
-
-For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md).
-
## Next steps
-Continue with one of the following tutorials, depending on whether you're setting up a network for OT system security or Enterprise IoT system security:
-
-- [Tutorial: Get started with OT network security](tutorial-onboarding.md)
-- [Tutorial: Get started with Enterprise IoT network security](tutorial-getting-started-eiot-sensor.md)
+Continue with [Tutorial: Get started with OT network security](tutorial-onboarding.md) or [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
For more information, see:

- [Welcome to Microsoft Defender for IoT for organizations](overview.md)
- [Microsoft Defender for IoT architecture](architecture.md)
-- [Move existing sensors to a different subscription](how-to-manage-subscriptions.md#move-existing-sensors-to-a-different-subscription)
+- [Defender for IoT subscription billing](billing.md)
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
After activating an on-premises management console, you'll need to apply new act
|--|--|
|**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks) in your subscription. |
|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>However, you'll also need to apply a new activation file when [updating your sensor software](update-ot-software.md#download-and-apply-a-new-activation-file) from a legacy version to version 22.2.x. |
-| **Locally-managed** | Apply a new activation file to locally-managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
+| **Locally-managed** | Apply a new activation file to locally managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
# Manage sensors with Defender for IoT in the Azure portal
-This article describes how to view and manage sensors with [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+This article describes how to view and manage sensors with [Microsoft Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
-## Purchase sensors or download software for sensors
+## Prerequisites
-This procedure describes how to use the Azure portal to contact vendors for pre-configured appliances, or how to download software for you to install on your own appliances.
+Before you can use the procedures in this article, you'll need to have network sensors onboarded to Defender for IoT. For more information, see:
-1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Sensor**.
+- [Onboard OT sensors to Defender for IoT](onboard-sensors.md)
+- [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md)
-1. Do one of the following:
-
- - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20MD4IoT%20pre-configured%20appliances) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
-
- - To install software on your own appliances, do the following:
-
- 1. Make sure that you have a supported appliance available.
-
- 1. Under **Select version**, select the software version you want to install. We recommend that you always select the most recent version.
-
- 1. Select **Download**. Download the sensor software and save it in a location that you can access from your selected appliance.
-
- 1. Install your software. For more information, see [Defender for IoT installation](how-to-install-software.md).
-
-## Onboard sensors
-
-Onboard a sensor by registering it with Microsoft Defender for IoT. For OT sensors, you'll also need to download a sensor activation file.
-
-Select one of the following tabs, depending on the type of network you're working with.
-
-# [OT sensors](#tab/ot)
-
-**Prerequisites**: Make sure that you've set up your sensor and configured your SPAN port or TAP.
-
-For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md) and [Defender for IoT installation](how-to-install-software.md), or our [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md).
-
-**To onboard your OT sensor to Defender for IoT**:
-
-1. In the Azure portal, navigate to **Defender for IoT** > **Getting started** and select **Set up OT/ICS Security**.
-
- :::image type="content" source="media/tutorial-onboarding/onboard-a-sensor.png" alt-text="Screenshot of the Set up O T/I C S Security button on the Get started page.":::
-
- Alternately, from the Defender for IoT **Sites and sensors** page, select **Onboard OT sensor**.
-
-1. By default, on the **Set up OT/ICS Security** page, **Step 1: Did you set up a sensor?** and **Step 2: Configure SPAN port or TAP** of the wizard are collapsed. If you haven't completed these steps, do so before continuing.
-
-1. In **Step 3: Register this sensor with Microsoft Defender for IoT** enter or select the following values for your sensor:
-
- 1. In the **Sensor name** field, enter a meaningful name for your sensor. We recommend including your sensor's IP address as part of the name, or using another easily identifiable name that can help you keep track between the registration name in the Azure portal and the IP address of the sensor shown in the sensor console.
-
- 1. In the **Subscription** field, select your Azure subscription.
-
- 1. Toggle on the **Cloud connected** option to have your sensor connected to other Azure services, such as Microsoft Sentinel, and to push [threat intelligence packages](how-to-work-with-threat-intelligence-packages.md) from Defender for IoT to your sensors.
-
- 1. In the **Sensor version** field, select which software version is installed on your sensor machine. We recommend that you select **22.X and above** to get all of the latest features and enhancements.
-
- If you haven't yet upgraded to version 22.x, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-
- 1. In the **Site** section, select the **Resource name** and enter the **Display name** for your site. Add any tags as needed to help you identify your sensor.
-
- 1. In the **Zone** field, select a zone from the menu, or select **Create Zone** to create a new one.
-
-1. Select **Register**.
-
-A success message appears and your activation file is automatically downloaded. Your sensor is now shown under the configured site on the Defender for IoT **Sites and sensors** page.
-
-Until you activate your sensor, the sensor's status will show as **Pending Activation**.
-
-Make the downloaded activation file accessible to the sensor console admin so that they can activate the sensor. For more information, see [Upload new activation files](how-to-manage-individual-sensors.md#upload-new-activation-files).
-
-# [Enterprise IoT sensors](#tab/eiot)
-
-**To set up an Enterprise IoT sensor**:
-
-1. Navigate to the [Azure portal](https://portal.azure.com#home).
-
-1. Select **Set up Enterprise IoT Security**.
-
- :::image type="content" source="media/tutorial-get-started-eiot/onboard-sensor.png" alt-text="Screenshot of the Set up Enterprise I O T Security button on the Get started page.":::
-
-1. In the **Sensor name** field, enter a meaningful name for your sensor.
-
-1. From the **Subscription** drop-down menu, select the subscription where you want to add your sensor.
-
-1. Select **Register**. A **Sensor registration successful** screen shows your next steps and the command you'll need to start the sensor installation.
-
- For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise I O T sensor.":::
-
-1. Copy the command to a safe location, and continue with installing the sensor. For more information, see [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md#install-the-sensor-software).
-
-> [!NOTE]
-> As opposed to OT sensors, where you define your sensor's site, all Enterprise IoT sensors are automatically added to the **Enterprise network** site.
-- ## View your sensors All of your currently cloud-connected sensors, including both OT and Enterprise IoT sensors, are listed in the **Sites and sensors** page. For example:
Details about each sensor are listed in the following columns:
|**Sensor status**| Displays a [sensor health message](sensor-health-messages.md). For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview).|
|**Last connected (UTC)**| Displays how long ago the sensor was last connected.|
|**Threat Intelligence version**| Displays the [Threat Intelligence version](how-to-work-with-threat-intelligence-packages.md) installed on the sensor. The name of the version is based on the day the package was built by Defender for IoT. |
-|**Threat Intelligence mode**| Displays whether the Threat Intelligence mode is manual or automatic. If it's manual, that means that you can [push newly released packages directly to sensors](how-to-work-with-threat-intelligence-packages.md) as needed. Otherwise, the new packages will be automatically installed on the cloud connected sensors. |
+|**Threat Intelligence mode**| Displays whether the Threat Intelligence mode is manual or automatic. If it's manual, that means that you can [push newly released packages directly to sensors](how-to-work-with-threat-intelligence-packages.md) as needed. Otherwise, the new packages will be automatically installed on the cloud connected sensors. |
|**Threat Intelligence update status**| Displays the update status of the Threat Intelligence package. The status can be either **Failed**, **In Progress**, **Update Available**, or **Ok**.|

## Site management options from the Azure portal
-When onboarding a new OT sensor to the Defender for IoT, you can add it to a new or existing site. When working with OT networks, organizing your sensors into sites allows you to manage your sensors more efficiently. Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**.
+When onboarding a new OT sensor to Defender for IoT, you can add it to a new or existing site. When working with OT networks, organizing your sensors into sites allows you to manage your sensors more efficiently.
+
+Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**.
To edit a site's details, select the site's name on the **Sites and sensors** page. In the **Edit site** pane that opens on the right, modify any of the following values:
Use the options on the **Sites and sensor** page and a sensor details page to do
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Export sensor data** | Available from the **Sites and sensors** toolbar only, to download a CSV file with details about all the sensors listed. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Download an activation file** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then select a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. |
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Export sensor data** | Available from the **Sites and sensors** toolbar only, to download a CSV file with details about all the sensors listed. |
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Download an activation file** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. |
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then select a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. |
+|:::image type="icon" source="medi#install-enterprise-iot-sensor-software). |
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. | | :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support (Public preview)](#upload-a-diagnostics-log-for-support-public-preview).|
You may need to reactivate an OT sensor because you want to:
- **Work in locally managed mode instead of cloud-connected mode**: After reactivation, sensor detection information is displayed only in the sensor console. -- **Associate the sensor to a new site**: To do this, re-register the sensor with new site definitions and use the new activation file to activate.
+- **Associate the sensor to a new site**: Re-register the sensor with new site definitions and use the new activation file to activate.
-In such cases, do the following:
+In such cases, do the following steps:
1. [Delete your existing sensor](#sensor-management-options-from-the-azure-portal). 1. [Onboard the sensor again](onboard-sensors.md#onboard-ot-sensors), registering it with any new settings.
If you need to open a support ticket for a locally managed sensor, upload a diag
1. Make sure you have the diagnostics report available for upload. For more information, see [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support).
-1. In Defender for IoT in the Azure portal, go to the **Sites and sensors** page and select the locally-managed sensor that's related to your support ticket.
+1. In Defender for IoT in the Azure portal, go to the **Sites and sensors** page and select the locally managed sensor that's related to your support ticket.
1. For your selected sensor, select the **...** options menu on the right > **Send diagnostic files to support (Preview)**. For example:
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Title: Manage Defender for IoT plans on Azure subscriptions
-description: Manage Defender for IoT plans on your Azure subscriptions.
Previously updated : 07/06/2022
+ Title: Manage OT plans on Azure subscriptions
+description: Manage Microsoft Defender for IoT plans for OT monitoring on your Azure subscriptions.
Last updated : 11/07/2022
-# Manage Defender for IoT plans
+# Manage OT plans on Azure subscriptions
-Your Defender for IoT deployment is managed through a Microsoft Defender for IoT plan on your Azure subscription.
+Your Defender for IoT deployment is managed through a Microsoft Defender for IoT plan on your Azure subscription. For OT networks, use Defender for IoT in the Azure portal to onboard, edit, and cancel Defender for IoT plans.
-- **For OT networks**, onboard, edit, and cancel Defender for IoT plans from Defender for IoT in the Azure portal. -- **For Enterprise IoT networks**, onboard and cancel Defender for IoT plans in Microsoft Defender for Endpoint.-
-For each plan, you'll be asked to define the number of *committed devices*. Committed devices are the approximate number of devices that will be monitored in your enterprise.
+If you're looking to manage Enterprise IoT plans, see [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md).
> [!NOTE]
> If you've come to this page because you are a [former CyberX customer](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments) and have questions about your account, reach out to your account manager for guidance.

-
-## Subscription billing
-
-You're billed based on the number of committed devices associated with each subscription.
-
-The billing cycle for Microsoft Defender for IoT follows a calendar month. Changes you make to committed devices during the month are implemented one hour after confirming your update and are reflected in your monthly bill. Removal of Defender for IoT from a subscription also takes effect one hour after canceling a plan.
-
-Your enterprise may have more than one paying entity. If so, you can onboard, edit, or cancel a plan for more than one subscription.
-
-Before you add a plan or services, we recommend that you have a sense of how many devices you would like to monitor. If you're working with OT networks, see [Best practices for planning your OT network monitoring](plan-network-monitoring.md).
-
-Users can also work with a trial commitment, which supports monitoring a limited number of devices for 30 days. For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
- ## Prerequisites
-Before you onboard a plan, verify that:
-
-- Your Azure account is set up.
-- You have the required Azure [user permissions](getting-started.md#permissions).
-
-### Azure account subscription requirements
-
-To get started with Microsoft Defender for IoT, you must have a Microsoft Azure account subscription.
-
-If you don't have a subscription, you can sign up for a free account. For more information, see https://azure.microsoft.com/free/.
-
-If you already have access to an Azure subscription, but it isn't listed when adding a Defender for IoT plan, check your account details and confirm your permissions with the subscription owner. For more information, see https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade.
+Before performing the procedures in this article, make sure that you have:
-### User permission requirements
+- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/).
-Azure **Security admin**, **Subscription owners** and **Subscription contributors** can onboard, update, and remove Defender for IoT plans. For more information on user permissions, see [Defender for IoT user permissions](getting-started.md#permissions).
+- A [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user role for the Azure subscription that you'll be using for Defender for IoT.
-### Defender for IoT committed devices
+## Calculate committed devices for OT monitoring
-When onboarding or editing your Defender for IoT plan, you'll need to know how many devices you want to monitor.
+If you're adding a plan with a monthly or annual commitment, you'll be asked to enter the number of [committed devices](billing.md#defender-for-iot-committed-devices), which are the approximate number of devices that will be monitored in your enterprise.
+We recommend that you make an initial estimate of your committed devices when onboarding your Defender for IoT plan. You can skip this procedure if you're adding a trial plan.
-#### Calculate the number of devices you need to monitor
-
-We recommend making an initial estimate of your committed devices when onboarding your Defender for IoT plan.
-
-**For OT devices**:
+**To calculate committed devices**:
1. Collect the total number of devices at each site in your network, and add them together.
-1. Remove any devices that are [*not* considered as committed devices by Defender for IoT](#defender-for-iot-committed-devices).
-
-After you've set up your network sensor and have full visibility into all devices, you can [Edit a plan](#edit-a-plan-for-ot-networks) to update the number of committed devices as needed.
-
-**For Enterprise IoT devices**:
-
-In the **Device inventory** page in the **Defender for Endpoint** portal:
-
-1. Add the total number of discovered **network devices** with the total number of discovered **IoT devices**.
+1. Remove any of the following devices, which are *not* considered as committed devices by Defender for IoT:
- For example:
+ - **Public internet IP addresses**
+ - **Multi-cast groups**
+ - **Broadcast groups**
+ - **Inactive devices**: Devices that have no network activity detected for more than 60 days
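+
+ For example, with illustrative numbers only: if you count 540 devices in total, of which 15 are multicast or broadcast groups and 25 have been inactive for more than 60 days, your committed device count would be 540 - 15 - 25 = 500.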
- :::image type="content" source="media/how-to-manage-subscriptions/eiot-calculate-devices.png" alt-text="Screenshot of network device and IoT devices in the device inventory in Microsoft Defender for Endpoint.":::
-
- For more information, see the [Defender for Endpoint Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery).
-
-1. Remove any devices that are [*not* considered as committed devices by Defender for IoT](#defender-for-iot-committed-devices).
-
-1. Round up your total to a multiple of 100.
-
- For example: In the device inventory, you have 473 network devices and 1206 IoT devices. Added together the total is 1679 devices, and rounded up to a multiple of 100 is 1700. Use 1700 as the estimated number of committed devices.
-
-To edit the number of committed Enterprise IoT devices after you've onboarded a plan, you will need to cancel the plan and onboard a new plan in Defender for Endpoint. For more information, see the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+After you've onboarded your plan, [set up a network sensor](tutorial-onboarding.md) to gain [full visibility into your devices](how-to-manage-device-inventory-for-organizations.md). Then, [edit a plan](#edit-a-plan-for-ot-networks) to update the number of committed devices as needed.
## Onboard a Defender for IoT plan for OT networks This procedure describes how to add a Defender for IoT plan for OT networks to an Azure subscription.
-**To onboard a Defender for IoT plan for OT networks:**
+**To onboard a Defender for IoT plan for OT networks**:
1. In the Azure portal, go to **Defender for IoT** > **Pricing**.
This procedure describes how to add a Defender for IoT plan for OT networks to a
1. In the **Purchase** pane, define the plan:
- - **Purchase method**. Select a monthly or annual commitment, or a [trial](#about-defender-for-iot-trials). Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes.
-
+ - **Purchase method**. Select a monthly or annual commitment, or a [trial](billing.md#free-trial).
+
+ Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes.
+ For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/). - **Subscription**. Select the subscription where you would like to add a plan.
- - **Number of sites** (for annual commitment only). Enter the number of committed sites.
+ You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+
+ > [!TIP]
+ > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner.
+
+ - **Number of sites**: Relevant for annual commitments only. Enter the number of committed sites.
- **Committed devices**. If you selected a monthly or annual commitment, enter the number of assets you want to monitor. If you selected a trial, this section doesn't appear as you have a default of 1,000 devices.
- For example:
+ For example:
:::image type="content" source="media/how-to-manage-subscriptions/onboard-plan-2.png" alt-text="Screenshot of adding a plan for OT networks to your subscription."::: 1. Select the **I accept the terms** option, and then select **Save**.
-Your OT networks plan will be shown under the associated subscription in the **Plans** grid.
-
-## Onboard a Defender for IoT plan for Enterprise IoT networks
-
-Onboard your Defender for IoT plan for Enterprise IoT networks in the Defender for Endpoint portal. For more information, see [Onboard Microsoft Defender for IoT](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration) in the Defender for Endpoint documentation.
-
-Once you've onboarded a plan for Enterprise IoT networks from Defender for Endpoint, you'll see the plan in Defender for IoT in the Azure portal, under the associated subscription in the **Plans** grid, on the **Defender for IoT** > **Pricing** page.
-
-### About Defender for IoT trials
-
-If you would like to evaluate Defender for IoT, you can use a trial commitment.
+Your new plan is listed under the relevant subscription in the **Plans** grid.
-The trial is valid for 30 days and supports 1000 committed devices. Using the trial lets you deploy one or more Defender for IoT sensors on your network to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more.
-
-The trial also allows you to install an on-premises management console to view aggregated information generated by sensors.
## Edit a plan for OT networks
-You can make changes to your OT networks plan, such as to change your plan commitment, update the number of committed devices, or committed sites.
+Edit your Defender for IoT plans for OT networks if you need to change your commitment tier or update the number of committed devices or committed sites.
+
+For example, you may have more devices that require monitoring if you're increasing existing site coverage, have discovered more devices than expected, or there are network changes such as adding switches.
-For example, you may have more devices that require monitoring if you're increasing existing site coverage, have discovered more devices than expected, or there are network changes such as adding switches. If the actual number of devices exceeds the number of committed devices on your plan, you'll see a warning on the **Pricing** page, and will need to adjust the number of committed devices on your plan accordingly.
+If the actual number of devices exceeds the number of committed devices on your plan, you'll see a warning on the **Pricing** page, and will need to adjust the number of committed devices on your plan accordingly.
**To edit a plan:** 1. In the Azure portal, go to **Defender for IoT** > **Pricing**.
-1. On the subscription row, select the options menu (**...**) at the right.
-
-1. Select **Edit plan**.
+1. On the subscription row, select the options menu (**...**) at the right, and then select **Edit plan**.
-1. Make your changes as needed:
+1. Make any of the following changes as needed:
- Change your purchase method - Update the number of committed devices
For example, you may have more devices that require monitoring if you're increas
1. Select the **I accept the terms** option, and then select **Save**.
-Changes to your plan will take effect one hour after confirming the change. This change will appear on your next monthly statement, and you will be charged based on the length of time each plan was in effect.
+Changes to your plan will take effect one hour after confirming the change. This change will appear on your next monthly statement, and you'll be charged based on the length of time each plan was in effect.
> [!NOTE]
-> **For an on-premises management console:**
- After any changes are made, you will need to upload a new activation file to your on-premises management console. The activation file reflects the new number of committed devices. For more information, see [Upload an activation file](how-to-manage-the-on-premises-management-console.md#upload-an-activation-file).
+> **For an on-premises management console:** After any changes are made, you'll need to upload a new activation file to your on-premises management console. The activation file reflects the new number of committed devices. For more information, see [Upload an activation file](how-to-manage-the-on-premises-management-console.md#upload-an-activation-file).
-## Cancel a Defender for IoT plan from a subscription
+## Cancel a Defender for IoT plan
-You may need to cancel a Defender for IoT plan from your Azure subscription, for example, if you need to work with a new payment entity. Your changes take effect one hour after confirmation. This change will be reflected in your upcoming monthly statement, and you will only be charged for the time that the subscription was active.
-This option removes all Defender for IoT services from the subscription, including both OT and Enterprise IOT services.
+You may need to cancel a Defender for IoT plan from your Azure subscription, for example, if you need to work with a new payment entity, or if you no longer need the service.
-Delete all sensors that are associated with the subscription prior to removing the plan. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+> [!IMPORTANT]
+> Canceling a plan removes all Defender for IoT services from the subscription, including both OT and Enterprise IoT services. If you have an Enterprise IoT plan on your subscription, do this with care.
+>
+> To cancel only an Enterprise IoT plan, do so from Microsoft 365. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
-**To cancel Defender for IoT from a subscription:**
+**Prerequisites**: Before canceling your plan, make sure to delete any sensors that are associated with the subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
-1. In the Azure portal, go to **Defender for IoT** > **Pricing**.
-1. On the subscription row, select the options menu (**...**) at the right.
+**To cancel a Defender for IoT plan for OT networks**:
-1. Select **Cancel plan**.
+1. In the Azure portal, go to **Defender for IoT** > **Pricing**.
-1. In the plan cancellation dialog, confirm that you've removed all associated sensors, and then select **Confirm cancellation** to cancel the Defender for IoT plan from the subscription.
+1. On the subscription row, select the options menu (**...**) at the right and select **Cancel plan**.
-> [!NOTE]
-> To remove Enterprise IoT only from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan).
+1. In the plan cancellation dialog, confirm that you've removed all associated sensors, and then select **Confirm cancellation**.
-> [!IMPORTANT]
-> If you are a Microsoft Defender for IoT customer and also have a subscription to Microsoft Defender for Endpoint, the data collected by Microsoft Defender for IoT will automatically populate in your Microsoft Defender for Endpoint instance as well. Customers who want to delete their data from Defender for IoT must also delete their data from Defender for Endpoint.
+Your changes take effect one hour after confirmation. This change will be reflected in your upcoming monthly statement, and you'll only be charged for the time that the subscription was active.
## Move existing sensors to a different subscription Business considerations may require that you apply your existing IoT sensors to a different subscription than the one you're currently using. To do this, you'll need to onboard a new plan to the new subscription, register the sensors under the new subscription, and then remove them from the previous subscription.
-Billing changes will take effect one hour after cancellation of the previous subscription, and will be reflected on the next month's bill. Devices will be synchronized from the sensor to the new subscription automatically. Manual edits made in the portal will not be migrated. New alerts created by the sensor will be created under the new subscription, and existing alerts in the old subscription can be closed in bulk.
-
-**To switch to a new subscription**:
+Billing changes will take effect one hour after cancellation of the previous subscription, and will be reflected on the next month's bill.
-**For OT sensors**:
+- Devices will be synchronized from the sensor to the new subscription automatically.
-1. In the Azure portal, [onboard a new plan for OT networks](#onboard-a-defender-for-iot-plan-for-ot-networks) to the new subscription you want to use.
+- Manual edits made in the portal won't be migrated.
-1. Create a new activation file by [following the steps to onboard an OT sensor](onboard-sensors.md#onboard-ot-sensors).
- - Replicate site and sensor hierarchy as is.
- - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected in more than one sensor in a zone, will be merged into one device.
-
-1. [Upload a new activation file](how-to-manage-individual-sensors.md#upload-new-activation-files) for your sensors under the new subscription.
+- New alerts created by the sensor will be created under the new subscription, and existing alerts in the old subscription can be closed in bulk.
-1. Delete the sensor identities from the previous subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+**To switch sensors to a new subscription**:
-1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the previous subscription.
+1. In the Azure portal, [onboard a new plan for OT networks](#onboard-a-defender-for-iot-plan-for-ot-networks) to the new subscription you want to use.
-**For Enterprise IoT sensors**:
+1. Create a new activation file by [following the steps to onboard an OT sensor](onboard-sensors.md#onboard-ot-sensors).
-1. In Defender for Endpoint, [onboard a new plan for Enterprise IoT networks](#onboard-a-defender-for-iot-plan-for-enterprise-iot-networks) to the new subscription you want to use.
+ - Replicate site and sensor hierarchy as is.
-1. In the Azure portal, [follow the steps to register an Enterprise IoT sensor](tutorial-getting-started-eiot-sensor.md#register-an-enterprise-iot-sensor) under the new subscription.
+ - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected by more than one sensor in a zone will be merged into one device.
-1. Log into your sensor and run the activation command you saved when registering the sensor under the new subscription.
+1. [Upload a new activation file](how-to-manage-individual-sensors.md#upload-new-activation-files) for your sensors under the new subscription.
1. Delete the sensor identities from the previous subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
-1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the previous subscription.
+1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel a Defender for IoT plan](#cancel-a-defender-for-iot-plan).
+ > [!NOTE]
-> If the previous subscription was connected to Microsoft Sentinel, you will need to connect the new subscription to Microsoft Sentinel and remove the old subscription. For more information, see [Connect Microsoft Defender for IoT with Microsoft Sentinel](/azure/sentinel/iot-solution).
+> If the previous subscription was connected to Microsoft Sentinel, you'll need to connect the new subscription to Microsoft Sentinel and remove the old subscription. For more information, see [Connect Microsoft Defender for IoT with Microsoft Sentinel](../../sentinel/iot-solution.md).
## Next steps
+For more information, see:
+
+- [Defender for IoT subscription billing](billing.md)
+ - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md) - [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md) -- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
+- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
defender-for-iot Manage Subscriptions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md
+
+ Title: Manage Enterprise IoT plans on Azure subscriptions
+description: Manage Defender for IoT plans for Enterprise IoT monitoring on your Azure subscriptions.
Last updated : 07/06/2022+++
+# Manage Defender for IoT plans for Enterprise IoT security monitoring
+
+Enterprise IoT security monitoring with Defender for IoT is managed by an Enterprise IoT plan on your Azure subscription. While you can view your plan in Microsoft Defender for IoT, onboarding and canceling a plan is done with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) in Microsoft 365 Defender.
+
+For each monthly or annual price plan, you'll be asked to define the number of [committed devices](billing.md#defender-for-iot-committed-devices). Committed devices are the approximate number of devices that will be monitored in your enterprise.
+
+For information about OT networks, see [Manage Defender for IoT plans for OT security monitoring](how-to-manage-subscriptions.md).
+
+## Prerequisites
+
+Before performing the procedures in this article, make sure that you have:
+
+- A Microsoft Defender for Endpoint P2 license
+
+- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/).
+
+- The following user roles:
+
+ - **In Azure Active Directory**: [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator) for your Microsoft 365 tenant
+
+ - **In Azure RBAC**: [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) for the Azure subscription that you'll be using for the integration
+
+## Calculate committed devices for Enterprise IoT monitoring
+
+If you're adding an Enterprise IoT plan with a monthly or annual commitment, you'll be asked to enter the number of committed devices.
+
+We recommend that you make an initial estimate of your committed devices when onboarding your plan. You can skip this procedure if you're adding a [trial plan](billing.md#free-trial).
+
+**To calculate committed devices**:
+
+1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Assets** \> **Devices** to open the **Device inventory** page.
+
+1. Add the total number of devices listed on both the **Network devices** and **IoT devices** tabs.
+
+ For example:
+
+ :::image type="content" source="media/how-to-manage-subscriptions/eiot-calculate-devices.png" alt-text="Screenshot of network device and IoT devices in the device inventory in Microsoft Defender for Endpoint." lightbox="media/how-to-manage-subscriptions/eiot-calculate-devices.png":::
+
+1. Round up your total to a multiple of 100.
+
+For example:
+
+- In the Microsoft 365 Defender **Device inventory**, you have *473* network devices and *1206* IoT devices.
+- Added together, the total is *1679* devices.
+- Rounded up to a multiple of 100 is **1700**.
+
+Use **1700** as the estimated number of committed devices.
+
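If you prefer to script the estimate, the round-up is simple integer arithmetic. A minimal shell sketch, using the sample counts above:

```bash
# Add the Network devices and IoT devices counts, then round up to a multiple of 100
network_devices=473
iot_devices=1206
total=$((network_devices + iot_devices))      # 1679
committed=$(( (total + 99) / 100 * 100 ))     # 1700
echo "Estimated committed devices: $committed"
```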
+For more information, see the [Defender for Endpoint Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery).
+
+> [!NOTE]
+> Devices listed on the **Computers & Mobile** tab, whether or not they're managed by Defender for Endpoint, aren't included in the number of committed devices for Defender for IoT.
+
+## Onboard an Enterprise IoT plan
+
+This procedure describes how to add an Enterprise IoT plan to your Azure subscription from Microsoft 365 Defender.
+
+**To add an Enterprise IoT plan**:
+
+1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**.
+
+1. Select the following options for your plan:
+
+ - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+
+ > [!TIP]
+ > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner.
+
+ - **Price plan**: Select a trial, monthly, or annual commitment.
+
+ Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes. For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
+
+ Both monthly and annual commitments require that you enter the number of committed devices that you'd calculated earlier.
+
+1. Select the **I accept the terms and conditions** option and then select **Save**.
+
+ For example:
+
+ :::image type="content" source="media/enterprise-iot/defender-for-endpoint-onboard.png" alt-text="Screenshot of the Enterprise IoT tab in Defender for Endpoint." lightbox="media/enterprise-iot/defender-for-endpoint-onboard.png":::
+
+After you've onboarded your plan, you'll see it listed in Defender for IoT in the Azure portal. Go to the Defender for IoT **Pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example:
++
+## Edit your Enterprise IoT plan
+
+To edit your plan, such as to edit your commitment level or the number of committed devices, first [cancel the plan](#cancel-your-enterprise-iot-plan) and then [onboard a new plan](#onboard-an-enterprise-iot-plan).
+
+## Cancel your Enterprise IoT plan
+
+You'll need to cancel your plan if you want to edit the details of your plan, such as the price plan or the number of committed devices, or if you no longer need the service.
+
+You'd also need to cancel your plan and onboard again if you need to work with a new payment entity or Azure subscription.
+
+**To cancel your Enterprise IoT plan**:
+
+1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**.
+
+1. Select **Cancel plan**. For example:
+
+ :::image type="content" source="media/enterprise-iot/defender-for-endpoint-cancel-plan.png" alt-text="Screenshot of the Cancel plan option on the Microsoft 365 Defender page.":::
+
+After you cancel your plan, the integration stops and you'll no longer get added security value in Microsoft 365 Defender, or detect new Enterprise IoT devices in Defender for IoT.
+
+The cancellation takes effect one hour after confirming the change. This change will appear on your next monthly statement, and you'll be charged based on the length of time the plan was in effect.
+
+If you're canceling your plan as part of an [editing procedure](#edit-your-enterprise-iot-plan), make sure to [onboard a new plan](#onboard-an-enterprise-iot-plan) with the new details.
+
+> [!IMPORTANT]
+>
+> If you've [registered an Enterprise IoT network sensor](eiot-sensor.md) (Public preview), device data collected by the sensor remains in your Microsoft 365 Defender instance. If you're canceling the Enterprise IoT plan because you no longer need the service, make sure to manually delete data from Microsoft 365 Defender as needed.
+
+## Next steps
+
+For more information, see:
+
+- [Defender for IoT subscription billing](billing.md)
+
+- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
+
+- [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md)
+
+- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
defender-for-iot Onboard Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/onboard-sensors.md
Last updated 06/02/2022
-# Onboard sensors to Defender for IoT in the Azure portal
+# Onboard OT sensors to Defender for IoT
This article describes how to onboard sensors with [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). ## Purchase sensors or download software for sensors
-This procedure describes how to use the Azure portal to contact vendors for pre-configured appliances, or how to download software for you to install on your own appliances.
+This procedure describes how to use the Azure portal to contact vendors for pre-configured appliances, or how to download software for you to install on your own appliances.
1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Sensor**.
-1. Do one of the following:
+1. Do one of the following steps:
- - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
+ - **To buy a pre-configured appliance**, select **Contact** under **Buy preconfigured appliance**.
- - To install software on your own appliances, do the following:
+ This link opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
- 1. Make sure that you have a supported appliance available.
+ - **To install software on your own appliances**, do the following:
- 1. Under *Select version**, select the software version you want to install. We recommend that you always select the most recent version.
+ 1. Make sure that you have a supported appliance available. For more information, see [Which appliances do I need?](ot-appliance-sizing.md).
+
+ 1. Under **Select version**, select the software version you want to install. We recommend that you always select the most recent version.
1. Select **Download**. Download the sensor software and save it in a location that you can access from your selected appliance.
This procedure describes how to use the Azure portal to contact vendors for pre-
Onboard an OT sensor by registering it with Microsoft Defender for IoT and downloading a sensor activation file. > [!NOTE]
-> Enterprise IoT sensors also require onboarding and activation, with slightly different steps. For more information, see [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md).
+> Enterprise IoT sensors also require onboarding and activation, with slightly different steps. For more information, see [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
>
-**Prerequisites**: Make sure that you've set up your sensor and configured your SPAN port or TAP. For more information, see [Defender for IoT installation](how-to-install-software.md).
+**Prerequisites**: Make sure that you've set up your sensor and configured your SPAN port or TAP. For more information, see [Traffic mirroring methods for OT monitoring](best-practices/traffic-mirroring-methods.md).
**To onboard your sensor to Defender for IoT**:
Onboard an OT sensor by registering it with Microsoft Defender for IoT and downl
1. In **Step 3: Register this sensor with Microsoft Defender for IoT** enter or select the following values for your sensor:
- 1. In the **Sensor name** field, enter a meaningful name for your sensor. We recommend including your sensor's IP address as part of the name, or using another easily identifiable name, that can help you keep track between the registration name in the Azure portal and the IP address of the sensor shown in the sensor console.
+ 1. In the **Sensor name** field, enter a meaningful name for your sensor. We recommend including your sensor's IP address as part of the name, or using another easily identifiable name, to help you match the registration name in the Azure portal with the IP address of the sensor shown in the sensor console.
1. In the **Subscription** field, select your Azure subscription.
Make the downloaded activation file accessible to the sensor console admin so th
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-## Onboard Enterprise IoT sensors
-
-For more information, see [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md).
## Next steps
+- [Install OT agentless monitoring software](how-to-install-software.md)
- [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md) - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Manage individual sensors](how-to-manage-individual-sensors.md)-- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
+- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
The Defender for IoT architecture uses on-premises sensors and management server
Therefore, for example, all **22.1.x** versions, including all hotfix versions, are supported for nine months after the first **22.1.x** release.
- Fixes and new functionality are applied to each new version and are not applied to older versions.
+ Fixes and new functionality are applied to each new version and aren't applied to older versions.
- **Software update packages include new functionality and security patches**. Urgent, high-risk security updates are applied in minor versions that may be released throughout the quarter.
For more information, see [Understand sensor health (Public preview)](how-to-man
The Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements: -- Onboard an Enterprise IoT plan directly in Defender for Endpoint. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+- Onboard an Enterprise IoT plan directly in Defender for Endpoint. For more information, see [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md).
-- Seamless integration with Microsoft Defender for Endpoint to view detected Enterprise IoT devices, and their related alerts, vulnerabilities, and recommendations in the Microsoft 365 Security portal. For more information, see the [Enterprise IoT tutorial](tutorial-getting-started-eiot-sensor.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration). You can continue to view detected Enterprise IoT devices on the Defender for IoT **Device inventory** page in the Azure portal.
+- Seamless integration with Microsoft Defender for Endpoint to view detected Enterprise IoT devices, and their related alerts, vulnerabilities, and recommendations in the Microsoft 365 Security portal. For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md). You can continue to use an Enterprise IoT network sensor (Public preview) and view detected Enterprise IoT devices on the Defender for IoT **Device inventory** page in the Azure portal.
- All Enterprise IoT sensors are now automatically added to the same site in Defender for IoT, named **Enterprise network**. When onboarding a new Enterprise IoT device, you only need to define a sensor name and select your subscription, without defining a site or zone.
Check out our new structure to follow through viewing devices and assets, managi
- [Microsoft Defender for IoT architecture](architecture.md) - [Quickstart: Get started with Defender for IoT](getting-started.md) - [Tutorial: Microsoft Defender for IoT trial setup](tutorial-onboarding.md)-- [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md) - [Plan your sensor connections for OT monitoring](best-practices/plan-network-monitoring.md) - [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md)
For more information, see [Defender for IoT installation](how-to-install-softwar
To use all of Defender for IoT's latest features, make sure to update your sensor software versions to 22.1.x.
-If you're on a legacy version, you may need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and re-activate your sensor with a new activation file.
+If you're on a legacy version, you may need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and reactivate your sensor with a new activation file.
After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
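For example, you might review the tail of that log over SSH as follows; the IP address is a placeholder for your own sensor:

```bash
# Sign in as the cyberx_host user and view the last lines of the upgrade log
ssh cyberx_host@192.168.0.10 "tail -n 50 /opt/sensor/logs/legacy-upgrade.log"
```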
This information also provides operational engineers with critical visibility in
#### What is an unsecure mode?
-If the Key state is detected as Program or the Run state is detected as either Remote or Program the PLC is defined by Defender for IoT as *unsecure*.
+If the *Key state* is detected as *Program* or the *Run state* is detected as either *Remote* or *Program*, the PLC is defined by Defender for IoT as *unsecure*.
#### Visibility and risk assessment
If the Key state is detected as Program or the Run state is detected as either R
:::image type="content" source="media/release-notes/device-inventory-plc.png" alt-text="Device inventory showing PLC operating mode."::: -- View PLC secure status and last change information per PLC in the Attributes section of the Device Properties screen. If the Key state is detected as Program or the Run state is detected as either Remote or Program the PLC is defined by Defender for IoT as *unsecure*. The Device Properties PLC Secured option will read false.
+- View PLC secure status and last change information per PLC in the Attributes section of the Device Properties screen. If the *Key state* is detected as *Program* or the *Run state* is detected as either *Remote* or *Program*, the PLC is defined by Defender for IoT as *unsecure*. The Device Properties PLC Secured option will read false.
:::image type="content" source="media/release-notes/attributes-plc.png" alt-text="Attributes screen showing PLC information.":::
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
- Title: Get started with Enterprise IoT - Microsoft Defender for IoT
-description: In this tutorial, you'll learn how to onboard to Microsoft Defender for IoT with an Enterprise IoT deployment
- Previously updated : 07/11/2022---
-# Tutorial: Get started with Enterprise IoT monitoring
-
-This tutorial describes how to get started with your Enterprise IoT monitoring deployment with Microsoft Defender for IoT.
-
-Defender for IoT supports the entire breadth of IoT devices in your environment, including everything from corporate printers and cameras, to purpose-built, proprietary, and unique devices.
-
-In this tutorial, you learn about:
-
-> [!div class="checklist"]
-> * Integration with Microsoft Defender for Endpoint
-> * Prerequisites for Enterprise IoT network monitoring with Defender for IoT
-> * How to prepare a physical appliance or VM as a network sensor
-> * How to onboard an Enterprise IoT sensor and install software
-> * How to view detected Enterprise IoT devices in the Azure portal
-> * How to view devices, alerts, vulnerabilities, and recommendations in Defender for Endpoint
-
-## Microsoft Defender for Endpoint integration
-
-Defender for IoT integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to extend your security analytics capabilities, providing complete coverage across your Enterprise IoT devices. Defender for Endpoint analytics features include alerts, vulnerabilities, and recommendations for your enterprise devices.
-
-Microsoft 365 P2 customers can onboard a plan for Enterprise IoT through the Microsoft Defender for Endpoint portal. After you've onboarded a plan for Enterprise IoT, view discovered IoT devices and related alerts, vulnerabilities, and recommendations in Defender for Endpoint.
-
-Microsoft 365 P2 customers can also install the Enterprise IoT network sensor (currently in **Public Preview**) to gain more visibility into additional IoT segments of the corporate network that were not previously covered by Defender for Endpoint. Deploying a network sensor is not a prerequisite for onboarding Enterprise IoT.
-
-For more information, see [Onboard with Microsoft Defender for IoT in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
-
-> [!IMPORTANT]
-> The **Enterprise IoT network sensor** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-## Prerequisites
-
-Before starting this tutorial, make sure that you have the following prerequisites.
-
-### Azure subscription prerequisites
--- Make sure that you've added a Defender for IoT plan for Enterprise IoT networks to your Azure subscription from Microsoft Defender for Endpoint.
-For more information, see [Onboard with Microsoft Defender for IoT](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
--- Make sure that you can access the Azure portal as a **Security admin**, subscription **Contributor**, or subscription **Owner** user. For more information, see [Required permissions](getting-started.md#permissions).-
-### Physical appliance or VM requirements
-
-You can use a physical appliance or a virtual machine as your network sensor. In either case, make sure that your machine has the following specifications:
-
-| Tier | Requirements |
-|--|--|
-| **Minimum** | To support up to 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16-GB RAM of DDR4 or better<br>- 250 GB HDD |
-| **Recommended** | To support up to 15 Gbps: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32-GB RAM of DDR4 or better<br>- 500 GB HDD |
-
-Make sure that your physical appliance or VM also has:
--- [Ubuntu 18.04 Server](https://releases.ubuntu.com/18.04/) operating system. If you don't yet have Ubuntu installed, download the installation files to an external storage, such as a DVD or disk-on-key, and install it on your appliance or VM. For more information, see the Ubuntu [Image Burning Guide](https://help.ubuntu.com/community/BurningIsoHowto).--- Network adapters, at least one for your switch monitoring (SPAN) port, and one for your management port to access the sensor's user interface-
-## Prepare a physical appliance or VM
-
-This procedure describes how to prepare your physical appliance or VM to install the Enterprise IoT network sensor software.
-
-**To prepare your appliance**:
-
-1. Connect a network interface (NIC) from your physical appliance or VM to a switch as follows:
-
- - **Physical appliance** - Connect a monitoring NIC to a SPAN port directly by a copper or fiber cable.
-
- - **VM** - Connect a vNIC to a vSwitch, and configure your vSwitch security settings to accept *Promiscuous mode*. For more information, see, for example [Configure a SPAN monitoring interface for a virtual appliance](extra-deploy-enterprise-iot.md#configure-a-span-monitoring-interface-for-a-virtual-appliance).
-
-1. <a name="sign-in"></a>Sign in to your physical appliance or VM and run the following command to validate incoming traffic to the monitoring port:
-
- ```bash
- ifconfig
- ```
-
- The system displays a list of all monitored interfaces.
-
- Identify the interfaces that you want to monitor, which are usually the interfaces with no IP address listed. Interfaces with incoming traffic will show an increasing number of RX packets.
-
-1. For each interface you want to monitor, run the following command to enable Promiscuous mode in the network adapter:
-
- ```bash
- ifconfig <monitoring port> up promisc
- ```
-
- Where `<monitoring port>` is an interface you want to monitor. Repeat this step for each interface you want to monitor.
-
-1. Ensure network connectivity by opening the following ports in your firewall:
-
- | Protocol | Transport | In/Out | Port | Purpose |
- |--|--|--|--|--|
- | HTTPS | TCP | In/Out | 443 | Cloud connection |
- | DNS | TCP/UDP | In/Out | 53 | Address resolution |
--
-1. Make sure that your physical appliance or VM can access the cloud using HTTPS on port 443 to the following Microsoft endpoints:
-
- - **EventHub**: `*.servicebus.windows.net`
- - **Storage**: `*.blob.core.windows.net`
- - **Download Center**: `download.microsoft.com`
- - **IoT Hub**: `*.azure-devices.net`
-
- > [!TIP]
- > You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure endpoints that are specified above, along with their region.
- >
- > The Azure public IP ranges are updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. To use this option, download the new json file every week and perform the necessary changes at your site to correctly identify services running in Azure.
-
-## Register an Enterprise IoT sensor
-
-This procedure describes how to register your Enterprise IoT sensor with Defender for IoT and then install the sensor software on the physical appliance or VM that you're using as your network sensor.
-
-> [!NOTE]
-> This procedure describes how to install sensor software on a VM using ESXi. Enterprise IoT sensors are also supported using Hyper-V.
->
-
-**Prerequisites**: Make sure that you have all [Prerequisites](#prerequisites) satisfied and have completed [Prepare a physical appliance or VM](#prepare-a-physical-appliance-or-vm).
-
-**To register your Enterprise IoT sensor**:
-
-1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Set up Enterprise IoT Security**.
-
- :::image type="content" source="media/tutorial-get-started-eiot/onboard-sensor.png" alt-text="Screenshot of the Getting started page for Enterprise IoT security.":::
-
-1. On the **Set up Enterprise IoT Security** page, enter the following details, and then select **Register**:
-
- - In the **Sensor name** field, enter a meaningful name for your sensor.
- - From the **Subscription** drop-down menu, select the subscription where you want to add your sensor.
-
- A **Sensor registration successful** screen shows your next steps and the command you'll need to start the sensor installation.
-
- For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor.":::
-
-1. Copy the command to a safe location, where you'll be able to copy it to your physical appliance or VM in order to [install the sensor](#install-the-sensor-software).
--
-## Install the sensor software
-
-Run the command that you received and saved when you registered the Enterprise IoT sensor.
-
-The installation process checks to see if the required Docker version is already installed. If it's not, the sensor installation also installs the latest Docker version.
-
-<a name="install"></a>**To install the sensor**:
-
-1. On your physical appliance or VM, sign in to the sensor's CLI using a terminal, such as PuTTY, or MobaXterm.
-
-1. Run the command that you'd saved from the Azure portal. For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/enter-command.png" alt-text="Screenshot of running the command to install the Enterprise IoT sensor monitoring software.":::
-
- The command process checks to see if the required Docker version is already installed. If it's not, the sensor installation also installs the latest Docker version.
-
- When the command process completes, the Ubuntu **Configure microsoft-eiot-sensor** wizard appears. In this wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
-
-1. In the **Configure microsoft-eiot-sensor** wizard, in the **What is the name of the monitored interface?** screen, select one or more interfaces that you want to monitor with your sensor, and then select **OK**.
-
- For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/install-monitored-interface.png" alt-text="Screenshot of the Configuring microsoft-eiot-sensor screen.":::
-
-1. In the **Set up proxy server?** screen, select whether to set up a proxy server for your sensor. For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/proxy.png" alt-text="Screenshot of the Set up a proxy server? screen.":::
-
- If you're setting up a proxy server, select **Yes**, and then define the proxy server host, port, username, and password, selecting **Ok** after each option.
-
- The installation takes a few minutes to complete.
-
-1. In the Azure portal, check that the **Sites and sensors** page now lists your new sensor.
-
- For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/view-sensor-listed.png" alt-text="Screenshot of your new Enterprise IoT sensor listed in the Sites and sensors page.":::
-
-In the **Sites and sensors** page, Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**. For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
-
-> [!TIP]
-> If you don't see your Enterprise IoT data in Defender for IoT as expected, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](../../azure-portal/set-preferences.md).
->
-> If you still don't view your data as expected, [validate your sensor setup](extra-deploy-enterprise-iot.md#validate-your-enterprise-iot-sensor-setup) from the CLI.
-
-## View detected Enterprise IoT devices in Azure
-
-Once you've validated your setup, the **Device inventory** page will start to populate with all of your devices after 15 minutes.
-
-View your devices and network information in the Defender for IoT **Device inventory** page on the Azure portal.
-
-For more information, see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md).
-
-## Delete an Enterprise IoT network sensor (optional)
-
-Remove a sensor if it's no longer in use with Defender for IoT.
-
-1. From the **Sites and sensors** page on the Azure portal, locate your sensor in the grid.
-1. In the row for your sensor, select the **...** options menu on the right > **Delete sensor**.
-
-Alternately, remove your sensor manually from the CLI. For more information, see [Extra steps and samples for Enterprise IoT deployment](extra-deploy-enterprise-iot.md#remove-an-enterprise-iot-network-sensor-optional).
-
-> [!IMPORTANT]
-> If you want to cancel your plan for Enterprise IoT networks only, do so from [Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
->
-> If you want to cancel your plan for both OT and Enterprise IoT networks together, you can use the [**Pricing**](how-to-manage-subscriptions.md) page in Defender for IoT in the Azure portal.
->
-
-For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
-
-## Next steps
-
-Continue viewing device data in both the Azure portal and Defender for Endpoint, depending on your organization's needs.
--- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)-- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)-- [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md)-- [View and manage alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)-- [Use Azure Monitor workbooks in Microsoft Defender for IoT (Public preview)](workbooks.md)-- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)-- [Enterprise IoT networks frequently asked questions](faqs-eiot.md)-
-In Defender for Endpoint, also view alerts data, recommendations and vulnerabilities related to your network traffic.
-
-For more information in the Defender for Endpoint documentation, see:
--- [Onboard with Microsoft Defender for IoT in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration)-- [Defender for Endpoint device inventory](/microsoft-365/security/defender-endpoint/machines-view-overview)-- [Alerts in Defender for Endpoint](/microsoft-365/security/defender-endpoint/alerts-queue)-- [Security recommendations in Defender for Endpoint](/microsoft-365/security/defender-vulnerability-management/tvm-security-recommendation)-- [Defender for Endpoint: Vulnerabilities in my organization](/microsoft-365/security/defender-vulnerability-management/tvm-security-recommendation)
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Last updated 07/11/2022
-# Tutorial: Get started with Microsoft Defender for IoT for OT security
+# Tutorial: Onboard and activate a virtual OT sensor
-This tutorial describes how to set up your network for OT system security monitoring, using a virtual, cloud-connected sensor, on a virtual machine (VM), using a trial subscription of Microsoft Defender for IoT.
+This tutorial describes how to set up your network for OT system security monitoring with a virtual, cloud-connected sensor on a virtual machine (VM), using a trial subscription of Microsoft Defender for IoT.
> [!NOTE]
-> If you're looking to set up security monitoring for enterprise IoT systems, see [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md) instead.
+> If you're looking to set up security monitoring for enterprise IoT systems, see [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md) and [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
In this tutorial, you learn how to:
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
To store the personal access token you generated as a [key vault secret](../key-
| Field | Value | | -- | -- | | **Name** | Enter a name for the catalog. |
- | **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.|
- | **Branch** | Enter the repository branch to connect to.|
- | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.|
+ | **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.<br/>*Sample Catalog Example:* https://github.com/Azure/deployment-environments.git |
+ | **Branch** | Enter the repository branch to connect to.<br/>*Sample Catalog Example:* main|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
| **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.| :::image type="content" source="media/how-to-configure-catalog/catalog-item-add.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
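If you haven't stored the token yet, one way to create the secret and obtain its identifier is with the Azure CLI. A minimal sketch, assuming a hypothetical vault named `contoso-kv` and secret named `repo-pat`:

```azurecli
# Store the personal access token as a secret; the "id" value in the output is the secret identifier
az keyvault secret set --vault-name contoso-kv --name repo-pat --value "<personal-access-token>"
```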
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
To complete this quick start, make sure that you have:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). - Owner or Contributor permissions on an Azure Subscription or a specific resource group. - Network Contributor permissions on an existing virtual network (owner or contributor) or permission to create a new virtual network and subnet.
+- User licenses. Each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Endpoint Manager, and Azure Active Directory P1.
+ - These licenses are available independently and included in Microsoft 365 F3, Microsoft 365 E3, Microsoft 365 E5, Microsoft 365 A3, Microsoft 365 A5, Microsoft 365 Business Premium, and Microsoft 365 Education Student Use Benefit subscriptions.
## Create a dev center
devops-project Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/overview.md
# Overview of DevOps Starter >[!IMPORTANT]
->DevOps Starter will be retired on March 31, 2023. [Learn more](/azure/devops-project/retirement-and-migration).
+>DevOps Starter will be retired on March 31, 2023. [Learn more](./retirement-and-migration.md).
DevOps Starter makes it easy to get started on Azure using either GitHub actions or Azure DevOps. It helps you launch your favorite app on the Azure service of your choice in just a few quick steps from the Azure portal.
The build and release pipelines can be customized for additional scenarios. Addi
## DevOps Starter videos
-* [Create CI/CD with Azure DevOps Starter](https://www.youtube.com/watch?v=NuYDAs3kNV8)
+* [Create CI/CD with Azure DevOps Starter](https://www.youtube.com/watch?v=NuYDAs3kNV8)
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
After successfully running these commands, the event grid, event hub, or Service
You can also create an endpoint that has identity-based authentication, to use the endpoint with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources). This option is only available for Event Hubs and Service Bus-type endpoints (it's not supported for Event Grid).
-The CLI command to create this type of endpoint is below. You'll need the following values to plug into the placeholders in the command:
-* The Azure resource ID of your Azure Digital Twins instance
-* An endpoint name
-* An endpoint type
-* The endpoint resource's namespace
-* The name of the event hub or Service Bus topic
-* The location of your Azure Digital Twins instance
-
-```azurecli-interactive
-az resource create --id <Azure-Digital-Twins-instance-Azure-resource-ID>/endpoints/<endpoint-name> --properties '{\"properties\": { \"endpointType\": \"<endpoint-type>\", \"authenticationType\": \"IdentityBased\", \"endpointUri\": \"sb://<endpoint-namespace>.servicebus.windows.net\", \"entityPath\": \"<name-of-event-hub-or-Service-Bus-topic>\"}, \"location\":\"<instance-location>\" }' --is-full-object
-```
+For instructions on how to do this, see [Create an endpoint with identity-based authentication](how-to-route-with-managed-identity.md?tabs=cli#create-an-endpoint-with-identity-based-authentication).
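As a rough sketch, identity-based authentication with the [az dt endpoint create](/cli/azure/dt/endpoint/create) command comes down to adding an authentication-type flag to the regular endpoint creation command. The values below are placeholders, and flag names may vary by azure-iot CLI extension version:

```azurecli
az dt endpoint create eventhub --dt-name <instance-name> --endpoint-name <endpoint-name> --eventhub <event-hub-name> --eventhub-namespace <namespace> --eventhub-resource-group <resource-group> --auth-type IdentityBased
```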
For instructions on how to create this type of endpoint with the Azure CLI, swit
# [CLI](#tab/cli)
-To create an endpoint that has dead-lettering enabled, add the following dead letter parameter to the [az dt endpoint create](/cli/azure/dt/endpoint/create) command for the [Azure Digital Twins CLI](/cli/azure/dt).
+To create an endpoint that has dead-lettering enabled, add the `--deadletter-sas-uri` parameter to the [az dt endpoint create](/cli/azure/dt/endpoint/create) command that [creates an endpoint](#create-the-endpoint).
-The value for the parameter is the dead letter SAS URI made up of the storage account name, container name, and SAS token that you gathered in the [previous section](#set-up-storage-resources). This parameter creates the endpoint with key-based authentication.
+The value for the parameter is the dead letter SAS URI made up of the storage account name, container name, and SAS token that you gathered in the [previous section](#set-up-storage-resources). This parameter creates the endpoint with key-based authentication. Here is what the parameter looks like:
```azurecli
--deadletter-sas-uri https://<storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>
```
-Add this parameter to the end of the endpoint creation commands from the [Create the endpoint](#create-the-endpoint) section earlier to create an endpoint of your chosen type that has dead-lettering enabled.
+>[!TIP]
+>To create a dead-letter endpoint with identity-based authentication, add both the dead-letter parameter from this section and the [managed identity parameter](how-to-route-with-managed-identity.md?tabs=cli#create-an-endpoint-with-identity-based-authentication) to the same command.
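For illustration, here's roughly what a complete creation command could look like for an Event Hubs endpoint with dead-lettering, using placeholder resource names. This is a sketch only; check `az dt endpoint create --help` for the parameters that apply to your endpoint type and CLI version.

```azurecli
az dt endpoint create eventhub --dt-name <instance-name> --endpoint-name <endpoint-name> --eventhub <event-hub-name> --eventhub-namespace <event-hub-namespace> --eventhub-policy <policy-name> --deadletter-sas-uri "https://<storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>"
```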
You can also create dead letter endpoints using the [Azure Digital Twins control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis) instead of the CLI. To do so, view the [DigitalTwinsEndpoint documentation](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to see how to structure the request and add the dead letter parameters.
-#### Create a dead-letter endpoint with identity-based authentication
-
-You can also create a dead-lettering endpoint that has identity-based authentication, to use the endpoint with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources). This option is only available for Event Hubs and Service Bus-type endpoints (it's not supported for Event Grid).
-
-To create this type of endpoint, use the same CLI command from earlier to [create an endpoint with identity-based authentication](#create-an-endpoint-with-identity-based-authentication), with an extra field in the JSON payload for a `deadLetterUri`.
-
-Here are the values you'll need to plug into the placeholders in the command:
-* The Azure resource ID of your Azure Digital Twins instance
-* An endpoint name
-* An endpoint type
-* The endpoint resource's namespace
-* The name of the event hub or Service Bus topic
-* Dead letter SAS URI details: storage account name, container name
-* The location of your Azure Digital Twins instance
-
-```azurecli-interactive
-az resource create --id <Azure-Digital-Twins-instance-Azure-resource-ID>/endpoints/<endpoint-name> --properties '{\"properties\": { \"endpointType\": \"<endpoint-type>\", \"authenticationType\": \"IdentityBased\", \"endpointUri\": \"sb://<endpoint-namespace>.servicebus.windows.net\", \"entityPath\": \"<name-of-event-hub-or-Service-Bus-topic>\", \"deadLetterUri\": \"https://<storage-account-name>.blob.core.windows.net/<container-name>\"}, \"location\":\"<instance-location>\" }' --is-full-object
-```
- #### Message storage schema
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
The following tables identify the services and tools you can use to plan for dat
| Source | Target | App Data Access<br/>Layer Assessment | Database<br/>Assessment | Performance<br/>Assessment |
| --- | --- | --- | --- | --- |
-| SQL Server | Azure SQL DB | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL DB MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL DB | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL DB MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
| SQL Server | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) | | |
-| RDS SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
+| RDS SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
| Oracle | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Oracle | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Oracle | Azure DB for PostgreSQL -<br/>Single server | | [Ora2Pg*](http://ora2pg.darold.net/start.html) | |
The following tables identify the services and tools you can use to plan for dat
| Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) |
| --- | --- | --- | --- | --- |
-| SQL Server | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL DB MI | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL VM | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL DB MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| SQL Server | Azure Synapse Analytics | | | |
-| RDS SQL | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| RDS SQL | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| RDS SQL | Azure SQL DB MI | [DMS](https://azure.microsoft.com/services/database-migration/) | [DMS](https://azure.microsoft.com/services/database-migration/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| RDS SQL | Azure SQL VM | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| RDS SQL | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ispirer*](https://www.ispirer.com/solutions) | [Ispirer*](https://www.ispirer.com/solutions) | [Ora2Pg*](http://ora2pg.darold.net/start.html) |
The following tables identify the services and tools you can use to plan for dat
## Next steps
-For an overview of the Azure Database Migration Service, see the article [What is the Azure Database Migration Service](dms-overview.md).
+For an overview of the Azure Database Migration Service, see the article [What is the Azure Database Migration Service](dms-overview.md).
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Known issues and limitations associated with the Azure SQL Migration extension f
- **Cause**: Azure Storage firewall isn't configured to allow access to Azure SQL target.
-- **Recommendation**: See [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security) for more information on Azure Storage firewall setup.
+- **Recommendation**: See [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md) for more information on Azure Storage firewall setup.
- **Message**: `Migration for Database <Database Name> failed with error 'There are backups from multiple databases in the container folder. Please make sure the container folder has backups from a single database.`
The Azure SQL Database offline migration (Preview) utilizes Azure Data Factory (
- For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension)
- For more information on known limitations with Log Replay Service, see [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations)
-- For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
+- For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
In this tutorial, you learn how to:
To complete this tutorial, you need to:
-* Download and install [PostgreSQL community edition](https://www.postgresql.org/download/) 9.5, 9.6, or 10. The source PostgreSQL Server version must be 9.5.11, 9.6.7, 10, or later. For more information, see the article [Supported PostgreSQL Database Versions](../postgresql/concepts-supported-versions.md).
+* Download and install [PostgreSQL community edition](https://www.postgresql.org/download/) 9.4, 9.5, 9.6, or 10. The source PostgreSQL Server version must be 9.4, 9.5, 9.6, 10, 11, 12, or 13. For more information, see [Supported PostgreSQL database versions](../postgresql/concepts-supported-versions.md).
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can only migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
dns Private Resolver Endpoints Rulesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md
Previously updated : 10/31/2022 Last updated : 11/15/2022 #Customer intent: As an administrator, I want to understand components of the Azure DNS Private Resolver.
In this article, you'll learn about components of the [Azure DNS Private Resolver](dns-private-resolver-overview.md). Inbound endpoints, outbound endpoints, and DNS forwarding rulesets are discussed. Properties and settings of these components are described, and examples are provided for how to use them.
-The architecture for Azure DNS Private Resolver is summarized in the following figure. In this example network, a DNS resolver is deployed in a hub vnet that peers with a spoke vnet. [Ruleset links](#ruleset-links) are provisioned in the [DNS forwarding ruleset](#dns-forwarding-rulesets) to both the hub and spoke vnets, enabling resources in both vnets to resolve custom DNS namespaces using DNS forwarding rules. A private DNS zone is also deployed and linked to the hub vnet, enabling resources in the hub vnet to resolve records in the zone. The spoke vnet resolves records in the private zone by using a DNS forwarding [rule](#rules) that forwards private zone queries to the inbound endpoint VIP in the hub vnet.
+The architecture for Azure DNS Private Resolver is summarized in the following figure. In this example network, a DNS resolver is deployed in a hub vnet that peers with a spoke vnet.
+
+> [!NOTE]
+> The peering connection shown in the diagram is not required for name resolution. Vnets that are linked from a DNS forwarding ruleset will use the ruleset when performing name resolution, whether or not the linked vnet peers with the ruleset vnet.
+
+[Ruleset links](#ruleset-links) are provisioned in the [DNS forwarding ruleset](#dns-forwarding-rulesets) to both the hub and spoke vnets, enabling resources in both vnets to resolve custom DNS namespaces using DNS forwarding rules. A private DNS zone is also deployed and linked to the hub vnet, enabling resources in the hub vnet to resolve records in the zone. The spoke vnet resolves records in the private zone by using a DNS forwarding [rule](#rules) that forwards private zone queries to the inbound endpoint VIP in the hub vnet.
[ ![Diagram that shows private resolver architecture](./media/private-resolver-endpoints-rulesets/ruleset.png) ](./media/private-resolver-endpoints-rulesets/ruleset-high.png#lightbox)
A ruleset can't be linked to a virtual network in another region. For more infor
### Ruleset links
-When you link a ruleset to a virtual network, resources within that virtual network will use the DNS forwarding rules enabled in the ruleset. The linked virtual network must peer with the virtual network where the outbound endpoint exists. This configuration is typically used in a hub and spoke design, with spoke vnets peered to a hub vnet that has one or more private resolver endpoints. In this hub and spoke scenario, the spoke vnet doesn't need to be linked to the private DNS zone in order to resolve resource records in the zone. In this case, the forwarding ruleset rule for the private zone sends queries to the hub vnet's inbound endpoint. For example: **azure.contoso.com** to **10.10.0.4**.
+When you link a ruleset to a virtual network, resources within that virtual network will use the DNS forwarding rules enabled in the ruleset. The linked virtual networks are not required to peer with the virtual network where the outbound endpoint exists, but these networks can be configured as peers. This configuration is common in a hub and spoke design. In this hub and spoke scenario, the spoke vnet doesn't need to be linked to the private DNS zone in order to resolve resource records in the zone. In this case, the forwarding ruleset rule for the private zone sends queries to the hub vnet's inbound endpoint. For example: **azure.contoso.com** to **10.10.0.4**.
The following screenshot shows a DNS forwarding ruleset linked to two virtual networks: a hub vnet: **myeastvnet**, and a spoke vnet: **myeastspoke**. ![View ruleset links](./media/private-resolver-endpoints-rulesets/ruleset-links.png)
-Virtual network links for DNS forwarding rulesets enable resources in vnets to use forwarding rules when resolving DNS names. Vnets that are linked from a ruleset, but don't have their own private resolver, must have a peering connection to the vnet that contains the private resolver. The vnet with the private resolver must also be linked from any private DNS zones for which there are ruleset rules.
+Virtual network links for DNS forwarding rulesets enable resources in other vnets to use forwarding rules when resolving DNS names. The vnet with the private resolver must also be linked from any private DNS zones for which there are ruleset rules.
For example, resources in the vnet `myeastspoke` can resolve records in the private DNS zone `azure.contoso.com` if:
-- The vnet `myeastspoke` peers with `myeastvnet`
- The ruleset provisioned in `myeastvnet` is linked to `myeastspoke` and `myeastvnet`
- A ruleset rule is configured and enabled in the linked ruleset to resolve `azure.contoso.com` using the inbound endpoint in `myeastvnet`
energy-data-services How To Convert Segy To Ovds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md
OSDU&trade; is a trademark of The Open Group.
## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [How to convert a segy to zgy file](/azure/energy-data-services/how-to-convert-segy-to-zgy)
-
+> [How to convert a segy to zgy file](./how-to-convert-segy-to-zgy.md)
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
OSDU&trade; is a trademark of The Open Group.
## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [How to convert segy to ovds](/azure/energy-data-services/how-to-convert-segy-to-ovds)
+> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
event-grid Azure Active Directory Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/azure-active-directory-events.md
These events are triggered when a [User](/graph/api/resources/user) or [Group](/
| Event name | Description | | - | -- |
- | **Microsoft.Graph.UserUpdated** | Triggered when a user in Azure AD is created and updated. |
+ | **Microsoft.Graph.UserUpdated** | Triggered when a user in Azure AD is created or updated. |
| **Microsoft.Graph.UserDeleted** | Triggered when a user in Azure AD is permanently deleted. |
- | **Microsoft.Graph.GroupUpdated** | Triggered when a group in Azure AD is created and updated. |
+ | **Microsoft.Graph.GroupUpdated** | Triggered when a group in Azure AD is created or updated. |
| **Microsoft.Graph.GroupDeleted** | Triggered when a group in Azure AD is permanently deleted. | > [!NOTE]
event-grid Communication Services Email Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-email-events.md
+
+ Title: Azure Communication Services - Email events
+description: This article describes how to use Azure Communication Services as an Event Grid event source for Email Events.
+ Last updated : 09/30/2022+++
+# Azure Communication Services - Email events
+
+This article provides the properties and schema for communication services email events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+
+## Event types
+
+Azure Communication Services emits the following email event types:
+
+| Event type | Description |
+| -- | - |
+| Microsoft.Communication.EmailDeliveryReportReceived | Published when a delivery report is received for an email sent by the Communication Service. |
+| Microsoft.Communication.EmailEngagementTrackingReportReceived | Published when the email sent is opened or when a link in it, if applicable, is clicked. |
+
+## Event responses
+
+When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints.
+
+This section contains an example of what that data would look like for each event.
+
+### Microsoft.Communication.EmailDeliveryReportReceived event
+
+```json
+[{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "sender/senderid@azure.com/message/00000000-0000-0000-0000-000000000000",
+ "data": {
+ "sender": "senderid@azure.com",
+ "recipient": "receiver@azure.com",
+ "messageId": "00000000-0000-0000-0000-000000000000",
+ "status": "Delivered",
+ "DeliveryStatusDetails": "No error.",
+ "ReceivedTimestamp": "2020-09-18T00:22:20.2855749Z",
+ },
+ "eventType": "Microsoft.Communication.EmailDeliveryReportReceived",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2020-09-18T00:22:20Z"
+}]
+```
+
+> [!NOTE]
+> Possible values for `status` are `Delivered`, `Expanded`, and `Failed`.
+
+### Microsoft.Communication.EmailEngagementTrackingReportReceived event
+
+```json
+[{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "sender/senderid@azure.com/message/00000000-0000-0000-0000-000000000000",
+ "data": {
+ "sender": "senderid@azure.com",
+ "messageId": "00000000-0000-0000-0000-000000000000",
+ "userActionTimeStamp": "2022-09-06T22:34:52.1303595+00:00",
+ "engagementContext": "",
+ "userAgent": "",
+ "engagementType": "view"
+ },
+ "eventType": "Microsoft.Communication.EmailEngagementTrackingReportReceived",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-09-06T22:34:52.1303612Z"
+}]
+```
+
+> [!NOTE]
+> Possible values for `engagementType` are `View` and `Click`. When the `engagementType` is `Click`, `engagementContext` contains the link in the email that was clicked.
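To make the payloads above concrete, here's a minimal Go sketch of a subscriber deserializing both event types. The struct and function names are illustrative, the structs cover only a subset of the fields shown above, and the inline payload is abridged from the delivery report sample:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// gridEvent mirrors the envelope fields used in the samples above.
type gridEvent struct {
	EventType string          `json:"eventType"`
	Data      json.RawMessage `json:"data"`
}

// deliveryReportData covers a subset of the delivery report payload.
type deliveryReportData struct {
	Sender    string `json:"sender"`
	Recipient string `json:"recipient"`
	MessageID string `json:"messageId"`
	Status    string `json:"status"`
}

// engagementReportData covers a subset of the engagement tracking payload.
type engagementReportData struct {
	Sender         string `json:"sender"`
	MessageID      string `json:"messageId"`
	EngagementType string `json:"engagementType"`
}

func main() {
	// Abridged payload shaped like the delivery report sample above.
	payload := []byte(`[{"eventType":"Microsoft.Communication.EmailDeliveryReportReceived","data":{"sender":"senderid@azure.com","recipient":"receiver@azure.com","messageId":"00000000-0000-0000-0000-000000000000","status":"Delivered"}}]`)

	var events []gridEvent
	if err := json.Unmarshal(payload, &events); err != nil {
		panic(err)
	}

	for _, e := range events {
		switch e.EventType {
		case "Microsoft.Communication.EmailDeliveryReportReceived":
			var d deliveryReportData
			if err := json.Unmarshal(e.Data, &d); err != nil {
				panic(err)
			}
			fmt.Printf("message %s to %s: %s\n", d.MessageID, d.Recipient, d.Status)
		case "Microsoft.Communication.EmailEngagementTrackingReportReceived":
			var d engagementReportData
			if err := json.Unmarshal(e.Data, &d); err != nil {
				panic(err)
			}
			fmt.Printf("message %s engagement: %s\n", d.MessageID, d.EngagementType)
		}
	}
}
```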
event-hubs Event Hubs Go Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-go-get-started-send.md
Title: 'Quickstart: Send and receive events using Go - Azure Event Hubs'
-description: 'Quickstart: This article provides a walkthrough for creating a Go application that sends events from Azure Event Hubs.'
+description: 'Quickstart: This article provides a walkthrough for creating a Go application that sends events to and receives events from Azure Event Hubs.'
Previously updated : 11/11/2021 Last updated : 11/16/2022 ms.devlang: golang
# Quickstart: Send events to or receive events from Event Hubs using Go Azure Event Hubs is a Big Data streaming platform and event ingestion service, capable of receiving and processing millions of events per second. Event Hubs can process and store events, data, or telemetry produced by distributed software and devices. Data sent to an event hub can be transformed and stored using any real-time analytics provider or batching/storage adapters. For detailed overview of Event Hubs, see [Event Hubs overview](event-hubs-about.md) and [Event Hubs features](event-hubs-features.md).
-This tutorial describes how to write Go applications to send events to or receive events from an event hub.
+This quickstart describes how to write Go applications to send events to or receive events from an event hub.
> [!NOTE]
-> You can download this quickstart as a sample from the [GitHub](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/main/services/eventhubs), replace `EventHubConnectionString` and `EventHubName` strings with your event hub values, and run it. Alternatively, you can follow the steps in this tutorial to create your own.
+> This quickstart is based on samples on GitHub at [https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azeventhubs](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azeventhubs). The send sample is based on **example_producing_events_test.go**, and the receive sample is based on **example_processor_test.go**. The code is simplified for the quickstart, and the detailed comments are removed, so look at the samples for more details and explanations.
## Prerequisites
-To complete this tutorial, you need the following prerequisites:
+To complete this quickstart, you need the following prerequisites:
- Go installed locally. Follow [these instructions](https://go.dev/doc/install) if necessary. - An active Azure account. If you don't have an Azure subscription, create a [free account][] before you begin.
This section shows you how to create a Go application to send events to an event
### Install Go package
-Get the Go package for Event Hubs with `go get` or `dep`. For example:
+Get the Go package for Event Hubs as shown in the following example.
```bash
-go get -u github.com/Azure/azure-event-hubs-go
-go get -u github.com/Azure/azure-amqp-common-go/...
+go get github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs
+```
-# or
+### Code to send events to an event hub
-dep ensure -add github.com/Azure/azure-event-hubs-go
-dep ensure -add github.com/Azure/azure-amqp-common-go
-```
+Here's the code to send events to an event hub. The main steps in the code are:
-### Import packages in your code file
+1. Create an Event Hubs producer client using a connection string to the Event Hubs namespace and the event hub name.
+1. Create a batch object and add sample events to the batch.
+1. Send the batch of events to the event hub.
-To import the Go packages, use the following code example:
+> [!IMPORTANT]
+> Replace `NAMESPACE CONNECTION STRING` with the connection string to your Event Hubs namespace and `EVENT HUB NAME` with the event hub name in the sample code.
```go
+package main
+ import (
- aad "github.com/Azure/azure-amqp-common-go/aad"
- eventhubs "github.com/Azure/azure-event-hubs-go"
-)
-```
+ "context"
-### Create service principal
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
+)
-Create a new service principal by following the instructions in [Create an Azure service principal with Azure CLI 2.0](/cli/azure/create-an-azure-service-principal-azure-cli). Save the provided credentials in your environment with the following names. Both the Azure SDK for Go and the Event Hubs packages are preconfigured to look for these variable names:
+func main() {
-```bash
-export AZURE_CLIENT_ID=
-export AZURE_CLIENT_SECRET=
-export AZURE_TENANT_ID=
-export AZURE_SUBSCRIPTION_ID=
-```
+ // create an Event Hubs producer client using a connection string to the namespace and the event hub
+ producerClient, err := azeventhubs.NewProducerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", nil)
-Now, create an authorization provider for your Event Hubs client that uses these credentials:
+ if err != nil {
+ panic(err)
+ }
-```go
-tokenProvider, err := aad.NewJWTProvider(aad.JWTProviderWithEnvironmentVars())
-if err != nil {
- log.Fatalf("failed to configure AAD JWT provider: %s\n", err)
-}
-```
+ defer producerClient.Close(context.TODO())
-### Create Event Hubs client
+ // create sample events
+ events := createEventsForSample()
-The following code creates an Event Hubs client:
+ // create a batch object and add sample events to the batch
+ newBatchOptions := &azeventhubs.EventDataBatchOptions{}
-```go
-hub, err := eventhubs.NewHub("namespaceName", "hubName", tokenProvider)
-ctx, cancel := context.WithTimeout(context.Background(), 10 * time.Second)
-defer hub.Close(ctx)
-if err != nil {
- log.Fatalf("failed to get hub %s\n", err)
-}
-```
+	batch, err := producerClient.NewEventDataBatch(context.TODO(), newBatchOptions)
+	if err != nil {
+		panic(err)
+	}
-### Write code to send messages
+ for i := 0; i < len(events); i++ {
+		err = batch.AddEventData(events[i], nil)
+		if err != nil {
+			panic(err)
+		}
+ }
-In the following snippet, use (1) to send messages interactively from a terminal, or (2) to send messages within your program:
-
-```go
-// 1. send messages at the terminal
-ctx = context.Background()
-reader := bufio.NewReader(os.Stdin)
-for {
- fmt.Printf("Input a message to send: ")
- text, _ := reader.ReadString('\n')
- hub.Send(ctx, eventhubs.NewEventFromString(text))
+ // send the batch of events to the event hub
+	err = producerClient.SendEventDataBatch(context.TODO(), batch, nil)
+	if err != nil {
+		panic(err)
+	}
}
-// 2. send messages within program
-ctx = context.Background()
-hub.Send(ctx, eventhubs.NewEventFromString("hello Azure!"))
-```
-
-### Extras
-
-Get the IDs of the partitions in your event hub:
-
-```go
-info, err := hub.GetRuntimeInformation(ctx)
-if err != nil {
- log.Fatalf("failed to get runtime info: %s\n", err)
+func createEventsForSample() []*azeventhubs.EventData {
+ return []*azeventhubs.EventData{
+ {
+ Body: []byte("hello"),
+ },
+ {
+ Body: []byte("world"),
+ },
+ }
}
-log.Printf("got partition IDs: %s\n", info.PartitionIDs)
```
-Run the application to send events to the event hub.
-Congratulations! You have now sent messages to an event hub.
+Don't run the application yet. You first need to run the receiver app and then the sender app.
## Receive events
Congratulations! You have now sent messages to an event hub.
State such as leases on partitions and checkpoints in the event stream is shared between receivers using an Azure Storage container. You can create a storage account and container with the Go SDK, but you can also create one by following the instructions in [About Azure storage accounts](../storage/common/storage-account-create.md).
-Samples for creating Storage artifacts with the Go SDK are available in the [Go samples repo](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/main/services/storage) and in the sample corresponding to this tutorial.
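If you prefer the CLI over the portal or the Go SDK, the storage account and blob container used for checkpointing can be created with standard Azure CLI commands such as the following (all names are placeholders):

```azurecli
az storage account create --name <storage-account-name> --resource-group <resource-group> --location <location>
az storage container create --account-name <storage-account-name> --name <container-name>
```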
- ### Go packages
-To receive the messages, get the Go packages for Event Hubs with `go get` or `dep`:
+To receive the messages, get the Go packages for Event Hubs as shown in the following example.
```bash
-go get -u github.com/Azure/azure-event-hubs-go/...
-go get -u github.com/Azure/azure-amqp-common-go/...
-go get -u github.com/Azure/go-autorest/...
+go get github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs
+```
-# or
+### Code to receive events from an event hub
-dep ensure -add github.com/Azure/azure-event-hubs-go
-dep ensure -add github.com/Azure/azure-amqp-common-go
-dep ensure -add github.com/Azure/go-autorest
-```
+Here's the code to receive events from an event hub. The main steps in the code are:
-### Import packages in your code file
+1. Create a checkpoint store object that represents the Azure Blob Storage used by the event hub for checkpointing.
+1. Create an Event Hubs consumer client using a connection string to the Event Hubs namespace and the event hub name.
+1. Create an event processor using the client object and the checkpoint store object. The processor receives and processes events.
+1. For each partition in the event hub, create a partition client with processEvents as the function to process events.
+1. Run all partition clients to receive and process events.
-To import the Go packages, use the following code example:
+> [!IMPORTANT]
+> Replace the following placeholder values with actual values:
+> - `AZURE STORAGE CONNECTION STRING` with the connection string for your Azure storage account
+> - `BLOB CONTAINER NAME` with the name of the blob container you created in the storage account
+> - `NAMESPACE CONNECTION STRING` with the connection string for your Event Hubs namespace
+> - `EVENT HUB NAME` with the event hub name in the sample code.
```go
+package main
+ import (
- aad "github.com/Azure/azure-amqp-common-go/aad"
- eventhubs "github.com/Azure/azure-event-hubs-go"
- eph "github.com/Azure/azure-event-hubs-go/eph"
- storageLeaser "github.com/Azure/azure-event-hubs-go/storage"
- azure "github.com/Azure/go-autorest/autorest/azure"
-)
-```
+ "context"
+ "errors"
+ "fmt"
+ "time"
-### Create service principal
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs/checkpoints"
+)
-Create a new service principal by following the instructions in [Create an Azure service principal with Azure CLI 2.0](/cli/azure/create-an-azure-service-principal-azure-cli). Save the provided credentials in your environment with the following names: Both Azure SDK for Go and Event Hubs package are preconfigured to look for these variable names.
+func main() {
-```bash
-export AZURE_CLIENT_ID=
-export AZURE_CLIENT_SECRET=
-export AZURE_TENANT_ID=
-export AZURE_SUBSCRIPTION_ID=
-```
+ // create a checkpoint store that will be used by the event hub
+ checkpointStore, err := checkpoints.NewBlobStoreFromConnectionString("AZURE STORAGE CONNECTION STRING", "BLOB CONTAINER NAME", nil)
-Next, create an authorization provider for your Event Hubs client that uses these credentials:
+ if err != nil {
+ panic(err)
+ }
-```go
-tokenProvider, err := aad.NewJWTProvider(aad.JWTProviderWithEnvironmentVars())
-if err != nil {
- log.Fatalf("failed to configure AAD JWT provider: %s\n", err)
-}
-```
+ // create a consumer client using a connection string to the namespace and the event hub
+ consumerClient, err := azeventhubs.NewConsumerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", azeventhubs.DefaultConsumerGroup, nil)
-### Get metadata struct
+ if err != nil {
+ panic(err)
+ }
-Get a struct with metadata about your Azure environment using the Azure Go SDK. Later operations use this struct to find correct endpoints.
+ defer consumerClient.Close(context.TODO())
-```go
-azureEnv, err := azure.EnvironmentFromName("AzurePublicCloud")
-if err != nil {
- log.Fatalf("could not get azure.Environment struct: %s\n", err)
-}
-```
+ // create a processor to receive and process events
+ processor, err := azeventhubs.NewProcessor(consumerClient, checkpointStore, nil)
-### Create credential helper
+ if err != nil {
+ panic(err)
+ }
-Create a credential helper that uses the previous Azure Active Directory (AAD) credentials to create a Shared Access Signature (SAS) credential for Storage. The last parameter tells this constructor to use the same environment variables as used previously:
+ // for each partition in the event hub, create a partition client with processEvents as the function to process events
+ dispatchPartitionClients := func() {
+ for {
+ partitionClient := processor.NextPartitionClient(context.TODO())
-```go
-cred, err := storageLeaser.NewAADSASCredential(
- subscriptionID,
- resourceGroupName,
- storageAccountName,
- storageContainerName,
- storageLeaser.AADSASCredentialWithEnvironmentVars())
-if err != nil {
- log.Fatalf("could not prepare a storage credential: %s\n", err)
-}
-```
+ if partitionClient == nil {
+ break
+ }
-### Create a check pointer and a leaser
+ go func() {
+ if err := processEvents(partitionClient); err != nil {
+ panic(err)
+ }
+ }()
+ }
+ }
-Create a **leaser**, responsible for leasing a partition to a particular receiver, and a **check pointer**, responsible for writing checkpoints for the message stream so that other receivers can begin reading from the correct offset.
+ // run all partition clients
+ go dispatchPartitionClients()
-Currently, a single **StorageLeaserCheckpointer** is available that uses the same Storage container to manage both leases and checkpoints. In addition to the storage account and container names, the **StorageLeaserCheckpointer** needs the credential created in the previous step and the Azure environment struct to correctly access the container.
+ processorCtx, processorCancel := context.WithCancel(context.TODO())
+ defer processorCancel()
-```go
-leaserCheckpointer, err := storageLeaser.NewStorageLeaserCheckpointer(
- cred,
- storageAccountName,
- storageContainerName,
- azureEnv)
-if err != nil {
- log.Fatalf("could not prepare a storage leaserCheckpointer: %s\n", err)
+ if err := processor.Run(processorCtx); err != nil {
+ panic(err)
+ }
}
-```
-### Construct Event Processor Host
-
-You now have the pieces needed to construct an EventProcessorHost, as follows. The same **StorageLeaserCheckpointer** is used as both a leaser and check pointer, as described previously:
-
-```go
-ctx := context.Background()
-p, err := eph.New(
- ctx,
- nsName,
- hubName,
- tokenProvider,
- leaserCheckpointer,
- leaserCheckpointer)
-if err != nil {
- log.Fatalf("failed to create EPH: %s\n", err)
+func processEvents(partitionClient *azeventhubs.ProcessorPartitionClient) error {
+ defer closePartitionResources(partitionClient)
+ for {
+ receiveCtx, receiveCtxCancel := context.WithTimeout(context.TODO(), time.Minute)
+ events, err := partitionClient.ReceiveEvents(receiveCtx, 100, nil)
+ receiveCtxCancel()
+
+ if err != nil && !errors.Is(err, context.DeadlineExceeded) {
+ return err
+ }
+
+ fmt.Printf("Processing %d event(s)\n", len(events))
+
+ for _, event := range events {
+ fmt.Printf("Event received with body %v\n", string(event.Body))
+ }
+
+ if len(events) != 0 {
+ if err := partitionClient.UpdateCheckpoint(context.TODO(), events[len(events)-1]); err != nil {
+ return err
+ }
+ }
+ }
}
-defer p.Close(context.Background())
-```
-### Create handler
-
-Now create a handler and register it with the Event Processor Host. When the host is started, it applies this and any other specified handlers to incoming messages:
-
-```go
-handler := func(ctx context.Context, event *eventhubs.Event) error {
- fmt.Printf("received: %s\n", string(event.Data))
- return nil
+func closePartitionResources(partitionClient *azeventhubs.ProcessorPartitionClient) {
+ defer partitionClient.Close(context.TODO())
}
-// register the handler with the EPH
-_, err := p.RegisterHandler(ctx, handler)
-if err != nil {
- log.Fatalf("failed to register handler: %s\n", err)
-}
```
-### Write code to receive messages
-
-With everything set up, you can start the Event Processor Host with `Start(context)` to keep it permanently running, or with `StartNonBlocking(context)` to run only as long as messages are available.
+## Run receiver and sender apps
-This tutorial starts and runs as follows; see the GitHub sample for an example using `StartNonBlocking`:
+1. Run the receiver app first.
+1. Run the sender app.
+1. Wait for a minute to see the following output in the receiver window.
-```go
-ctx := context.Background()
-err = p.Start()
-if err != nil {
- log.Fatalf("failed to start EPH: %s\n", err)
-}
-```
+ ```bash
+ Processing 2 event(s)
+ Event received with body hello
+ Event received with body world
+ ```
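If you're running the two apps directly with the Go toolchain rather than from an IDE, a typical sequence in each app's folder looks like this (the module name is arbitrary):

```bash
go mod init eventhubs-quickstart
go mod tidy   # resolves the azeventhubs (and, for the receiver, checkpoints) imports
go run .
```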
## Next steps
-Read the following articles:
-
-- [EventProcessorHost](event-hubs-event-processor-host.md)
-- [Features and terminology in Azure Event Hubs](event-hubs-features.md)
-- [Event Hubs FAQ](event-hubs-faq.yml)
+See samples on GitHub at [https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azeventhubs](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azeventhubs).
<!-- Links -->
event-hubs Event Hubs Java Get Started Send Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-java-get-started-send-legacy.md
- Title: Send or receive events from Azure Event Hubs using Java (legacy)
-description: This article provides a walkthrough of creating a Java application that sends/receives events to/from Azure Event Hubs using the old azure-eventhubs package.
- Previously updated : 09/28/2021---
-# Use Java to send events to or receive events from Azure Event Hubs (azure-eventhubs)
-
-This quickstart shows how to send events to and receive events from an event hub using the **azure-eventhubs** Java package.
-
-> [!WARNING]
-> This quickstart uses the old **azure-eventhubs** and **azure-eventhubs-eph** packages. For a quickstart that uses the latest **azure-messaging-eventhubs** package, see [Send and receive events using azure-messaging-eventhubs](event-hubs-java-get-started-send.md). To move your application from using the old package to new one, see the [Guide to migrate from azure-eventhubs to azure-messaging-eventhubs](https://github.com/Azure/azure-sdk-for-jav).
-
-## Prerequisites
-
-If you are new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart.
-
-To complete this quickstart, you need the following prerequisites:
-
-- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
-- A Java development environment. This quickstart uses [Eclipse](https://www.eclipse.org/).
-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the value of access key for the event hub by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the access key in the code you write later in this quickstart. The default key name is: **RootManageSharedAccessKey**.
-
-## Send events
-This section shows you how to create a Java application to send events an event hub.
-
-> [!NOTE]
-> You can download this quickstart as a sample from the [GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples/Java/Basic/SimpleSend), replace `EventHubConnectionString` and `EventHubName` strings with your event hub values, and run it. Alternatively, you can follow the steps in this quickstart to create your own.
-
-### Add reference to Azure Event Hubs library
-
-The Java client library for Event Hubs is available for use in Maven projects from the [Maven Central Repository](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-eventhubs%22). You can reference this library using the following dependency declaration inside your Maven project file:
-
-```xml
-<dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-eventhubs</artifactId>
- <version>2.2.0</version>
-</dependency>
-```
-
-For different types of build environments, you can explicitly obtain the latest released JAR files from the [Maven Central Repository](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-eventhubs%22).
-
-For a simple event publisher, import the *com.microsoft.azure.eventhubs* package for the Event Hubs client classes and the *com.microsoft.azure.servicebus* package for utility classes such as common exceptions that are shared with the Azure Service Bus messaging client.
-
-### Write code to send messages to the event hub
-
-For the following sample, first create a new Maven project for a console/shell application in your favorite Java development environment. Add a class named `SimpleSend`, and add the following code to the class:
-
-```java
-import com.google.gson.Gson;
-import com.google.gson.GsonBuilder;
-import com.microsoft.azure.eventhubs.ConnectionStringBuilder;
-import com.microsoft.azure.eventhubs.EventData;
-import com.microsoft.azure.eventhubs.EventHubClient;
-import com.microsoft.azure.eventhubs.EventHubException;
-
-import java.io.IOException;
-import java.nio.charset.Charset;
-import java.time.Instant;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-
-public class SimpleSend {
-
- public static void main(String[] args)
- throws EventHubException, ExecutionException, InterruptedException, IOException {
-
- }
- }
-```
-
-### Construct connection string
-
-Use the ConnectionStringBuilder class to construct a connection string value to pass to the Event Hubs client instance. Replace the placeholders with the values you obtained when you created the namespace and event hub:
-
-```java
- final ConnectionStringBuilder connStr = new ConnectionStringBuilder()
- .setNamespaceName("<EVENTHUB NAMESPACE")
- .setEventHubName("EVENT HUB")
- .setSasKeyName("RootManageSharedAccessKey")
- .setSasKey("SHARED ACCESS KEY");
-```
-
-### Write code to send events
-
-Create a singular event by transforming a string into its UTF-8 byte encoding. Then, create a new Event Hubs client instance from the connection string and send the message:
-
-```java
- final Gson gson = new GsonBuilder().create();
-
- // The Executor handles all asynchronous tasks and this is passed to the EventHubClient instance.
- // This enables the user to segregate their thread pool based on the work load.
- // This pool can then be shared across multiple EventHubClient instances.
- // The following sample uses a single thread executor, as there is only one EventHubClient instance,
- // handling different flavors of ingestion to Event Hubs here.
- final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(4);
-
- // Each EventHubClient instance spins up a new TCP/TLS connection, which is expensive.
- // It is always a best practice to reuse these instances. The following sample shows this.
- final EventHubClient ehClient = EventHubClient.createSync(connStr.toString(), executorService);
-
- try {
- for (int i = 0; i < 10; i++) {
-
- String payload = "Message " + Integer.toString(i);
- byte[] payloadBytes = gson.toJson(payload).getBytes(Charset.defaultCharset());
- EventData sendEvent = EventData.create(payloadBytes);
-
- // Send - not tied to any partition
- // Event Hubs service will round-robin the events across all Event Hubs partitions.
- // This is the recommended & most reliable way to send to Event Hubs.
- ehClient.sendSync(sendEvent);
- }
-
- System.out.println(Instant.now() + ": Send Complete...");
- System.out.println("Press Enter to stop.");
- System.in.read();
- } finally {
- ehClient.closeSync();
- executorService.shutdown();
- }
-
-```
-
-Build and run the program, and ensure that there are no errors.
-
-Congratulations! You have now sent messages to an event hub.
-
-### Appendix: How messages are routed to EventHub partitions
-
-Before messages are retrieved by consumers, they have to be published to the partitions first by the publishers. When messages are published to event hub synchronously using the sendSync() method on the com.microsoft.azure.eventhubs.EventHubClient object, the message could be sent to a specific partition or distributed to all available partitions in a round-robin manner depending on whether the partition key is specified or not.
-
-When a string representing the partition key is specified, the key will be hashed to determine which partition to send the event to.
-
-When the partition key is not set, messages are round-robined to all available partitions.
-
-```java
-// Serialize the event into bytes
-byte[] payloadBytes = gson.toJson(messagePayload).getBytes(Charset.defaultCharset());
-
-// Use the bytes to construct an {@link EventData} object
-EventData sendEvent = EventData.create(payloadBytes);
-
-// Transmits the event to event hub without a partition key
-// If a partition key is not set, then we will round-robin to all topic partitions
-eventHubClient.sendSync(sendEvent);
-
-// the partitionKey will be hash'ed to determine the partitionId to send the eventData to.
-eventHubClient.sendSync(sendEvent, partitionKey);
-
-// close the client at the end of your program
-eventHubClient.closeSync();
-
-```
-
-## Receive events
-The code in this tutorial is based on the [EventProcessorSample code on GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples/Java/Basic/EventProcessorSample), which you can examine to see the full working application.
-
-### Receive messages with EventProcessorHost in Java
-
-**EventProcessorHost** is a Java class that simplifies receiving events from Event Hubs by managing persistent checkpoints and parallel receives from those Event Hubs. Using EventProcessorHost, you can split events across multiple receivers, even when hosted in different nodes. This example shows how to use EventProcessorHost for a single receiver.
-
-### Create a storage account
-
-To use EventProcessorHost, you must have an [Azure Storage account][Azure Storage account]:
-
-1. Sign in the [Azure portal](https://portal.azure.com), and select **Create a resource** on the left-hand side of the screen.
-2. Select **Storage**, then select **Storage account**. In the **Create storage account** window, type a name for the storage account. Complete the rest of the fields, select your desired region, and then select **Create**.
-
- ![Create a storage account in Azure portal](./media/event-hubs-dotnet-framework-getstarted-receive-eph/create-azure-storage-account.png)
-
-3. Select the newly created storage account, and then select **Access Keys**:
-
- ![Get your access keys in Azure portal](./media/event-hubs-dotnet-framework-getstarted-receive-eph/select-azure-storage-access-keys.png)
-
- Copy the key1 value to a temporary location. You use it later in this tutorial.
-
-### Create a Java project using the EventProcessor Host
-
-The Java client library for Event Hubs is available for use in Maven projects from the [Maven Central Repository](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-eventhubs-eph%22), and can be referenced using the following dependency declaration inside your Maven project file:
-
-```xml
-<dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-eventhubs</artifactId>
- <version>2.2.0</version>
-</dependency>
-<dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-eventhubs-eph</artifactId>
- <version>2.4.0</version>
-</dependency>
-```
-
-For different types of build environments, you can explicitly obtain the latest released JAR files from the [Maven Central Repository](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-eventhubs-eph%22).
-
-1. For the following sample, first create a new Maven project for a console/shell application in your favorite Java development environment. The class is called `ErrorNotificationHandler`.
-
- ```java
- import java.util.function.Consumer;
- import com.microsoft.azure.eventprocessorhost.ExceptionReceivedEventArgs;
-
- public class ErrorNotificationHandler implements Consumer<ExceptionReceivedEventArgs>
- {
- @Override
- public void accept(ExceptionReceivedEventArgs t)
- {
- System.out.println("SAMPLE: Host " + t.getHostname() + " received general error notification during " + t.getAction() + ": " + t.getException().toString());
- }
- }
- ```
-2. Use the following code to create a new class called `EventProcessorSample`. Replace the placeholders with the values used when you created the event hub and storage account:
-
- ```java
- package com.microsoft.azure.eventhubs.samples.eventprocessorsample;
-
- import com.microsoft.azure.eventhubs.ConnectionStringBuilder;
- import com.microsoft.azure.eventhubs.EventData;
- import com.microsoft.azure.eventprocessorhost.CloseReason;
- import com.microsoft.azure.eventprocessorhost.EventProcessorHost;
- import com.microsoft.azure.eventprocessorhost.EventProcessorOptions;
- import com.microsoft.azure.eventprocessorhost.ExceptionReceivedEventArgs;
- import com.microsoft.azure.eventprocessorhost.IEventProcessor;
- import com.microsoft.azure.eventprocessorhost.PartitionContext;
-
- import java.util.concurrent.ExecutionException;
- import java.util.function.Consumer;
-
- public class EventProcessorSample
- {
- public static void main(String[] args) throws InterruptedException, ExecutionException
- {
- String consumerGroupName = "$Default";
- String namespaceName = "-NamespaceName-";
- String eventHubName = "-EventHubName-";
- String sasKeyName = "-SharedAccessSignatureKeyName-";
- String sasKey = "-SharedAccessSignatureKey-";
- String storageConnectionString = "-AzureStorageConnectionString-";
- String storageContainerName = "-StorageContainerName-";
- String hostNamePrefix = "-HostNamePrefix-";
-
- ConnectionStringBuilder eventHubConnectionString = new ConnectionStringBuilder()
- .setNamespaceName(namespaceName)
- .setEventHubName(eventHubName)
- .setSasKeyName(sasKeyName)
- .setSasKey(sasKey);
-
- EventProcessorHost host = new EventProcessorHost(
- EventProcessorHost.createHostName(hostNamePrefix),
- eventHubName,
- consumerGroupName,
- eventHubConnectionString.toString(),
- storageConnectionString,
- storageContainerName);
-
- System.out.println("Registering host named " + host.getHostName());
- EventProcessorOptions options = new EventProcessorOptions();
- options.setExceptionNotification(new ErrorNotificationHandler());
-
- host.registerEventProcessor(EventProcessor.class, options)
- .whenComplete((unused, e) ->
- {
- if (e != null)
- {
- System.out.println("Failure while registering: " + e.toString());
- if (e.getCause() != null)
- {
- System.out.println("Inner exception: " + e.getCause().toString());
- }
- }
- })
- .thenAccept((unused) ->
- {
- System.out.println("Press enter to stop.");
- try
- {
- System.in.read();
- }
- catch (Exception e)
- {
- System.out.println("Keyboard read failed: " + e.toString());
- }
- })
- .thenCompose((unused) ->
- {
- return host.unregisterEventProcessor();
- })
- .exceptionally((e) ->
- {
- System.out.println("Failure while unregistering: " + e.toString());
- if (e.getCause() != null)
- {
- System.out.println("Inner exception: " + e.getCause().toString());
- }
- return null;
- })
- .get(); // Wait for everything to finish before exiting main!
-
- System.out.println("End of sample");
- }
- }
- ```
-3. Create one more class called `EventProcessor`, using the following code:
-
- ```java
- import com.microsoft.azure.eventhubs.EventData;
- import com.microsoft.azure.eventprocessorhost.CloseReason;
- import com.microsoft.azure.eventprocessorhost.IEventProcessor;
- import com.microsoft.azure.eventprocessorhost.PartitionContext;
-
- public class EventProcessor implements IEventProcessor
- {
- private int checkpointBatchingCount = 0;
-
- // OnOpen is called when a new event processor instance is created by the host.
- @Override
- public void onOpen(PartitionContext context) throws Exception
- {
- System.out.println("SAMPLE: Partition " + context.getPartitionId() + " is opening");
- }
-
- // OnClose is called when an event processor instance is being shut down.
- @Override
- public void onClose(PartitionContext context, CloseReason reason) throws Exception
- {
- System.out.println("SAMPLE: Partition " + context.getPartitionId() + " is closing for reason " + reason.toString());
- }
-
- // onError is called when an error occurs in EventProcessorHost code that is tied to this partition, such as a receiver failure.
- @Override
- public void onError(PartitionContext context, Throwable error)
- {
- System.out.println("SAMPLE: Partition " + context.getPartitionId() + " onError: " + error.toString());
- }
-
- // onEvents is called when events are received on this partition of the Event Hub.
- @Override
- public void onEvents(PartitionContext context, Iterable<EventData> events) throws Exception
- {
- System.out.println("SAMPLE: Partition " + context.getPartitionId() + " got event batch");
- int eventCount = 0;
- for (EventData data : events)
- {
- try
- {
- System.out.println("SAMPLE (" + context.getPartitionId() + "," + data.getSystemProperties().getOffset() + "," +
- data.getSystemProperties().getSequenceNumber() + "): " + new String(data.getBytes(), "UTF8"));
- eventCount++;
-
- // Checkpointing persists the current position in the event stream for this partition and means that the next
- // time any host opens an event processor on this event hub+consumer group+partition combination, it will start
- // receiving at the event after this one.
- this.checkpointBatchingCount++;
- if ((checkpointBatchingCount % 5) == 0)
- {
- System.out.println("SAMPLE: Partition " + context.getPartitionId() + " checkpointing at " +
- data.getSystemProperties().getOffset() + "," + data.getSystemProperties().getSequenceNumber());
- // Checkpoints are created asynchronously. It is important to wait for the result of checkpointing
- // before exiting onEvents or before creating the next checkpoint, to detect errors and to ensure proper ordering.
- context.checkpoint(data).get();
- }
- }
- catch (Exception e)
- {
- System.out.println("Processing failed for an event: " + e.toString());
- }
- }
- System.out.println("SAMPLE: Partition " + context.getPartitionId() + " batch size was " + eventCount + " for host " + context.getOwner());
- }
- }
- ```
-
-This tutorial uses a single instance of EventProcessorHost. To increase throughput and gain redundancy, we recommend that you run multiple instances of EventProcessorHost, preferably on separate machines. In those cases, the instances automatically coordinate with each other to load-balance the received events. If you want multiple receivers to each process *all* the events, use the **ConsumerGroup** concept instead. When receiving events from different machines, it can be useful to name EventProcessorHost instances based on the machines (or roles) they're deployed on, as in the sketch below.
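-
-As an illustration, here's a small, hypothetical helper (not part of the library) that derives such a name from the local machine name. It assumes only `EventProcessorHost.createHostName`, which the sample above already uses and which appends a unique suffix to the prefix you supply:
-
-```java
-import java.net.InetAddress;
-import java.net.UnknownHostException;
-
-import com.microsoft.azure.eventprocessorhost.EventProcessorHost;
-
-// Hypothetical helper: name each EventProcessorHost instance after the
-// machine it runs on, so lease ownership is easy to trace in logs.
-public final class HostNaming
-{
-    public static String perMachineHostName() throws UnknownHostException
-    {
-        // createHostName appends a unique suffix to the supplied prefix.
-        return EventProcessorHost.createHostName(InetAddress.getLocalHost().getHostName());
-    }
-}
-```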
-
-### Publish messages to an event hub
-
-Before consumers can retrieve messages, publishers must first publish them to the partitions. When messages are published to an event hub synchronously by using the sendSync() method on the com.microsoft.azure.eventhubs.EventHubClient object, the message is either sent to a specific partition or distributed across all available partitions in a round-robin manner, depending on whether a partition key is specified.
-
-When a string representing the partition key is specified, the key is hashed to determine which partition to send the event to.
-
-When the partition key isn't set, messages are distributed round-robin across all available partitions.
-
-```java
-// Serialize the event into bytes
-byte[] payloadBytes = gson.toJson(messagePayload).getBytes(Charset.defaultCharset());
-
-// Use the bytes to construct an {@link EventData} object
-EventData sendEvent = EventData.create(payloadBytes);
-
-// Transmits the event to event hub without a partition key
-// If a partition key is not set, then we will round-robin to all topic partitions
-eventHubClient.sendSync(sendEvent);
-
-// When a partition key is provided, it's hashed to determine the partitionId to send the eventData to.
-String partitionKey = "device-1"; // illustrative key; events with the same key go to the same partition
-eventHubClient.sendSync(sendEvent, partitionKey);
-
-```
-
-### Implementing a Custom CheckpointManager for EventProcessorHost (EPH)
-
-The API provides a mechanism to implement your own custom checkpoint manager for scenarios where the default implementation isn't compatible with your use case.
-
-The default checkpoint manager uses blob storage, but if you override the checkpoint manager used by EPH with your own implementation, you can use any store you want to back your checkpoints. To do so:
-
-1. Create a class that implements the interface com.microsoft.azure.eventprocessorhost.ICheckpointManager.
-2. Pass your custom implementation of the checkpoint manager (com.microsoft.azure.eventprocessorhost.ICheckpointManager) to the EventProcessorHost.
-
-Within your implementation, you can override the default checkpointing mechanism and implement your own checkpoints based on your own data store (such as SQL Server, Azure Cosmos DB, or Azure Cache for Redis). We recommend that the store used to back your checkpoint manager implementation be accessible to all EPH instances that are processing events for the consumer group.
-
-You can use any data store that's available in your environment.
-
-The com.microsoft.azure.eventprocessorhost.EventProcessorHost class provides you with two constructors that allow you to override the checkpoint manager for your EventProcessorHost.
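-
-For illustration, below is a minimal in-memory sketch of such an implementation. The method set shown approximates the 2.x `azure-eventhubs-eph` interface; exact signatures vary between library versions, so treat them as assumptions and verify against the javadoc. An in-memory map is also only suitable for a single EPH instance, never for production:
-
-```java
-import java.util.List;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.ConcurrentHashMap;
-
-import com.microsoft.azure.eventprocessorhost.Checkpoint;
-import com.microsoft.azure.eventprocessorhost.CompleteLease;
-import com.microsoft.azure.eventprocessorhost.ICheckpointManager;
-
-// Illustrative only: an in-memory checkpoint store. A real implementation
-// would persist checkpoints to a durable store (SQL Server, Azure Cosmos DB,
-// Azure Cache for Redis, ...) reachable by every EPH instance.
-public class InMemoryCheckpointManager implements ICheckpointManager
-{
-    private final ConcurrentHashMap<String, Checkpoint> store = new ConcurrentHashMap<>();
-
-    @Override
-    public CompletableFuture<Boolean> checkpointStoreExists()
-    {
-        return CompletableFuture.completedFuture(true); // the map always "exists"
-    }
-
-    @Override
-    public CompletableFuture<Void> createCheckpointStoreIfNotExists()
-    {
-        return CompletableFuture.completedFuture(null); // nothing to create
-    }
-
-    @Override
-    public CompletableFuture<Void> deleteCheckpointStore()
-    {
-        store.clear();
-        return CompletableFuture.completedFuture(null);
-    }
-
-    @Override
-    public CompletableFuture<Checkpoint> getCheckpoint(String partitionId)
-    {
-        return CompletableFuture.completedFuture(store.get(partitionId));
-    }
-
-    @Override
-    public CompletableFuture<Void> createAllCheckpointsIfNotExists(List<String> partitionIds)
-    {
-        partitionIds.forEach(id -> store.putIfAbsent(id, new Checkpoint(id)));
-        return CompletableFuture.completedFuture(null);
-    }
-
-    @Override
-    public CompletableFuture<Void> updateCheckpoint(CompleteLease lease, Checkpoint checkpoint)
-    {
-        store.put(checkpoint.getPartitionId(), checkpoint);
-        return CompletableFuture.completedFuture(null);
-    }
-
-    @Override
-    public CompletableFuture<Void> deleteCheckpoint(String partitionId)
-    {
-        store.remove(partitionId);
-        return CompletableFuture.completedFuture(null);
-    }
-}
-```
-
-You'd then pass an instance of this class, together with an `ILeaseManager` implementation, to one of those `EventProcessorHost` constructor overloads.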
-
-## Next steps
-Read the following articles:
-- [EventProcessorHost](event-hubs-event-processor-host.md)
-- [Features and terminology in Azure Event Hubs](event-hubs-features.md)
-- [Event Hubs FAQ](event-hubs-faq.yml)
event-hubs Event Hubs Java Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-java-get-started-send.md
This quickstart shows how to send events to and receive events from an event hub using the **azure-messaging-eventhubs** Java package.
-> [!IMPORTANT]
-> This quickstart uses the new **azure-messaging-eventhubs** package. For a quickstart that uses the old **azure-eventhubs** and **azure-eventhubs-eph** packages, see [Send and receive events using azure-eventhubs and azure-eventhubs-eph](event-hubs-java-get-started-send-legacy.md).
-- ## Prerequisites If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart.
firewall Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-cli.md
Note the private IP address. You'll use it later when you create the default rou
## Create a default route
-Create a table, with BGP route propagation disabled
+Create a route table, with BGP route propagation disabled
```azurecli-interactive az network route-table create \
az group delete \
## Next steps
-* [Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
+* [Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
firewall Firewall Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-diagnostics.md
Previously updated : 10/22/2021 Last updated : 11/15/2022 #Customer intent: As an administrator, I want to monitor Azure Firewall logs and metrics so that I can track firewall activity.
It can take a few minutes for the data to appear in your logs after you complete
8. Select your subscription. 9. Select **Save**.
+ :::image type="content" source="./media/tutorial-diagnostics/firewall-diagnostic-settings.png" alt-text="Screenshot of Firewall diagnostic settings.":::
## Enable diagnostic logging by using PowerShell Activity logging is automatically enabled for every Resource Manager resource. Diagnostic logging must be enabled to start collecting the data available through those logs.
To enable diagnostic logging with PowerShell, use the following steps:
Name = 'toLogAnalytics' ResourceId = '/subscriptions/<subscriptionId>/resourceGroups/<resource group name>/providers/Microsoft.Network/azureFirewalls/<Firewall name>' WorkspaceId = '/subscriptions/<subscriptionId>/resourceGroups/<resource group name>/providers/microsoft.operationalinsights/workspaces/<workspace name>'
- Enabled = $true
}
- Set-AzDiagnosticSetting @diagSettings
+ New-AzDiagnosticSetting @diagSettings
``` ## Enable diagnostic logging by using the Azure CLI
To enable diagnostic logging with Azure CLI, use the following steps:
az monitor diagnostic-settings create -n 'toLogAnalytics' --resource '/subscriptions/<subscriptionId>/resourceGroups/<resource group name>/providers/Microsoft.Network/azureFirewalls/<Firewall name>' --workspace '/subscriptions/<subscriptionId>/resourceGroups/<resource group name>/providers/microsoft.operationalinsights/workspaces/<workspace name>'
- --logs '[{\"category\":\"AzureFirewallApplicationRule\",\"Enabled\":true}, {\"category\":\"AzureFirewallNetworkRule\",\"Enabled\":true}, {\"category\":\"AzureFirewallDnsProxy\",\"Enabled\":true}]'
- --metrics '[{\"category\": \"AllMetrics\",\"enabled\": true}]'
+ --logs "[{\"category\":\"AzureFirewallApplicationRule\",\"Enabled\":true}, {\"category\":\"AzureFirewallNetworkRule\",\"Enabled\":true}, {\"category\":\"AzureFirewallDnsProxy\",\"Enabled\":true}]"
+ --metrics "[{\"category\": \"AllMetrics\",\"enabled\": true}]"
``` ## View and analyze the activity log
firewall Policy Rule Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-rule-sets.md
Even though you can't delete the default rule collection groups nor modify their
Rule collection groups contain one or multiple rule collections, which can be of type DNAT, network, or application. For example, you can group rules belonging to the same workloads or a VNet in a rule collection group.
-Rule collection groups have a maximum size of 2 MB. If you need more than 2 MB, you can split the rules into multiple rule collection groups. A Firewall Policy created before July 2022 can contain 50 rule collection groups and a Firewall Policy created after July 2022 can contain 100 rule collection groups.
+Rule collection groups have a maximum size of 2 MB. If you need more than 2 MB, you can split the rules into multiple rule collection groups. A Firewall Policy created before July 2022 can contain 50 rule collection groups and a Firewall Policy created after July 2022 can contain 60 rule collection groups.
## Rule collections
frontdoor Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/managed-identity.md
Azure Front Door also supports using managed identities to access Key Vault cert
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > [!NOTE]
-> Once you enable managed identities in Azure Front Door and grant proper permissions to access Key Vault, Azure Front Door will always use managed identities to access Key Vault for customer certificate.
+> Once you enable managed identity in Azure Front Door and grant proper permissions to access Key Vault, Azure Front Door will always use managed identity to access Key Vault for customer certificates. **Make sure you add the managed identity permission to allow access to Key Vault after enabling**. If you fail to complete this step, custom certificate autorotation and adding new certificates will fail without permissions to Key Vault. If you disable managed identity, Azure Front Door falls back to using the originally configured Azure AD app. This isn't the recommended solution.
> > You can grant two types of identities to an Azure Front Door profile: > * A **system-assigned** identity is tied to your service and is deleted if your service is deleted. The service can have only **one** system-assigned identity.
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
see [Tag support for Azure resources](../../../azure-resource-manager/management
The following Resource Provider modes are fully supported:

-- `Microsoft.Kubernetes.Data` for managing your Kubernetes clusters on or off Azure. Definitions
- using this Resource Provider mode use effects _audit_, _deny_, and _disabled_. Use
- of the [EnforceOPAConstraint](./effects.md#enforceopaconstraint) effect is _deprecated_.
+- `Microsoft.Kubernetes.Data` for managing your Kubernetes clusters on or off Azure, and for Azure Policy components that target [Azure Arc-enabled Kubernetes cluster](../../../aks/intro-kubernetes.md) components such as pods, containers, and ingresses. Definitions
+ using this Resource Provider mode use effects _audit_, _deny_, and _disabled_.
- `Microsoft.KeyVault.Data` for managing vaults and certificates in [Azure Key Vault](../../../key-vault/general/overview.md). For more information on these policy definitions, see
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
These effects are currently supported in a policy definition:
- [Manual (preview)](#manual-preview) - [Modify](#modify)
-The following effects are _deprecated_:
+## Interchanging effects
-- [EnforceOPAConstraint](#enforceopaconstraint)
-- [EnforceRegoPolicy](#enforceregopolicy)
+Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is valid for the policy definition. For example, policy definitions with effect **AuditIfNotExists** require additional details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently: **Audit** policies assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies assess a resource's compliance based on a child or extension resource's properties.
-> [!IMPORTANT]
-> In place of the **EnforceOPAConstraint** or **EnforceRegoPolicy** effects, use _audit_ and
-> _deny_ with Resource Provider mode `Microsoft.Kubernetes.Data`. The built-in policy definitions
-> have been updated. When existing policy assignments of these built-in policy definitions are
-> modified, the _effect_ parameter must be changed to a value in the updated _allowedValues_ list.
+Below is some general guidance around interchangeable effects:
+- **Audit**, **Deny**, and either **Modify** or **Append** are often interchangeable.
+- **AuditIfNotExists** and **DeployIfNotExists** are often interchangeable.
+- **Manual** isn't interchangeable.
+- **Disabled** is interchangeable with any effect.
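+
+For example, interchangeable effects are typically exposed as a parameter with an `allowedValues` list, so a single assignment can pick the effect at assignment time. A minimal sketch of that common pattern (the resource type shown is illustrative):
+
+```json
+"parameters": {
+  "effect": {
+    "type": "String",
+    "allowedValues": [ "Audit", "Deny", "Disabled" ],
+    "defaultValue": "Audit"
+  }
+},
+"policyRule": {
+  "if": {
+    "field": "type",
+    "equals": "Microsoft.Storage/storageAccounts"
+  },
+  "then": {
+    "effect": "[parameters('effect')]"
+  }
+}
+```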
## Order of evaluation
definitions as `constraintTemplate` is deprecated.
template. See [Create policy definition from constraint template](../how-to/extension-for-vscode.md) to create a custom definition from an existing
- [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) GateKeeper v3
+ [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) Gatekeeper v3
[constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates). - **constraint** (deprecated) - Can't be used with `templateInfo`.
definitions as `constraintTemplate` is deprecated.
- An _array_ of [Kubernetes namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) to limit policy evaluation to.
- - An empty or missing value causes policy evaluation to include all namespaces, except those
+ - An empty or missing value causes policy evaluation to include all namespaces not
defined in _excludedNamespaces_. - **excludedNamespaces** (required) - An _array_ of
definitions as `constraintTemplate` is deprecated.
template. See [Create policy definition from constraint template](../how-to/extension-for-vscode.md) to create a custom definition from an existing
- [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) GateKeeper v3
+ [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) Gatekeeper v3
[constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates). - **constraint** (optional) - Can't be used with `templateInfo`.
When **enforcementMode** is **Disabled**, resources are still evaluated. Logg
logs, and the policy effect don't occur. For more information, see [policy assignment - enforcement mode](./assignment-structure.md#enforcement-mode).
-## EnforceOPAConstraint
-
-This effect is used with a policy definition _mode_ of `Microsoft.Kubernetes.Data`. It's used to
-pass Gatekeeper v3 admission control rules defined with
-[OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint#opa-constraint-framework)
-to [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) to Kubernetes clusters on Azure.
-
-> [!IMPORTANT]
-> The limited preview policy definitions with **EnforceOPAConstraint** effect and the related
-> **Kubernetes Service** category are _deprecated_. Instead, use the effects _audit_ and _deny_ with
-> Resource Provider mode `Microsoft.Kubernetes.Data`.
-
-### EnforceOPAConstraint evaluation
-
-The Open Policy Agent admission controller evaluates any new request on the cluster in real time.
-Every 15 minutes, a full scan of the cluster is completed and the results reported to Azure Policy.
-
-### EnforceOPAConstraint properties
-
-The **details** property of the EnforceOPAConstraint effect has the subproperties that describe the
-Gatekeeper v3 admission control rule.
--- **constraintTemplate** (required)
- - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The
- template defines the Rego logic, the Constraint schema, and the Constraint parameters that are
- passed via **values** from Azure Policy.
-- **constraint** (required)
- - The CRD implementation of the Constraint template. Uses parameters passed via **values** as
- `{{ .Values.<valuename> }}`. In the following example, these values are `{{ .Values.cpuLimit }}`
- and `{{ .Values.memoryLimit }}`.
-- **values** (optional)
- - Defines any parameters and values to pass to the Constraint. Each value must exist in the
- Constraint template CRD.
-
-### EnforceOPAConstraint example
-
-Example: Gatekeeper v3 admission control rule to set container CPU and memory resource limits in
-Kubernetes.
-
-```json
-"if": {
- "allOf": [
- {
- "field": "type",
- "in": [
- "Microsoft.ContainerService/managedClusters",
- "AKS Engine"
- ]
- },
- {
- "field": "location",
- "equals": "westus2"
- }
- ]
-},
-"then": {
- "effect": "enforceOPAConstraint",
- "details": {
- "constraintTemplate": "https://raw.githubusercontent.com/Azure/azure-policy/master/built-in-references/Kubernetes/container-resource-limits/template.yaml",
- "constraint": "https://raw.githubusercontent.com/Azure/azure-policy/master/built-in-references/Kubernetes/container-resource-limits/constraint.yaml",
- "values": {
- "cpuLimit": "[parameters('cpuLimit')]",
- "memoryLimit": "[parameters('memoryLimit')]"
- }
- }
-}
-```
-
-## EnforceRegoPolicy
-
-This effect is used with a policy definition _mode_ of `Microsoft.ContainerService.Data`. It's used
-to pass Gatekeeper v2 admission control rules defined with
-[Rego](https://www.openpolicyagent.org/docs/latest/policy-language/#what-is-rego) to
-[Open Policy Agent](https://www.openpolicyagent.org/) (OPA) on
-[Azure Kubernetes Service](../../../aks/intro-kubernetes.md).
-
-> [!IMPORTANT]
-> The limited preview policy definitions with **EnforceRegoPolicy** effect and the related
-> **Kubernetes Service** category are _deprecated_. Instead, use the effects _audit_ and _deny_ with
-> Resource Provider mode `Microsoft.Kubernetes.Data`.
-
-### EnforceRegoPolicy evaluation
-
-The Open Policy Agent admission controller evaluates any new request on the cluster in real time.
-Every 15 minutes, a full scan of the cluster is completed and the results reported to Azure Policy.
-
-### EnforceRegoPolicy properties
-
-The **details** property of the EnforceRegoPolicy effect has the subproperties that describe the
-Gatekeeper v2 admission control rule.
--- **policyId** (required)
- - A unique name passed as a parameter to the Rego admission control rule.
-- **policy** (required)
- - Specifies the URI of the Rego admission control rule.
-- **policyParameters** (optional)
- - Defines any parameters and values to pass to the rego policy.
-
-### EnforceRegoPolicy example
-
-Example: Gatekeeper v2 admission control rule to allow only the specified container images in AKS.
-
-```json
-"if": {
- "allOf": [
- {
- "field": "type",
- "equals": "Microsoft.ContainerService/managedClusters"
- },
- {
- "field": "location",
- "equals": "westus2"
- }
- ]
-},
-"then": {
- "effect": "EnforceRegoPolicy",
- "details": {
- "policyId": "ContainerAllowedImages",
- "policy": "https://raw.githubusercontent.com/Azure/azure-policy/master/built-in-references/KubernetesService/container-allowed-images/limited-preview/gatekeeperpolicy.rego",
- "policyParameters": {
- "allowedContainerImagesRegex": "[parameters('allowedContainerImagesRegex')]"
- }
- }
-}
-```
- ## Manual (preview) The new `manual` (preview) effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope which defines the boundary of resources whose compliance need attesting.
The following operations are supported by Modify:
- Add, replace, or remove resource tags. For tags, a Modify policy should have `mode` set to _Indexed_ unless the target resource is a resource group. - Add or replace the value of managed identity type (`identity.type`) of virtual machines and
- virtual machine scale sets.
+ Virtual Machine Scale Sets.
- Add or replace the values of certain aliases. - Use `Get-AzPolicyAlias | Select-Object -ExpandProperty 'Aliases' | Where-Object { $_.DefaultMetadata.Attributes -eq 'Modifiable' }`
needed for remediation and the **operations** used to add, update, or remove tag
- Determines which policy definition "wins" if more than one policy definition modifies the same property or when the Modify operation doesn't work on the specified alias. - For new or updated resources, the policy definition with _deny_ takes precedence. Policy
- definitions with _audit_ skip all **operations**. If more than one policy definition has
+ definitions with _audit_ skip all **operations**. If more than one policy definition has the effect
_deny_, the request is denied as a conflict. If all policy definitions have _audit_, then none of the **operations** of the conflicting policy definitions are processed.
- - For existing resources, if more than one policy definition has _deny_, the compliance status
- is _Conflict_. If one or fewer policy definitions have _deny_, each assignment returns a
+ - For existing resources, if more than one policy definition has the effect _deny_, the compliance status
+ is _Conflict_. If one or fewer policy definitions have the effect _deny_, each assignment returns a
compliance status of _Non-compliant_. - Available values: _audit_, _deny_, _disabled_. - Default value is _deny_.
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
# What is applicability in Azure Policy?
-When a policy definition is assigned to a scope, Azure Policy determines which resources in that scope should be considered for compliance evaluation. A resource will only be assessed for compliance if it is considered **applicable** to the given policy assignment.
+When a policy definition is assigned to a scope, Azure Policy determines which resources in that scope should be considered for compliance evaluation. A resource will only be assessed for compliance if it's considered **applicable** to the given policy assignment.
Applicability is determined by several factors: - **Conditions** in the `if` block of the [policy rule](../concepts/definition-structure.md#policy-rule).
Condition(s) in the `if` block of the policy rule are evaluated for applicabilit
> [!NOTE] > Applicability is different from compliance, and the logic used to determine each is different. If a resource is **applicable**, that means it's relevant to the policy. If a resource is **compliant**, that means it adheres to the policy. Sometimes only certain conditions from the policy rule impact applicability, while all conditions of the policy rule impact compliance state.
-## Applicability logic for Append/Modify/Audit/Deny/RP Mode specific effects
+## Applicability logic for Resource Manager modes
+
+### Append, Audit, Manual, Modify, and Deny policy effects
Azure Policy evaluates only `type`, `name`, and `kind` conditions in the policy rule `if` expression and treats other conditions as true (or false when negated). If the final evaluation result is true, the policy is applicable. Otherwise, it's not applicable.
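As an illustration, consider a rule that combines a `type` condition with a tag condition (the values here are made up). For applicability, only the `type` condition is evaluated and the tag condition is treated as true, so the policy is applicable to every storage account in scope; the full rule is then used to evaluate compliance:

```json
"if": {
  "allOf": [
    {
      "field": "type",
      "equals": "Microsoft.Storage/storageAccounts"
    },
    {
      "field": "tags['env']",
      "equals": "prod"
    }
  ]
},
"then": {
  "effect": "audit"
}
```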
Following are special cases to the previously described applicability logic:
|Scenario |Result | |||
-|Any invalid aliases in the `if` conditions |The policy is not applicable |
+|Any invalid aliases in the `if` conditions |The policy isn't applicable |
|When the `if` conditions consist of only `kind` conditions |The policy is applicable to all resources | |When the `if` conditions consist of only `name` conditions |The policy is applicable to all resources | |When the `if` conditions consist of only `type` and `kind` or `type` and `name` conditions |Only type conditions are considered when deciding applicability |
-|When any conditions (including deployment parameters) include a `location` condition |Will not be applicable to subscriptions |
+|When any conditions (including deployment parameters) include a `location` condition |Won't be applicable to subscriptions |
+
+### AuditIfNotExists and DeployIfNotExists policy effects
+
+The applicability of `AuditIfNotExists` and `DeployIfNotExists` policies is based on the entire `if` condition of the policy rule. When the `if` evaluates to false, the policy isn't applicable.
+
+## Applicability logic for resource provider modes
+
+### Microsoft.Kubernetes.Data
+
+The applicability of `Microsoft.Kubernetes.Data` policies is based on the entire `if` condition of the policy rule. When the `if` evaluates to false, the policy isn't applicable.
+
+### Microsoft.KeyVault.Data
+
+Policies with mode `Microsoft.KeyVault.Data` are applicable if the `type` condition of the policy rule evaluates to true. The `type` refers to component type, such as:
+- Microsoft.KeyVault.Data/vaults/certificates
+- Microsoft.KeyVault.Data/vaults/keys
+- Microsoft.KeyVault.Data/vaults/secrets
-## Applicability logic for AuditIfNotExists and DeployIfNotExists policy effects
+### Microsoft.Network.Data
-The applicability of AuditIfNotExists and DeployIfNotExists policies is based off the entire `if` condition of the policy rule. When the `if` evaluates to false, the policy is not applicable.
+Policies with mode `Microsoft.Network.Data` are applicable if the `type` and `name` conditions of the policy rule evaluate to true. The `type` refers to component type:
+- Microsoft.Network/virtualNetworks
## Next steps
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
One of the largest benefits of Azure Policy is the insight and controls it provides over resources in a subscription or [management group](../../management-groups/overview.md) of subscriptions. This
-control can be exercised in many different ways, such as preventing resources being created in the
-wrong location, enforcing common and consistent tag usage, or auditing existing resources for
+control can be used to prevent resources from being created in the
+wrong location, enforce common and consistent tag usage, or audit existing resources for
appropriate configurations and settings. In all cases, data is generated by Azure Policy to enable you to understand the compliance state of your environment.
operations of the Azure Policy Insights REST API, see
Evaluations of assigned policies and initiatives happen as the result of various events: - A policy or initiative is newly assigned to a scope. It takes around five minutes for the assignment
- to be applied to the defined scope. Once it's applied, the evaluation cycle begins for resources
- within that scope against the newly assigned policy or initiative and depending on the effects
- used by the policy or initiative, resources are marked as compliant, non-compliant, or exempt. A
- large policy or initiative evaluated against a large scope of resources can take time. As such,
+ to be applied to the defined scope; then the evaluation cycle begins for applicable resources against the newly assigned policy or initiative. Depending on the effects
+ used, resources are marked as compliant, non-compliant, exempt, or unknown. A
+ large policy or initiative evaluated against a large scope of resources can take time, so
there's no pre-defined expectation of when the evaluation cycle completes. Once it completes, updated compliance results are available in the portal and SDKs.
to trigger an on-demand evaluation scan from your
[GitHub workflow](https://docs.github.com/actions/configuring-and-managing-workflows/configuring-a-workflow#about-workflows) on one or multiple resources, resource groups, or subscriptions, and gate the workflow based on the compliance state of resources. You can also configure the workflow to run at a scheduled time so
-that you get the latest compliance status at a convenient time. Optionally, this GitHub Actions can
+that you get the latest compliance status at a convenient time. Optionally, GitHub Actions can
generate a report on the compliance state of scanned resources for further analysis or for archiving.
For details and steps, see
## How compliance works
-In an assignment, a resource is **Non-compliant** if it doesn't follow policy or initiative rules
-and isn't _exempt_. The following table shows how different policy effects work with the condition
-evaluation for the resulting compliance state:
+When initiative or policy definitions are assigned and evaluated, resulting compliance states are determined based on conditions in the policy rule and resources' adherence to those requirements.
+
+Azure Policy supports the following compliance states:
+- Non-compliant
+- Compliant
+- Conflict
+- Exempted
+- Unknown (preview)
+
+### Compliant and non-compliant states
+
+In an assignment, a resource is **non-compliant** if it's applicable to the policy assignment and doesn't adhere to conditions in the policy rule. The following table shows how different policy effects work with the condition evaluation for the resulting compliance state:
| Resource State | Effect | Policy Evaluation | Compliance State | | | | | |
evaluation for the resulting compliance state:
> existence condition to be FALSE to be non-compliant. When TRUE, the IF condition triggers > evaluation of the existence condition for the related resources.
+#### Example
+ For example, assume that you have a resource group - ContosoRG, with some storage accounts (highlighted in red) that are exposed to public networks.
For example, assume that you have a resource group - ContosoRG, with some storage
In this example, you need to be wary of security risks. Now that you've created a policy assignment, it's evaluated for all included and non-exempt storage accounts in the ContosoRG resource group. It
-audits the three non-compliant storage accounts, consequently changing their states to
+audits the three non-compliant storage accounts, changing their states to
**Non-compliant.** :::image type="complex" source="../media/getting-compliance-data/resource-group03.png" alt-text="Diagram of storage account compliance in the Contoso R G resource group." border="false"::: Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three now have green checkmarks beneath them, while storage accounts two, four, and five now have red warning signs beneath them. :::image-end:::
+#### Understand non-compliance
+
+When a resource is determined to be **non-compliant**, there are many possible reasons. To determine
+the reason a resource is **non-compliant** or to find the change responsible, see
+[Determine non-compliance](./determine-non-compliance.md).
+
+### Other compliance states
+ Besides **Compliant** and **Non-compliant**, policies and resources have four other states: - **Exempt**: The resource is in scope of an assignment, but has a
Besides **Compliant** and **Non-compliant**, policies and resources have four ot
- **Not registered**: The Azure Policy Resource Provider hasn't been registered or the account logged in doesn't have permission to read compliance data.
-Azure Policy uses the **type**, **name**, or **kind** fields in the definition to determine whether
-a resource is a match. When the resource matches, it's considered applicable and has a status of
-either **Compliant**, **Non-compliant**, or **Exempt**. If either **name** or **kind** is the only
-property in the definition, then all included and non-exempt resources are considered applicable and
-are evaluated.
+Azure Policy relies on several factors to determine whether a resource is considered [applicable](../concepts/policy-applicability.md), then to determine its compliance state.
The compliance percentage is determined by dividing **Compliant**, **Exempt**, and **Unknown** resources by _total
-resources_. _Total resources_ is defined as the sum of the **Compliant**, **Non-compliant**,
+resources_. _Total resources_ include **Compliant**, **Non-compliant**,
**Exempt**, and **Conflicting** resources. The overall compliance numbers are the sum of distinct resources that are **Compliant**, **Exempt**, and **Unknown** divided by the sum of all distinct resources. In the image below, there are 20 distinct resources that are applicable and only one is **Non-compliant**.
The overall resource compliance is 95% (19 out of 20).
> pages in portal are different for enabled initiatives. For more information, see > [Regulatory Compliance](../concepts/regulatory-compliance.md)
+### Compliance rollup
+
+There are several ways to view aggregated compliance results:
+
+| Aggregate scope | Factors determining resulting compliance state |
+| | |
+| Initiative | All policies within |
+| Initiative group or control | All policies within |
+| Policy | All applicable resources |
+| Resource | All applicable policies |
+
+So how is the aggregate compliance state determined if multiple resources or policies have different compliance states themselves? Aggregation ranks the compliance states so that one "wins" over another. The rank order is:
+1. Non-compliant
+1. Compliant
+1. Conflict
+1. Exempted
+1. Unknown (preview)
+
+This means that if both non-compliant and compliant states are present, the rolled-up aggregate is non-compliant, and so on. Let's look at an example.
+
+Assume an initiative contains 10 policies, and a resource is exempt from one policy but compliant to the remaining nine. Because a compliant state has a higher rank than an exempted state, the resource would register as compliant in the rolled-up summary of the initiative. So, a resource will only show as exempt for the entire initiative if it's exempt from, or has unknown compliance to, every other single applicable policy in that initiative. On the other extreme, if the resource is non-compliant to at least one applicable policy in the initiative, it will have an overall compliance state of non-compliant, regardless of the remaining applicable policies.
+ ## Portal The Azure portal showcases a graphical experience of visualizing and understanding the state of
resources for the current assignment. The tab defaults to **Non-compliant**, but
Events (append, audit, deny, deploy, modify) triggered by the request to create a resource are shown under the **Events** tab.
-> [!NOTE]
-> For an AKS Engine policy, the resource shown is the resource group.
- :::image type="content" source="../media/getting-compliance-data/compliance-events.png" alt-text="Screenshot of the Events tab on Compliance Details page." border="false"::: <a name="component-compliance"></a>
history.
Back on the resource compliance page, select and hold (or right-click) on the row of the event you would like to gather more details on and select **Show activity logs**. The activity log page opens and is pre-filtered to the search showing details for the assignment and the events. The activity
-log provides additional context and information about those events.
+log provides more context and information about those events.
:::image type="content" source="../media/getting-compliance-data/compliance-activitylog.png" alt-text="Screenshot of the Activity Log for Azure Policy activities and evaluations." border="false":::
-### Understand non-compliance
-
-When a resource is determined to be **non-compliant**, there are many possible reasons. To determine
-the reason a resource is **non-compliant** or to find the change responsible, see
-[Determine non-compliance](./determine-non-compliance.md).
+> [!NOTE]
+> Compliance results can be exported from the portal through [Azure Resource Graph queries](../samples/resource-graph-samples.md).
## Command line
Use ARMClient or a similar tool to handle authentication to Azure for the REST A
### Summarize results
-With the REST API, summarization can be performed by container, definition, or assignment. Here is
+With the REST API, summarization can be performed by container, definition, or assignment. Here's
an example of summarization at the subscription level using Azure Policy Insight's [Summarize For Subscription](/rest/api/policy/policystates/summarizeforsubscription):
The output summarizes the subscription. In the example output below, the summari
under **value.results.nonCompliantResources** and **value.results.nonCompliantPolicies**. This request provides further details, including each assignment that made up the non-compliant numbers and the definition information for each assignment. Each policy object in the hierarchy provides a
-**queryResultsUri** that can be used to get additional detail at that level.
+**queryResultsUri** that can be used to get more detail at that level.
```json {
and the definition information for each assignment. Each policy object in the hi
### Query for resources In the example above, **value.policyAssignments.policyDefinitions.results.queryResultsUri** provides
-a sample URI for all non-compliant resources for a specific policy definition. Looking at the
+a sample URI for all non-compliant resources for a specific policy definition. In the
**$filter** value, ComplianceState is equal (eq) to 'NonCompliant', PolicyAssignmentId is specified for the policy definition, and then the PolicyDefinitionId itself. The reason for including the PolicyAssignmentId in the filter is because the PolicyDefinitionId could exist in several policy or
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Hive metastore operation takes much time and thus slow down Hive compilation. In
## Troubleshooting guide
-[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](/azure/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
+[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](./interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
## References
https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/
## Further reading
-* [HDInsight 4.0 Announcement](/azure/hdinsight/hdinsight-version-release)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
+* [HDInsight 4.0 Announcement](./hdinsight-version-release.md)
+* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
hdinsight Hdinsight Overview Before You Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md
HDInsight has two options to configure the databases in the clusters.
During cluster creation, the default configuration uses the internal database. Once the cluster is created, customers can't change the database type. Hence, it's recommended to create and use an external database. You can create custom databases for Ambari, Hive, and Ranger.
-For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](/azure/hdinsight/hdinsight-custom-ambari-db)
+For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](./hdinsight-custom-ambari-db.md)
## Keep your clusters up to date
As part of the best practices, we recommend you keep your clusters updated on re
HDInsight releases happen every 30 to 60 days. It's always good to move to the latest release as early as possible. The recommended maximum duration for cluster upgrades is less than six months.
-For more information, see how to [Migrate HDInsight cluster to a newer version](/azure/hdinsight/hdinsight-upgrade-cluster)
+For more information, see how to [Migrate HDInsight cluster to a newer version](./hdinsight-upgrade-cluster.md)
## Next steps * [Create Apache Hadoop cluster in HDInsight](./hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md) * [Create Apache Spark cluster - Portal](./spark/apache-spark-jupyter-spark-sql-use-portal.md)
-* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
+* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
hdinsight Secure Spark Kafka Streaming Integration Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/secure-spark-kafka-streaming-integration-scenario.md
In this document, you'll learn how to execute a Spark job in a secure Spark clus
**Pre-requisites** * Create a secure Kafka cluster and a secure Spark cluster with the same Microsoft Azure Active Directory Domain Services (Azure AD DS) domain and the same virtual network. If you prefer not to create both clusters in the same virtual network, you can create them in two separate virtual networks and peer the networks.
-* If your clusters are in different vnets, see here [Connect virtual networks with virtual network peering using the Azure portal](/azure/virtual-network/tutorial-connect-virtual-networks-portal)
+* If your clusters are in different vnets, see here [Connect virtual networks with virtual network peering using the Azure portal](../../virtual-network/tutorial-connect-virtual-networks-portal.md)
* Create keytabs for two users. For example, `alicetest` and `bobadmin`. ## What is a keytab?
From Spark cluster, read from Kafka topic `bobtopic2` as user `bobadmin` is allo
## Next steps
-* [Set up TLS encryption and authentication for Apache Kafka in Azure HDInsight](apache-kafka-ssl-encryption-authentication.md)
+* [Set up TLS encryption and authentication for Apache Kafka in Azure HDInsight](apache-kafka-ssl-encryption-authentication.md)
healthcare-apis Events Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-use-metrics.md
In this article, you'll learn how to use Events metrics in the Azure portal. > [!TIP]
-> To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](/azure/azure-monitor/essentials/data-platform-metrics)]
+> To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md)
> [!NOTE] > For the purposes of this article, an Azure Event Hubs event hub was used as the Events message endpoint.
To learn how to export Events Azure Event Grid system diagnostic logs and metric
> [!div class="nextstepaction"] > [Enable diagnostic settings for Events](events-enable-diagnostic-settings.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Last updated 11/10/2022
# SMART on FHIR
-[SMART on FHIR](https://docs.smarthealthit.org/) is a set of open specifications to integrate partner applications with FHIR servers and electronic medical records systems that have Fast Healthcare Interoperability Resources (FHIR&#174;) interfaces. One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence.
+Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer, based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
+- Applications have a known method for obtaining authentication/authorization to a FHIR repository.
+- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository.
+- Users can grant applications access to a further limited set of their data by using SMART clinical scopes.
-Authentication is based on OAuth2. But because SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
+<!-- SMART Implementation Guide v1.0.0 is supported by Azure API for FHIR and Azure API Management (APIM). This is our recommended approach, as it enables Health IT developers to comply with 21st Century Cures Act Criterion §170.315(g)(10) Standardized API for patient and population services.
-This tutorial describes how to use the proxy to enable SMART on FHIR applications with FHIR Service.
+The sample demonstrates and lists steps that can be referenced to pass ONC G(10) with the Inferno test suite.
-## Prerequisites
-- An instance of the FHIR Service
-- [.NET Core 2.2](https://dotnet.microsoft.com/download/dotnet-core/2.2)
+-->
-## Configure Azure AD registrations
+One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for a FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), so the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
-SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the FHIR service uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.fhir.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
+The tutorial below describes the steps to enable SMART on FHIR applications with the FHIR service.
-You'll also need a client application registration. Most SMART on FHIR applications are single-page JavaScript applications. So you should follow the instructions for configuring a [public client application in Azure AD](register-public-azure-ad-client-app.md).
-
-After you complete these steps, you should have:
-- A FHIR server with the audience set to `https://MYFHIRAPI.fhir.azurehealthcareapis.com`, where `MYFHIRAPI` is the name of your FHIR service instance.
-- A public client application registration. Make a note of the application ID for this client application.
+## Prerequisites
-### Set admin consent for your app
+- An instance of the FHIR Service
+- .NET SDK 6.0
+- [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)
+- [Register public client application in Azure AD](register-public-azure-ad-client-app.md)
+  - After registering the application, make a note of the application ID for the client application.
+
+<!-- Tutorial: To enable SMART on FHIR using APIM, follow the steps below.
+Step 1: Set up the FHIR SMART user role.
+Follow the steps listed under the section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role will be able to access the FHIR Service if their requests comply with the SMART on FHIR Implementation Guide, such as requests having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role will then be limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
+
+Step 2: Deploy the necessary components to set up the FHIR server integrated with APIM in production. Follow the ReadMe.
+Step 3: Load US Core profiles.
+Step 4: Create the Azure AD custom policy using this README. -->
+
+Let's go over the individual steps to enable SMART on FHIR.
+## Step 1: Set admin consent for your client application
To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
To add yourself or another user as owner of an app:
3. Search for the app registration you created, and then select it. 4. In the left menu, under **Manage**, select **Owners**. 5. Select **Add owners**, and then add yourself or the user you want to have admin consent.
-6. Select **Save**.
+6. Select **Save**.
+
+
+## Step 2: Configure Azure AD registrations
+
+SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the FHIR service uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.fhir.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-## Enable the SMART on FHIR proxy
+## Step 3: Enable the SMART on FHIR proxy
Enable the SMART on FHIR proxy in the **Authentication** settings for your FHIR instance by selecting the **SMART on FHIR proxy** check box.
-Enable CORS : Because most SMART on FHIR applications are single-page JavaScript apps, you need to [enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)
-Configure the reply URL: The SMART on FHIR proxy acts as an intermediary between the SMART on FHIR app and Azure AD. The authentication reply (the authentication code) must go to the SMART on FHIR proxy instead of the app itself. The proxy then forwards the reply to the app.
+
+The SMART on FHIR proxy acts as an intermediary between the SMART on FHIR app and Azure AD. The authentication reply (the authentication code) must go to the SMART on FHIR proxy instead of the app itself. The proxy then forwards the reply to the app.
Because of this two-step relay of the authentication code, you need to set the reply URL (callback) for your Azure AD client application to a URL that is a combination of the reply URL for the SMART on FHIR proxy and the reply URL for the SMART on FHIR app. The combined reply URL takes this form:
Add the reply URL to the public client application that you created earlier for
<!![Reply URL configured for the public client](media/tutorial-smart-on-fhir/configure-reply-url.png)>
-## Get a test patient
+
+## Step 4: Get a test patient
To test the FHIR service and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
-## Download the SMART on FHIR app launcher
+## Step 5: Download the SMART on FHIR app launcher
The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
Use this command to run the application:
```
dotnet run
```
-## Test the SMART on FHIR proxy
+## Step 6: Test the SMART on FHIR proxy
After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Azure Health Data Services includes support for the MedTech service. The MedTech
**FHIR service**
-Azure Health Data Services includes support for FHIR service. The FHIR service enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. For more information about MedTech, see [Overview of FHIR](../healthcare-apis/fhir/overview.md).
+Azure Health Data Services includes support for FHIR service. The FHIR service enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. For more information about FHIR, see [Overview of FHIR](../healthcare-apis/fhir/overview.md).
For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in Azure API for FHIR.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
Previously updated : 11/14/2022 Last updated : 11/16/2022 # Tutorial: Receive device data through Azure IoT Hub
-The MedTech service may be used with devices created and managed through an [Azure IoT Hub](/azure/iot-hub/iot-concepts-and-iot-hub) for enhanced workflows and ease of use. This tutorial uses an Azure Resource Manager (ARM) template and a **Deploy to Azure** button to deploy and configure a MedTech service using an Azure IoT Hub for device creation, management, and routing of device messages to the device message event hub. The ARM template used in this article is available from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Healthcareapis) site using the **azuredeploy.json** file located on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub/azuredeploy.json).
+The MedTech service may be used with devices created and managed through an [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md) for enhanced workflows and ease of use. This tutorial uses an Azure Resource Manager (ARM) template and a **Deploy to Azure** button to deploy a MedTech service using an Azure IoT Hub for device creation, management, and routing of device messages to the MedTech service device message event hub. The ARM template used in this article is available from the [Azure Quickstart Templates](/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
> [!TIP]
-> For more information about ARM templates, see [What are ARM templates?](/azure/azure-resource-manager/templates/overview)
+> For more information about using Azure PowerShell and CLI to deploy MedTech service ARM templates, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates](deploy-08-new-ps-cli.md).
+>
+> For more information about ARM templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md)
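As a hedged alternative to the **Deploy to Azure** button, the same template can be deployed with the Azure CLI. The raw template URI below is inferred from the GitHub path above and should be verified; the resource group name and location are placeholders.

```bash
# Sketch: deploy the quickstart ARM template with the Azure CLI.
az group create --name <your-resource-group> --location eastus

# Template URI inferred from the GitHub repository path; verify before use.
az deployment group create \
  --resource-group <your-resource-group> \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub/azuredeploy.json"
```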
-Below is a diagram of the IoT device message flow when using an IoT Hub with the MedTech service. As you can see, devices send their messages to the IoT Hub, which then routes the device messages to the device message event hub to be picked up by the MedTech service. The MedTech service will then transform the device messages and persist them into the Fast Healthcare Interoperability Resources (FHIR&#174;) service as FHIR Observations. To learn more about the MedTech service data flow, see [MedTech service data flow](iot-data-flow.md)
+Below is a diagram of the IoT device message flow when using an IoT Hub with the MedTech service. As you can see, devices send their messages to the IoT Hub, which then routes the device messages to the device message event hub to be picked up by the MedTech service. The MedTech service will then transform the device messages and persist them into the Fast Healthcare Interoperability Resources (FHIR&#174;) service as FHIR Observations. To learn more about the MedTech service data flow, see [MedTech service data flow](iot-data-flow.md).
:::image type="content" source="media\iot-hub-to-iot-connector\iot-hub-to-iot-connector.png" alt-text="Diagram of IoT message data flow through IoT Hub into the MedTech service." lightbox="media\iot-hub-to-iot-connector\iot-hub-to-iot-connector.png":::
In order to begin the deployment and complete this tutorial, you'll need to have
- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). -- **Owner** or **Contributor + User Access Administrator** access to the Azure subscription. For more information about Azure role-based access control, see [What is Azure role-based access control?](/azure/role-based-access-control/overview).
+- **Owner** or **Contributor + User Access Administrator** access to the Azure subscription. For more information about Azure role-based access control, see [What is Azure role-based access control?](../../role-based-access-control/overview.md).
-- These resource providers registered with your Azure subscription: **Microsoft.HealthcareApis**, **Microsoft.EventHub**, and **Microsoft.Devices**. To learn more about registering resource providers, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types).
+- These resource providers registered with your Azure subscription: **Microsoft.HealthcareApis**, **Microsoft.EventHub**, and **Microsoft.Devices**. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). A CLI sketch for registering these providers appears after this list.
- [Visual Studio Code (VSCode)](https://code.visualstudio.com/Download) installed locally and configured with the addition of the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools). The **Azure IoT Tools** are a collection of extensions that makes it easy to connect to IoT Hubs, create devices, and send messages. For the purposes of this tutorial, we'll be using the **Azure IoT Hub extension** to connect to your deployed IoT Hub, create a device, and send a test message from the device to your IoT Hub.
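Here is the CLI sketch referenced in the resource provider prerequisite; it uses only the provider namespaces named above.

```bash
# Sketch: register the resource providers required by this tutorial.
az provider register --namespace Microsoft.HealthcareApis
az provider register --namespace Microsoft.EventHub
az provider register --namespace Microsoft.Devices

# Registration is asynchronous; poll until the state reports "Registered".
az provider show --namespace Microsoft.HealthcareApis --query registrationState
```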
When you've fulfilled these prerequisites, you're ready to use the **Deploy to A
> [!IMPORTANT] > For this tutorial, the ARM template will configure the MedTech service to operate in **Create** mode so that a Patient Resource and Device Resource are created for each device that sends data to your FHIR service. >
- > To learn more about the MedTech service resolution types: **Create** and **Lookup**, see: [Destination properties](/azure/healthcare-apis/iot/deploy-05-new-config#destination-properties)
+ > To learn more about the MedTech service resolution types: **Create** and **Lookup**, see: [Destination properties](deploy-05-new-config.md#destination-properties).
3. Select the **Review + create** button after all the option fields are correctly filled out. This selection will review your option inputs and check to see if all your supplied values are valid.
When you've fulfilled these prerequisites, you're ready to use the **Deploy to A
Once the deployment has completed, the following resources and access roles will be created as part of the template deployment: -- An Azure Event Hubs Namespace and device message Azure event hub. In this example, the event hub is named **devicedata**.
+- An Azure Event Hubs Namespace and device message Azure event hub. In this deployment, the event hub is named **devicedata**.
-- An Azure event hub consumer group. In this example, the consumer group is named **$Default**.
+- An Azure event hub consumer group. In this deployment, the consumer group is named **$Default**.
-- An Azure event hub sender role. In this example, the sender role is named **devicedatasender**.
+- An Azure event hub sender role. In this deployment, the sender role is named **devicedatasender**. For the purposes of this tutorial, this role won't be used. To learn more about the role and its use, see [Review of deployed resources and access permissions](deploy-02-new-button.md#required-post-deployment-tasks).
-- An Azure IoT Hub with [messaging routing](/azure/iot-hub/iot-hub-devguide-messages-d2c) configured to send device messages to the device message event hub.
+- An Azure IoT Hub with [messaging routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) configured to send device messages to the device message event hub.
-- A [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) that provides send access from the IoT Hub to the device message event hub (**Event Hubs Data Sender** role within the [Access control section (IAM)](/azure/role-based-access-control/overview) of the device message event hub).
+- A [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) that provides send access from the IoT Hub to the device message event hub (**Event Hubs Data Sender** role within the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub).
- An Azure Health Data Services workspace. - An Azure Health Data Services FHIR service. -- An Azure Health Data Services MedTech service instance, including the necessary [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) roles to the device message event hub (**Azure Events Hubs Receiver** role within the [Access control section (IAM)](/azure/role-based-access-control/overview) of the device message event hub) and FHIR service (**FHIR Data Writer** role within the [Access control section (IAM)](/azure/role-based-access-control/overview) of the FHIR service).
+- An Azure Health Data Services MedTech service instance, including the necessary [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles to the device message event hub (**Azure Events Hubs Receiver** role within the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub) and FHIR service (**FHIR Data Writer** role within the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service).
> [!TIP] > For detailed step-by-step instructions on how to manually deploy the MedTech service, see [How to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md).
Now that your deployment has successfully completed, we'll connect to your IoT H
> After the test message is sent, it may take up to five minutes for the FHIR resources to be present in the FHIR service. > [!IMPORTANT]
- > To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](/azure/iot-hub/iot-hub-devguide-messages-construct#anti-spoofing-properties).
+ > To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties).
> > To learn more about IotJsonPathContentTemplate mappings usage with the MedTech service device mappings, see [How to use IotJsonPathContentTemplate mappings](how-to-use-iot-jsonpath-content-mappings.md).
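As an alternative to the VSCode **Azure IoT Hub extension** flow described above, a hedged CLI sketch can create the device and send a test message. It requires the `azure-iot` extension; the hub name, device ID, and payload are placeholders, and the payload shape must match your device mappings.

```bash
# Sketch: create a device identity, then send one device-to-cloud test message.
# Requires: az extension add --name azure-iot
az iot hub device-identity create --hub-name <your-iot-hub> --device-id test-device-001

# The payload below is a placeholder; align it with your device mappings.
az iot device send-d2c-message \
  --hub-name <your-iot-hub> \
  --device-id test-device-001 \
  --data '{"heartRate": 78}'
```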
For your MedTech service metrics, you can see that your MedTech service perf
If you provided your own Azure AD user object ID as the optional **Fhir Contributor Principal ID** when deploying this tutorial's template, then you have access to query FHIR resources in your FHIR service.
-Use this tutorial: [Access using Postman](/azure/healthcare-apis/fhir/use-postman) to get an Azure AD access token and view FHIR resources in your FHIR service.
+Use this tutorial: [Access using Postman](../fhir/use-postman.md) to get an Azure AD access token and view FHIR resources in your FHIR service.
## Next steps
-In this tutorial, you deployed a Quickstart ARM template in the Azure portal, connected to your Azure IoT Hub, created a device, and sent a test message to your MedTech service.
+In this tutorial, you deployed an ARM template in the Azure portal, connected to your Azure IoT Hub, created a device, and sent a test message to your MedTech service.
To learn about how to use device mappings, see
To learn more about FHIR destination mappings, see
> [!div class="nextstepaction"] > [How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
Metric category|Metric name|Metric description|
> To learn how to create an Azure portal dashboard and pin tiles, see [How to create an Azure portal dashboard and pin tiles](how-to-configure-metrics.md#how-to-create-an-azure-portal-dashboard-and-pin-tiles) > [!TIP]
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)
## How to create an Azure portal dashboard and pin tiles
-To learn how to create an Azure portal dashboard and pin tiles, see [Create a dashboard in the Azure portal](/azure/azure-portal/azure-portal-dashboards)
+To learn how to create an Azure portal dashboard and pin tiles, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md)
## Next steps
To learn how to enable the MedTech service diagnostic settings to export logs an
> [!div class="nextstepaction"] > [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
# How to enable diagnostic settings for the MedTech service
-In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs to different destinations (for example: to [Azure storage](/azure/storage/) or an [Azure event hub](/azure/event-hubs/)) for audit, analysis, or backup.
+In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs to different destinations (for example: to [Azure storage](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, or backup.
## Create a diagnostic setting for the MedTech service 1. To enable metrics export for your MedTech service, select **MedTech service** in your workspace under **Services**.
In this article, you'll learn how to enable the diagnostic settings for the MedT
:::image type="content" source="media/iot-diagnostic-settings/view-and-edit-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings options." lightbox="media/iot-diagnostic-settings/view-and-edit-diagnostic-settings.png"::: > [!TIP]
- > For more information about how to work with diagnostic settings, see [Diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal).
+ > For more information about how to work with diagnostic settings, see [Diagnostic settings in Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md?tabs=portal).
> > For more information about how to work with diagnostic logs, see the [Overview of Azure platform logs](../../azure-monitor/essentials/platform-logs-overview.md).
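Diagnostic settings can also be created without the portal. A hedged CLI sketch follows; the resource ID and workspace ID are placeholders, and the log category name should be taken from the `categories list` output rather than assumed.

```bash
# Sketch: route MedTech service logs to a Log Analytics workspace.
# First list the valid log categories for the resource.
az monitor diagnostic-settings categories list --resource <medtech-service-resource-id>

az monitor diagnostic-settings create \
  --name medtech-logs \
  --resource <medtech-service-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category": "<category-from-list-output>", "enabled": true}]'
```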
healthcare-apis How To Use Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-device-mappings.md
Title: How to configure device mappings in MedTech service - Azure Health Data Services
-description: This article provides an overview and describes how to configure the MedTech service device mappings within the Azure Health Data Services.
+description: This article describes how to configure device mappings in the Azure Health Data Services MedTech service.
Previously updated : 11/08/2022 Last updated : 11/15/2022
You can define one or more templates within the MedTech service device mapping.
## Next steps
-In this article, you learned how to use device mappings. To learn how to use FHIR destination mappings, see
+In this article, you learned how to configure device mappings. To learn how to configure FHIR destination mappings, see
> [!div class="nextstepaction"]
-> [How to use the FHIR destination mappings](how-to-use-fhir-mappings.md)
+> [How to configure FHIR destination mappings](how-to-use-fhir-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Fhir Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-fhir-mappings.md
Title: FHIR destination mappings in the MedTech service - Azure Health Data Services
-description: This article describes how to configure and use the FHIR destination mappings in Azure Health Data Services MedTech service.
+description: This article describes how to configure FHIR destination mappings in Azure Health Data Services MedTech service.
Previously updated : 10/25/2022 Last updated : 11/15/2022
-# How to use the FHIR destination mappings
+# How to configure FHIR destination mappings
This article describes how to configure the MedTech service using the Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings.
Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConc
## Next steps
-In this article, you learned how to use FHIR destination mappings. To learn how to use device mappings, see
+In this article, you learned how to configure FHIR destination mappings. To learn how to configure device mappings, see
> [!div class="nextstepaction"]
-> [How to use device mappings](how-to-use-device-mappings.md)
+> [How to configure device mappings](how-to-use-device-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-central Howto Export To Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-azure-data-explorer.md
To create the Azure Data Explorer destination in IoT Central on the **Data expor
[!INCLUDE [iot-central-data-export-device-template](../../../includes/iot-central-data-export-device-template.md)] + ## Next steps Now that you know how to export to Azure Data Explorer, a suggested next step is to learn [Export to Webhook](howto-export-to-webhook.md).
iot-central Howto Export To Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md
The following example shows an exported device lifecycle message received in Azu
} ``` +
+The following example shows an exported audit log message received in Azure Blob Storage:
+
+```json
+{
+ "actor": {
+ "id": "test-audit",
+ "type": "apiToken"
+ },
+ "applicationId": "570c2d7b-1111-2222-abcd-000000000000",
+ "enqueuedTime": "2022-07-25T21:54:40.000Z",
+ "enrichments": {},
+ "messageSource": "audit",
+ "messageType": "created",
+ "resource": {
+ "displayName": "Sensor 1",
+ "id": "sensor",
+ "type": "device"
+ },
+ "schema": "default@v1"
+}
+```
+ ## Next steps Now that you know how to export to Blob Storage, a suggested next step is to learn [Export to Service Bus](howto-export-to-service-bus.md).
iot-central Howto Export To Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-event-hubs.md
To create the Event Hubs destination in IoT Central on the **Data export** page:
[!INCLUDE [iot-central-data-export-device-template](../../../includes/iot-central-data-export-device-template.md)] + For Event Hubs, IoT Central exports new message data to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically. ## Next steps
iot-central Howto Export To Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-service-bus.md
To create the Service Bus destination in IoT Central on the **Data export** page
[!INCLUDE [iot-central-data-export-device-template](../../../includes/iot-central-data-export-device-template.md)] + For Service Bus, IoT Central exports new message data to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically. ## Next steps
iot-central Howto Export To Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-webhook.md
To create the Azure Data Explorer destination in IoT Central on the **Data expor
[!INCLUDE [iot-central-data-export-device-lifecycle](../../../includes/iot-central-data-export-device-lifecycle.md)] +
iot-central Howto Use Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-audit-logs.md
This article describes how to use audit logs to track who made what changes at w
- Filter the audit log. - Customize the audit log. - Manage access to the audit log.
+- Export the audit log records.
The audit log records information about who made a change, information about the modified entity, the action that made the change, and when the change was made. The log tracks changes made through the UI, programmatically with the REST API, and through the CLI.
The built-in **App Administrator** role has access to the audit logs by default.
> [!IMPORTANT] > Any user granted permission to view the audit log can see all log entries even if they don't have permission to view or modify the entities listed in the log. Therefore, any user who can view the log can view the identity of and changes made to any modified entity.
+## Export logs
+
+You can export the audit log records to various destinations for long-term storage, detailed analysis, or integration with other logs. For more information, see [Export IoT data](howto-export-to-event-hubs.md).
+
+To send audit logs to [Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md), use IoT Central data export to send the audit logs to Event Hubs, and then use an Azure Function to add the audit log data to Log Analytics.
+ ## Next steps Now that you've learned how to manage users and roles in your IoT Central application, the suggested next step is to learn how to [Manage IoT Central organizations](howto-create-organizations.md).
iot-edge Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/gpu-acceleration.md
# GPU acceleration for Azure IoT Edge for Linux on Windows GPUs are a popular choice for artificial intelligence computations, because they offer parallel processing capabilities and can often execute vision-based inferencing faster than CPUs. To better support artificial intelligence and machine learning applications, Azure IoT Edge for Linux on Windows (EFLOW) can expose a GPU to the virtual machine's Linux module.
iot-edge How To Access Dtpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-dtpm.md
# dTPM access for Azure IoT Edge for Linux on Windows A Trusted Platform Module (TPM) chip is a secure crypto-processor that is designed to carry out cryptographic operations. This technology is designed to provide hardware-based, security-related functions. The Azure IoT Edge for Linux on Windows (EFLOW) virtual machine doesn't have a virtual TPM attached to the VM. However, you can enable or disable the TPM passthrough feature, which allows the EFLOW virtual machine to use the Windows host OS TPM. The TPM passthrough feature enables two main scenarios:
iot-edge How To Configure Iot Edge For Linux On Windows Iiot Dmz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz.md
# How to configure Azure IoT Edge for Linux on Windows Industrial IoT & DMZ configuration This article describes how to configure the Azure IoT Edge for Linux (EFLOW) VM to support multiple network interface cards (NICs) and connect to multiple networks. By enabling multiple NIC support, applications running on the EFLOW VM can communicate with devices connected to the offline network, and at the same time, use IoT Edge to send data to the cloud.
iot-edge How To Configure Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-networking.md
# Networking configuration for Azure IoT Edge for Linux on Windows This article will help you decide which networking option is best for your scenario and provide insights into IoT Edge for Linux on Windows (EFLOW) configuration requirements.
iot-edge How To Configure Multiple Nics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-multiple-nics.md
# Azure IoT Edge for Linux on Windows virtual multiple NIC configurations By default, the Azure IoT Edge for Linux on Windows (EFLOW) virtual machine has a single network interface card (NIC) assigned. However, you can configure the EFLOW VM with multiple network interfaces by using the EFLOW support for attaching multiple network interfaces to the virtual machine. This functionality may be helpful in scenarios where your networking is divided into separate networks or zones. To connect the EFLOW virtual machine to these different networks, you may need to attach additional network interface cards to it.
iot-edge How To Connect Usb Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-usb-devices.md
# How to connect a USB device to Azure IoT Edge for Linux on Windows + In some scenarios, your workloads need to get data or communicate with USB devices. Because Azure IoT Edge for Linux on Windows (EFLOW) runs as a virtual machine, you need to connect these devices to the virtual machine. This article guides you through the steps necessary to connect a USB device to the EFLOW virtual machine using the USB/IP open-source project named [usbipd-win](https://github.com/dorssel/usbipd-win). Setting up the USB/IP project on your Windows machine enables common developer USB scenarios like flashing an Arduino, connecting a USB serial device, or accessing a smartcard reader directly from the EFLOW virtual machine.
iot-edge How To Create Virtual Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-virtual-switch.md
# Azure IoT Edge for Linux on Windows virtual switch creation Azure IoT Edge for Linux on Windows uses a virtual switch on the host machine to communicate with the virtual machine. Windows desktop versions come with a default switch that can be used, but Windows Server *doesn't*. Before you can deploy IoT Edge for Linux on Windows to a Windows Server device, you need to create a virtual switch. Furthermore, you can use this guide to create your custom virtual switch, if needed.
The switch is now created. Next, you'll set up the DNS.
## Create DHCP Server
+>[!NOTE]
+> You can continue the installation without a DHCP server as long as the EFLOW VM is deployed using static IP parameters (`ip4Address`, `ip4GatewayAddress`, `ip4PrefixLength`). If dynamic IP allocation will be used, be sure to continue with the DHCP server installation.
+ >[!WARNING] >Authorization might be required to deploy a DHCP server in a corporate network environment. Check if the virtual switch configuration complies with your corporate network's policies. For more information, see [Deploy DHCP Using Windows PowerShell](/windows-server/networking/technologies/dhcp/dhcp-deploy-wps).
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
MIICdTCCAhugAwIBAgIBMDAKBggqhkjOPQQDAjAXMRUwEwYDVQQDDAxlc3RFeGFt
``` > [!TIP]
-> To test without access to certificate files provided by a PKI, see [Create demo certificates to test device features](/azure/iot-edge/how-to-create-test-certificates) to generate a short-lived non-production device identity certificate and private key.
+> To test without access to certificate files provided by a PKI, see [Create demo certificates to test device features](./how-to-create-test-certificates.md) to generate a short-lived non-production device identity certificate and private key.
Configuration example when provisioning with IoT Hub:
The following table lists what each option in `auto_renew` does:
### Example: renew device identity certificate automatically with EST
-To use EST and IoT Edge for automatic device identity certificate issuance and renewal, which is recommended for production, IoT Edge must provision as part of a [DPS CA-based enrollment group](/azure/iot-edge/how-to-provision-devices-at-scale-linux-x509?tabs=group-enrollment%2Cubuntu). For example:
+To use EST and IoT Edge for automatic device identity certificate issuance and renewal, which is recommended for production, IoT Edge must provision as part of a [DPS CA-based enrollment group](./how-to-provision-devices-at-scale-linux-x509.md?tabs=group-enrollment%2cubuntu). For example:
```toml
## DPS provisioning with X.509 certificate
Server certificates may be issued off the Edge CA certificate or through a DPS-c
## Next steps
-Installing certificates on an IoT Edge device is a necessary step before deploying your solution in production. Learn more about how to [Prepare to deploy your IoT Edge solution in production](production-checklist.md).
+Installing certificates on an IoT Edge device is a necessary step before deploying your solution in production. Learn more about how to [Prepare to deploy your IoT Edge solution in production](production-checklist.md).
iot-edge How To Provision Devices At Scale Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-symmetric.md
This article provides end-to-end instructions for autoprovisioning one or more [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) devices using symmetric keys. You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml) (DPS). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before continuing. <!-- iotedge-2020-11 --> >[!NOTE]
->The latest version of IoT Edge for Linux on Windows continuous release (CR), based on IoT Edge version 1.2, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use once the general availability (GA) release is available. For more information, see [EFLOW continuous release](https://github.com/Azure/iotedge-eflow/wiki/EFLOW-Continuous-Release).
+>The latest version of IoT Edge for Linux on Windows continuous release (CR), based on IoT Edge version 1.3, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use if a general availability (GA) release is available. For more information, see [EFLOW versions](./version-history.md).
:::moniker-end <!-- end iotedge-2020-11 -->
iot-edge How To Provision Devices At Scale Linux On Windows Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-tpm.md
This article provides instructions for autoprovisioning an Azure IoT Edge for Linux on Windows device by using a Trusted Platform Module (TPM). You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before you continue. <!-- iotedge-2020-11 --> >[!NOTE]
->The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.2, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use once the general availability (GA) release is available. For more information, see [EFLOW continuous release](https://github.com/Azure/iotedge-eflow/wiki/EFLOW-Continuous-Release).
+>The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.3, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use if a general availability (GA) release is available. For more information, see [EFLOW versions](./version-history.md).
:::moniker-end <!-- end iotedge-2020-11 -->
iot-edge How To Provision Devices At Scale Linux On Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-x509.md
This article provides end-to-end instructions for autoprovisioning one or more [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) devices using X.509 certificates. You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml) (DPS). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before continuing. <!-- iotedge-2020-11 --> >[!NOTE]
->The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.2, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use once the general availability (GA) release is available. For more information, see [EFLOW continuous release](https://github.com/Azure/iotedge-eflow/wiki/EFLOW-Continuous-Release).
+>The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.3, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use if a general availability (GA) release is available. For more information, see [EFLOW versions](./version-history.md).
:::moniker-end <!-- end iotedge-2020-11 -->
iot-edge How To Provision Single Device Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-on-windows-symmetric.md
This article provides end-to-end instructions for registering and provisioning an IoT Edge for Linux on Windows device. <!-- iotedge-2020-11 --> >[!NOTE]
->The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.2, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use once the general availability (GA) release is available. For more information, see [EFLOW continuous release](https://github.com/Azure/iotedge-eflow/wiki/EFLOW-Continuous-Release).
+>The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.3, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use if a general availability (GA) release is available. For more information, see [EFLOW versions](./version-history.md).
:::moniker-end <!-- end iotedge-2020-11 -->
iot-edge How To Provision Single Device Linux On Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-on-windows-x509.md
This article provides end-to-end instructions for registering and provisioning an IoT Edge for Linux on Windows device. <!-- iotedge-2020-11 --> >[!NOTE]
->The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.2, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use once the general availability (GA) release is available. For more information, see [EFLOW continuous release](https://github.com/Azure/iotedge-eflow/wiki/EFLOW-Continuous-Release).
+>The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.3, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use if a general availability (GA) release is available. For more information, see [EFLOW versions](./version-history.md).
:::moniker-end <!-- end iotedge-2020-11 -->
iot-edge Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-networking.md
# IoT Edge for Linux on Windows networking This article provides information about how to configure the networking between the Windows host OS and the IoT Edge for Linux on Windows (EFLOW) virtual machine. EFLOW uses a [CBL-Mariner](https://github.com/microsoft/CBL-Mariner) Linux virtual machine in order to run IoT Edge modules. For more information about EFLOW architecture, see [What is Azure IoT Edge for Linux on Windows](./iot-edge-for-linux-on-windows.md).
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Azure IoT Edge for Linux on Windows (EFLOW) can run in Windows virtual machines.
| - | -- | -- | -- | -- |
| EFLOW 1.1 LTS | ![1.1LTS](./media/support/green-check.png) | ![1.1LTS](./media/support/green-check.png) | ![1.1LTS](./media/support/green-check.png) | - |
| EFLOW Continuous Release (CR) ([Public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)) | ![CR](./media/support/green-check.png) | ![CR](./media/support/green-check.png) | ![CR](./media/support/green-check.png) | - |
+| EFLOW 1.4 LTS | ![1.4LTS](./media/support/green-check.png) | ![1.4LTS](./media/support/green-check.png) | ![1.4LTS](./media/support/green-check.png) | - |
For more information, see [EFLOW Nested virtualization](./nested-virtualization.md).
The following table lists the components included in each release. Each release
| - | -- | -- | - |
| **1.1 LTS** | 1.1 | 2.0 | - |
| **Continuous Release** | 1.3 | 2.0 | 3.12.3 |
-| **1.4 LTS** | 1.4 | 2.0 | - |
+| **1.4 LTS** | 1.4 | 2.0 | 3.12.3 |
## Minimum system requirements
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
Azure IoT Edge for Linux on Windows (EFLOW) allows you to run containerized Linu
<!-- iotedge-2020-11 --> :::moniker range="iotedge-2020-11" >[!NOTE]
->The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.2, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use once the general availability (GA) release is available. For more information, see [EFLOW continuous release](https://github.com/Azure/iotedge-eflow/wiki/EFLOW-Continuous-Release).
+>The latest version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), based on IoT Edge version 1.3, is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). A clean installation may be required for devices going into production use if a general availability (GA) release is available. For more information, see [EFLOW versions](./version-history.md).
:::moniker-end <!-- end iotedge-2020-11 -->
Azure IoT Edge for Linux on Windows uses the following components to enable Linu
* **Microsoft Update**: Integration with Microsoft Update keeps the Windows runtime components, the CBL-Mariner Linux VM, and Azure IoT Edge up to date. For more information about IoT Edge for Linux on Windows updates, see [Update IoT Edge for Linux on Windows](./iot-edge-for-linux-on-windows-updates.md).
-> [!NOTE]
-> Azure IoT Edge for Linux on Windows extension for Windows Amin Center (WAC) is not supported with this EFLOW version.
- [ ![Windows and the Linux VM run in parallel, while the Windows Admin Center controls both components](./media/iot-edge-for-linux-on-windows/architecture-eflow1-2.png) ](./media/iot-edge-for-linux-on-windows/architecture-eflow1-2.png#lightbox) :::moniker-end <!-- end iotedge-2020-11 -->
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md
# Nested virtualization for Azure IoT Edge for Linux on Windows There are three forms of nested virtualization compatible with Azure IoT Edge for Linux on Windows. Users can choose to deploy through a local virtual machine (using the Hyper-V hypervisor), a VMware Windows virtual machine, or an Azure virtual machine. This article clarifies which option is best for your scenario and provides insight into the configuration requirements.
There are three forms of nested virtualization compatible with Azure IoT Edge fo
This is the baseline approach for any Windows VM that hosts Azure IoT Edge for Linux on Windows. For this case, nested virtualization needs to be enabled before starting the deployment. Read [Run Hyper-V in a Virtual Machine with Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization) for more information on how to configure this scenario.
-If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
+If you're using Windows Server or Azure Stack HCI, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
## Deployment on Windows VM on VMware ESXi Intel-based VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions can host Azure IoT Edge for Linux on Windows on top of a Windows virtual machine. Read [VMware KB2009916](https://kb.vmware.com/s/article/2009916) for more information on VMware ESXi nested virtualization support.
iot-edge Troubleshoot Iot Edge For Linux On Windows Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-common-errors.md
# Common issues and resolutions for Azure IoT Edge for Linux on Windows + Use this article to help resolve common issues that can occur when deploying IoT Edge for Linux on Windows solutions. ## Installation and Deployment
iot-edge Troubleshoot Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-networking.md
# Troubleshoot your IoT Edge for Linux on Windows networking If you experience networking issues using Azure IoT Edge for Linux on Windows (EFLOW) in your environment, use this article as a guide for troubleshooting and diagnostics. Also, check [Troubleshoot your IoT Edge for Linux on Windows device](./troubleshoot-iot-edge-for-linux-on-windows.md) for more EFLOW virtual machine troubleshooting help.
iot-edge Troubleshoot Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows.md
# Troubleshoot your IoT Edge for Linux on Windows device If you experience issues running Azure IoT Edge for Linux on Windows (EFLOW) in your environment, use this article as a guide for troubleshooting and diagnostics.
iot-edge Tutorial Develop For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux-on-windows.md
# Tutorial: Develop IoT Edge modules with Linux containers using IoT Edge for Linux on Windows Use Visual Studio 2019 to develop, debug and deploy code to devices running IoT Edge for Linux on Windows.
iot-hub-device-update Delta Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/delta-updates.md
An update handler integrates with the Device Update agent to perform the actual
The delta processor re-creates the original SWU image file on your device after the delta file has been downloaded, so your update handler can install the SWU file. You'll find all the delta processor code in the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo.
-To add the delta processor component to your device image and configure it for use, use apt-get to install the proper Debian package for your platform (it should be named `ms-adu_diffs_x.x.x_amd64.deb` for amd64):
-
-```bash
-sudo apt-get install <path to Debian package>
-```
-
-Alternatively, on a non-Debian Linux device you can install the shared object (libadudiffapi.so) directly by copying it to the `/usr/lib` directory:
+To add the delta processor component to your device image and configure it for use, follow the README.md instructions to use CMake to build the delta processor from source. From there, install the shared object (libadudiffapi.so) directly by copying it to the `/usr/lib` directory:
```bash
sudo cp <path to libadudiffapi.so> /usr/lib/libadudiffapi.so
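# Assumption (not stated in this excerpt): after copying a shared object into
# /usr/lib, refresh the dynamic linker cache so it can be found at runtime.
sudo ldconfig
```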
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-ubuntu-agent.md
In this tutorial, you'll learn how to:
* If you used the [Simulator agent tutorial](device-update-simulator.md) for prior testing, run the following command to invoke the APT handler and deploy over-the-air package updates in this tutorial:
```sh
- sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/a pt:1'
+ sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
```
## Prepare a device
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md
iot-hub-devguide-endpoints.md#list-of-built-in-iot-hub-endpoints)
Data can also be routed to different services for further processing. As the IoT solution scales out, the number of devices, volume of events, variety of events, and different services also varies. A flexible, scalable, consistent, and reliable method to route events is necessary to serve this pattern. Once a message route has been created, data stops flowing to the built-in-endpoint unless a fallback route has been configured. For a tutorial showing multiple uses of message routing, see the [Routing Tutorial](tutorial-routing.md).
+IoT Hub supports setting up custom endpoints for various existing Azure services like Storage containers, Event Hubs, Service Bus queues, Service Bus topics, and Cosmos DB. Once the endpoint has been set up, you can route your IoT data to any of these endpoints to perform downstream data operations.
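As a hedged sketch of that setup with the Azure CLI, using an Event Hubs endpoint as the example; all names, IDs, and the connection string below are placeholders.

```bash
# Sketch: add an Event Hubs custom endpoint, then route device telemetry to it.
az iot hub routing-endpoint create \
  --hub-name <your-iot-hub> \
  --resource-group <your-resource-group> \
  --endpoint-name telemetry-eh \
  --endpoint-type eventhub \
  --endpoint-resource-group <your-resource-group> \
  --endpoint-subscription-id <subscription-id> \
  --connection-string "<event-hub-connection-string>"

# Route all device messages to the new endpoint.
az iot hub route create \
  --hub-name <your-iot-hub> \
  --route-name telemetry-route \
  --endpoint-name telemetry-eh \
  --source DeviceMessages \
  --condition "true"
```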
+ IoT Hub also integrates with Event Grid, which enables you to fan out data to multiple subscribers. Event Grid is a fully managed event service that enables you to easily manage events across many different Azure services and applications. Made for performance and scale, it simplifies building event-driven applications and serverless architectures. The differences between message routing and using Event Grid are explained in the [Message Routing and Event Grid Comparison](iot-hub-event-grid-routing-comparison.md) ## Next steps
To learn more about the ways you can build and deploy IoT solutions with Azure I
- [What is Azure IoT device and application development](../iot-develop/about-iot-develop.md) - [Fundamentals: Azure IoT technologies and solutions](../iot-fundamentals/iot-services-and-technologies.md)+
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
You can link existing Azure services in your Azure subscriptions to your IoT hub
IoT Hub currently supports the following Azure services as additional endpoints:
-* Azure Storage containers
+* Storage containers
* Event Hubs * Service Bus Queues * Service Bus Topics-
+* Cosmos DB (preview)
+
For the limits on the number of endpoints you can add, see [Quotas and throttling](iot-hub-devguide-quotas-throttling.md). ## Endpoint Health
Other reference topics in this IoT Hub developer guide include:
* [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) * [Quotas and throttling](iot-hub-devguide-quotas-throttling.md) * [IoT Hub MQTT support](iot-hub-mqtt-support.md)
-* [Understand your IoT hub IP address](iot-hub-understand-ip-address.md)
+* [Understand your IoT hub IP address](iot-hub-understand-ip-address.md)
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
If your custom endpoint has firewall configurations, consider using the [Microso
IoT Hub currently supports the following endpoints: - Built-in endpoint
+ - Storage containers
- Service Bus Queues and Service Bus Topics-
+ - Event Hubs
+ - Cosmos DB (preview)
+
## Built-in endpoint as a routing endpoint You can use standard [Event Hubs integration and SDKs](iot-hub-devguide-messages-read-builtin.md) to receive device-to-cloud messages from the built-in endpoint (**messages/events**). Once a Route is created, data stops flowing to the built-in-endpoint unless a Route is created to that endpoint. Even if no routes are created, a fallback route must be enabled to route messages to the built-in endpoint. The fallback is enabled by default if you create your hub using the portal or the CLI.
Service Bus queues and topics used as IoT Hub endpoints must not have **Sessions
Apart from the built-in-Event Hubs compatible endpoint, you can also route data to custom endpoints of type Event Hubs.
+## Azure Cosmos DB as a routing endpoint (preview)
+You can send data directly to Azure Cosmos DB from IoT Hub. Cosmos DB is a fully managed hyperscale multi-model database service. It provides very low latency and high availability, making it a great choice for scenarios like connected solutions and manufacturing, which require extensive downstream data analysis.
+
+IoT Hub supports writing to Cosmos DB in JSON (if specified in the message content type) or as Base64-encoded binary. To set up a route to Cosmos DB, do the following:
+
+From your provisioned IoT hub, go to **Hub settings** and select **Message routing**. Go to the **Custom endpoints** tab, select **Add**, and then select **Cosmos DB**. The following image shows the endpoint addition:
+
+![Screenshot that shows how to add a Cosmos DB endpoint.](media/iot-hub-devguide-messages-d2c/add-cosmos-db-endpoint.png)
+
+Enter your endpoint name, and then choose from the list of available Cosmos DB accounts, along with the database and collection.
+
+As Cosmos DB is a hyperscale datastore, all data/documents written to it must contain a field that represents a logical partition. The partition key property name is defined at the container level and can't be changed once it has been set. Each logical partition has a maximum size of 20 GB. To effectively support high-scale scenarios, you can enable [Synthetic Partition Keys](/azure/cosmos-db/nosql/synthetic-partition-keys) for the Cosmos DB endpoint and configure them based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its maximum size of 20 GB within a month. In that case, you can define a synthetic partition key that is a combination of the device ID and the month. This key will be automatically added to the partition key field for each new Cosmos DB record, ensuring logical partitions are created each month for each device.
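As a purely illustrative sketch of the device-plus-month scheme just described (the key format is an assumption, and in practice IoT Hub computes and stamps this value for you):

```bash
# Sketch: a hypothetical synthetic partition key of the form <deviceId>-<YYYY-MM>.
DEVICE_ID="sensor-001"
PARTITION_KEY="${DEVICE_ID}-$(date +%Y-%m)"
echo "$PARTITION_KEY"   # e.g. sensor-001-2022-11
```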
+
+ You can choose any of the supported authentication types for accessing the database, based on your system setup.
+
+> [!Caution]
+> If you are using the system-assigned managed identity for authenticating to Cosmos DB, you will need to have a "Cosmos DB Built-in Data Contributor" role assigned via CLI. The role setup is not supported from the portal today. For more details on the various roles, see [Configure role-based access for Azure Cosmos DB](/azure/cosmos-db/how-to-setup-rbac). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources](/cli/azure/cosmosdb/sql/role).
+
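As a hedged sketch of the CLI role assignment the caution above calls for: the data contributor role-definition ID shown is a common built-in value but should be confirmed from the `role definition list` output, and the account, resource group, and principal IDs are placeholders.

```bash
# Sketch: grant the IoT hub's managed identity data-plane access to Cosmos DB.
# Confirm the built-in data contributor role definition ID first.
az cosmosdb sql role definition list \
  --account-name <cosmos-account> --resource-group <your-resource-group>

az cosmosdb sql role assignment create \
  --account-name <cosmos-account> \
  --resource-group <your-resource-group> \
  --role-definition-id 00000000-0000-0000-0000-000000000002 \
  --principal-id <iot-hub-managed-identity-principal-id> \
  --scope "/"
```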
+Once you have selected all the details, select **Create** to complete the setup of the custom endpoint.
+ ## Reading data that has been routed You can configure a route by following this [tutorial](tutorial-routing.md).
Use the following tutorials to learn how to read messages from an endpoint.
* Read from [Service Bus Topics](../service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md) - ## Fallback route The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in-Event Hubs (**messages/events**), that is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is turned on, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in-endpoint, unless a route is created to that endpoint. If there are no routes to the built-in-endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in-endpoint. Also, if all existing routes are deleted, fallback route must be enabled to receive all data at the built-in-endpoint.
You can enable/disable the fallback route in the Azure portal->Message Routing b
In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, digital twin change events, and device connection state events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **device connection state events**, IoT Hub sends a message indicating whether the device was connected or disconnected. - [IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario. ## Limitations for device connection state events
Use the [troubleshooting guide for routing](troubleshoot-message-routing.md) for
* [How to send device-to-cloud messages](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) * For information about the SDKs you can use to send device-to-cloud messages, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).++
iot-hub Iot Hub Devguide Messages Read Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-custom.md
For more information about creating custom endpoints in IoT Hub, see [IoT Hub en
For more information about reading from custom endpoints, see:
-* Reading from [Azure Storage containers](../storage/blobs/storage-blobs-introduction.md).
-
+* Reading from [Storage containers](../storage/blobs/storage-blobs-introduction.md).
* Reading from [Event Hubs](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md). * Reading from [Service Bus queues](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md). * Reading from [Service Bus topics](../service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md).-
+* Reading from [Cosmos DB](/azure/cosmos-db/nosql/query/getting-started).
## Next steps * For more information about IoT Hub endpoints, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md).
For more information about reading from custom endpoints, see:
* For more information about the query language you use to define routing queries, see [Message Routing query syntax](iot-hub-devguide-routing-query-syntax.md). * The [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial shows you how to use routing queries and custom endpoints.++
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
Title: About Azure Key Vault Certificates - Azure Key Vault
-description: Overview of Azure Key Vault REST interface and certificates.
+ Title: About Azure Key Vault certificates
+description: Get an overview of the Azure Key Vault REST interface and certificates.
tags: azure-resource-manager
# About Azure Key Vault certificates
-Key Vault certificates support provides for management of your x509 certificates and the following behaviors:
+Azure Key Vault certificate support provides for management of your X.509 certificates and the following behaviors:
-- Allows a certificate owner to create a certificate through a Key Vault creation process or through the import of an existing certificate. Includes both self-signed and Certificate Authority generated certificates.-- Allows a Key Vault certificate owner to implement secure storage and management of X509 certificates without interaction with private key material. -- Allows a certificate owner to create a policy that directs Key Vault to manage the life-cycle of a certificate. -- Allows certificate owners to provide contact information for notification about life-cycle events of expiration and renewal of certificate. -- Supports automatic renewal with selected issuers - Key Vault partner X509 certificate providers / certificate authorities.
+- Allows a certificate owner to create a certificate through a key vault creation process or through the import of an existing certificate. This includes both self-signed certificates and certificates that are generated from a certificate authority (CA).
+- Allows a Key Vault certificate owner to implement secure storage and management of X.509 certificates without interacting with private key material.
+- Allows a certificate owner to create a policy that directs Key Vault to manage the lifecycle of a certificate.
+- Allows a certificate owner to provide contact information for notifications about the lifecycle events of expiration and renewal.
+- Supports automatic renewal with selected issuers: Key Vault partner X.509 certificate providers and CAs.
->[!Note]
->Non-partnered providers/authorities are also allowed but, will not support the auto renewal feature.
+ > [!Note]
+ > Non-partnered providers and authorities are also allowed but don't support automatic renewal.
For details on certificate creation, see [Certificate creation methods](create-certificate.md).
-## Composition of a Certificate
+## Composition of a certificate
-When a Key Vault certificate is created, an addressable key and secret are also created with the same name. The Key Vault key allows key operations and the Key Vault secret allows retrieval of the certificate value as a secret. A Key Vault certificate also contains public x509 certificate metadata.
+When a Key Vault certificate is created, an addressable key and secret are also created with the same name. The Key Vault key allows key operations, and the Key Vault secret allows retrieval of the certificate value as a secret. A Key Vault certificate also contains public X.509 certificate metadata.
-The identifier and version of certificates is similar to that of keys and secrets. A specific version of an addressable key and secret created with the Key Vault certificate version is available in the Key Vault certificate response.
+The identifier and version of certificates are similar to those of keys and secrets. A specific version of an addressable key and secret created with the Key Vault certificate version is available in the Key Vault certificate response.
-![Certificates are complex objects](../media/azure-key-vault.png)
-
-## Exportable or Non-exportable key
+![Diagram that shows the role of certificates in a key vault.](../media/azure-key-vault.png)
-When a Key Vault certificate is created, it can be retrieved from the addressable secret with the private key in either PFX or PEM format. The policy used to create the certificate must indicate that the key is exportable. If the policy indicates non-exportable, then the private key isn't a part of the value when retrieved as a secret.
+## Exportable or non-exportable key
-The addressable key becomes more relevant with non-exportable KV certificates. The addressable KV key's operations are mapped from *keyusage* field of the KV certificate policy used to create the KV Certificate.
+When a Key Vault certificate is created, it can be retrieved from the addressable secret with the private key in either PFX or PEM format. The policy that's used to create the certificate must indicate that the key is exportable. If the policy indicates that the key is non-exportable, then the private key isn't a part of the value when it's retrieved as a secret.
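+
+As a minimal sketch (the vault and certificate names are placeholders), an exportable private key can be fetched through the addressable secret by using the Azure CLI:
+
+```azurecli
+# Download the certificate plus private key as a PFX file through the
+# addressable secret that shares the certificate's name. This succeeds
+# only when the certificate policy marked the key as exportable.
+az keyvault secret download \
+    --vault-name my-vault \
+    --name my-cert \
+    --encoding base64 \
+    --file my-cert.pfx
+```
+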
-The type of key pair to supported for certificates
+The addressable key becomes more relevant with non-exportable Key Vault certificates. The addressable Key Vault key's operations are mapped from the `keyusage` field of the Key Vault certificate policy that's used to create the Key Vault certificate.
- Exportable is only allowed with RSA, EC. HSM keys would be non-exportable.
+The following table lists supported key types.
|Key type|About|Security| |--|--|--|
-|**RSA**| "Software-protected" RSA key|FIPS 140-2 Level 1|
-|**RSA-HSM**| "HSM-protected" RSA key (Premium SKU only)|FIPS 140-2 Level 2 HSM|
-|**EC**| "Software-protected" Elliptic Curve key|FIPS 140-2 Level 1|
-|**EC-HSM**| "HSM-protected" Elliptic Curve key (Premium SKU only)|FIPS 140-2 Level 2 HSM|
-|||
+|**RSA**| Software-protected RSA key|FIPS 140-2 Level 1|
+|**RSA-HSM**| HSM-protected RSA key (Premium SKU only)|FIPS 140-2 Level 2 HSM|
+|**EC**| Software-protected elliptic curve key|FIPS 140-2 Level 1|
+|**EC-HSM**| HSM-protected elliptic curve key (Premium SKU only)|FIPS 140-2 Level 2 HSM|
+|**oct**| Software-protected octet key| FIPS 140-2 Level 1|
+
-## Certificate Attributes and Tags
+Exportable keys are allowed only with RSA and EC. HSM keys are non-exportable. For more information about key types, see [Create certificates](/rest/api/keyvault/certificates/create-certificate/create-certificate#jsonwebkeytype).
-In addition to certificate metadata, an addressable key and addressable secret, a Key Vault certificate also contains attributes and tags.
+## Certificate attributes and tags
+
+In addition to certificate metadata, an addressable key, and an addressable secret, a Key Vault certificate contains attributes and tags.
### Attributes
-The certificate attributes are mirrored to attributes of the addressable key and secret created when KV certificate is created.
+The certificate attributes are mirrored to attributes of the addressable key and secret that are created when the Key Vault certificate is created.
+
+A Key Vault certificate has the following attribute:
-A Key Vault certificate has the following attributes:
+- `enabled`: This Boolean attribute is optional. Default is `true`. It can be specified to indicate if the certificate data can be retrieved as a secret or operable as a key.
-- *enabled*: boolean, optional, default is **true**. Can be specified to indicate if the certificate data can be retrieved as secret or operable as a key. Also used in conjunction with *nbf* and *exp* when an operation occurs between *nbf* and *exp*, and will only be permitted if enabled is set to true. Operations outside the *nbf* and *exp* window are automatically disallowed.
+ This attribute is also used in conjunction with `nbf` and `exp` when an operation occurs between `nbf` and `exp`, but only if `enabled` is set to `true`. Operations outside the `nbf` and `exp` window are automatically disallowed.
-There are additional read-only attributes that are included in response:
+A response includes these additional read-only attributes:
-- *created*: IntDate: indicates when this version of the certificate was created. -- *updated*: IntDate: indicates when this version of the certificate was updated. -- *exp*: IntDate: contains the value of the expiry date of the x509 certificate. -- *nbf*: IntDate: contains the value of the date of the x509 certificate.
+- `created`: `IntDate` indicates when this version of the certificate was created.
+- `updated`: `IntDate` indicates when this version of the certificate was updated.
+- `exp`: `IntDate` contains the value of the expiration date of the X.509 certificate.
+- `nbf`: `IntDate` contains the value of the "not before" date of the X.509 certificate.
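+
+As a quick, hedged sketch (the vault and certificate names are placeholders), you can inspect these attributes with the Azure CLI:
+
+```azurecli
+# Show only the attributes object (enabled, nbf, exp, created, updated)
+# for one certificate; --query applies a JMESPath filter to the response.
+az keyvault certificate show \
+    --vault-name my-vault \
+    --name my-cert \
+    --query attributes
+```
+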
> [!Note]
-> If a Key Vault certificate expires, it's addressable key and secret become inoperable.
+> If a Key Vault certificate expires, its addressable key and secret become inoperable.
### Tags
- Client specified dictionary of key value pairs, similar to tags in keys and secrets.
+Tags for certificates are a client-specified dictionary of key/value pairs, much like tags in keys and secrets.
- > [!Note]
-> Tags are readable by a caller if they have the *list* or *get* permission to that object type (keys, secrets, or certificates).
+> [!Note]
+> A caller can read tags if they have the *list* or *get* permission to that object type (keys, secrets, or certificates).
## Certificate policy
-A certificate policy contains information on how to create and manage lifecycle of a Key Vault certificate. When a certificate with private key is imported into the key vault, a default policy is created by reading the x509 certificate.
+A certificate policy contains information on how to create and manage the lifecycle of a Key Vault certificate. When a certificate with private key is imported into the key vault, the Key Vault service creates a default policy by reading the X.509 certificate.
-When a Key Vault certificate is created from scratch, a policy needs to be supplied. The policy specifies how to create this Key Vault certificate version, or the next Key Vault certificate version. Once a policy has been established, it isn't required with successive create operations for future versions. There's only one instance of a policy for all the versions of a Key Vault certificate.
+When a Key Vault certificate is created from scratch, a policy needs to be supplied. The policy specifies how to create this Key Vault certificate version or the next Key Vault certificate version. After a policy has been established, it isn't required with successive create operations for future versions. There's only one instance of a policy for all the versions of a Key Vault certificate.
-At a high level, a certificate policy contains the following information (their definitions can be found [here](/powershell/module/az.keyvault/set-azkeyvaultcertificatepolicy)):
+At a high level, a certificate policy contains the following information:
-- X509 certificate properties: Contains subject name, subject alternate names, and other properties used to create an x509 certificate request. -- Key Properties: contains key type, key length, exportable, and ReuseKeyOnRenewal fields. These fields instruct key vault on how to generate a key.
- - Supported keytypes: RSA, RSA-HSM, EC, EC-HSM, oct (listed [here](/rest/api/keyvault/certificates/create-certificate/create-certificate#jsonwebkeytype))
-- Secret properties: contains secret properties such as content type of addressable secret to generate the secret value, for retrieving certificate as a secret. -- Lifetime Actions: contains lifetime actions for the KV Certificate. Each lifetime action contains:
+- X.509 certificate properties, which include subject name, subject alternate names, and other properties that are used to create an X.509 certificate request.
+- Key properties, which include key type, key length, exportable, and `ReuseKeyOnRenewal` fields. These fields instruct Key Vault on how to generate a key.
+
+ [Supported key types](/rest/api/keyvault/certificates/create-certificate/create-certificate#jsonwebkeytype) are RSA, RSA-HSM, EC, EC-HSM, and oct.
+- Secret properties, such as the content type of an addressable secret to generate the secret value, for retrieving a certificate as a secret.
+- Lifetime actions for the Key Vault certificate. Each lifetime action contains:
- - Trigger: specified via days before expiry or lifetime span percentage
+ - Trigger: Specified as days before expiration or lifetime span percentage.
+ - Action: `emailContacts` or `autoRenew`.
- - Action: specifying action type – *emailContacts* or *autoRenew*
+- Parameters about the certificate issuer to use for issuing X.509 certificates.
+- Attributes associated with the policy.
-- Issuer: Parameters about the certificate issuer to use to issue x509 certificates. -- Policy Attributes: contains attributes associated with the policy
+For more information, see [Set-AzKeyVaultCertificatePolicy](/powershell/module/az.keyvault/set-azkeyvaultcertificatepolicy).
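+
+For example, here's a minimal sketch (the vault and certificate names are placeholders) that creates a certificate by using the Azure CLI's built-in default policy, which you can edit as a starting point for a custom policy:
+
+```azurecli
+# Create a self-signed certificate from the CLI's scaffold default policy.
+# Pipe get-default-policy to a file and edit it to customize key, secret,
+# lifetime-action, and issuer settings.
+az keyvault certificate create \
+    --vault-name my-vault \
+    --name my-cert \
+    --policy "$(az keyvault certificate get-default-policy)"
+```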
-### X509 to Key Vault usage mapping
+### Mapping X.509 usage to key operations
-The following table represents the mapping of x509 key usage policy to effective key operations of a key created as part of a Key Vault certificate creation.
+The following table represents the mapping of X.509 key usage policies to effective key operations of a key that's created as part of Key Vault certificate creation.
-|**X509 Key Usage flags**|**Key Vault key ops**|**Default behavior**|
+|X.509 key usage flags|Key Vault key operations|Default behavior|
|-|--|--|
-|DataEncipherment|encrypt, decrypt| N/A |
-|DecipherOnly|decrypt| N/A |
-|DigitalSignature|sign, verify| Key Vault default without a usage specification at certificate creation time |
-|EncipherOnly|encrypt| N/A |
-|KeyCertSign|sign, verify|N/A|
-|KeyEncipherment|wrapKey, unwrapKey| Key Vault default without a usage specification at certificate creation time |
-|NonRepudiation|sign, verify| N/A |
-|crlsign|sign, verify| N/A |
+|`DataEncipherment`|`encrypt`, `decrypt`| Not applicable |
+|`DecipherOnly`|`decrypt`| Not applicable |
+|`DigitalSignature`|`sign`, `verify`| Key Vault default without a usage specification at certificate creation time |
+|`EncipherOnly`|`encrypt`| Not applicable |
+|`KeyCertSign`|`sign`, `verify`|Not applicable|
+|`KeyEncipherment`|`wrapKey`, `unwrapKey`| Key Vault default without a usage specification at certificate creation time |
+|`NonRepudiation`|`sign`, `verify`| Not applicable |
+|`crlsign`|`sign`, `verify`| Not applicable |
-## Certificate Issuer
+## Certificate issuer
-A Key Vault certificate object holds a configuration used to communicate with a selected certificate issuer provider to order x509 certificates.
+A Key Vault certificate object holds a configuration that's used to communicate with a selected certificate issuer provider to order X.509 certificates.
-- Key Vault partners with following certificate issuer providers for TLS/SSL certificates
+Key Vault partners with the following certificate issuer providers for TLS/SSL certificates.
-|**Provider Name**|**Locations**|
+|Provider name|Locations|
|-|--|
-|DigiCert|Supported in all key vault service locations in public cloud and Azure Government|
-|GlobalSign|Supported in all key vault service locations in public cloud and Azure Government|
-
-Before a certificate issuer can be created in a Key Vault, following prerequisite steps 1 and 2 must be successfully accomplished.
+|DigiCert|Supported in all Key Vault service locations in public cloud and Azure Government|
+|GlobalSign|Supported in all Key Vault service locations in public cloud and Azure Government|
-1. Onboard to Certificate Authority (CA) Providers
+Before a certificate issuer can be created in a key vault, an administrator must take the following prerequisite steps:
- - An organization administrator must on-board their company (ex. Contoso) with at least one CA provider.
+1. Onboard the organization with at least one CA provider.
-1. Admin creates requester credentials for Key Vault to enroll (and renew) TLS/SSL certificates
+1. Create requester credentials for Key Vault to enroll (and renew) TLS/SSL certificates. This step provides the configuration for creating an issuer object of the provider in the key vault.
- - Provides the configuration to be used to create an issuer object of the provider in the key vault
+For more information on creating issuer objects from the certificate portal, see the [Key Vault Team Blog](/archive/blogs/kv/manage-certificates-via-azure-key-vault).
-For more information on creating Issuer objects from the Certificates portal, see the [Key Vault Certificates blog](/archive/blogs/kv/manage-certificates-via-azure-key-vault)
+Key Vault allows for the creation of multiple issuer objects with different issuer provider configurations. After an issuer object is created, its name can be referenced in one or multiple certificate policies. Referencing the issuer object instructs Key Vault to use the configuration as specified in the issuer object when it's requesting the X.509 certificate from the CA provider during certificate creation and renewal.
-Key Vault allows for creation of multiple issuer objects with different issuer provider configuration. Once an issuer object is created, its name can be referenced in one or multiple certificate policies. Referencing the issuer object instructs Key Vault to use configuration as specified in the issuer object when requesting the x509 certificate from CA provider during the certificate creation and renewal.
-
-Issuer objects are created in the vault and can only be used with KV certificates in the same vault.
+Issuer objects are created in the vault. They can be used only with Key Vault certificates in the same vault.
>[!Note]
->Publicly trusted certificates are sent to Certificate Authorities (CAs) and Certificate Transparency (CT) logs outside of the Azure boundary during enrollment and will be covered by the GDPR policies of those entities.
+>Publicly trusted certificates are sent to CAs and certificate transparency (CT) logs outside the Azure boundary during enrollment. They're covered by the GDPR policies of those entities.
## Certificate contacts
-Certificate contacts contain contact information to send notifications triggered by certificate lifetime events. The contacts information is shared by all the certificates in the key vault. A notification is sent to all the specified contacts for an event for any certificate in the key vault. For information on how to set Certificate contact, see [here](overview-renew-certificate.md#steps-to-set-certificate-notifications)
+Certificate contacts contain contact information for sending notifications triggered by certificate lifetime events. All the certificates in the key vault share the contact information.
+
+A notification is sent to all the specified contacts for an event for any certificate in the key vault. For information on how to set a certificate contact, see [Renew your Azure Key Vault certificates](overview-renew-certificate.md#steps-to-set-certificate-notifications).
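+
+As an illustrative sketch (the vault name and email address are placeholders), you can register a vault-wide certificate contact from the Azure CLI:
+
+```azurecli
+# Register one contact for all certificate lifetime-event notifications
+# in this vault; run the command again to add more contacts.
+az keyvault certificate contact add \
+    --vault-name my-vault \
+    --email certadmin@contoso.com
+```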
+
+## Certificate access control
-## Certificate Access Control
+Key Vault manages access control for certificates. The key vault that contains those certificates provides access control. The access control policy for certificates is distinct from the access control policies for keys and secrets in the same key vault.
- Access control for certificates is managed by Key Vault, and is provided by the Key Vault that contains those certificates. The access control policy for certificates is distinct from the access control policies for keys and secrets in the same Key Vault. Users may create one or more vaults to hold certificates, to maintain scenario appropriate segmentation and management of certificates. For more information on certificate access control, see [here](certificate-access-control.md)
+Users can create one or more vaults to hold certificates, to maintain scenario-appropriate segmentation and management of certificates. For more information, see [Certificate access control](certificate-access-control.md).
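+
+For instance, here's a hedged sketch (the vault name and user principal name are placeholders) that grants a user read-only certificate permissions through a vault access policy; vaults configured for Azure RBAC would use role assignments instead:
+
+```azurecli
+# Grant get/list on certificates only. Key and secret permissions in the
+# same vault are controlled by their own, separate permission lists.
+az keyvault set-policy \
+    --name my-vault \
+    --upn user@contoso.com \
+    --certificate-permissions get list
+```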
-## Certificate Use Cases
+## Certificate use cases
### Secure communication and authentication
-TLS certificates can help encrypt communications over the internet and establish the identity of websites, making the entry point and mode of communication secure. Additionally, a chained certificate signed by a public CA can help verify that the entities holding the certificates are whom they claim to be. As an example, the following are some excellent use cases of using certificates to secure communication and enable authentication:
-* Intranet/Internet websites: protect access to your intranet site and ensure encrypted data transfer over the internet using TLS certificates.
-* IoT and Networking devices: protect and secure your devices by using certificates for authentication and communication.
-* Cloud/Multi-Cloud: secure cloud-based applications on-premises, cross-cloud, or in your cloud provider's tenant.
+TLS certificates can help encrypt communications over the internet and establish the identity of websites. This encryption makes the entry point and mode of communication more secure. Additionally, a chained certificate that's signed by a public CA can help verify that the entities holding the certificates are legitimate.
+
+As an example, here are some use cases of using certificates to secure communication and enable authentication:
+
+* **Intranet/internet websites**: Protect access to your intranet site and ensure encrypted data transfer over the internet through TLS certificates.
+* **IoT and networking devices**: Protect and secure your devices by using certificates for authentication and communication.
+* **Cloud/multicloud**: Secure cloud-based applications on-premises, cross-cloud, or in your cloud provider's tenant.
### Code signing
-A certificate can help secure the code/script of software, thereby ensuring that the author can share the software over the internet without being changed by malicious entities. Furthermore, once the author signs the code using a certificate leveraging the code signing technology, the software is marked with a stamp of authentication displaying the author and their website. Therefore, the certificate used in code signing helps validate the software's authenticity, promoting end-to-end security.
+A certificate can help secure the code/script of software, to ensure that the author can share the software over the internet without interference by malicious entities. After the author signs the code by using a certificate and taking advantage of code-signing technology, the software is marked with a stamp of authentication that displays the author and their website. The certificate used in code signing helps validate the software's authenticity, promoting end-to-end security.
## Next steps - [Certificate creation methods](create-certificate.md)
A certificate can help secure the code/script of software, thereby ensuring that
- [About secrets](../secrets/about-secrets.md) - [Key management in Azure](../../security/fundamentals/key-management.md) - [Authentication, requests, and responses](../general/authentication-requests-and-responses.md)-- [Key Vault Developer's Guide](../general/developers-guide.md)
+- [Key Vault developer's guide](../general/developers-guide.md)
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
az keyvault set-policy --name "<your-unique-keyvault-name>" --object-id "<system
## Log in to the VM
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](/azure/virtual-machines/linux-vm-connect) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](../../virtual-machines/linux-vm-connect.md) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
To log into a Linux VM, you can use the ssh command with the \<publicIpAddress\> given in the [Create a virtual machine](#create-a-virtual-machine) step:
az group delete -g myResourceGroup
## Next steps
-[Azure Key Vault REST API](/rest/api/keyvault/)
+[Azure Key Vault REST API](/rest/api/keyvault/)
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
Set-AzKeyVaultAccessPolicy -ResourceGroupName <YourResourceGroupName> -VaultName
## Sign in to the virtual machine
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure Windows virtual machine](../../virtual-machines/windows/connect-logon.md) or [Connect and sign in to an Azure Linux virtual machine](/azure/virtual-machines/linux-vm-connect).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure Windows virtual machine](../../virtual-machines/windows/connect-logon.md) or [Connect and sign in to an Azure Linux virtual machine](../../virtual-machines/linux-vm-connect.md).
## Set up the console app
When they are no longer needed, delete the virtual machine and your key vault.
## Next steps > [!div class="nextstepaction"]
-> [Azure Key Vault REST API](/rest/api/keyvault/)
+> [Azure Key Vault REST API](/rest/api/keyvault/)
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
az keyvault set-policy --name "<your-unique-keyvault-name>" --object-id "<system
## Log in to the VM
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](/azure/virtual-machines/linux-vm-connect) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](../../virtual-machines/linux-vm-connect.md) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
To log into a Linux VM, you can use the ssh command with the \<publicIpAddress\> given in the [Create a virtual machine](#create-a-virtual-machine) step:
az group delete -g myResourceGroup
## Next steps
-[Azure Key Vault REST API](/rest/api/keyvault/)
+[Azure Key Vault REST API](/rest/api/keyvault/)
key-vault Key Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/key-management.md
Previously updated : 09/15/2020 Last updated : 11/14/2022
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/logging.md
tags: azure-resource-manager
Previously updated : 03/30/2021 Last updated : 11/14/2022 #Customer intent: As a Managed HSM administrator, I want to enable logging so I can monitor how my HSM is accessed.
key-vault Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/private-link.md
Title: Configure Azure Key Vault Managed HSM with private endpoints
description: Learn how to integrate Azure Key Vault Managed HSM with Azure Private Link Service Previously updated : 06/21/2021 Last updated : 11/14/2022
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md
tags: azure-resource-manager
Previously updated : 06/21/2021 Last updated : 11/14/2022 ms.devlang: azurecli
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
Title: Create and retrieve attributes of a managed key in Azure Key Vault – Az
description: Quickstart showing how to set and retrieve a managed key from Azure Key Vault using Azure PowerShell Previously updated : 01/26/2021 Last updated : 11/14/2022
key-vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/recovery.md
Previously updated : 06/01/2021 Last updated : 11/14/2022 # Managed HSM soft-delete and purge protection
key-vault Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/role-management.md
Previously updated : 09/15/2020 Last updated : 11/14/2022
key-vault Secure Your Managed Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/secure-your-managed-hsm.md
tags: azure-resource-manager
Previously updated : 09/15/2020 Last updated : 11/14/2022 # Customer intent: As a managed HSM administrator, I want to set access control and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for this Managed HSM.
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/soft-delete-overview.md
Previously updated : 06/01/2021 Last updated : 11/14/2022 # Managed HSM soft-delete overview
key-vault Third Party Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/third-party-solutions.md
editor: ''
Previously updated : 06/23/2021 Last updated : 11/14/2022
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
When you're assigning roles, it helps to follow these tips:
- To give educators the ability to create new labs and manage the labs that they create, you need only assign them the Lab Creator role. - To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they'll manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab. -
-For more detail about the permissions assigned to each role, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles#lab-assistant)
+For more detail about the permissions assigned to each role, see [Azure built-in roles](../role-based-access-control/built-in-roles.md#lab-assistant)
## Content filtering
For more information about setting up and managing labs, see:
- [Configure a lab plan](lab-plan-setup-guide.md) - [Configure a lab](setup-guide.md)-- [Manage costs for labs](cost-management-guide.md)
+- [Manage costs for labs](cost-management-guide.md)
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
If a flow is started from Seattle, traffic enters West US. This region is the cl
Azure cross-region load balancer uses geo-proximity load-balancing algorithm for the routing decision.
-The configured load distribution mode of the regional load balancers is used for making the final routing decision when multiple regional load balancers are used for geo-proximity.
+The configured load distribution mode of the regional load balancers is used for making the final routing decision when multiple regional load balancers are used for geo-proximity.
For more information, see [Configure the distribution mode for Azure Load Balancer](./load-balancer-distribution-mode.md).
+Egress traffic will follow the routing preference set on the regional load balancers.
### Ability to scale up/down behind a single endpoint
Cross-region load balancer routes the traffic to the appropriate regional load b
* A health probe can't be configured currently. A default health probe automatically collects availability information about the regional load balancer every 20 seconds.
+* Currently, regional load balancers with floating IP enabled aren't supported by the cross-region load balancer.
+ ## Pricing and SLA Cross-region load balancer shares the [SLA](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) of standard load balancer.
logic-apps Logic Apps Create Logic Apps From Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-logic-apps-from-templates.md
This how-to guide shows how to use these templates as provided or edit them to f
| **Region** | <*your-Azure-datacenter-region*> | Select the datacenter region for deploying your logic app, for example, **West US**. | | **Enable log analytics** | **No** (default) or **Yes** | To set up [diagnostic logging](../logic-apps/monitor-logic-apps-log-analytics.md) for your logic app resource by using [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), select **Yes**. This selection requires that you already have a Log Analytics workspace. | | **Plan type** | **Consumption** or **Standard** | Select **Consumption** to create a Consumption logic app workflow from a template. |
- | **Zone redundancy** | **Disabled** (default) or **Enabled** | If this option is available, select **Enabled** if you want to protect your logic app resource from a regional failure. But first [check that zone redundancy is available in your Azure region](/azure/logic-apps/set-up-zone-redundancy-availability-zones?tabs=consumption#considerations). |
+ | **Zone redundancy** | **Disabled** (default) or **Enabled** | If this option is available, select **Enabled** if you want to protect your logic app resource from a regional failure. But first [check that zone redundancy is available in your Azure region](./set-up-zone-redundancy-availability-zones.md?tabs=consumption#considerations). |
:::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-settings.png" alt-text="Screenshot showing the 'Create Logic App' page with example property values provided and the 'Consumption' plan type selected.":::
This how-to guide shows how to use these templates as provided or edit them to f
Learn about building logic app workflows through examples, scenarios, customer stories, and walkthroughs. > [!div class="nextstepaction"]
-> [Review logic app examples, scenarios, and walkthroughs](../logic-apps/logic-apps-examples-and-scenarios.md)
+> [Review logic app examples, scenarios, and walkthroughs](../logic-apps/logic-apps-examples-and-scenarios.md)
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
If the schema of the data changes, then it can be updated in a single place (the
Just like `uri_file` and `uri_folder`, you can create a data asset with `mltable` types.
+For more information about the MLTable YAML schema, see [CLI (v2) mltable YAML schema](./reference-yaml-mltable.md).
+ ## Next steps - [Install and set up the CLI (v2)](how-to-configure-cli.md#install-and-set-up-the-cli-v2)
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
For more information, see the following documents:
* [Virtual network isolation and privacy overview](how-to-network-security-overview.md) * [Secure workspace resources](how-to-secure-workspace-vnet.md) * [Secure training environment](how-to-secure-training-vnet.md)
-* [Secure inference environment](/azure/machine-learning/how-to-secure-inferencing-vnet)
+* [Secure inference environment](./how-to-secure-inferencing-vnet.md)
* [Use studio in a secured virtual network](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Configure firewall](how-to-access-azureml-behind-firewall.md)
Azure Machine Learning has several inbound and outbound network dependencies. So
* [Use Azure Machine Learning with Azure Firewall](how-to-access-azureml-behind-firewall.md) * [Use Azure Machine Learning with Azure Virtual Network](how-to-network-security-overview.md) * [Data encryption at rest and in transit](concept-data-encryption.md)
-* [Build a real-time recommendation API on Azure](/azure/architecture/reference-architectures/ai/real-time-recommendation)
+* [Build a real-time recommendation API on Azure](/azure/architecture/reference-architectures/ai/real-time-recommendation)
machine-learning Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md
+
+ Title: 'Workspace soft-deletion'
+
+description: Soft-delete allows you to recover workspace data after accidental deletion
++++++++ Last updated : 11/07/2022
+#Customer intent: As an IT pro, understand how to enable data protection capabilities, to protect against accidental deletion.
++
+# Recover workspace data after accidental deletion with soft delete (Preview)
+
+The soft-delete feature for Azure Machine Learning workspaces provides a data protection capability that enables you to attempt recovery of workspace data after accidental deletion. Soft delete introduces a two-step approach to deleting a workspace. When a workspace is deleted, it's first soft-deleted. While it's in the soft-deleted state, you can choose to recover or permanently delete the workspace and its data during a data retention period.
+
+> [!IMPORTANT]
+> Workspace soft delete is currently in public preview. This preview is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> To enroll your Azure Subscription, see [Register soft-delete on an Azure subscription](#register-soft-delete-on-an-azure-subscription).
+
+## How workspace soft delete works
+
+When a workspace is soft-deleted, data and metadata stored service-side get soft-deleted, but some configurations get hard-deleted. The following table provides an overview of which configurations and objects get soft-deleted, and which are hard-deleted.
+
+> [!IMPORTANT]
+> Soft delete is not supported for workspaces encrypted with customer-managed keys (CMK), and these workspaces are always hard deleted.
+
+Data / configuration | Soft-deleted | Hard-deleted
+--|--|--
+Run History | ✓ |
+Models | ✓ |
+Data | ✓ |
+Environments | ✓ |
+Components | ✓ |
+Notebooks | ✓ |
+Pipelines | ✓ |
+Designer pipelines | ✓ |
+AutoML jobs | ✓ |
+Data labeling projects | ✓ |
+Datastores | ✓ |
+Queued or running jobs | | ✓
+Role assignments | | ✓*
+Internal cache | | ✓
+Compute instance | | ✓
+Compute clusters | | ✓
+Inference endpoints | | ✓
+Linked Databricks workspaces | | ✓*
+\* *Microsoft attempts recreation or reattachment when a workspace is recovered. Recovery isn't guaranteed; it's a best-effort attempt.*
+
+After soft-deletion, the service keeps necessary data and metadata during the recovery [retention period](#soft-delete-retention-period). When the retention period expires, or in case you permanently delete a workspace, data and metadata will be actively deleted.
+
+## Soft-delete retention period
+
+Deleted workspaces have a default retention period of 14 days. The retention period indicates how long workspace data remains available after deletion. The clock on the retention period starts as soon as a workspace is soft-deleted.
+
+During the retention period, soft-deleted workspaces can be recovered or permanently deleted. Any other operations on the workspace, like submitting a training job, will fail. You can't reuse the name of a workspace that has been soft-deleted until the retention period has passed. Once the retention period elapses, a soft deleted workspace automatically gets permanently deleted.
+
+> [!TIP]
+> During the preview of workspace soft delete, the retention period is fixed at 14 days and can't be modified.
+
+## Deleting a workspace
+
+The default deletion behavior when deleting a workspace is soft delete. This behavior excludes workspaces that are [encrypted with a customer-managed key](concept-customer-managed-keys.md), which aren't supported for soft delete.
+
+Optionally, you can permanently delete a workspace without it going to the soft-deleted state first by checking __Delete the workspace permanently__ in the Azure portal. Workspaces can be permanently deleted only one at a time, not by using a batch operation.
+
+Permanently deleting a workspace allows a workspace name to be reused immediately after deletion. This behavior may be useful in dev/test scenarios where you want to create and later delete a workspace. Permanently deleting a workspace may also be required for compliance if you manage highly sensitive data. See [General Data Protection Regulation (GDPR) implications](#general-data-protection-regulation-gdpr-implications) to learn more on how deletions are handled when soft delete is enabled.
++
+## Manage soft-deleted workspaces
+
+Soft-deleted workspaces can be managed under the Azure Machine Learning resource provider in the Azure portal. To list soft-deleted workspaces, use the following steps:
+
+1. From the [Azure portal](https://portal.azure.com), select __More services__. From the __AI + machine learning__ category, select __Azure Machine Learning__.
+1. From the top of the page, select __Recently deleted__ to view workspaces that were soft-deleted and are still within the retention period.
+
+ :::image type="content" source="./media/concept-soft-delete/soft-delete-manage-recently-deleted.png" alt-text="Screenshot highlighting the recently deleted link.":::
+
+1. From the recently deleted workspaces view, you can recover or permanently delete a workspace.
+
+ :::image type="content" source="./media/concept-soft-delete/soft-delete-manage-recently-deleted-panel.png" alt-text="Screenshot of the recently deleted workspaces view.":::
+
+## Recover a soft-deleted workspace
+
+When you select *Recover* on a soft-deleted workspace, it initiates an operation to restore the workspace state. The service attempts recreation or reattachment of a subset of resources, including Azure RBAC role assignments. You must recreate hard-deleted resources, including compute clusters, yourself.
+
+Azure Machine Learning recovers Azure RBAC role assignments for the workspace identity, but doesn't recover role assignments you may have added for users or user groups. It may take up to 15 minutes for role assignments to propagate after workspace recovery.
+
+Recovery of a workspace may not always be possible. Azure Machine Learning stores workspace metadata on [other Azure resources associated with the workspace](concept-workspace.md#associated-resources). If these dependent Azure resources were deleted, the workspace might not be recovered or correctly restored. Dependencies of the Azure Machine Learning workspace must be recovered first, before you recover a deleted workspace. Azure Container Registry isn't a hard requirement for recovery.
+
+Enable [data protection capabilities on Azure Storage](/azure/storage/blobs/soft-delete-blob-overview) to improve chances of successful recovery.
+
+## Permanently delete a soft-deleted workspace
+
+When you select *Permanently delete* on a soft-deleted workspace, it triggers hard deletion of workspace data. Once deleted, workspace data can no longer be recovered. Permanent deletion of workspace data is also triggered when the soft delete retention period expires.
+
+## Register soft-delete on an Azure subscription
+
+During the preview, workspace soft delete is enabled on an opt-in basis per Azure subscription. When soft delete is enabled for a subscription, it's enabled for all Azure Machine Learning workspaces in that subscription.
+
+To enable workspace soft delete on your Azure subscription, [register the preview feature](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal#register-preview-feature) in the Azure portal. Select `Workspace soft delete` under the `Microsoft.MachineLearningServices` resource provider. It may take 15 minutes for the UX to appear in the Azure portal after registering your subscription.
+
+Before disabling workspace soft delete on an Azure subscription, purge or recover soft-deleted workspaces. After you disable soft delete on a subscription, workspaces that remain in soft deleted state are automatically purged when the retention period elapses.
+
+## Billing implications
+
+In general, when a workspace is in soft-deleted state, there are only two operations possible: 'permanently delete' and 'recover'. All other operations will fail. Therefore, even though the workspace exists, no compute operations can be performed and hence no usage will occur. When a workspace is soft-deleted, any cost-incurring resources including compute clusters are hard deleted.
+
+## General Data Protection Regulation (GDPR) implications
+
+After soft-deletion, the service keeps necessary data and metadata during the recovery [retention period](#soft-delete-retention-period). From a GDPR and privacy perspective, a request to delete personal data should be interpreted as a request for *permanent* deletion of a workspace and not soft delete.
+
+When the retention period expires, or in case you permanently delete a workspace, data and metadata will be actively deleted. You could choose to permanently delete a workspace at the time of deletion.
+
+For more information, see the [Export or delete workspace data](how-to-export-delete-data.md) article.
+
+## Next steps
+++ [Create and manage a workspace](how-to-manage-workspace.md)++ [Export or delete workspace data](how-to-export-delete-data.md)
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
cat ~/.ssh/id_rsa.pub
> * Mac OS: `Cmd-c` to copy and `Cmd-v` to paste. > * FireFox/IE may not support clipboard permissions properly.
-2) Select and copy the key output in the clipboard.
+2) Select and copy the SSH key output to your clipboard.
+3) Next, follow the steps to add the SSH key to your preferred account type:
+ [GitHub](https://docs.github.com/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account)
-+ [GitLab](https://docs.gitlab.com/ee/ssh/#adding-an-ssh-key-to-your-gitlab-account)
++ [GitLab](https://docs.gitlab.com/ee/user/ssh.html#add-an-ssh-key-to-your-gitlab-account) + [Azure DevOps](/azure/devops/repos/git/use-ssh-keys-to-authenticate#step-2--add-the-public-key-to-azure-devops-servicestfs) Start at **Step 2**.
-+ [BitBucket](https://support.atlassian.com/bitbucket-cloud/docs/set-up-an-ssh-key/#SetupanSSHkey-ssh2). Start at **Step 4**.
++ [BitBucket](https://support.atlassian.com/bitbucket-cloud/docs/set-up-an-ssh-key/#SetupanSSHkey-ssh2). Follow **Step 4**. ### Clone the Git repository with SSH
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
It's a shared responsibility between you and Microsoft to ensure that your envir
### Compute instance
-Compute instances get the latest VM images at the time of provisioning. Microsoft releases new VM images on a monthly basis. Once a compute instance is deployed, it does not get actively updated. To keep current with the latest software updates and security patches, you could:
+Compute instances get the latest VM images at the time of provisioning. Microsoft releases new VM images on a monthly basis. Once a compute instance is deployed, it doesn't get actively updated. You can [query an instance's operating system version](how-to-create-manage-compute-instance.md#audit-and-observe-compute-instance-version-preview). To keep current with the latest software updates and security patches, you can:
1. Recreate a compute instance to get the latest OS image (recommended) * Data and customizations such as installed packages that are stored on the instance's OS and temporary disks will be lost. * [Store notebooks under "User files"](./concept-compute-instance.md#accessing-files) to persist them when recreating your instance.
- * [Mount data using datasets and datastores](./v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores) to persist files when recreating your instance.
+ * [Mount data](how-to-customize-compute-instance.md) to persist files when recreating your instance.
* See [Compute Instance release notes](azure-machine-learning-ci-image-release-notes.md) for details on image releases. 1. Alternatively, regularly update OS and python packages.
For code-based training experiences, you control which Azure Machine Learning en
* [Azure Machine Learning Base Images Repository](https://github.com/Azure/AzureML-Containers) * [Data Science Virtual Machine release notes](./data-science-virtual-machine/release-notes.md) * [AzureML Python SDK Release Notes](./azure-machine-learning-release-notes.md)
-* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
+* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
To get started with Azure Machine Learning, see:
+ [What is Azure Machine Learning?](overview-what-is-azure-machine-learning.md) + [Create and manage a workspace](how-to-manage-workspace.md)++ [Recover a workspace after deletion (soft-delete)](concept-soft-delete.md) + [Tutorial: Get started with Azure Machine Learning](quickstart-create-resources.md) + [Tutorial: Create your first classification model with automated machine learning](tutorial-first-experiment-automated-ml.md) + [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
For more information on configuring application rules, see [Deploy and configure Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md#configure-an-application-rule).
-1. To restrict outbound traffic for models deployed to Azure Kubernetes Service (AKS), see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Deploy ML models to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#connectivity) articles.
+1. To restrict outbound traffic for models deployed to Azure Kubernetes Service (AKS), see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Secure AKS inference environment](how-to-secure-kubernetes-inferencing-environment.md) articles.
## Kubernetes Compute
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Previously updated : 04/08/2022 Last updated : 11/16/2022
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
To create a compute instance, you'll need permissions for the following actions:
* *Microsoft.MachineLearningServices/workspaces/computes/write* * *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
+### Audit and observe compute instance version (preview)
+
+Once a compute instance is deployed, it does not get automatically updated. Microsoft [releases](azure-machine-learning-ci-image-release-notes.md) new VM images on a monthly basis. To understand options for keeping current with the latest version, see [vulnerability management](concept-vulnerability-management.md#compute-instance).
+
+To keep track of whether a compute instance's operating system version is current, you can query an instance's version by using the studio UI, the CLI, or the SDK.
+
+# [Python SDK](#tab/python)
++
+```python
+from azure.ai.ml.entities import ComputeInstance, AmlCompute
+
+# Display operating system version
+instance = ml_client.compute.get("myci")
+print(instance.os_image_metadata)
+```
+
+For more information on the classes, methods, and parameters used in this example, see the following reference documents:
+
+* [`AmlCompute` class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute)
+* [`ComputeInstance` class](/python/api/azure-ai-ml/azure.ai.ml.entities.computeinstance)
+
+# [Azure CLI](#tab/azure-cli)
++
+```azurecli
+az ml compute show --name "myci"
+```
+
+# [Studio](#tab/azure-studio)
+
+In your workspace in Azure Machine Learning studio, select **Compute**, and then select **Compute instance** at the top. Select a compute instance's name to see its properties, including the current operating system version. When a more recent OS version is available, use the creation wizard to create a new instance. Enable 'Audit and observe compute instance OS version' under the preview management panel to see these preview properties.
+++
+Administrators can use [Azure Policy](./../governance/policy/overview.md) definitions to audit instances that are running on outdated operating system versions across workspaces and subscriptions. The following is a sample policy:
+
+```json
+{
+ "mode": "All",
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.MachineLearningServices/workspaces/computes"
+ },
+ {
+ "field": "Microsoft.MachineLearningServices/workspaces/computes/computeType",
+ "equals": "ComputeInstance"
+ },
+ {
+ "field": "Microsoft.MachineLearningServices/workspaces/computes/osImageMetadata.isLatestOsImageVersion",
+ "equals": "false"
+ }
+ ]
+ },
+ "then": {
+ "effect": "Audit"
+ }
+ }
+}
+```
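+
+As a sketch, assuming the `policyRule` portion of the sample above is saved to a local file named `policy.json` (the definition and assignment names here are placeholders), the policy can be created and assigned with the Azure CLI:
+
+```azurecli
+# Create the policy definition from the saved rule
+az policy definition create --name "audit-ci-os-version" \
+    --display-name "Audit outdated compute instance OS versions" \
+    --rules policy.json --mode All
+
+# Assign the definition at subscription scope
+az policy assignment create --name "audit-ci-os-version" \
+    --policy "audit-ci-os-version" \
+    --scope "/subscriptions/<subscription-id>"
+```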
+ ## Next steps * [Access the compute instance terminal](how-to-access-terminal.md)
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
Previously updated : 07/28/2022 Last updated : 11/16/2022
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
When you create a workspace, Azure creates several resources within the resource
- An Applications Insights instance - A key vault
-These resources can be deleted by selecting them from the list and choosing **Delete**
+These resources can be deleted by selecting them from the list and choosing **Delete**:
+
+> [!IMPORTANT]
+> If the resource is configured for soft delete, the data won't be deleted unless you explicitly choose to permanently delete the resource. For more information, see the following articles:
+> * [Workspace soft-deletion](concept-soft-delete.md).
+> * [Soft delete for blobs](/azure/storage/blobs/soft-delete-blob-overview).
+> * [Soft delete in Azure Container Registry](/azure/container-registry/container-registry-soft-delete-policy).
+> * [Azure log analytics workspace](/azure/azure-monitor/logs/delete-workspace).
+> * [Azure Key Vault soft-delete](/azure/key-vault/general/soft-delete-overview).
:::image type="content" source="media/how-to-export-delete-data/delete-resource-group-resources.png" alt-text="Screenshot of portal, with delete icon highlighted.":::
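
For example, with Azure Key Vault, permanently removing a soft-deleted vault requires an explicit purge. A minimal Azure CLI sketch, assuming placeholder names and sufficient permissions:

```azurecli
# Delete the key vault; soft delete keeps it recoverable during the retention period
az keyvault delete --name myVault --resource-group myResourceGroup

# Permanently remove the soft-deleted vault (irreversible)
az keyvault purge --name myVault
```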
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
The following steps outline how to set up identity-based data access for trainin
By default, Azure Machine Learning can't communicate with a storage account that's behind a firewall or in a virtual network.
-You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires extra steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](v1/how-to-access-data.md#virtual-network).
+You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires extra steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to prevent data exfiltration](how-to-prevent-data-loss-exfiltration.md).
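
For illustration, a minimal Azure CLI sketch of restricting a storage account to a single subnet (the account, network, and resource group names are placeholders):

```azurecli
# Deny access by default, then allow only the training subnet
az storage account update --name mystorage --resource-group myResourceGroup --default-action Deny

# Enable the storage service endpoint on the subnet
az network vnet subnet update --name train --vnet-name myvnet \
    --resource-group myResourceGroup --service-endpoints Microsoft.Storage

# Add the subnet as an allowed network rule on the storage account
az storage account network-rule add --account-name mystorage \
    --resource-group myResourceGroup --vnet-name myvnet --subnet train
```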
If your storage account has virtual network settings, those settings dictate what identity type and permissions are needed for access. For example, for data preview and data profile, the virtual network settings determine what type of identity is used to authenticate data access.
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
az ml workspace create -g <resource-group-name> --file cmk.yml
> Authorize the __Machine Learning App__ (in Identity and Access Management) with contributor permissions on your subscription to manage the data encryption additional resources. > [!NOTE]
-> Azure Cosmos DB is __not__ used to store information such as model performance, information logged by experiments, or information logged from your model deployments. For more information on monitoring these items, see the [Monitoring and logging](v1/concept-azure-machine-learning-architecture.md) section of the architecture and concepts article.
+> Azure Cosmos DB is __not__ used to store information such as model performance, information logged by experiments, or information logged from your model deployments.
> [!IMPORTANT] > Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
machine-learning How To Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-move-workspace.md
Previously updated : 08/04/2022 Last updated : 11/16/2022 # Move Azure Machine Learning workspaces between subscriptions (preview)
machine-learning How To Private Endpoint Integration Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-private-endpoint-integration-synapse.md
Previously updated : 02/03/2022 Last updated : 11/16/2022
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
Previously updated : 07/28/2022 Last updated : 11/16/2022 ms.devlang: azurecli
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
For more information on customer-managed keys with Azure Cosmos DB, see [Configu
### Azure Container Instance > [!IMPORTANT]
-> Deploying to Azure Container Instances is not available in SDK or CLI v2. Only through SDK & CL v1.
+> Deploying to Azure Container Instances is not available in SDK or CLI v2. Only through SDK & CLI v1.
When __deploying__ a trained model to an Azure Container instance (ACI), you can encrypt the deployed resource using a customer-managed key. For information on generating a key, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#generate-a-new-key).
For more information on creating and using a deployment configuration, see the f
* [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-) reference * [Where and how to deploy](how-to-deploy-managed-online-endpoints.md)
-* [Deploy a model to Azure Container Instances](v1/how-to-deploy-azure-container-instance.md)
+* [Deploy a model to Azure Container Instances (SDK/CLI v1)](v1/how-to-deploy-azure-container-instance.md)
-For more information on using a customer-managed key with ACI, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md).
+ For more information on using a customer-managed key with ACI, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md).
### Azure Kubernetes Service
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
This example shows how to use event grid with an Azure Logic App to trigger retr
Before you begin, perform the following actions:
-* Set up a dataset monitor to [detect data drift](v1/how-to-monitor-datasets.md) in a workspace
+* Set up a dataset monitor to [detect data drift (SDK/CLI v1)](v1/how-to-monitor-datasets.md) in a workspace
* Create a published [Azure Data Factory pipeline](../data-factory/index.yml). In this example, a simple Data Factory pipeline is used to copy files into a blob store and run a published Machine Learning pipeline. For more information on this scenario, see how to set up a [Machine Learning step in Azure Data Factory](../data-factory/transform-data-machine-learning-service.md)
machine-learning Migrate To V2 Execution Hyperdrive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-hyperdrive.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
For more information, see:
-* [SDK v1 - Tune Hyperparameters](/azure/machine-learning/v1/how-to-tune-hyperparameters-v1)
+* [SDK v1 - Tune Hyperparameters](./v1/how-to-tune-hyperparameters-v1.md)
* [SDK v2 - Tune Hyperparameters](/python/api/azure-ai-ml/azure.ai.ml.sweep)
-* [SDK v2 - Sweep in Pipeline](how-to-use-sweep-in-pipeline.md)
+* [SDK v2 - Sweep in Pipeline](how-to-use-sweep-in-pipeline.md)
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning.md
Previously updated : 10/21/2021 Last updated : 11/16/2022 # Monitor Azure Machine Learning
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Machine Learning. These sections also provide examples for configuring data collection and analyzing this data with Azure tools. > [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor//usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
## Monitoring data from Azure Machine Learning
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
If you don't yet have a workspace, create one now:
Region | Select the Azure region closest to your users and the data resources to create your workspace. 1. Select **Create** to create the workspace
+> [!NOTE]
+> This creates a workspace along with all required resources. If you would like to reuse resources, such as a storage account, Azure Container Registry, Azure Key Vault, or Application Insights, use the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.MachineLearningServices) instead.
+ ## Create compute instance You could install Azure Machine Learning on your own computer. But in this quickstart, you'll create an online compute resource that has a development environment already installed and ready to go. You'll use this online machine, a *compute instance*, for your development environment to write and run code in Python scripts and Jupyter notebooks.
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
This article contains reference information that may be useful when [configuring
## Prerequisites for ARO or OCP clusters ### Disable Security Enhanced Linux (SELinux)
-[AzureML dataset](v1/how-to-train-with-datasets.md) (used in AzureML training jobs) isn't supported on machines with SELinux enabled. Therefore, you need to disable `selinux` on all workers in order to use AzureML dataset.
+[AzureML dataset](v1/how-to-train-with-datasets.md) (an SDK v1 feature used in AzureML training jobs) isn't supported on machines with SELinux enabled. Therefore, you need to disable `selinux` on all workers in order to use AzureML dataset.
### Privileged setup for ARO and OCP
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
| View, edit, or delete dataset drift monitors from the SDK | Public Preview | YES | YES | | View, edit, or delete dataset drift monitors from the UI | Public Preview | YES | YES | | **Machine learning lifecycle** | | | |
-| [Model profiling](v1/how-to-deploy-profile-model.md) | GA | YES | PARTIAL |
+| [Model profiling (SDK/CLI v1)](v1/how-to-deploy-profile-model.md) | GA | YES | PARTIAL |
| [The Azure ML CLI 1.0](v1/reference-azure-machine-learning-cli.md) | GA | YES | YES |
-| [FPGA-based Hardware Accelerated Models](./v1/how-to-deploy-fpga-web-service.md) | GA | NO | NO |
+| [FPGA-based Hardware Accelerated Models (SDK/CLI v1)](./v1/how-to-deploy-fpga-web-service.md) | GA | NO | NO |
| [Visual Studio Code integration](how-to-setup-vs-code.md) | Public Preview | NO | NO | | [Event Grid integration](how-to-use-event-grid.md) | Public Preview | NO | NO | | [Integrate Azure Stream Analytics with Azure Machine Learning](../stream-analytics/machine-learning-udf.md) | Public Preview | NO | NO |
The information in the rest of this document provides information on what featur
| Interpretability SDK | GA | YES | YES | | **Training** | | | | | [Experimentation log streaming](how-to-track-monitor-analyze-runs.md) | GA | YES | YES |
-| [Reinforcement Learning](./v1/how-to-use-reinforcement-learning.md) | Public Preview | NO | NO |
+| [Reinforcement Learning (SDK/CLI v1)](./v1/how-to-use-reinforcement-learning.md) | Public Preview | NO | NO |
| [Experimentation UI](how-to-track-monitor-analyze-runs.md) | Public Preview | YES | YES | | [.NET integration ML.NET 1.0](/dotnet/machine-learning/tutorials/object-detection-model-builder) | GA | YES | YES | | **Inference** | | | | | Managed online endpoints | GA | YES | YES | | [Batch inferencing](tutorial-pipeline-batch-scoring-classification.md) | GA | YES | YES |
-| [Azure Stack Edge with FPGA](./v1/how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO |
+| [Azure Stack Edge with FPGA (SDK/CLI v1)](./v1/how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO |
| **Other** | | | | | [Open Datasets](../open-datasets/samples.md) | Public Preview | YES | YES | | [Custom Cognitive Search](how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |
The information in the rest of this document provides information on what featur
* Model Profiling does not support 4 CPUs in the US-Arizona region. * Sample notebooks may not work in Azure Government if it needs access to public data. * IP addresses: The CLI command used in the [required public internet access](how-to-secure-training-vnet.md#required-public-internet-access) instructions does not return IP ranges. Use the [Azure IP ranges and service tags for Azure Government](https://www.microsoft.com/download/details.aspx?id=57063) instead.
-* For scheduled pipelines, we also provide a blob-based trigger mechanism. This mechanism is not supported for CMK workspaces. For enabling a blob-based trigger for CMK workspaces, you have to do extra setup. For more information, see [Trigger a run of a machine learning pipeline from a Logic App](v1/how-to-trigger-published-pipeline.md).
+* For scheduled pipelines, we also provide a blob-based trigger mechanism. This mechanism is not supported for CMK workspaces. For enabling a blob-based trigger for CMK workspaces, you have to do extra setup. For more information, see [Trigger a run of a machine learning pipeline from a Logic App (SDK/CLI v1)](v1/how-to-trigger-published-pipeline.md).
* Firewalls: When using an Azure Government region, add the following hosts to your firewall setting: * For Arizona use: `usgovarizona.api.ml.azure.us`
machine-learning Reference Yaml Core Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md
Previously updated : 03/31/2022 Last updated : 11/16/2022
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Default value | | | - | -- | - |
-| `period` | integer | How often (in seconds) to perform the probe. | `10` |
| `initial_delay` | integer | The number of seconds after the container has started before the probe is initiated. Minimum value is `1`. | `10` |
+| `period` | integer | How often (in seconds) to perform the probe. | `10` |
| `timeout` | integer | The number of seconds after which the probe times out. Minimum value is `1`. | `2` | | `success_threshold` | integer | The minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is `1`. | `1` | | `failure_threshold` | integer | When a probe fails, the system will try `failure_threshold` times before giving up. Giving up in the case of a liveness probe means the container will be restarted. In the case of a readiness probe the container will be marked Unready. Minimum value is `1`. | `30` |
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
When Azure Container Registry is behind the virtual network, Azure Machine Learn
## Use the workspace > [!IMPORTANT]
-> The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot deploy a model to Azure Container Instances inside the VNet. We do not recommend using Azure Container Instances with Azure Machine Learning in a virtual network. For more information, see [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md).
+> The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot deploy a model to Azure Container Instances inside the VNet. We do not recommend using Azure Container Instances with Azure Machine Learning in a virtual network. For more information, see [Secure the inference environment (SDK/CLI v1)](./v1/how-to-secure-inferencing-vnet.md).
> > As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints (preview)](how-to-secure-online-endpoint.md).
machine-learning Concept Network Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-network-data-access.md
Previously updated : 11/19/2021 Last updated : 11/16/2022
machine-learning How To Authenticate Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-authenticate-web-service.md
Previously updated : 08/15/2022 Last updated : 11/16/2022
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-consume-web-service.md
Previously updated : 08/15/2022 Last updated : 11/16/2022 ms.devlang: csharp, golang, java, python
machine-learning How To Debug Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-debug-parallel-run-step.md
-+ Previously updated : 10/21/2021 Last updated : 11/16/2022 #Customer intent: As a data scientist, I want to figure out why my ParallelRunStep doesn't run so that I can fix it.
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md
Previously updated : 07/28/2022 Last updated : 11/16/2022 adobe-target: true
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-kubernetes-service.md
--++ Previously updated : 08/15/2022 Last updated : 11/16/2022 # Deploy a model to an Azure Kubernetes Service cluster with v1
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-inferencing-gpus.md
Previously updated : 08/08/2022 Last updated : 11/16/2022
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-network-security-overview.md
Previously updated : 08/08/2022 Last updated : 11/16/2022
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
### Azure Container Registry
-* Your Azure Container Registry must be Premium version. For more information on upgrading, see [Changing SKUs](/azure/container-registry/container-registry-skus#changing-tiers).
+* Your Azure Container Registry must be the Premium tier. For more information on upgrading, see [Changing SKUs](../../container-registry/container-registry-skus.md#changing-tiers), or see the CLI sketch after this list.
* If your Azure Container Registry uses a __private endpoint__, it must be in the same _virtual network_ as the storage account and compute targets used for training or inference. If it uses a __service endpoint__, it must be in the same _virtual network_ and _subnet_ as the storage account and compute targets.
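
As referenced in the list above, a minimal sketch of upgrading a registry to the Premium tier with the Azure CLI (the registry name is a placeholder):

```azurecli
# Upgrade an existing registry to the Premium SKU
az acr update --name myregistry --sku Premium
```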
This article is part of a series on securing an Azure Machine Learning workflow.
* [Use a firewall](../how-to-access-azureml-behind-firewall.md) * [Tutorial: Create a secure workspace](../tutorial-create-secure-workspace.md) * [Tutorial: Create a secure workspace using a template](../tutorial-create-secure-workspace-template.md)
-* [API platform network isolation](../how-to-configure-network-isolation-with-v2.md)
+* [API platform network isolation](../how-to-configure-network-isolation-with-v2.md)
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-deployment.md
description: Learn how to work around, solve, and troubleshoot some common Docke
Previously updated : 08/15/2022 Last updated : 11/16/2022
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-managed-identities.md
Previously updated : 05/06/2021 Last updated : 11/16/2022
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-secrets-in-runs.md
Previously updated : 10/21/2021 Last updated : 11/16/2022
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-pipeline-yaml.md
-+ Last updated 07/31/2020
marketplace Marketplace Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-rewards.md
Last updated 05/28/2021
-# Marketplace Rewards
+# ISV Success program and Marketplace Rewards
+Microsoft continues its strong commitment to the growth and success of ISVs, supporting them throughout the entire journey of building, publishing, and selling apps through the Microsoft commercial marketplace. To further this mission, Marketplace Rewards is now included in the ISV Success program, available at no cost to all participants of the program. As you grow through the Microsoft commercial marketplace, you unlock new benefits designed to help you convert customers and close deals. For details on the program and benefits, see [Marketplace Rewards](https://aka.ms/marketplacerewards) (PPT). To see what other Microsoft partners are saying about their experiences with Marketplace Rewards, see [Marketplace Rewards testimonials](https://aka.ms/MarketplaceRewardsTestimonials).
+The benefits at each stage of growth help you progress to the next stage, helping you grow your business to Microsoft customers, with Microsoft's field, and through Microsoft's channel by using the commercial marketplace as your platform.
-Marketplace Rewards supports you at your specific stage of growth, starting with awareness activities to help you get your first customers. As you grow through the Microsoft commercial marketplace, you unlock new benefits designed to help you convert customers and close deals. For details on the program and benefits, see [Marketplace Rewards](https://aka.ms/marketplacerewards) (PPT).
+Your benefits are differentiated based on whether your offer is [List, Trial, Consulting or Transact](/azure/marketplace/determine-your-listing-type).
-The program creates a positive feedback loop: the benefits at each stage of growth help you progress to the next stage, helping you to grow your business to Microsoft customers, with Microsoft's field, and through Microsoft's channel by leveraging the commercial marketplace as your platform.
+Based on your eligibility, you'll be contacted by a member of the Rewards team when your offer goes live.
-Your benefits are differentiated based on whether your offer is [Contact Me, Free Trial, or Transact](determine-your-listing-type.md).
+List, Trial, and Consulting offers receive one-time use benefits. Transact offers are eligible for evergreen benefit engagement. For transacting partners, as you grow your billed sales through the commercial marketplace, you unlock greater benefits per billed sales (or seats sold) tier.
-You will be contacted by a member of the Rewards team when your offer goes live, based on your eligibility.
+The minimum requirement to publish in the online stores is an MPNID, so these benefits are available to all partners regardless of MPN competency status or partner type. Every partner is empowered to grow their business through the commercial marketplace as a platform.
-For Transact partners, as you grow your billed sales through the commercial marketplace platform, you unlock greater benefits per tier.
-
-The minimum requirement to publish in the online stores is an PartnerID, so these benefits are available to all partners regardless of competency status or partner type. Each partner is empowered to grow their business through the commercial marketplace as a platform.
-
-You will get support in understanding the resources available to you and in implementing the best practices, which you can also [review on your own](https://partner.microsoft.com/asset/collection/azure-marketplace-and-appsource-publisher-toolkit#/).
+You'll get support in understanding the resources available to you and in implementing the best practices, which you can also [review on your own](https://partner.microsoft.com/asset/collection/azure-marketplace-and-appsource-publisher-toolkit).
To check your eligibility for the Marketplace Rewards program, see the [Marketplace Rewards](https://partner.microsoft.com/dashboard/mpn/program/commercialmarketplace) page in Partner Center.
Your steps to get started are easy:
1. To activate sales and marketing benefit, you must first assign a company marketing contact. This contact will receive follow-up communications about your Marketplace Rewards. 1. To add or update your marketing contact information, go to the top of the Sales and Marketing benefits tab on Marketplace Rewards page, then select **Add, update, or change**. Next, do the following:
- 1. Select a user from the list. If the user you want to assign is not in the list, you can add new users in **Account settings**.
- 1. Provide an email address for the user that's different from the email address associated with your company's Partner Center account. We will email instructions for using your Marketplace Rewards benefit to your designated marketing contact's email address.
+ 1. Select a user from the list. If the user you want to assign isn't in the list, you can add new users in **Account settings**.
+
+ 1. Provide an email address for the user that's different from the email address associated with your company's Partner Center account. We'll email instructions for using your Marketplace Rewards benefit to your designated marketing contact's email address.
+
1. Provide the contact phone and preferred language for this marketing contact. After you finish entering this information, select **Assign user**. 1. After you've updated the marketing contact, select **Activate** for the benefit you want to start using. Once you activate a benefit, your marketing contact will be contacted by a member of the Rewards team within a week.
Your steps to get started are easy:
>If your offer has been live for more than four weeks and you have not received a message, check in Partner Center to find who in your organization owns the offer. They should have the communication and next steps. If you cannot determine the owner, or if the owner has left your company, open a [support ticket](https://go.microsoft.com/fwlink/?linkid=2165533). The scope of the activities available to you expands as you grow your offerings in the marketplace. All listings receive a base level of optimization recommendations and promotion as part of a self-serve email of resources and best practices.+++
marketplace Price Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/price-changes.md
The price change feature does not support the following scenarios:
- Price changes on hidden plans. - Price changes on plans available in Azure Government cloud. - Price increase and decrease on the same plan. To make both changes, first schedule the price decrease. Once it becomes effective, publish the price increase. See [Plan for a price change](#plan-a-price-change) below.-- Canceling and modifying a price change through Partner Center. To cancel a price update, contact [support](https://go.microsoft.com/fwlink/?linkid=2056405). - Changing prices from free or $0 to paid. - Changing prices via APIs.
To update the monthly or yearly price of a SaaS or Azure app offer:
1. Export the prices using **Export pricing data**. 2. Update the prices for each market in the downloaded spreadsheet and save it. 3. Import the spreadsheet using **Import pricing data**.
-7. To change prices across all markets, edit the desired **billing term price** box.
-
- > [!NOTE]
- > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
+7. To change prices across all markets, edit the desired **billing term price** box.
+
+   > [!NOTE]
+   > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
8. Select **Save draft**. 9. Confirm you understand the effects of changing the price by entering the **ID of the plan**. 10. Verify the current and new prices on the **Compare** page, which is accessible from the top of the pricing and availability page.
To update the per user monthly or yearly fee of a SaaS offer:
1. Export the prices using **Export pricing data**. 2. Update the prices for each market in the downloaded spreadsheet and save it. 3. Import the spreadsheet using **Import pricing data**.
-7. To change prices across all markets, edit the desired **billing term price** box.
+7. To change prices across all markets, edit the desired **billing term price** box.
   > [!NOTE]
   > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
+
8. Select **Save draft**. 9. Confirm you understand the effects of changing the price by entering the **ID of the plan**. 10. Verify the current and new prices on the **Compare** page, which is accessible from the top of the pricing and availability page.
To update the price per unit of a meter dimension of a SaaS or Azure app offer:
4. Import the spreadsheet using **Import pricing data**. 1. To change prices across all markets: 1. Locate the dimension to update.
- 1. Edit the **Price per unit in USD** box.
-
- > [!NOTE]
   1. Edit the **Price per unit in USD** box.

   > [!NOTE]
   > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
+
8. Select **Save draft**. 9. Confirm you understand the effects of changing the price by entering the **ID of the plan**. 10. Verify the current and new prices on the **Compare** page, which is accessible from the top of the pricing and availability page.
To update the price per core or per core size of a VM offer.
2. Update the market and core size prices in the downloaded spreadsheet and save it. 3. Import the spreadsheet using **Import pricing data**.
-7. To change prices across all markets:
+7. To change prices across all markets:
   > [!NOTE]
   > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
- 1. **Per core**: Edit the price per core in the **USD/hour** box.
+
+   1. **Per core**: Edit the price per core in the **USD/hour** box.
2. **Per core size**: Edit each core size in the **Price per hour in USD** box. 8. Select **Save draft**.
Customers are billed the new price for consumption of the resource that happens
## Canceling or modifying a price change
-To modify an already scheduled price change, request the cancellation by submitting a [support request](https://partner.microsoft.com/support/?stage=1) that includes the Plan ID, price, and the market (if the change was market-specific).
+If the price change was configured within the last two days, you can cancel it by using the cancel button next to the price change's expected-on date and then publishing the changes. For a price change configured more than two days ago that hasn't yet taken effect, [submit a support request](https://partner.microsoft.com/support/?stage=1) that includes the plan ID, the price, and the market (if the change was market-specific).
+If the price change was an increase and the cancellation occurred after the two-day period, we'll email customers a second time to inform them of the cancellation.
+After the price change is canceled, follow the steps in the appropriate part of this article to schedule a new price change with the needed modifications.
+## Next steps
+
+- Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
-If the price change was an increase, we will email customers a second time to inform them the price increase has been canceled.
-After the price change is canceled, follow the steps in the appropriate part of this document to schedule a new price change with the needed modifications.
-## Next steps
-- Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
migrate Resources Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/resources-faq.md
ms. Previously updated : 09/27/2022 Last updated : 11/16/2022 # Azure Migrate: Common questions
You can track your migration journey from within the Azure Migrate project, acro
Learn how to [delete a project](how-to-delete-project.md).
+## Can an Azure Migrate resource be moved?
+
+No, Azure Migrate doesn't support moving resources. If you need the resources in a different region, create a new Azure Migrate project in that region instead.
+ ## Next steps Read the [Azure Migrate overview](migrate-services-overview.md).
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
az identity create \
``` > [!IMPORTANT]
-> After creating the user-assigned identity, ask your *Global Administrator* or *Privileged Role Administrator* to grant the following permissions for this identity: `User.Read.All`, `GroupMember.Read.All`, and `Application.Read.ALL`. For more information, see the [Permissions](/azure/mysql/flexible-server/concepts-azure-ad-authentication#permissions) section of [Active Directory authentication](/azure/mysql/flexible-server/concepts-azure-ad-authentication).
+> After creating the user-assigned identity, ask your *Global Administrator* or *Privileged Role Administrator* to grant the following permissions for this identity: `User.Read.All`, `GroupMember.Read.All`, and `Application.Read.All`. For more information, see the [Permissions](./concepts-azure-ad-authentication.md#permissions) section of [Active Directory authentication](./concepts-azure-ad-authentication.md).
Run the following command to assign the identity to MySQL server for creating Azure AD admin:
az mysql flexible-server db create \
Next, create a non-admin user and grant all permissions on the `demo` database to it. > [!NOTE]
-> You can read more detailed information about creating MySQL users in [Create users in Azure Database for MySQL](/azure/mysql/single-server/how-to-create-users).
+> You can read more detailed information about creating MySQL users in [Create users in Azure Database for MySQL](../single-server/how-to-create-users.md).
#### [Passwordless connection (Recommended)](#tab/passwordless)
az group delete \
## Next steps > [!div class="nextstepaction"]
-> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](../concepts-migrate-dump-restore.md)
+> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](../concepts-migrate-dump-restore.md)
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-cli.md
You can verify the above attributes of the key by using the following command:
az keyvault key show --vault-name \<key\_vault\_name\> -n \<key\_name\> ```
-> [!Note]
-> In the Public Preview, we can't enable geo redundancy on a flexible server that has CMK enabled, nor can we enable geo redundancy on a flexible server that has CMK enabled.
- ## Update an existing MySQL flexible server with data encryption Set or change key and identity for data encryption:
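
A hedged sketch of what this update can look like with the Azure CLI (the server, key, and identity values are placeholders; verify the parameter names against the current `az mysql flexible-server update` reference):

```azurecli
# Point the server at a new key vault key and the user-assigned identity that can access it
az mysql flexible-server update --resource-group myResourceGroup --name myserver \
    --key <key-vault-key-identifier> --identity <user-assigned-identity-resource-id>
```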
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md
In this tutorial, you learn how to:
4. Select **Create**.
-> [!Note]
-> In the Public Preview, we can't enable geo redundancy on a flexible server that has CMK enabled, nor can we enable geo redundancy on a flexible server that has CMK enabled.
- ## Configure customer managed key To set up the customer managed key, perform the following steps.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
## November 2022
+- **Azure Active Directory authentication for Azure Database for MySQL - Flexible Server (General Availability)**
+
+  You can now authenticate to Azure Database for MySQL - Flexible Server by using Microsoft Azure Active Directory (Azure AD) identities. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. [Learn More](concepts-azure-ad-authentication.md)
+
+- **Customer managed keys data encryption - Azure Database for MySQL - Flexible Server (General Availability)**
+
+  With data encryption with customer-managed keys (CMKs) for Azure Database for MySQL - Flexible Server, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. Data encryption with CMKs is set at the server level. For a given server, a CMK, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. With customer-managed keys (CMKs), the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys. [Learn More](concepts-customer-managed-key.md)
+ - **General availability in Azure US Government regions** The Azure Database for MySQL - Flexible Server is now available in the following Azure regions: - USGov Virginia - USGov Arizona - USGov Texas - ## October 2022 - **AMD compute SKUs for General Purpose and Business Critical tiers in Azure Database for MySQL - Flexible Server**
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
Last updated 09/29/2022
[!INCLUDE [applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
-Hello! We have news to share - **Azure Database for MySQL - Single Server is on the retirement path**.
+Hello! We have news to share - **Azure Database for MySQL - Single Server is on the retirement path** and is scheduled for retirement by **September 16, 2024**.
After years of evolving the Azure Database for MySQL - Single Server service, it can no longer handle all the new features, functions, and security needs. We recommend upgrading to Azure Database for MySQL - Flexible Server. Azure Database for MySQL - Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. For more information about Flexible Server, visit **[Azure Database for MySQL - Flexible Server](../flexible-server/overview.md)**.
-If you currently have an Azure Database for MySQL - Single Server service hosting production servers, we're glad to let you know that you can migrate your Azure Database for MySQL - Single Server servers to the Azure Database for MySQL - Flexible Server service.
-
-However, we know change can be disruptive to any environment, so we want to help you with this transition. Review the different ways using the Azure Data Migration Service to [migrate from Azure Database for MySQL - Single Server to MySQL - Flexible Server.](#migrate-from-single-server-to-flexible-server)
+If you currently have an Azure Database for MySQL - Single Server service hosting production servers, we're glad to let you know that you can migrate your Azure Database for MySQL - Single Server servers to the Azure Database for MySQL - Flexible Server service at no cost by using Azure Database Migration Service. Review the different ways to migrate using Azure Database Migration Service in the section below.
## Migrate from Single Server to Flexible Server
Learn how to migrate from Azure Database for MySQL - Single Server to Azure Data
| Offline | Database Migration Service (DMS) and the Azure portal | [Tutorial: DMS with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) | | Online | Database Migration Service (DMS) and the Azure portal | [Tutorial: DMS with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) |
-For more information on migrating from Single Server to Flexible Server, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md).
+For more information on migrating from Single Server to Flexible Server using other migration tools, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md).
## Migration Eligibility
A. We aren't stopping new single server creations immediately, so you can provis
**Q. Are there additional costs associated with performing the migration?**
-A. When running the migration, you pay for the target flexible server and the source single server. The configuration and compute of the target flexible server determines the additional costs incurred. For more information, see, [Pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). Once you've decommissioned the source single server post successful migration, you only pay for your running flexible server. There are no more costs on running the migration through the migration tooling.
+A. When running the migration, you pay for the target flexible server and the source single server. The configuration and compute of the target flexible server determines the additional costs incurred. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). Once you've decommissioned the source single server after a successful migration, you only pay for your running flexible server. There are no costs incurred while running the migration through the Azure Database Migration Service migration tooling.
**Q. Will my billing be affected by running Flexible Server as compared to Single Server?**
network-watcher Network Insights Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-topology.md
Follow these steps to find the next hop.
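
Network Watcher also exposes a next-hop check from the Azure CLI; a minimal sketch with placeholder names and addresses:

```azurecli
# Determine the next hop for traffic from a VM to a destination IP
az network watcher show-next-hop --resource-group myResourceGroup \
    --vm myVm --source-ip 10.0.0.4 --dest-ip 13.107.21.200
```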
## Next steps
-[Learn more](/azure/network-watcher/connection-monitor-overview) about connectivity related metrics.
+[Learn more](./connection-monitor-overview.md) about connectivity related metrics.
orbital Organize Stac Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/organize-stac-data.md
To catalog more data sources or to catalog your own data source, consider the fo
Security provides assurances against deliberate attacks and the abuse of your valuable data and systems. For more information, see [Overview of the security pillar](/azure/architecture/framework/security/overview). -- Azure Kubernetes Service [Container Security](/azure/aks/concepts-security) implementation ensures the processors are built and run as containers are secure.-- API Management Service [Security baseline](/azure/aks/concepts-security) provides recommendations on how to secure your cloud solutions on Azure.-- [Azure Database for PostgreSQL Security](/azure/postgresql/flexible-server/concepts-security) covers in-depth the security at multiple layers when data is stored in PostgreSQL Flexible Server including data at rest and data in transit scenarios.
+- Azure Kubernetes Service [Container Security](../aks/concepts-security.md) implementation helps ensure that processors built and run as containers are secure.
+- API Management Service [Security baseline](../aks/concepts-security.md) provides recommendations on how to secure your cloud solutions on Azure.
+- [Azure Database for PostgreSQL Security](../postgresql/flexible-server/concepts-security.md) covers in-depth the security at multiple layers when data is stored in PostgreSQL Flexible Server including data at rest and data in transit scenarios.
### Cost optimization
If you want to start building this, we have put together a [sample solution](htt
|STAC Item|The core atomic unit, representing a single spatiotemporal asset as a GeoJSON feature plus metadata like datetime and reference links.| |STAC Catalog|A simple, flexible JSON that provides a structure and organized the metadata like STAC items, collections and other catalogs.| |STAC Collection|Provides additional information such as the extents, license, keywords, providers, and so forth, that describe STAC Items within the Collection.|
-|STAC API|Provides a RESTful endpoint that enables search of STAC Items, specified in OpenAPI.|
+|STAC API|Provides a RESTful endpoint that enables search of STAC Items, specified in OpenAPI.|
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
Use the Azure portal to find Dynatrace for Azure application.
| **Property** | **Description** | |--|-| | Subscription | Select the Azure subscription you want to use for creating the Dynatrace resource. You must have owner or contributor access.|
- | Resource group | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution. |
+ | Resource group | Specify whether you want to create a new resource group or use an existing one. A [resource group](../../azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution. |
| Resource name | Specify a name for the Dynatrace resource. This name will be the friendly name of the new Dynatrace environment.| | Location | Select the region. Select the region where the Dynatrace resource in Azure and the Dynatrace environment is created.| | Pricing plan | Select from the list of available plans. |
Use the Azure portal to find Dynatrace for Azure application.
:::image type="content" source="media/dynatrace-create/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs.":::
- - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
+ - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
- - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
1. To send subscription level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Dynatrace.
-1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories).
+1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md).
When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources. To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags:
Use the Azure portal to find Dynatrace for Azure application.
## Next steps -- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
+- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
You can filter the list of resources by resource type, resource group name, regi
The column **Logs to Dynatrace** indicates whether the resource is sending logs to Dynatrace. If the resource isn't sending logs, this field indicates why logs aren't being sent. The reasons could be: -- _Resource doesn't support sending logs_ - Only resource types with monitoring log categories can be configured to send logs. See [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+- _Resource doesn't support sending logs_ - Only resource types with monitoring log categories can be configured to send logs. See [supported categories](../../azure-monitor/essentials/resource-logs-categories.md).
- _Limit of five diagnostic settings reached_ - Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/cli/azure/monitor/diagnostic-settings). - _Error_ - The resource is configured to send logs to Dynatrace, but is blocked by an error. - _Logs not configured_ - Only Azure resources that have the appropriate resource tags are configured to send logs to Dynatrace.
If more than one Dynatrace resource is mapped to the Dynatrace environment using
## Next steps
-For help with troubleshooting, see [Troubleshooting Dynatrace integration with Azure](dynatrace-troubleshoot.md).
+For help with troubleshooting, see [Troubleshooting Dynatrace integration with Azure](dynatrace-troubleshoot.md).
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
Select **Next: Metrics and logs** to configure metrics and logs.
:::image type="content" source="media/dynatrace-link-to-existing/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs.":::
- - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There\'s a single activity log for each Azure subscription.
+ - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
- - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md).
When you've finished adding tags, select **Next: Review+Create.**
## Next steps -- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
+- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
payment-hsm Create Different Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-ip-addresses.md
Last updated 09/12/2022
# Create a payment HSM with host and management port with IP addresses in different virtual networks using ARM template
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure payment HSM. Azure Payment HSM is a "BareMetal" service delivered using [Thales payShield 10K payment hardware security modules (HSM)](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help a service provider and an individual financial institution accelerate their payment system's digital transformation strategy and adopt the public cloud. For more information, see [Azure Payment HSM: Overview](/azure/payment-hsm/overview).
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure payment HSM. Azure Payment HSM is a "BareMetal" service delivered using [Thales payShield 10K payment hardware security modules (HSM)](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help a service provider and an individual financial institution accelerate their payment system's digital transformation strategy and adopt the public cloud. For more information, see [Azure Payment HSM: Overview](./overview.md).
This article describes how to create a payment HSM with the host and management port in the same virtual network. You can instead:
payment-hsm Create Different Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-vnet.md
Last updated 09/12/2022
# Create a payment HSM with host and management port in different virtual networks using ARM template
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure payment HSM. Azure Payment HSM is a "BareMetal" service delivered using [Thales payShield 10K payment hardware security modules (HSM)](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help a service provider and an individual financial institution accelerate their payment system's digital transformation strategy and adopt the public cloud. For more information, see [Azure Payment HSM: Overview](/azure/payment-hsm/overview).
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure payment HSM. Azure Payment HSM is a "BareMetal" service delivered using [Thales payShield 10K payment hardware security modules (HSM)](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help a service provider and an individual financial institution accelerate their payment system's digital transformation strategy and adopt the public cloud. For more information, see [Azure Payment HSM: Overview](./overview.md).
This article describes how to create a payment HSM with the host and management port in the same virtual network. You can instead: - [Create a payment HSM with the host and management port in the same virtual network using an ARM template](quickstart-template.md)
payment-hsm Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-template.md
# Quickstart: Create an Azure payment HSM using an ARM template
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure payment HSM. Azure Payment HSM is a "BareMetal" service delivered using [Thales payShield 10K payment hardware security modules (HSM)](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help a service provider and an individual financial institution accelerate their payment system's digital transformation strategy and adopt the public cloud. For more information, see [Azure Payment HSM: Overview](/azure/payment-hsm/overview).
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure payment HSM. Azure Payment HSM is a "BareMetal" service delivered using [Thales payShield 10K payment hardware security modules (HSM)](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help a service provider and an individual financial institution accelerate their payment system's digital transformation strategy and adopt the public cloud. For more information, see [Azure Payment HSM: Overview](./overview.md).
This article describes how to create a payment HSM with the host and management port in the same virtual network. You can instead: - [Create a payment HSM with host and management port in different virtual networks using an ARM template](create-different-vnet.md)
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
-# Azure Database for PostgreSQL - Flexible Server Data Encryption with a Customer-managed Key Preview
+# Azure Database for PostgreSQL - Flexible Server Data Encryption with a Customer-managed Key
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
+> [!NOTE]
+> Azure Database for PostgreSQL - Flexible Server Data Encryption with a Customer-managed Key is currently in preview.
+ Azure PostgreSQL uses [Azure Storage encryption](../../storage/common/storage-service-encryption.md) to encrypt data at rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it's similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control of access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible server - Preview enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys. Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible server - Preview is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the service's data encryption key (DEK). The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) instance. The Key Encryption Key (KEK) and Data Encryption Key (DEK) are described in more detail later in this article.
The key vault administrator can also [enable logging of Key Vault audit events](
When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
-## Requirements for configuring data encryption in preview for Azure Database for PostgreSQL Flexible server
+## Requirements for configuring data encryption for Azure Database for PostgreSQL Flexible server
The following are requirements for configuring Key Vault:
Prerequisites:
- Azure Active Directory (Azure AD) user-assigned managed identity in the region where the Postgres Flexible Server will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create the identity. -- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key. Follow [requirements section above](#requirements-for-configuring-data-encryption-in-preview-for-azure-database-for-postgresql-flexible-server) for required Azure Key Vault settings
+- Key Vault with a key in the region where the Postgres Flexible Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create a Key Vault and generate a key. Follow the [requirements section above](#requirements-for-configuring-data-encryption-for-azure-database-for-postgresql-flexible-server) for the required Azure Key Vault settings
Follow the steps below to enable CMK while creating Postgres Flexible Server.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
To delete old data on Saturday at 3:30am (GMT)
``` SELECT cron.schedule('30 3 * * 6', $$DELETE FROM events WHERE event_time < now() - interval '1 week'$$); ```
-To run vacuum every day at 10:00am (GMT)
+To run vacuum every day at 10:00am (GMT) in the default database 'postgres'
``` SELECT cron.schedule('0 10 * * *', 'VACUUM'); ```
To unschedule all tasks from pg_cron
``` SELECT cron.unschedule(jobid) FROM cron.job; ```
+To see all jobs currently scheduled with pg_cron
+```
+SELECT * FROM cron.job;
+```
+To run vacuum every day at 10:00 am (GMT) in the database 'testcron' under the azure_pg_admin role account
+```
+SELECT cron.schedule_in_database('VACUUM','0 10 * * *','VACUUM','testcron',null,TRUE);
+```
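+If you later need to remove that cross-database job, you can unschedule it by name and review its run history; a small sketch (the job name 'VACUUM' is the first argument passed to cron.schedule_in_database above, and the cron.job_run_details table assumes a recent pg_cron version):
+```
+-- Unschedule the job by the name it was created with
+SELECT cron.unschedule('VACUUM');
+
+-- Review recent executions and their status
+SELECT jobid, status, return_message, start_time
+FROM cron.job_run_details
+ORDER BY start_time DESC
+LIMIT 10;
+```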
+ > [!NOTE]
-> pg_cron extension is preloaded in every Azure Database for PostgreSQL -Flexible Server inside postgres database to provide you with ability to schedule jobs to run in other databases within your PostgreSQL DB instance without compromising security.
+> The pg_cron extension is preloaded in shared_preload_libraries for every Azure Database for PostgreSQL - Flexible Server inside the postgres database, to provide you with the ability to schedule jobs to run in other databases within your PostgreSQL DB instance without compromising security. However, for security reasons, you still have to [allow list](#how-to-use-postgresql-extensions) the pg_cron extension and install it using the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command.
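+A minimal sketch of that installation step, assuming pg_cron has already been added to the `azure.extensions` allow-list server parameter (run in the postgres database):
+```
+CREATE EXTENSION IF NOT EXISTS pg_cron;
+```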
## pg_stat_statements
-The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL flexible server to provide you a means of tracking execution statistics of SQL statements.
+The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) gives you a view of all the queries that have run on your database. That's useful for understanding what your query workload performance looks like on a production system.
+
+The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded in shared_preload_libraries on every Azure Database for PostgreSQL flexible server to provide you with a means of tracking execution statistics of SQL statements.
+However, for security reasons, you still have to [allow list](#how-to-use-postgresql-extensions) the [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) and install it using the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command.
The setting `pg_stat_statements.track`, which controls what statements are counted by the extension, defaults to `top`, meaning all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter. There is a tradeoff between the query execution information pg_stat_statements provides and the impact on server performance as it logs each SQL statement. If you are not actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Note that some third-party monitoring services may rely on pg_stat_statements to deliver query performance insights, so confirm whether this is the case for you or not.
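Once the extension is installed, a quick sketch of inspecting what it collected (column names assume PostgreSQL 13 or later, where `total_exec_time` replaced `total_time`):
```
-- Top five statements by cumulative execution time
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;

-- Clear the collected statistics to start a fresh measurement window
SELECT pg_stat_statements_reset();
```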
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
The feature is meant for scenarios where the lag is acceptable and meant for off
You can create a read replica in a different region from your primary server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
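Because replication is asynchronous, it's worth checking how far a replica trails the primary; a minimal sketch using built-in PostgreSQL functions:
```
-- On the primary: replication state for each connected replica
SELECT application_name, state, replay_lsn FROM pg_stat_replication;

-- On the replica: approximate lag behind the primary
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```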
-You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can have replicas also in any global region of Azure that supports Azure Database for PostgreSQL. Currently [special Azure regions](/azure/virtual-machines/regions#special-azure-regions) are not supported.
+You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can have replicas also in any global region of Azure that supports Azure Database for PostgreSQL. Currently [special Azure regions](../../virtual-machines/regions.md#special-azure-regions) are not supported.
[//]: # (### Paired regions)
Scaling vCores or between General Purpose and Memory Optimized:
* Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
-[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
+[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
If you want to provision SIMs as part of deploying your private mobile network:
1. Choose one of the following encryption types for the new SIM group to which all of the SIMs you provision will be added. Note that once the SIM group is created, the encryption type cannot be changed:
- - Microsoft-managed keys (MMK) that Microsoft manages internally for [Encryption at rest](/azure/security/fundamentals/encryption-atrest).
+ - Microsoft-managed keys (MMK) that Microsoft manages internally for [Encryption at rest](../security/fundamentals/encryption-atrest.md).
- Customer-managed keys (CMK) that you must manually configure.
- You must create a Key URI in your [Azure Key Vault](/azure/key-vault/) and a [User-assigned identity](/azure/active-directory/managed-identities-azure-resources/overview) with read, wrap, and unwrap access to the key.
- - The key must be configured to have an activation and expiration date and we recommend that you [configure cryptographic key auto-rotation in Azure Key Vault](/azure/key-vault/keys/how-to-configure-key-rotation).
+ You must create a Key URI in your [Azure Key Vault](../key-vault/index.yml) and a [User-assigned identity](../active-directory/managed-identities-azure-resources/overview.md) with read, wrap, and unwrap access to the key.
+ - The key must be configured to have an activation and expiration date and we recommend that you [configure cryptographic key auto-rotation in Azure Key Vault](../key-vault/keys/how-to-configure-key-rotation.md).
- The SIM group accesses the key via the user-assigned identity. - For additional information on configuring CMK for a SIM group, see [Configure customer-managed keys](/azure/cosmos-db/how-to-setup-cmk).
For detailed information on services and SIM policies, see [Policy control](poli
You can now use the information you've collected to deploy your private mobile network. - [Deploy a private mobile network - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md)-- [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md)
+- [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md)
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
Being able to build enterprise networks using automation and other programmatic
You should adopt a programmatic, *infrastructure as code* approach to your deployments. You can use templates or the Azure REST API to build your deployment using parameters as inputs with values that you have collected during the design phase of the project. You should save provisioning information such as SIM data, switch/router configuration, and network policies in machine-readable format so that, in the event of a failure, you can reapply the configuration in the same way as you originally did. Another best practice to recover from failure is to deploy a spare Azure Stack Edge server to minimize recovery time if the first unit fails; you can then use your saved templates and inputs to quickly recreate the deployment. For more information on deploying a network using templates, refer to [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md).
-You must also consider how you'll integrate other Azure products and services with the private enterprise network. These products include [Azure Active Directory](/azure/active-directory/fundamentals/active-directory-whatis) and [role-based access control (RBAC)](/azure/role-based-access-control/overview), where you must consider how tenants, subscriptions and resource permissions will align with the business model that exists between you and the enterprise, as well as your own approach to customer system management. For example, you might use [Azure Blueprints](/azure/governance/blueprints/overview) to set up the subscriptions and resource group model that works best for your organization.
+You must also consider how you'll integrate other Azure products and services with the private enterprise network. These products include [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) and [role-based access control (RBAC)](../role-based-access-control/overview.md), where you must consider how tenants, subscriptions and resource permissions will align with the business model that exists between you and the enterprise, as well as your own approach to customer system management. For example, you might use [Azure Blueprints](../governance/blueprints/overview.md) to set up the subscriptions and resource group model that works best for your organization.
## Next steps - [Learn more about the key components of a private mobile network](key-components-of-a-private-mobile-network.md)-- [Learn more about the prerequisites for deploying a private mobile network](complete-private-mobile-network-prerequisites.md)
+- [Learn more about the prerequisites for deploying a private mobile network](complete-private-mobile-network-prerequisites.md)
private-5g-core Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/security.md
Azure Private 5G Core packet core instances are deployed on Azure Stack Edge dev
In addition to the default [Encryption at rest](#encryption-at-rest) using Microsoft-Managed Keys (MMK), you can optionally use Customer Managed Keys (CMK) when [creating a SIM group](manage-sim-groups.md#create-a-sim-group) or [when deploying a private mobile network](how-to-guide-deploy-a-private-mobile-network-azure-portal.md#deploy-your-private-mobile-network) to encrypt data with your own key.
-If you elect to use a CMK, you will need to create a Key URI in your [Azure Key Vault](/azure/key-vault/) and a [User-assigned identity](/azure/active-directory/managed-identities-azure-resources/overview) with read, wrap, and unwrap access to the key.
+If you elect to use a CMK, you will need to create a Key URI in your [Azure Key Vault](../key-vault/index.yml) and a [User-assigned identity](../active-directory/managed-identities-azure-resources/overview.md) with read, wrap, and unwrap access to the key.
-- The key must be configured to have an activation and expiration date and we recommend that you [configure cryptographic key auto-rotation in Azure Key Vault](/azure/key-vault/keys/how-to-configure-key-rotation).
+- The key must be configured to have an activation and expiration date and we recommend that you [configure cryptographic key auto-rotation in Azure Key Vault](../key-vault/keys/how-to-configure-key-rotation.md).
- The SIM group accesses the key via the user-assigned identity. - For additional information on configuring CMK for a SIM group, see [Configure customer-managed keys](/azure/cosmos-db/how-to-setup-cmk).
As these credentials are highly sensitive, Azure Private 5G Core won't allow use
## Next steps -- [Deploy a private mobile network - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md)
+- [Deploy a private mobile network - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md)
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|Azure Machine Learning | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Machine Learning.](../machine-learning/how-to-configure-private-link.md) | |Azure Bot Service | All public regions | Supported only on Direct Line App Service extension | GA </br> [Learn how to create a private endpoint for Azure Bot Service](/azure/bot-service/dl-network-isolation-concept) | | Azure Cognitive Services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](../cognitive-services/cognitive-services-virtual-networks.md#use-private-endpoints) |
-| Azure Cognitive Search | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Cognitive Search](/azure/search/service-create-private-endpoint) |
+| Azure Cognitive Search | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Cognitive Search](../search/service-create-private-endpoint.md) |
### Analytics
The following tables list the Private Link services and the regions where they'r
|Azure Data Factory | All public regions<br/> All Government regions<br/>All China regions | Credentials need to be stored in an Azure key vault| GA <br/> [Learn how to create a private endpoint for Azure Data Factory.](../data-factory/data-factory-private-link.md) | |Azure HDInsight | All public regions<br/>All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure HDInsight.](../hdinsight/hdinsight-private-link.md) | | Azure Data Explorer | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Data Explorer.](/azure/data-explorer/security-network-private-endpoint) |
-| Azure Stream Analytics | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Stream Analytics.](/azure/stream-analytics/private-endpoints) |
+| Azure Stream Analytics | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Stream Analytics.](../stream-analytics/private-endpoints.md) |
### Compute
The following tables list the Private Link services and the regions where they'r
|Azure-managed Disks | All public regions<br/> All Government regions<br/>All China regions | [Select for known limitations](../virtual-machines/disks-enable-private-links-for-import-export-portal.md#limitations) | GA <br/> [Learn how to create a private endpoint for Azure Managed Disks.](../virtual-machines/disks-enable-private-links-for-import-export-portal.md) | | Azure Batch (batchAccount) | All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) | | Azure Batch (nodeManagement) | [Selected regions](../batch/simplified-compute-node-communication.md#supported-regions) | Supported for [simplified compute node communication](../batch/simplified-compute-node-communication.md) | Preview <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
-| Azure Functions | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Functions.](/azure/azure-functions/functions-create-vnet) |
+| Azure Functions | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Functions.](../azure-functions/functions-create-vnet.md) |
### Containers
The following tables list the Private Link services and the regions where they'r
|Azure Event Grid| All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Event Grid.](../event-grid/network-security.md) | |Azure Service Bus | All public regions<br/>All Government regions | Supported with premium tier of Azure Service Bus. [Select for tiers](../service-bus-messaging/service-bus-premium-messaging.md) | GA <br/> [Learn how to create a private endpoint for Azure Service Bus.](../service-bus-messaging/private-link-service.md) | | Azure API Management | All public regions<br/> All Government regions | | Preview <br/> [Connect privately to API Management using a private endpoint.](../api-management/private-endpoint.md) |
-| Azure Logic Apps | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Logic Apps.](/azure/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint) |
+| Azure Logic Apps | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Logic Apps.](../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md) |
### Internet of Things (IoT)
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--| | Azure SignalR | All Public Regions<br/> All China regions<br/> All Government Regions | Supported on Standard Tier or above | GA <br/> [Learn how to create a private endpoint for Azure SignalR.](../azure-signalr/howto-private-endpoints.md) |
-|Azure App Service | All public regions<br/> China North 2 & East 2 | Supported with PremiumV2, PremiumV3, or Function Premium plan | GA <br/> [Learn how to create a private endpoint for Azure App Service.](/azure/app-service/networking/private-endpoint) |
+|Azure App Service | All public regions<br/> China North 2 & East 2 | Supported with PremiumV2, PremiumV3, or Function Premium plan | GA <br/> [Learn how to create a private endpoint for Azure App Service.](../app-service/networking/private-endpoint.md) |
|Azure Search | All public regions <br/> All Government regions | Supported with service in Private Mode | GA <br/> [Learn how to create a private endpoint for Azure Search.](../search/service-create-private-endpoint.md) | |Azure Relay | All public regions | | Preview <br/> [Learn how to create a private endpoint for Azure Relay.](../azure-relay/private-link-service.md) | |Azure Static Web Apps | All public regions | | Preview <br/> [Configure private endpoint in Azure Static Web Apps](../static-web-apps/private-endpoint.md) |
The following tables list the Private Link services and the regions where they'r
Learn more about Azure Private Link service: - [What is Azure Private Link?](private-link-overview.md)-- [Create a Private Endpoint using the Azure portal](create-private-endpoint-portal.md)
+- [Create a Private Endpoint using the Azure portal](create-private-endpoint-portal.md)
purview Concept Policies Data Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-data-owner.md
Last updated 03/20/2022
-# Concepts for Microsoft Purview data owner policies
+# Concepts for Microsoft Purview data owner policies (preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
purview Concept Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-devops.md
Previously updated : 10/07/2022 Last updated : 11/16/2022 # Concepts for Microsoft Purview DevOps policies - This article discusses concepts related to managing access to data sources in your data estate from within the Microsoft Purview governance portal. In particular, it focuses on DevOps policies. > [!Note]
purview Concept Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-self-service-data-access-policy.md
Previously updated : 03/10/2022 Last updated : 11/11/2022 # Microsoft Purview Self-service data discovery and access (Preview)
A **workflow admin** will need to map a self-service data access workflow to a c
* **Self-service data access workflow** is the workflow that is initiated when a data consumer requests access to data.
-* **Approver** is either security group or Azure Active Directory (Azure AD) users that can approve self-service access requests.
+* **Approver** is a security group, Azure Active Directory (Azure AD) user, or Azure AD group that can approve self-service access requests.
## How to use Microsoft Purview self-service data access policy
Microsoft Purview allows organizations to catalog metadata about all registered
With self-service data access workflow, data consumers can not only find data assets but also request access to the data assets. When the data consumer requests access to a data asset, the associated self-service data access workflow is triggered.
-A default self-service data access workflow template is provided with every Microsoft Purview account. The default template can be amended to add more approvers and/or set the approver's email address. For more details refer [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).
+A default self-service data access workflow template is provided with every Microsoft Purview account. The default template can be amended to add more approvers and/or set the approver's email address. For more details, refer to [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).
-Whenever a data consumer requests access to a dataset, the notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from Microsoft Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. Self-service data access policy gets auto-generated only if the data source is registered for **Data Use Management**. The pre-requisites mentioned within the [Data Use Management](./how-to-enable-data-use-management.md#prerequisites) have to be satisfied.
+Whenever a data consumer requests access to a dataset, a notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from the Microsoft Purview governance portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. A self-service data access policy gets auto-generated only if the data source is registered for **Data Use Management**. The prerequisites mentioned within the [Data Use Management](./how-to-enable-data-use-management.md#prerequisites) documentation have to be satisfied.
-Data consumer can access the requested dataset using tools such as PowerBI or Azure Synapse Analytics workspace.
+Data consumers can access the requested dataset using tools such as Power BI or Azure Synapse Analytics workspace.
>[!NOTE] > Users will not be able to browse to the asset using the Azure portal or Storage Explorer if the only permission granted is read/modify access at the file or folder level of the storage account.
If you would like to preview these features in your environment, follow the link
- [create self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - [working with policies at file level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166) - [working with policies at folder level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
+- [self-service policies for Azure SQL Database tables and views](./how-to-policies-self-service-azure-sql-db.md)
purview How To Create Import Export Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-import-export-glossary.md
Previously updated : 03/09/2022 Last updated : 11/14/2022 # Create, import, export, and delete glossary terms
The system will upload the file and add all the terms to your glossary.
## Export terms from the glossary with custom attributes
-You can export terms from the glossary as long as the selected terms belong to same term template.
- When you're in the glossary, the **Export terms** button is disabled by default. After you select the terms that you want to export, the **Export terms** button is enabled. > [!NOTE]
Select **Export terms** to download the selected terms.
:::image type="content" source="media/how-to-create-import-export-glossary/select-term-template-for-export.png" lightbox="media/how-to-create-import-export-glossary/select-term-template-for-export.png" alt-text="Screenshot of the button to export terms on the glossary terms page."::: > [!Important]
-> If the terms in a hierarchy belong to different term templates, you need to split them into different .CSV files for import. Also, the import process currently doesn't support updating the parent of a term.
+> The import process currently doesn't support updating the parent of a term.
## Delete terms
purview How To Enable Data Use Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-management.md
# Enable Data use management on your Microsoft Purview sources - *Data use management* is an option within the data source registration in Microsoft Purview. This option lets Microsoft Purview manage data access for your resources. The high-level concept is that the data owner allows their data resource to be available for access policies by enabling *Data use management*. Currently, a data owner can enable Data use management on a data resource, which enables it for these types of access policies:
purview How To Policies Data Owner Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-authoring-generic.md
This guide describes how to create, update, and publish data owner policies in t
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
-## Microsoft Purview configuration
+### Configuration
+Before authoring policies in the Microsoft Purview policy portal, you'll need to configure Microsoft Purview and the data sources so that they can enforce those policies.
-### Data source configuration
+1. Follow any policy-specific prerequisites for your source. Check the [Microsoft Purview supported data sources table](./microsoft-purview-connector-overview.md) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections.
+1. Register the data source in Microsoft Purview. Follow the **Prerequisites** and **Register** sections of the [source pages](./microsoft-purview-connector-overview.md) for your resources.
+1. Enable the Data use management option on the data source registration. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-Before authoring data policies in the Microsoft Purview governance portal, you'll need to configure the data sources so that they can enforce those policies.
-
-1. Follow any policy-specific prerequisites for your source. Check the [Microsoft Purview supported data sources table](microsoft-purview-connector-overview.md) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections.
-1. Register the data source in Microsoft Purview. Follow the **Prerequisites** and **Register** sections of the [source pages](microsoft-purview-connector-overview.md) for your resources.
-1. Enable the Data use management option on the data source. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-
-
## Create a new policy
purview How To Policies Devops Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md
Title: Provision access to Arc-enabled SQL Server for DevOps actions (preview)
+ Title: Provision access to Arc-enabled SQL Server for DevOps actions
description: Step-by-step guide on provisioning access to Arc-enabled SQL Server through Microsoft Purview DevOps policies Previously updated : 11/04/2022 Last updated : 11/16/2022
-# Provision access to system metadata in Arc-enabled SQL Server (preview)
-
+# Provision access to system metadata in Arc-enabled SQL Server
[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policy. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved they get automatically published and then get enforced by the data source.
purview How To Policies Devops Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-authoring-generic.md
Title: Create, list, update and delete Microsoft Purview DevOps policies (preview)
+ Title: Create, list, update and delete Microsoft Purview DevOps policies
description: Step-by-step guide on provisioning access through Microsoft Purview DevOps policies Previously updated : 11/04/2022 Last updated : 11/16/2022
-# Create, list, update and delete Microsoft Purview DevOps policies (preview)
-
+# Create, list, update and delete Microsoft Purview DevOps policies
[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policy. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved they get automatically published and then get enforced by the data source.
This how-to guide covers how to provision access from Microsoft Purview to SQL-t
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
-### Data source configuration
+### Configuration
Before authoring policies in the Microsoft Purview policy portal, you'll need to configure the data sources so that they can enforce those policies. 1. Follow any policy-specific prerequisites for your source. Check the [Microsoft Purview supported data sources table](./microsoft-purview-connector-overview.md) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections. 1. Register the data source in Microsoft Purview. Follow the **Prerequisites** and **Register** sections of the [source pages](./microsoft-purview-connector-overview.md) for your resources.
-1. [Enable the "Data use management" toggle on the data source](how-to-enable-data-use-management.md). Additional permissions for this step are described in the linked document.
+1. [Enable the "Data use management" toggle in the data source registration](how-to-enable-data-use-management.md). Additional permissions for this step are described in the linked document.
## Create a new DevOps policy
purview How To Policies Purview Account Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-purview-account-delete.md
Title: Impact of deleting Microsoft Purview account on access policies (preview)
+ Title: Impact of deleting Microsoft Purview account on access policies
description: This guide discusses the consequences of deleting a Microsoft Purview account on published access policies Previously updated : 10/31/2022 Last updated : 11/16/2022 # Impact of deleting Microsoft Purview account on access policies - ## Important considerations Deleting a Microsoft Purview account that has active (that is, published) policies will remove those policies. This means that the access to data sources or datasets that was previously provisioned via those policies will also be removed. This can lead to outages, that is, users or groups in your organization not being able to access critical data. Review the decision to delete the Microsoft Purview account with the people in the Policy Author role at the root collection level before proceeding. To find out who holds that role in the Microsoft Purview account, review the section on managing role assignments in this [guide](./how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
purview How To Policies Self Service Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-self-service-azure-sql-db.md
+
+ Title: Self-service policies for Azure SQL Database (preview)
+description: Step-by-step guide on how self-service policy is created for Azure SQL Database through Microsoft Purview access policies.
+++++ Last updated : 11/11/2022++
+# Self-service policies for Azure SQL Database (preview)
++
+[Self-service policies](concept-self-service-data-access-policy.md) allow you to manage access from Microsoft Purview to data sources that have been registered for **Data Use Management**.
+
+This how-to guide describes how self-service policies get created in Microsoft Purview to enable access to Azure SQL Database. The following actions are currently enabled: *Read Tables* and *Read Views*.
+
+> [!CAUTION]
+> *Ownership chaining* must exist for *select* to work on Azure SQL Database *views*.
+
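+To illustrate, a hypothetical sketch of an unbroken ownership chain: the view and the table below share an owner, so a principal granted *select* on the view through the policy can query it without direct permission on the underlying table. The object names are examples only.
+
+```sql
+-- Hypothetical table and view owned by the same principal (dbo)
+CREATE TABLE dbo.Sales (OrderId int, Amount money);
+GO
+CREATE VIEW dbo.SalesSummary
+AS
+SELECT COUNT(*) AS Orders, SUM(Amount) AS Total FROM dbo.Sales;
+GO
+-- Succeeds via the ownership chain, even without SELECT on dbo.Sales
+SELECT * FROM dbo.SalesSummary;
+```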
+## Prerequisites
+
+## Microsoft Purview configuration
+
+### Register the data sources in Microsoft Purview
+The Azure SQL Database resources need to be registered first with Microsoft Purview to later define access policies. You can follow this guide:
+
+[Register and scan Azure SQL DB](./register-scan-azure-sql-database.md)
+
+After you've registered your resources, you'll need to enable data use management. Data use management can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to data use management in this guide**:
+
+[How to enable data use management](./how-to-enable-data-use-management.md)
+
+Once your data source has the **Data use management** toggle *Enabled*, it will look like the following screenshot. This enables the access policies to be used with the given SQL server and all its contained databases.
+![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-azure-sql-db.png)
++
+## Create a self-service data access request
+++
+>[!Important]
+> - Publish is a background operation. It can take up to **5 minutes** for the changes to be reflected in this data source.
+> - Changing a policy does not require a new publish operation. The changes will be picked up with the next pull.
++
+## View a self-service policy
+
+To view the policies you've created, follow the article to [view the self-service policies](how-to-view-self-service-data-access-policy.md).
++
+### Test the policy
+
+The Azure Active Directory account, group, MSI, or SPN for which the self-service policies were created should now be able to connect to the database on the server and execute a select query against the requested table or view.
+
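+For example, a quick sketch of such a test, where `dbo.Inventory` stands in for the requested table:
+
+```sql
+-- Connect with the Azure AD identity the policy was created for, then:
+SELECT TOP (10) * FROM dbo.Inventory; -- hypothetical table name
+```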
+#### Force policy download
+It's possible to force an immediate download of the latest published policies to the current SQL database by running the following command. The minimal permission required to run it is membership in the ##MS_ServerStateManager## server role.
+
+```sql
+-- Force immediate download of latest published policies
+exec sp_external_policy_refresh reload
+```
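+If the executing login lacks that membership, a server administrator can grant it; a sketch, run in the logical server's master database, with a hypothetical login name:
+
+```sql
+-- The login name below is an example only
+ALTER SERVER ROLE ##MS_ServerStateManager## ADD MEMBER [policytester@contoso.com];
+```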
+
+#### Analyze downloaded policy state from SQL
+The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimal permission required to run them is VIEW DATABASE SECURITY STATE, or the assigned action group *SQL Security Auditor*.
+
+```sql
+
+-- Lists generally supported actions
+SELECT * FROM sys.dm_server_external_policy_actions
+
+-- Lists the roles that are part of a policy published to this server
+SELECT * FROM sys.dm_server_external_policy_roles
+
+-- Lists the links between the roles and actions, could be used to join the two
+SELECT * FROM sys.dm_server_external_policy_role_actions
+
+-- Lists all Azure AD principals that were given connect permissions
+SELECT * FROM sys.dm_server_external_policy_principals
+
+-- Lists Azure AD principals assigned to a given role on a given resource scope
+SELECT * FROM sys.dm_server_external_policy_role_members
+
+-- Lists Azure AD principals, joined with roles, joined with their data actions
+SELECT * FROM sys.dm_server_external_policy_principal_assigned_actions
+```
+
+## Additional information
+
+### Policy action mapping
+
+This section contains a reference of how actions in Microsoft Purview data policies map to specific actions in Azure SQL Database.
+
+| **Microsoft Purview policy action** | **Data source specific actions** |
+|-|--|
+| *Read* |Microsoft.Sql/sqlservers/Connect |
+||Microsoft.Sql/sqlservers/databases/Connect |
+||Microsoft.Sql/Sqlservers/Databases/Schemas/Tables/Rows|
+||Microsoft.Sql/Sqlservers/Databases/Schemas/Views/Rows |
+
+## Next steps
+Check the blog, demo, and related how-to guides:
+- [Self-service policies](concept-self-service-data-access-policy.md)
+- [What are Microsoft Purview workflows](concept-workflow.md)
+- [Self-service data access workflow for hybrid data estates](how-to-workflow-self-service-data-access-hybrid.md)
purview How To Policies Self Service Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-self-service-storage.md
+
+ Title: Self-service policies for Azure Storage (preview)
+description: Step-by-step guide on how self-service policy is created for storage through Microsoft Purview access policies.
+++++ Last updated : 10/24/2022+++
+# Self-service access provisioning for Azure Storage datasets (Preview)
++
+[Access policies](concept-policies-data-owner.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
+
+This how-to guide describes how self-service policies get created in Microsoft Purview to enable access to Azure storage datasets. Currently, these two Azure Storage sources are supported:
+
+- Blob storage
+- Azure Data Lake Storage (ADLS) Gen2
+
+## Prerequisites
++
+## Configuration
+
+### Register the data sources in Microsoft Purview for Data Use Management
+The Azure Storage resources need to be registered first with Microsoft Purview to later define access policies.
+
+To register your resources, follow the **Prerequisites** and **Register** sections of these guides:
+
+- [Register and scan Azure Storage Blob - Microsoft Purview](register-scan-azure-blob-storage-source.md#prerequisites)
+
+- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Microsoft Purview](register-scan-adls-gen2.md#prerequisites)
+
+After you've registered your resources, you'll need to enable data use management. Data use management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable data use management](./how-to-enable-data-use-management.md)
+
+Once your data source has the **Data Use Management** toggle **Enabled**, it will look like the following screenshot:
++
+## Create a self-service data access request
++
+>[!Important]
+> - Publish is a background operation. Azure Storage accounts can take up to **2 hours** to reflect the changes.
+
+## View a self-service policy
+
+To view the policies you've created, follow the article to [view the self-service policies](how-to-view-self-service-data-access-policy.md).
+
+## Data consumption
+
+- Data consumers can access the requested dataset using tools such as Power BI or Azure Synapse Analytics workspace.
+
+>[!NOTE]
+> Users will not be able to browse to the asset using the Azure portal or Storage Explorer if the only permission granted is read/modify access at the file or folder level of the storage account.
+
+> [!CAUTION]
+> Folder-level permission is required to access data in ADLS Gen2 using Power BI.
+> Additionally, resource sets are not supported by self-service policies. Hence, folder-level permission needs to be granted to access resource set files such as CSV or Parquet.
++
+### Known issues
+
+**Known issues** related to policy creation
+- Self-service policies aren't supported for Microsoft Purview resource sets. Even if such a policy is displayed in Microsoft Purview, it isn't yet enforced. Learn more about [resource sets](concept-resource-sets.md).
++
+## Next steps
+Check the blog, demo, and related tutorials:
+
+* [Self-service policies concept](./concept-self-service-data-access-policy.md)
+* [Demo of self-service policies for storage](https://www.youtube.com/watch?v=AYKZ6_imorE)
+* [Blog: Accessing data when folder level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
+* [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
purview How To Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md
This article outlines how to make an access request.
## Request access
-1. To find a data asset, use Microsoft Purview's [search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) functionality.
-
- :::image type="content" source="./media/how-to-request-access/search-or-browse.png" alt-text="Screenshot of the Microsoft Purview governance portal, with the search bar and browse buttons highlighted.":::
-
-1. Select the asset to go to asset details.
-
-1. Select **Request access**.
-
- :::image type="content" source="./media/how-to-request-access/request-access.png" alt-text="Screenshot of a data asset's overview page, with the Request button highlighted in the mid-page menu.":::
-
- > [!NOTE]
- > If this option isn't available, a [self-service access workflow](how-to-workflow-self-service-data-access-hybrid.md) either hasn't been created, or hasn't been assigned to the collection where the resource is registered. Contact the collection administrator, data source administrator, or workflow administrator of your collection for more information.
- > Or, for information on how to create a self-service access workflow, see our [self-service access workflow documentation](how-to-workflow-self-service-data-access-hybrid.md).
-
-1. The **Request access** window will open. You can provide comments on why data access is requested.
-1. Select **Send** to trigger the self-service data access workflow.
-
- > [!NOTE]
- > If you want to request access on behalf of another user, select the checkbox **Request for someone else** and populate the email id of that user.
-
- :::image type="content" source="./media/how-to-request-access/send.png" alt-text="Screenshot of a data asset's overview page, with the Request access window overlaid. The Send button is highlighted at the bottom of the Request access window.":::
-
- > [!NOTE]
- > A request access to resource set will actually submit the data access request for the folder one level up which contains all these resource set files.
-
-1. Data owners will be notified of your request and will either approve or reject the request.
- ## Next steps - [What are Microsoft Purview workflows](concept-workflow.md) - [Approval workflow for business terms](how-to-workflow-business-terms-approval.md) - [Self-service data access workflow for hybrid data estates](how-to-workflow-self-service-data-access-hybrid.md)
+- [Self-service policies](concept-self-service-data-access-policy.md)
purview How To Use Workflow Dynamic Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-use-workflow-dynamic-content.md
+
+ Title: Workflow dynamic content
+description: This article describes how to use dynamic content with workflow connectors in Microsoft Purview
+++++ Last updated : 11/11/2022+++
+# Workflow dynamic content
++
+You can use dynamic content inside Microsoft Purview workflows to reference certain variables in the workflow.
+
+## Current workflow dynamic content
+
+Currently, the following dynamic content options are available for a workflow connector in Microsoft Purview:
+
+|Prerequisite connector |Built-in dynamic content |Functionality |
+||||
+|When data access request is submitted |Workflow.Requestor |The requestor of the workflow |
+| |Workflow.RequestRecepient |The request recipient of the workflow |
+| |Asset.Name |The name of the asset |
+| |Asset.Description |The description of the asset |
+| |Asset.Type |The type of the asset |
+| |Asset.FullyQualifiedName |The fully qualified name of the asset |
+| |Asset.Owner |The owner of the asset |
+| |Asset.Classification |The display names of classifications of the asset |
+| |Asset.Certified |The indicator of whether the asset meets your organization's quality standards and can be regarded as reliable |
+|Start and wait for an approval |Approval.Outcome |The outcome of the approval |
+| |Approval.Assigned To |The IDs of the approvers |
+| |Approval.Comments |The comments from the approvers |
+|Check data source registration for data use governance |DataUseGovernance |The result of the data use governance check|
+|When term creation request is submitted |Workflow.Requestor |The requestor of the workflow |
+| |Term.Name |The name of the term |
+| |Term.Formal Name |The formal name of the term |
+| |Term.Definition |The definition of the term |
+| |Term.Experts |The experts of the term |
+| |Term.Stewards |The stewards of the term |
+| |Term.Parent.Name |The name of parent term if exists |
+| |Term.Parent.Formal Name |The formal name of parent term if exists |
+|When term update request is submitted <br> When term deletion request is submitted | Workflow.Requestor |The requestor of the workflow |
+| |Term.Name |The name of the term |
+| |Term.Formal Name |The formal name of the term |
+| |Term.Definition |The definition of the term |
+| |Term.Experts |The experts of the term |
+| |Term.Stewards |The stewards of the term |
+| |Term.Parent.Name |The name of parent term if exists |
+| |Term.Parent.Formal Name |The formal name of parent term if exists |
+| |Term.Created By |The creator of the term |
+| |Term.Last Updated By |The last updater of the term |
+|When term import request is submitted |Workflow.Requestor |The requestor of the workflow |
+| |Import File.Name |The name of the file to import |
+
+## Next steps
+
+For more information about workflows, see these articles:
+
+- [Workflows in Microsoft Purview](concept-workflow.md)
+- [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
+- [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
For more information about workflows, see these articles:
- [Workflows in Microsoft Purview](concept-workflow.md) - [Approval workflow for business terms](how-to-workflow-business-terms-approval.md) - [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)-
+- [Self-service access policies](concept-self-service-data-access-policy.md)
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|**Category**| **Data Store** |**Technical metadata** |**Classification** |**Lineage** | **Access Policy** | **Data Sharing** | ||||||||
-| Azure |[Multiple sources](register-scan-azure-multiple-sources.md)| [Yes](register-scan-azure-multiple-sources.md#register) | [Yes](register-scan-azure-multiple-sources.md#scan) | No |[Yes (Preview)](register-scan-azure-multiple-sources.md#access-policy) | No |
-||[Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes (Preview)](register-scan-azure-blob-storage-source.md#access-policy) | [Yes](register-scan-azure-blob-storage-source.md#data-sharing)|
+| Azure |[Multiple sources](register-scan-azure-multiple-sources.md)| [Yes](register-scan-azure-multiple-sources.md#register) | [Yes](register-scan-azure-multiple-sources.md#scan) | No |[Yes](register-scan-azure-multiple-sources.md#access-policy) | No |
+||[Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes](register-scan-azure-blob-storage-source.md#access-policy) (Preview) | [Yes](register-scan-azure-blob-storage-source.md#data-sharing)|
|| [Azure Cosmos DB](register-scan-azure-cosmos-database.md)| [Yes](register-scan-azure-cosmos-database.md#register) | [Yes](register-scan-azure-cosmos-database.md#scan)|No*|No| No| || [Azure Data Explorer](register-scan-azure-data-explorer.md)| [Yes](register-scan-azure-data-explorer.md#register) | [Yes](register-scan-azure-data-explorer.md#scan)| No* | No | No| || [Azure Data Factory](how-to-link-azure-data-factory.md) | [Yes](how-to-link-azure-data-factory.md) | No | [Yes](how-to-link-azure-data-factory.md) | No | No| || [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)| [Yes](register-scan-adls-gen1.md#register) | [Yes](register-scan-adls-gen1.md#scan)| Limited* | No | No|
-|| [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)| [Yes](register-scan-adls-gen2.md#register) | [Yes](register-scan-adls-gen2.md#scan)| Limited* | [Yes (Preview)](register-scan-adls-gen2.md#access-policy) | [Yes](register-scan-adls-gen2.md#data-sharing) |
+|| [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)| [Yes](register-scan-adls-gen2.md#register) | [Yes](register-scan-adls-gen2.md#scan)| Limited* | [Yes](register-scan-adls-gen2.md#access-policy) (Preview) | [Yes](register-scan-adls-gen2.md#data-sharing) |
|| [Azure Data Share](how-to-link-azure-data-share.md) | [Yes](how-to-link-azure-data-share.md) | No | [Yes](how-to-link-azure-data-share.md) | No | No| || [Azure Database for MySQL](register-scan-azure-mysql-database.md) | [Yes](register-scan-azure-mysql-database.md#register) | [Yes](register-scan-azure-mysql-database.md#scan) | No* | No | No | || [Azure Database for PostgreSQL](register-scan-azure-postgresql.md) | [Yes](register-scan-azure-postgresql.md#register) | [Yes](register-scan-azure-postgresql.md#scan) | No* | No | No | || [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | No | || [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No | No |
-|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| [Yes (Preview)](register-scan-azure-sql-database.md#lineagepreview) | [Yes (Preview)](register-scan-azure-sql-database.md#access-policy) | No |
+|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| [Yes (Preview)](register-scan-azure-sql-database.md#lineagepreview) | [Yes](register-scan-azure-sql-database.md#access-policy) (Preview) | No |
|| [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)| [Yes](register-scan-azure-sql-managed-instance.md#scan) | [Yes](register-scan-azure-sql-managed-instance.md#scan) | No* | No | No | || [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No| No | |Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No | No |
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Previously updated : 11/10/2022 Last updated : 11/16/2022 # What's available in the Microsoft Purview governance portal?
For more information, see our [introduction to Data Sharing](concept-data-share.
## Data Policy Microsoft Purview Data Policy is a set of central, cloud-based experiences that help you manage access to data sources and datasets securely and at scale. - Manage access to data sources from a single-pane of glass, cloud-based experience-- Introduces a new data-plane permission model that is external to data sources-- Seamless integration with Microsoft Purview Data Map and Catalog helps search for data assets and grant access only to what is required via fine-grained policies-- Based on role definitions that are simple and abstracted (e.g. Read, Modify) - At-scale access provisioning
+- Introduces a new data-plane permission model that is external to data sources
+- Seamless integration with Microsoft Purview Data Map and Catalog:
+ - Search for data assets and grant access only to what is required via fine-grained policies.
+ - Path to support SaaS, on-premises, and multicloud data sources
+ - Path to leverage all associated metadata for policies
+- Based on role definitions that are simple and abstracted (for example: Read, Modify)
For more information, see our introductory guides:
-* [Data owner access policies](concept-policies-data-owner.md)(preview): Provision fine-grained to broad access to users and groups via intuitive authoring experience.
-* [Self-service access policies](concept-self-service-data-access-policy.md)(preview): Self-Service: Workflow approval and automatic provisioning of access requests initiated by business analysts that discover data assets in Microsoft Purview's catalog.
-* [DevOps policies](concept-policies-devops.md)(preview): Provision access to system metadata for IT operations and other DevOps personnel, supporting typical functions like SQL Performance Monitor and SQL Security Auditor.
+* [Data owner access policies](concept-policies-data-owner.md) (preview): Provision fine-grained to broad access to users and groups via intuitive authoring experience.
+* [Self-service access policies](concept-self-service-data-access-policy.md) (preview): Workflow approval and automatic provisioning of access requests initiated by business analysts that discover data assets in Microsoft Purview's catalog.
+* [DevOps policies](concept-policies-devops.md): Provision access for IT operations and other DevOps users from Microsoft Purview Studio, enabling them to monitor SQL database system health and security, while limiting insider threat.
## Traditional challenges that Microsoft Purview seeks to address
Discovering and understanding data sources and their use is the primary purpose
At the same time, users can contribute to the catalog by tagging, documenting, and annotating data sources that have already been registered. They can also register new data sources, which are then discovered, understood, and consumed by the community of catalog users.
-Lastly, Microsoft Purview Data Policy app leverages the metadata in the Data Map, providing a superior solution to keep your data secure.
+Lastly, the Microsoft Purview Data Policy app applies the metadata in the Data Map, providing a superior solution to keep your data secure.
* Structure and simplify the process of granting/revoking access. * Reduce the effort of access provisioning. * Access decisions in Microsoft data systems have a negligible latency penalty.
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
Once your data source has the **Data Use Management** option set to **Enabled**
![Screenshot shows how to register a data source for policy with the option Data use management set to enable](./media/how-to-policies-data-owner-storage/register-data-source-for-policy-storage.png) ### Create a policy
-To create an access policy for Azure Data Lake Storage Gen2, follow these guides:
-* [Data owner policy on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Azure Storage account in your subscription.
-* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The pre-requisite is that the subscription or resource group is registered with the Data use management option enabled.
+To create an access policy for Azure Data Lake Storage Gen2, follow this guide:
+* [Data owner policy on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy)
+
+To create policies that cover all data sources inside a resource group or Azure subscription you can refer to [this section](register-scan-azure-multiple-sources.md#access-policy).
## Next steps Follow the below guides to learn more about Microsoft Purview and your data.
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
To create and run a new scan, do the following:
### Supported policies The following types of policies are supported on this data resource from Microsoft Purview: - [DevOps policies](concept-policies-devops.md)-- [Data owner policies](concept-policies-data-owner.md)
+- [Data owner policies](concept-policies-data-owner.md) (preview)
### Access policy pre-requisites on Arc enabled SQL Server [!INCLUDE [Access policies Arc enabled SQL Server pre-requisites](./includes/access-policies-prerequisites-arc-sql-server.md)]
Once your data source has the **Data Use Management** toggle *Enabled*, it will
### Create a policy To create an access policy for Arc-enabled SQL Server, follow these guides: * [DevOps policy on a single Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md#create-a-new-devops-policy)
-* [Data owner policy on a single Arc-enabled SQL Server](./how-to-policies-data-owner-arc-sql-server.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Arc-enabled SQL Server in your subscription.
-* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The pre-requisite is that the subscription or resource group is registered with the Data use management option enabled.
+* [Data owner policy on a single Arc-enabled SQL Server](./how-to-policies-data-owner-arc-sql-server.md#create-and-publish-a-data-owner-policy)
+To create policies that cover all data sources inside a resource group or Azure subscription you can refer to [this section](register-scan-azure-multiple-sources.md#access-policy).
## Next steps
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
Once your data source has the **Data Use Management** option set to **Enabled**
![Screenshot shows how to register a data source for policy with the option Data use management set to enable](./media/how-to-policies-data-owner-storage/register-data-source-for-policy-storage.png) ### Create a policy
-To create an access policy for Azure Blob Storage, follow these guides:
-* [Data owner policy on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Azure Storage account in your subscription.
-* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The pre-requisite is that the subscription or resource group is registered with the Data use management option enabled.
+To create an access policy for Azure Blob Storage, follow this guide: [Data owner policy on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy).
+
+To create policies that cover all data sources inside a resource group or Azure subscription you can refer to [this section](register-scan-azure-multiple-sources.md#access-policy).
## Next steps
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
Once your data source has the **Data Use Management** option set to **Enabled**
![Screenshot shows how to register a data source for policy with the option Data use management set to enable.](./media/how-to-policies-data-owner-resource-group/register-resource-group-for-policy.png) ### Create a policy
-To create an access policy on an entire Azure subscription or resource group, follow these guide:
-* [DevOps policy covering all sources in a subscription or resource group](./how-to-policies-devops-authoring-generic.md#create-a-new-devops-policy)
-* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The pre-requisite is that the subscription or resource group is registered with the Data use management option enabled.
+To create an access policy on an entire Azure subscription or resource group, follow these guides:
+* [DevOps policy covering all sources in a subscription or resource group](./how-to-policies-devops-resource-group.md#create-a-new-devops-policy)
+* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md#create-and-publish-a-data-owner-policy)
## Next steps
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Scans can be managed or run again on completion
The following types of policies are supported on this data resource from Microsoft Purview: - [DevOps policies](concept-policies-devops.md) - [Data owner policies](concept-policies-data-owner.md)
+- [Self-service policies](concept-self-service-data-access-policy.md)
### Access policy pre-requisites on Azure SQL Database [!INCLUDE [Access policies specific Azure SQL DB pre-requisites](./includes/access-policies-prerequisites-azure-sql-db.md)]
To create an access policy for Azure SQL Database, follow these guides:
* [DevOps policy on a single Azure SQL Database](./how-to-policies-devops-azure-sql-db.md#create-a-new-devops-policy) * [Data owner policy on a single Azure SQL Database](./how-to-policies-data-owner-azure-sql-db.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Azure SQL Database account in your subscription. * [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The pre-requisite is that the subscription or resource group is registered with the Data use management option enabled.
+* [Self-service policy for Azure SQL Database](./how-to-policies-self-service-azure-sql-db.md) - This guide will allow data consumers to request access to data assets using a self-service workflow.
## Lineage (Preview) <a id="lineagepreview"></a>
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
For more information about Microsoft Purview network settings, see [Use private
To create and run a new scan, do the following:
-1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**. [For information about the Redirect URI see this documenation from Azure Active Directory](/azure/active-directory/develop/reply-url).
+1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**. [For information about the Redirect URI, see this documentation from Azure Active Directory](../active-directory/develop/reply-url.md).
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot how to create App in Azure AD.":::
Now that you've registered your source, follow the below guides to learn more ab
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)-- [Search Data Catalog](how-to-search-catalog.md)
+- [Search Data Catalog](how-to-search-catalog.md)
search Knowledge Store Projection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projection-overview.md
Recall that projections are exclusive to knowledge stores, and are not used to s
1. Check your results in Azure Storage. On subsequent runs, avoid naming collisions by deleting objects in Azure Storage or changing project names in the skillset.
-1. If you are using [Table projections](knowledge-store-projections-examples.md#define-a-table-projection) check [Understanding the Table Service data model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model) and [Scalability and performance targets for Table storage](/azure/storage/tables/scalability-targets) to make sure your data requirements are within Table storage documented limits.
+1. If you are using [Table projections](knowledge-store-projections-examples.md#define-a-table-projection) check [Understanding the Table Service data model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model) and [Scalability and performance targets for Table storage](../storage/tables/scalability-targets.md) to make sure your data requirements are within Table storage documented limits.
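Before checking those limits, it can help to see the shape of the projection itself. A minimal sketch of a table projection inside a skillset's `knowledgeStore` definition follows; the table name, generated key name, and source path are hypothetical:

```json
"knowledgeStore": {
  "storageConnectionString": "<your-storage-connection-string>",
  "projections": [
    {
      "tables": [
        {
          "tableName": "DocumentTable",
          "generatedKeyName": "DocumentId",
          "source": "/document/tableprojection"
        }
      ],
      "objects": [ ],
      "files": [ ]
    }
  ]
}
```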
## Next steps Review syntax and examples for each projection type. > [!div class="nextstepaction"]
-> [Define projections in a knowledge store](knowledge-store-projections-examples.md)
+> [Define projections in a knowledge store](knowledge-store-projections-examples.md)
search Search Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-explorer.md
Equivalent syntax for an empty search is `*` or `search=*`.
Free-form queries, with or without operators, are useful for simulating user-defined queries sent from a custom app to Azure Cognitive Search. Only those fields attributed as **Searchable** in the index definition are scanned for matches.
-Notice that when you provide search criteria, such as query terms or expressions, search rank comes into play. The following example illustrates a free text search.
+Notice that when you provide search criteria, such as query terms or expressions, search rank comes into play. The following example illustrates a free text search. The "@search.score" is a relevance score computed for the match using the [default scoring algorithm](index-ranking-similarity.md#default-scoring-algorithm).
```http Seattle apartment "Lake Washington" miele OR thermador appliance
Notice that when you provide search criteria, such as query terms or expressions
## Count of matching documents
-Add **$count=true** to get the number of matches found in an index. On an empty search, count is the total number of documents in the index. On a qualified search, it's the number of documents matching the query input. Recall that the service returns the top 50 matches by default, so you might have more matches in the index than what's included in the results.
+Add **$count=true** to get the number of matches found in an index. On an empty search, count is the total number of documents in the index. On a qualified search, it's the number of documents matching the query input. Recall that the service returns the top 50 matches by default, so the count might indicate more matches in the index than what's returned in the results.
```http $count=true
Add **$count=true** to get the number of matches found in an index. On an empty
## Limit fields in search results
-Add [**$select**](search-query-odata-select.md) to limit results to the explicitly named fields for more readable output in **Search explorer**. To keep the search string and **$count=true**, prefix arguments with **&**.
+Add [**$select**](search-query-odata-select.md) to limit results to the explicitly named fields for more readable output in **Search explorer**. To keep the previously mentioned parameters in the query, use **&** to separate each parameter.
```http search=seattle condo&$select=listingId,beds,baths,description,street,city,price&$count=true
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Previously updated : 09/22/2022 Last updated : 11/16/2022 # Quickstart: Create an Azure Cognitive Search index in the Azure portal
Although you won't use the options in this quickstart, the wizard includes a pag
### Check for space
-Many customers start with the free service. The free tier is limited to three indexes, three data sources, and three indexers. Make sure you have room for extra items before you begin. This tutorial creates one of each object.
+Many customers start with the free service. The free tier is limited to three indexes, three data sources, and three indexers. Make sure you have room for extra items before you begin. This quickstart creates one of each object.
Check the service overview page to find out how many indexes, indexers, and data sources you already have.
Check the service overview page to find out how many indexes, indexers, and data
Search queries iterate over an [*index*](search-what-is-an-index.md) that contains searchable data, metadata, and other constructs that optimize certain search behaviors.
-For this tutorial, we'll create and load the index using a built-in sample dataset that can be crawled using an [*indexer*](search-indexer-overview.md) via the [**Import data wizard**](search-import-data-portal.md). The hotels-sample data set is hosted by Microsoft on Azure Cosmos DB and accessed over an internal connection. You don't need your own Cosmos DB account or source files to access the data.
+For this quickstart, we'll create and load the index using a built-in sample dataset that can be crawled using an [*indexer*](search-indexer-overview.md) via the [**Import data wizard**](search-import-data-portal.md). The hotels-sample data set is hosted by Microsoft on Azure Cosmos DB and accessed over an internal connection. You don't need your own Cosmos DB account or source files to access the data.
An indexer is a source-specific crawler that can read metadata and content from supported Azure data sources. Normally, indexers are created programmatically, but in the portal, you can create them through the **Import data wizard**.
For the built-in hotels sample index, a default index schema is defined for you.
:::image type="content" source="media/search-get-started-portal/hotelsindex.png" alt-text="Screenshot of the generated hotels index definition in the wizard." border="true":::
-Typically, in a code-based exercise, index creation is completed prior to loading data. The Import data wizard condenses these steps by generating a basic index for any data source it can crawl. Minimally, an index requires a name and a fields collection; one of the fields should be marked as the document key to uniquely identify each document. Additionally, you can specify language analyzers or suggesters if you want autocomplete or suggested queries.
+Typically, in a code-based exercise, index creation is completed prior to loading data. The Import data wizard condenses these steps by generating a basic index for any data source it can crawl. Minimally, an index requires a name and a fields collection. One of the fields should be marked as the document key to uniquely identify each document. Additionally, you can specify language analyzers or suggesters if you want autocomplete or suggested queries.
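As a sketch of that minimum (not the schema the wizard generates for hotels-sample; field names here are illustrative), a bare-bones index definition looks roughly like this:

```json
{
  "name": "hotels-sample-index",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true },
    { "name": "Description", "type": "Edm.String", "searchable": true }
  ],
  "suggesters": [
    {
      "name": "sg",
      "searchMode": "analyzingInfixMatching",
      "sourceFields": [ "HotelName" ]
    }
  ]
}
```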
Fields have a data type and attributes. The check boxes across the top are *attributes* controlling how the field is used.
-+ **Retrievable** means that it shows up in search results list. You can mark individual fields as off limits for search results by clearing this checkbox, for example for fields used only in filter expressions.
+ **Key** is the unique document identifier. It's always a string, and it's required. Only one field can be the key.++ **Retrievable** means that field contents show up in the search results list. You can mark individual fields as off limits for search results by clearing this checkbox, for example for fields used only in filter expressions. + **Filterable**, **Sortable**, and **Facetable** determine whether fields are used in a filter, sort, or faceted navigation structure. + **Searchable** means that a field is included in full text search. Strings are searchable. Numeric fields and Boolean fields are often marked as not searchable.
-[Storage requirements](search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters) can vary as a result of attribute selection. For example, **filterable** requires more storage, but **Retrievable** doesn't.
+[Storage requirements](search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters) can vary as a result of attribute selection. For example, **Filterable** requires more storage, but **Retrievable** doesn't.
By default, the wizard scans the data source for unique identifiers as the basis for the key field. *Strings* are attributed as **Retrievable** and **Searchable**. *Integers* are attributed as **Retrievable**, **Filterable**, **Sortable**, and **Facetable**.
All of the queries in this section are designed for **Search Explorer** and the
## Takeaways
-This tutorial provided a quick introduction to Azure Cognitive Search using the Azure portal.
+This quickstart provided a quick introduction to Azure Cognitive Search using the Azure portal.
You learned how to create a search index using the **Import data** wizard. You created your first [indexer](search-indexer-overview.md) and learned the basic workflow for index design. See [Import data wizard in Azure Cognitive Search](search-import-data-portal.md) for more information about the wizard's benefits and limitations.
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
Previously updated : 07/28/2022 Last updated : 11/15/2022 # Connect a search service to other Azure resources using a managed identity
A knowledge store definition includes a connection string to Azure Storage. On A
```json "knowledgeStore": {
- "storageConnectionString": "ResourceId=/subscriptions/{subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/storage-account-name};",
+ "storageConnectionString": "ResourceId=/subscriptions/{subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/storage-account-name};"
+}
``` [**Enrichment cache:**](search-howto-incremental-index.md)
An indexer creates, uses, and remembers the container used for the cached enrich
"cache": { "enableReprocessing": true, "storageConnectionString": "ResourceId=/subscriptions/{subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/{storage-account-name};"
-},
+}
``` [**Debug session:**](cognitive-search-debug-session.md)
search Tutorial Javascript Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-overview.md
The [sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tr
Install the following for your local development environment. - [Node.js LTS](https://nodejs.org/en/download)
- - Select latest runtime and version from this [list of supported language versions](/azure/azure-functions/functions-versions?tabs=azure-cli%2Clinux%2Cin-process%2Cv4&pivots=programming-language-javascript#languages).
+ - Select latest runtime and version from this [list of supported language versions](../azure-functions/functions-versions.md?pivots=programming-language-javascript&tabs=azure-cli%2clinux%2cin-process%2cv4#languages).
- If you have a different version of Node.js installed on your local computer, consider using [Node Version Manager](https://github.com/nvm-sh/nvm) (nvm) or a Docker container. - [Git](https://git-scm.com/downloads) - [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 10/31/2022 Last updated : 11/15/2022
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description | |--||--|
+| **Add search to websites** <ul><li>[C#](tutorial-csharp-overview.md)</li><li>[Python](tutorial-python-overview.md)</li><li>[JavaScript](tutorial-javascript-overview.md) </li></ul>| Sample | "Add search to websites" is a tutorial series with sample code available in three languages. This series was updated in November to run with current versions of React and the SDK client libraries. If you're integrating client code with a search index, these samples demonstrate an end-to-end approach to integration. |
| [Visual Studio Code extension for Azure Cognitive Search](https://github.com/microsoft/vscode-azurecognitivesearch/blob/master/README.md) | Feature | **Retired**. This preview feature isn't moving forward to general availability and has been removed from Visual Studio Code Marketplace. See the [documentation](search-get-started-vs-code.md) for details. | | [Query performance dashboard](https://github.com/Azure-Samples/azure-samples-search-evaluation) | Sample | This Application Insights sample demonstrates an approach for deep monitoring of query usage and performance of an Azure Cognitive Search index. It includes a JSON template that creates a workbook and dashboard in Application Insights and a Jupyter Notebook that populates the dashboard with simulated data. |
security Azure Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-domains.md
This page is a partial list of the Azure domains in use. Some of them are REST A
|[Azure Files](../../storage/files/storage-files-introduction.md)|*.file.core.windows.net| |[Azure Front Door](https://azure.microsoft.com/services/frontdoor/)|*.azurefd.net| |[Azure Key Vault](../../key-vault/general/overview.md)| *.vault.azure.net|
-|[Azure Kubernetes Service](/azure/aks/)|*.azmk8s.io|
+|[Azure Kubernetes Service](../../aks/index.yml)|*.azmk8s.io|
|Azure Management Services|*.management.core.windows.net| |[Azure Media Services](https://azure.microsoft.com/services/media-services/)|*.origin.mediaservices.windows.net| |[Azure Mobile Apps](https://azure.microsoft.com/services/app-service/mobile/)|*.azure-mobile.net|
This page is a partial list of the Azure domains in use. Some of them are REST A
|[Azure Table Storage](../../storage/tables/table-storage-overview.md)|*.table.core.windows.net| |[Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md)|*.trafficmanager.net| |Azure Websites|*.azurewebsites.net|
-|[GitHub Codespaces](https://visualstudio.microsoft.com/services/github-codespaces/)|*.visualstudio.com|
+|[GitHub Codespaces](https://visualstudio.microsoft.com/services/github-codespaces/)|*.visualstudio.com|
security Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/iaas.md
In most infrastructure as a service (IaaS) scenarios, [Azure virtual machines (V
The first step in protecting your VMs is to ensure that only authorized users can set up new VMs and access VMs. > [!NOTE]
-> To improve the security of Linux VMs on Azure, you can integrate with Azure AD authentication. When you use [Azure AD authentication for Linux VMs](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux), you centrally control and enforce policies that allow or deny access to the VMs.
+> To improve the security of Linux VMs on Azure, you can integrate with Azure AD authentication. When you use [Azure AD authentication for Linux VMs](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md), you centrally control and enforce policies that allow or deny access to the VMs.
> >
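As a minimal sketch, enabling Azure AD login on an existing Linux VM is typically done by installing the Azure AD SSH login extension; the resource names below are hypothetical:

```azurecli
# Install the Azure AD SSH login extension on an existing Linux VM
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADSSHLoginForLinux \
    --resource-group myResourceGroup \
    --vm-name myLinuxVM
```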
See [Azure security best practices and patterns](best-practices-and-patterns.md)
The following resources are available to provide more general information about Azure security and related Microsoft * [Azure Security Team Blog](/archive/blogs/azuresecurity/) - for up to date information on the latest in Azure Security
-* [Microsoft Security Response Center](https://technet.microsoft.com/library/dn440717.aspx) - where Microsoft security vulnerabilities, including issues with Azure, can be reported or via email to secure@microsoft.com
+* [Microsoft Security Response Center](https://technet.microsoft.com/library/dn440717.aspx) - where Microsoft security vulnerabilities, including issues with Azure, can be reported or via email to secure@microsoft.com
security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/overview.md
na Previously updated : 01/06/2022 Last updated : 10/26/2022
You can enable the following diagnostic log categories for NSGs:
## Compute The section provides additional information regarding key features in this area and summary information about these capabilities.
+### Azure confidential computing
+[Azure confidential computing](../../confidential-computing/TOC.yml) provides the final missing piece of the data protection puzzle. It allows you to keep your data encrypted at all times: while at rest, when in motion through the network, and now even while loaded in memory and in use. Additionally, by making [Remote Attestation](../../attestation/overview.md) possible, it allows you to cryptographically verify that the VM you provision has booted securely and is configured correctly, prior to unlocking your data.
+
+The spectrum of options ranges from enabling "lift and shift" scenarios for existing applications to full control of security features. For Infrastructure as a Service (IaaS), you can use [confidential virtual machines powered by AMD SEV-SNP](../../confidential-computing/confidential-vm-overview.md) or confidential application enclaves for virtual machines that run [Intel Software Guard Extensions (SGX)](../../confidential-computing/application-development.md). For Platform as a Service, we have multiple [container-based](../../confidential-computing/choose-confidential-containers-offerings.md) options, including integrations with [Azure Kubernetes Service (AKS)](../../confidential-computing/confidential-nodes-aks-overview.md).
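As a rough sketch of the IaaS option, a confidential VM can be provisioned from the CLI along these lines; the size family, image URN, and disk encryption setting are assumptions to verify against current regional availability:

```azurecli
# Create an AMD SEV-SNP confidential VM (names, size, and image URN are illustrative)
az vm create \
    --resource-group myResourceGroup \
    --name myConfidentialVM \
    --size Standard_DC4as_v5 \
    --image Canonical:0001-com-ubuntu-confidential-vm-jammy:22_04-lts-cvm:latest \
    --security-type ConfidentialVM \
    --os-disk-security-encryption-type VMGuestStateOnly \
    --enable-vtpm true \
    --enable-secure-boot true
```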
+ ### Antimalware & Antivirus With Azure IaaS, you can use antimalware software from security vendors such as Microsoft, Symantec, Trend Micro, McAfee, and Kaspersky to protect your virtual machines from malicious files, adware, and other threats. [Microsoft Antimalware](antimalware.md) for Azure Cloud Services and Virtual Machines is a protection capability that helps identify and remove viruses, spyware, and other malicious software. Microsoft Antimalware provides configurable alerts when known malicious or unwanted software attempts to install itself or run on your Azure systems. Microsoft Antimalware can also be deployed using Microsoft Defender for Cloud
sentinel Ci Cd Custom Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-deploy.md
A sample repository is available with demonstrating the deployment config file a
For more information, see: - [Sentinel CICD repositories sample](https://github.com/SentinelCICD/RepositoriesSampleContent)-- [Create Resource Manager parameter file](/azure/azure-resource-manager/templates/parameter-files)-- [Parameters in ARM templates](/azure/azure-resource-manager/templates/parameters)--
+- [Create Resource Manager parameter file](../azure-resource-manager/templates/parameter-files.md)
+- [Parameters in ARM templates](../azure-resource-manager/templates/parameters.md)
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-deploy.md
If you're a partner who wants to create your own solution, see the [Microsoft Se
## Prerequisites
-In order to install, update or delete solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](/azure/role-based-access-control/built-in-roles#template-spec-contributor) for details on this role.
+In order to install, update or delete solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
This is in addition to Sentinel specific roles. For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md).
In this document, you learned about Microsoft Sentinel solutions and how to find
Many solutions include data connectors that you'll need to configure so that you can start ingesting your data into Microsoft Sentinel. Each data connector will have its own set of requirements, detailed on the data connector page in Microsoft Sentinel.
-For more information, see [Connect your data source](data-connectors-reference.md).
+For more information, see [Connect your data source](data-connectors-reference.md).
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
- [Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip) - [Out of the box anomaly detection on the SAP audit log (Preview)](#out-of-the-box-anomaly-detection-on-the-sap-audit-log-preview) - [IoT device entity page (Preview)](#iot-device-entity-page-preview)-- [Common Event Format (CEF) via AMA](#common-event-format-cef-via-ama-preview) ### Account enrichment fields removed from Azure AD Identity Protection connector
The new [IoT device entity page](entity-pages.md) is designed to help the SOC in
Learn more about [investigating IoT device entities in Microsoft Sentinel](iot-advanced-threat-monitoring.md).
-### Common Event Format (CEF) via AMA (Preview)
-
-The [Common Event Format (CEF) via AMA](connect-cef-ama.md) connector allows you to quickly filter and upload logs over CEF from multiple on-premises appliances to Microsoft Sentinel via the Azure Monitor Agent (AMA).
-
-The AMA supports Data Collection Rules (DCRs), which you can use to filter the logs before ingestion, for quicker upload, efficient analysis, and querying.
-
-Here are some benefits of using AMA for CEF log collection:
--- AMA is faster compared to the existing Log Analytics Agent (MMA/OMS). -- AMA provides centralized configuration using Data Collection Rules (DCRs), and also supports multiple DCRs.-- AMA is Syslog RFC compliant, a faster and a more resilient and reliant agent, more secure with lower footprint on the installed machine.- ## September 2022 - [Create automation rule conditions based on custom details (Preview)](#create-automation-rule-conditions-based-on-custom-details-preview)
service-fabric How To Managed Identity Managed Cluster Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md
For an example of a Service Fabric managed cluster deployment that makes use of
> [!NOTE] > Only user-assigned identities are currently supported for this feature.
+> [!NOTE]
+> See [Configure and use applications with managed identity on a Service Fabric managed cluster](./how-to-managed-cluster-application-managed-identity.md) for application configuration.
+ ## Prerequisites Before you begin:
service-fabric Service Fabric Application And Service Manifests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-and-service-manifests.md
For more information about other features supported by application manifests, re
- [Configure security policies for your application](service-fabric-application-runas-security.md). - [Setup HTTPS endpoints](service-fabric-service-manifest-resources.md#example-specifying-an-https-endpoint-for-your-service). - [Encrypt secrets in the application manifest](service-fabric-application-secret-management.md)
+- [Azure Service Fabric security best practices](service-fabric-best-practices-security.md)
<!--Image references--> [appmodel-diagram]: ./media/service-fabric-application-model/application-model.png
service-fabric Service Fabric Application And Service Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-and-service-security.md
You can establish secure connection between the reverse proxy and services, thus
The Reliable Services application framework provides a few prebuilt communication stacks and tools that you can use to improve security. Learn how to improve security when you're using service remoting (in [C#](service-fabric-reliable-services-secure-communication.md) or [Java](service-fabric-reliable-services-secure-communication-java.md)) or using [WCF](service-fabric-reliable-services-secure-communication-wcf.md). +
+### Include endpoint certificate in Service Fabric applications
+
+To configure your application endpoint certificate, include the certificate by adding an **EndpointCertificate** element, along with the **User** element for the principal account, to the application manifest. By default, the principal account is NetworkService. This provides management of the application certificate's private key ACL for the specified principal.
+
+```xml
+<ApplicationManifest … >
+ ...
+ <Principals>
+ <Users>
+ <User Name="Service1" AccountType="NetworkService" />
+ </Users>
+ </Principals>
+ <Certificates>
+ <EndpointCertificate Name="MyCert" X509FindType="FindByThumbprint" X509FindValue="[YourCertThumbprint]"/>
+ </Certificates>
+</ApplicationManifest>
+```
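As a companion sketch, the declared certificate is typically bound to a service's HTTPS endpoint through an **EndpointBindingPolicy** under **ServiceManifestImport**; the manifest and endpoint names below are hypothetical:

```xml
<ApplicationManifest … >
  ...
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="Service1Pkg" ServiceManifestVersion="1.0.0" />
    <Policies>
      <!-- Binds MyCert to the endpoint resource declared in the service manifest -->
      <EndpointBindingPolicy EndpointRef="ServiceEndpoint" CertificateRef="MyCert" />
    </Policies>
  </ServiceManifestImport>
</ApplicationManifest>
```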
+ ## Encrypt application data at rest Each [node type](service-fabric-cluster-nodetypes.md) in a Service Fabric cluster running in Azure is backed by a [virtual machine scale set](../virtual-machine-scale-sets/overview.md). Using an Azure Resource Manager template, you can attach data disks to the scale set(s) that make up the Service Fabric cluster. If your services save data to an attached data disk, you can [encrypt those data disks](../virtual-machine-scale-sets/disk-encryption-powershell.md) to protect your application data.
service-fabric Service Fabric Assign Policy To Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-assign-policy-to-endpoint.md
For next steps, read the following articles:
* [Understand the application model](service-fabric-application-model.md) * [Specify resources in a service manifest](service-fabric-service-manifest-resources.md) * [Deploy an application](service-fabric-deploy-remove-applications.md)
+* [Azure Service Fabric security best practices](service-fabric-best-practices-security.md)
[image1]: ./media/service-fabric-application-runas-security/copy-to-output.png
service-fabric Service Fabric Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-security.md
user@linux:$ openssl smime -encrypt -in plaintext_UTF-16.txt -binary -outform de
After encrypting your protected values, [specify encrypted secrets in Service Fabric Application](./service-fabric-application-secret-management.md#specify-encrypted-secrets-in-an-application), and [decrypt encrypted secrets from service code](./service-fabric-application-secret-management.md#decrypt-encrypted-secrets-from-service-code).
-## Include certificate in Service Fabric applications
+## Include endpoint certificate in Service Fabric applications
+
+To configure your application endpoint certificate, include the certificate by adding an **EndpointCertificate** element, along with the **User** element for the principal account, to the application manifest. By default, the principal account is NetworkService. This provides management of the application certificate's private key ACL for the specified principal.
+
+```xml
+<ApplicationManifest … >
+ ...
+ <Principals>
+ <Users>
+ <User Name="Service1" AccountType="NetworkService" />
+ </Users>
+ </Principals>
+ <Certificates>
+ <EndpointCertificate Name="MyCert" X509FindType="FindByThumbprint" X509FindValue="[YourCertThumbprint]"/>
+ </Certificates>
+</ApplicationManifest>
+```
+
+## Include secret certificate in Service Fabric applications
To give your application access to secrets, include the certificate by adding a **SecretsCertificate** element to the application manifest.
To give your application access to secrets, include the certificate by adding a
<ApplicationManifest … > ... <Certificates>
- <SecretsCertificate Name="MyCert" X509FindType="FindByThumbprint" X509FindValue="[YourCertThumbrint]"/>
+ <SecretsCertificate Name="MyCert" X509FindType="FindByThumbprint" X509FindValue="[YourCertThumbprint]"/>
</Certificates> </ApplicationManifest> ```
service-health Alerts Activity Log Service Notifications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-portal.md
Last updated 06/27/2019
# Create activity log alerts on service notifications using the Azure portal
-## Overview
This article shows you how to set up activity log alerts for service health notifications by using the Azure portal.
To learn more about action groups, see [Create and manage action groups](../azur
For information on how to configure service health notification alerts by using Azure Resource Manager templates, see [Resource Manager templates](../azure-monitor/alerts/alerts-activity-log.md).
-### Watch a video on setting up your first Azure Service Health alert
+## Watch a video on setting up your first Azure Service Health alert
>[!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE2OaXt]
-## Create Service Health alert using Azure portal
+## Create a Service Health alert using the Azure portal
1. In the [portal](https://portal.azure.com), select **Service Health**. ![The "Service Health" service](media/alerts-activity-log-service-notifications/home-servicehealth.png)
For information on how to configure service health notification alerts by using
![The "Health alerts" tab](media/alerts-activity-log-service-notifications/alerts-blades-sh.png)
-1. Select **Add service health alert** and fill in the fields.
+1. Select **Add service health alert**.
![The "Create service health alert" command](media/alerts-activity-log-service-notifications/service-health-alert.png)
-1. Select the **Subscription**, **Services**, and **Regions** for which you want to be alerted.
+1. The **Create an alert rule wizard** opens to the **Conditions** tab, with the **Scope** tab already populated. Follow the steps for Service Health alerts, starting from the **Conditions** tab, in the [create a new alert rule wizard](../azure-monitor/alerts/alerts-create-new-alert-rule.md).
- [![The "Add activity log alert" dialog box](./media/alerts-activity-log-service-notifications/activity-log-alert-new-ux.png)](./media/alerts-activity-log-service-notifications/activity-log-alert-new-ux.png#lightbox)
-
-> [!NOTE]
-> This subscription is used to save the activity log alert. The alert resource is deployed to this subscription and monitors events in the activity log for it.
-
-> [!NOTE]
-> If selecting specific regions, make sure you always add the "Global" region. This would make sure your alert rule covers resources and services that are global by nature, i.e. not specific to a single region.
-
-5. Choose the **Event types** you want to be alerted for: *Service issue*, *Planned maintenance*, *Health advisories*, and *Security advisory*.
-
-6. Click **Select action group** to choose an existing action group or to create a new action group. For more information on action groups, see [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md).
--
-7. Define your alert details by entering an **Alert rule name** and **Description**.
-
-8. Select the **Resource group** where you want the alert to be saved.
---
-Within a few minutes, the alert is active and begins to trigger based on the conditions you specified during creation.
Learn how to [Configure webhook notifications for existing problem management systems](service-health-alert-webhook-guide.md). For information on the webhook schema for activity log alerts, see [Webhooks for Azure activity log alerts](../azure-monitor/alerts/activity-log-alerts-webhook.md).
service-health Resource Health Alert Monitor Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-alert-monitor-guide.md
Title: Create Resource Health Alerts Using Azure Portal
+ Title: Create Resource Health Alerts using Azure portal
description: Create alert using Azure portal that notify you when your Azure resources become unavailable. Last updated 6/23/2020
-# Configure resource health alerts using Azure portal
+# Configure Resource Health alerts in the Azure portal
-This article shows you how to set up activity log alerts for resource health notifications by using the Azure portal.
+This article shows you how to set up activity log alerts for resource health notifications in the Azure portal.
Azure Resource Health keeps you informed about the current and historical health status of your Azure resources. Azure Resource Health alerts can notify you in near real-time when these resources have a change in their health status. Creating Resource Health alerts programmatically allows users to create and customize alerts in bulk.
To learn more about action groups, see [Create and manage action groups](../azur
For information on how to configure resource health notification alerts by using Azure Resource Manager templates, see [Resource Manager templates](./resource-health-alert-arm-template-guide.md).
-## Resource Health Alert Using Azure Portal
+## Create a Resource Health alert rule in the Azure portal
1. In the Azure [portal](https://portal.azure.com/), select **Service Health**. ![Service Health Selection](./media/resource-health-alert-monitor-guide/service-health-selection.png)
-2. In the **Resource Health** section, select **Service Health**.
-3. Select **Add resource health alert** and fill in the fields.
-4. Under Alert target, select the **Subscription**, **Resource Types**, **Resource Groups** and **Resource** you want to be alerted for.
-
- ![Target selection Selection](./media/resource-health-alert-monitor-guide/alert-target.png)
-
-5. Under alert condition select:
- 1. The **Event Status** you want to be alerted for. The severity level of the event: Active, Resolved, In Progress, Updated
- 2. The **Resource Status** you want to be alerted for. The resource status of the event: Available, Unavailable, Unknown, Degraded
- 3. The **Reason Type** you want to be alerted for. The cause of the event: Platform Initiated, User Initiated
- ![Alert condition selection Health Selection](./media/resource-health-alert-monitor-guide/alert-condition.png)
-6. Under Define alert details, provide the following details:
- 1. **Alert rule name**: The name for the new alert rule.
- 2. **Description**: The description for the new alert rule.
- 3. **Save alert to resource group**: Select the resource group where you want to save this new rule.
-7. Under **Action group**, from the drop-down menu, specify the action group that you want to assign to this new alert rule. Or, [create a new action group](../azure-monitor/alerts/action-groups.md) and assign it to the new rule. To create a new group, select + **New group**.
-8. To enable the rules after you create them, select **Yes** for the **Enable rule upon creation** option.
-9. Select **Create alert rule**.
-
-The new alert rule for the activity log is created, and a confirmation message appears in the upper-right corner of the window.
-You can enable, disable, edit, or delete a rule. Learn more about [how to manage activity log rules](../azure-monitor/alerts/alerts-activity-log.md#view-and-manage-in-the-azure-portal).
+1. In the **Resource Health** section, select **Service Health**.
+1. Select **Add resource health alert**.
+1. The **Create an alert rule wizard** opens to the **Conditions** tab, with the **Scope** tab already populated. Follow the steps for Resource Health alerts, starting from the **Conditions** tab, in the [create a new alert rule wizard](../azure-monitor/alerts/alerts-create-new-alert-rule.md).
## Next steps
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
This table summarized support for the Azure VM OS disk, data disk, and temporary
OS disk maximum size | 4096 GB | [Learn more](../virtual-machines/managed-disks-overview.md) about VM disks. Temporary disk | Not supported | The temporary disk is always excluded from replication.<br/><br/> Don't store any persistent data on the temporary disk. [Learn more](../virtual-machines/managed-disks-overview.md). Data disk maximum size | 32 TB for managed disks<br></br>4 TB for unmanaged disks|
-Data disk minimum size | No restriction for unmanaged disks. 2 GB for managed disks |
+Data disk minimum size | No restriction for unmanaged disks. 1 GB for managed disks |
Data disk maximum number | Up to 64, in accordance with support for a specific Azure VM size | [Learn more](../virtual-machines/sizes.md) about VM sizes. Data disk maximum size per storage account (for unmanaged disks) | 35 TB | This is an upper limit for cumulative size of page blobs created in a premium Storage Account Data disk change rate | Maximum of 20 MBps per disk for premium storage. Maximum of 2 MBps per disk for Standard storage. | If the average data change rate on the disk is continuously higher than the maximum, replication won't catch up.<br/><br/> However, if the maximum is exceeded sporadically, replication can catch up, but you might see slightly delayed recovery points.
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-with-custom-container-image.md
AppPlatformContainerEventLogs
### Scan your image for vulnerabilities
-We recommend that you use Microsoft Defender for Cloud with ACR to prevent your images from being vulnerable. For more information, see [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks#scanning-images-in-acr-registries)
+We recommend that you use Microsoft Defender for Cloud with ACR to prevent your images from being vulnerable. For more information, see [Microsoft Defender for Cloud](../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks)
### Switch between JAR deployment and container deployment
spring-apps Quickstart Sample App Acme Fitness Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-acme-fitness-store-introduction.md
# Introduction to the Fitness Store sample app
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+ > [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
spring-apps Troubleshoot Build Exit Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot-build-exit-code.md
+
+ Title: Troubleshoot common build issues in Azure Spring Apps
+description: Learn how to troubleshoot common build issues in Azure Spring Apps.
+Last updated : 10/24/2022
+# Troubleshoot common build issues in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This article describes how to troubleshoot build issues with your Azure Spring Apps deployment.
+
+## Build exit codes
+
+Azure Spring Apps Enterprise tier uses Tanzu Buildpacks to transform your application source code into images. For more information, see [Tanzu Buildpacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/index.html).
+
+When you deploy your app in Azure Spring Apps using the [Azure CLI](/cli/azure/install-azure-cli), you'll see a build log in the Azure CLI console. If the build fails, Azure Spring Apps displays an exit code and error message in the CLI console indicating why the buildpack execution failed during different phases of the buildpack [lifecycle](https://buildpacks.io/docs/concepts/components/lifecycle/).
+
+The following list describes some common exit codes:
+
+- **20** - All buildpack groups have failed to detect.
+
+ Consider the following possible causes of an exit code of *20*:
+
+ - The builder you're using doesn't support the language your project used.
+
+ If you're using the default builder, check the language the default builder supports. For more information, see the [Default Builder and Tanzu Buildpacks](how-to-enterprise-build-service.md#default-builder-and-tanzu-buildpacks) section of [Use Tanzu Build Service](how-to-enterprise-build-service.md).
+
+ If you're using the custom builder, check whether your custom builder's buildpack supports the language your project used.
+
+ - You're running against the wrong path; for example, your Maven project's *pom.xml* file isn't in the root path.
+
+ Set `BP_MAVEN_POM_FILE` to specify the location of the project's *pom.xml* file.
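+
+ For example, a minimal sketch that passes the variable through the `--build-env` parameter when deploying from source (the `backend/pom.xml` path is a hypothetical example; adjust it to your repository layout):
+
+ ```azurecli
+ # BP_MAVEN_POM_FILE tells the Maven buildpack where the pom.xml file lives.
+ az spring app deploy \
+ --resource-group <your-resource-group-name> \
+ --service <your-Azure-Spring-Apps-name> \
+ --name <your-app-name> \
+ --source-path . \
+ --build-env BP_MAVEN_POM_FILE=backend/pom.xml
+ ```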
+
+ - There's something wrong with your application; for example, your *.jar* file doesn't have a */META-INF/MANIFEST.MF* file that contains a `Main-Class` entry.
+
+- **51** - Buildpack build error.
+
+ Consider the following possible causes of an exit code of *51*:
+
+ - If Azure Spring Apps displays the error message `Build failed in stage build with reason OOMKilled` in the Azure CLI console, the build failed due to insufficient memory.
+
+ Use the following command to increase the build memory by setting the `--build-memory` parameter:
+
+ ```azurecli
+ az spring app deploy \
+ --resource-group <your-resource-group-name> \
+ --service <your-Azure-Spring-Apps-name> \
+ --name <your-app-name> \
+ --build-memory 3Gi
+ ```
+
+ - The build failed because of an application source code error; for example, there's a compilation error in your source code.
+
+ Check the build log to find the root cause.
+
+ - The build failed because of a download dependency error; for example, a network issue caused the Maven dependency download to fail.
+
+- **62** - Failed to write image to Azure Container Registry.
+
+ Consider the following possible cause of an exit code of *62*:
+
+ - If Azure Spring Apps displays the error message `Failed to write image to the following tags` in the build log, the build failed because of a network issue.
+
+ Retry the deployment to fix the issue.
+
+ If your application is a static file or dynamic front-end application served by a web server, see the [Common build and deployment errors](how-to-enterprise-deploy-static-file.md#common-build-and-deployment-errors) section of [Deploy static files in Azure Spring Apps Enterprise tier](how-to-enterprise-deploy-static-file.md).
+
+## Next steps
+
+- [Troubleshoot common Azure Spring Apps issues](./troubleshoot.md)
static-web-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/monitor.md
Use the following steps to add Application Insights monitoring to your static we
Once you create the Application Insights instance, it creates an associated application setting in the Azure Static Web Apps instance used to link the services together. > [!NOTE]
-> If you want to track how the different features of your web app are used end-to-end client side, you can insert trace calls in your JavaScript code. For more information, see [Application Insights for webpages](/azure/azure-monitor/app/javascript?tabs=snippet).
+> If you want to track how the different features of your web app are used end-to-end client side, you can insert trace calls in your JavaScript code. For more information, see [Application Insights for webpages](../azure-monitor/app/javascript.md?tabs=snippet).
## Access data
In some cases, you may want to limit logging while still capturing details on er
## Next steps > [!div class="nextstepaction"]
-> [Set up authentication and authorization](authentication-authorization.md)
+> [Set up authentication and authorization](authentication-authorization.md)
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
For general destination limitations, see [Destination limitations](../../azure-m
If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](lifecycle-management-overview.md).
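
As a minimal sketch (the *policy.json* file name and its rule contents are assumptions; author the rules you need), such a policy can be applied with the Azure CLI:

```azurecli
# Apply a lifecycle management policy, read from a local JSON file, to the account that holds the logs.
az storage account management-policy create \
    --account-name <storage-account-name> \
    --resource-group <resource-group-name> \
    --policy @policy.json
```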
- If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](../../azure-monitor/logs/data-retention-archive.md).
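
A short sketch of the workspace-level setting with the Azure CLI (the names and the 60-day value are placeholders):

```azurecli
# Set workspace-level retention to 60 days; per-table settings can still override this.
az monitor log-analytics workspace update \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --retention-time 60
```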
## Analyzing metrics
The following example shows how to read metric data on the metric supporting mul
## Analyzing logs
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Blob Storage resource logs is found in [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Blob Storage resource logs is found in [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md).
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages). Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure Blob Storage service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
### Log authenticated requests
All other failed anonymous requests aren't logged. For a full list of the logged
### Sample Kusto queries
-If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
+If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
Here are some queries that you can enter in the **Log search** bar to help you monitor your Blob storage. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
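
You can also run such queries from the command line; a sketch (the workspace GUID is a placeholder, and the query itself is just an example) looks like this:

```azurecli
# Return the ten most recent non-successful requests from the StorageBlobLogs table.
az monitor log-analytics query \
    --workspace <workspace-customer-id> \
    --analytics-query "StorageBlobLogs | where StatusText != 'Success' | top 10 by TimeGenerated desc"
```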
Get started with any of these guides.
| [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md) | A reference of the logs and metrics created by Azure Blob Storage |
| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
See the [limitations and known issues article](secure-file-transfer-protocol-kno
## Pricing and billing
-Enabling the SFTP endpoint has a cost of $0.30 per hour. We will start applying this hourly cost on or after December 1, 2022.
+Enabling the SFTP endpoint has an hourly cost. We will start applying this hourly cost on or after January 1, 2023. For the latest pricing information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
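
One way to manage this charge may be to disable the endpoint while it isn't needed. A sketch with the Azure CLI (assuming an existing account with SFTP support enabled):

```azurecli
# Turn the SFTP endpoint off; set the flag back to true when you need it again.
az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --enable-sftp false
```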
Transaction, storage, and networking prices for the underlying storage account apply. To learn more, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md#understand-the-full-billing-model-for-azure-blob-storage).
storage Storage Encrypt Decrypt Blobs Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md
This tutorial shows you how to:
- Azure subscription - [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) - Azure storage account - [create a storage account](../common/storage-account-create.md)-- Key vault - create one using [Azure portal](/azure/key-vault/general/quick-create-portal), [Azure CLI](/azure/key-vault/general/quick-create-cli), or [PowerShell](/azure/key-vault/general/quick-create-powershell)
+- Key vault - create one using [Azure portal](../../key-vault/general/quick-create-portal.md), [Azure CLI](../../key-vault/general/quick-create-cli.md), or [PowerShell](../../key-vault/general/quick-create-powershell.md)
- [Visual Studio 2022](https://visualstudio.microsoft.com) installed ## Assign a role to your Azure AD user
-When developing locally, make sure that the user account that is accessing the key vault has the correct permissions. You'll need the [Key Vault Crypto Officer role](/azure/role-based-access-control/built-in-roles#key-vault-crypto-officer) to create a key and perform actions on keys in a key vault. You can assign Azure RBAC roles to a user using the Azure portal, Azure CLI, or Azure PowerShell. You can learn more about the available scopes for role assignments on the [scope overview](../../../articles/role-based-access-control/scope-overview.md) page.
+When developing locally, make sure that the user account that is accessing the key vault has the correct permissions. You'll need the [Key Vault Crypto Officer role](../../role-based-access-control/built-in-roles.md#key-vault-crypto-officer) to create a key and perform actions on keys in a key vault. You can assign Azure RBAC roles to a user using the Azure portal, Azure CLI, or Azure PowerShell. You can learn more about the available scopes for role assignments on the [scope overview](../../../articles/role-based-access-control/scope-overview.md) page.
In this scenario, you'll assign permissions to your user account, scoped to the key vault, to follow the [Principle of Least Privilege](../../../articles/active-directory/develop/secure-least-privileged-access.md). This practice gives users only the minimum permissions needed and creates more secure production environments.
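
A sketch of that key-vault-scoped assignment with the Azure CLI (the principal and scope values are placeholders for your own identifiers):

```azurecli
# Assign Key Vault Crypto Officer to a user, scoped to a single key vault.
az role assignment create \
    --role "Key Vault Crypto Officer" \
    --assignee "<user-principal-name>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"
```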
In this tutorial, you learned how to use .NET client libraries to perform client
For a broad overview of client-side encryption for blobs, including instructions for migrating encrypted data to version 2, see [Client-side encryption for blobs](client-side-encryption.md).
-For more information about Azure Key Vault, see the [Azure Key Vault overview page](../../key-vault/general/overview.md)
+For more information about Azure Key Vault, see the [Azure Key Vault overview page](../../key-vault/general/overview.md)
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
Sample code is also available on [GitHub](https://github.com/Azure-Samples/Azure
The order and locations in which `DefaultAzureCredential` looks for credentials can be found in the [Azure Identity library overview](/javascript/api/overview/azure/identity-readme#defaultazurecredential).
-For example, your app can authenticate using your Azure CLI sign-in credentials with when developing locally. Your app can then use a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) once it has been deployed to Azure. No code changes are required for this transition.
+For example, your app can authenticate using your Azure CLI sign-in credentials when developing locally. Your app can then use a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) once it has been deployed to Azure. No code changes are required for this transition.
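
The local-development half of that flow is simply a CLI sign-in, which `DefaultAzureCredential` then discovers (a sketch; the subscription ID is a placeholder):

```azurecli
# Sign in locally; DefaultAzureCredential picks up this session with no code changes.
az login
az account set --subscription "<subscription-id>"
```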
#### Assign roles to your Azure AD user account
For tutorials, samples, quickstarts, and other documentation, visit:
- To learn how to deploy a web app that uses Azure Blob storage, see [Tutorial: Upload image data in the cloud with Azure Storage](./storage-upload-process-images.md?preserve-view=true&tabs=javascript) - To see Blob storage sample apps, continue to [Azure Blob storage package library JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).-- To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
+- To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
These example code snippets show you how to do the following tasks with the Azur
The order and locations in which `DefaultAzureCredential` looks for credentials can be found in the [Azure Identity library overview](/python/api/overview/azure/identity-readme#defaultazurecredential).
-For example, your app can authenticate using your Azure CLI sign-in credentials with when developing locally. Your app can then use a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) once it has been deployed to Azure. No code changes are required for this transition.
+For example, your app can authenticate using your Azure CLI sign-in credentials when developing locally. Your app can then use a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) once it has been deployed to Azure. No code changes are required for this transition.
#### Assign roles to your Azure AD user account
To see Blob storage sample apps, continue to:
> [Azure Blob Storage library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob/samples) - To learn more, see the [Azure Storage client libraries for Python](/azure/developer/python/sdk/storage/overview).-- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Python Developers](/azure/python/).
+- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Python Developers](/azure/python/).
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
description: Learn about file shares hosted in Azure Files using the Network Fil
Previously updated : 05/25/2022 Last updated : 11/15/2022
NFS file shares are often used in the following scenarios:
## Features - Fully POSIX-compliant file system. - Hard link support.-- Symbolic link support.
+- Symbolic link support.
- NFS file shares currently support most features from the [4.1 protocol specification](https://tools.ietf.org/html/rfc5661). Some features, such as delegations and callbacks of all kinds, Kerberos authentication, and encryption-in-transit, are not supported.
+> [!NOTE]
+> Creating a hard link from an existing symbolic link isn't currently supported.
## Security and networking All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the SMB and NFS protocols.
storage Storage Files Identity Ad Ds Update Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-update-password.md
Previously updated : 09/28/2022 Last updated : 11/16/2022
Update-AzStorageAccountADObjectPassword `
-StorageAccountName "<your-storage-account-name-here>" ```
+This action will change the password for the AD object from kerb1 to kerb2. This is intended to be a two-stage process: rotate from kerb1 to kerb2 (kerb2 will be regenerated on the storage account before being set), wait several hours, and then rotate back to kerb1 (this cmdlet will likewise regenerate kerb1).
+
## Applies to

| File share type | SMB | NFS |
|-|:-:|:-:|
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
For general destination limitations, see [Destination limitations](../../azure-m
If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
- If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](../../azure-monitor/logs/data-retention-archive.md).
## Analyzing metrics
The following example shows how to read metric data on the metric supporting mul
## Analyzing logs
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Files resource logs is found in [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Files resource logs is found in [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages). Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure File service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
### Log authenticated requests
Requests made by the Azure Files service itself, such as log creation or deletio
### Sample Kusto queries
-If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
+If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
Here are some queries that you can enter in the **Log search** bar to help you monitor your Azure Files. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
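
As a sketch (assuming your diagnostic settings route file logs to the workspace), the following command surfaces the most common SMB operations from the `StorageFileLogs` table:

```azurecli
# Count SMB requests by operation name and list the busiest first.
az monitor log-analytics query \
    --workspace <workspace-customer-id> \
    --analytics-query "StorageFileLogs | where Protocol == 'SMB' | summarize Count = count() by OperationName | order by Count desc"
```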
The following table lists some example scenarios to monitor and the proper metri
- [Planning for an Azure Files deployment](./storage-files-planning.md) - [How to deploy Azure Files](./storage-how-to-create-file-share.md) - [Troubleshoot Azure Files on Windows](./storage-troubleshoot-windows-file-connection-problems.md)-- [Troubleshoot Azure Files on Linux](./storage-troubleshoot-linux-file-connection-problems.md)-
+- [Troubleshoot Azure Files on Linux](./storage-troubleshoot-linux-file-connection-problems.md)
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
description: Understand planning for an Azure Files deployment. You can either d
Previously updated : 08/29/2022 Last updated : 11/15/2022
With both SMB and NFS file shares, Azure Files offers enterprise-grade file shar
| Symbolic link support | Not supported | Supported |
| Optionally internet accessible | Yes (SMB 3.0+ only) | No |
| Supports FileREST | Yes | Subset: <br /><ul><li>[Operations on the `FileService`](/rest/api/storageservices/operations-on-the-account--file-service-)</li><li>[Operations on `FileShares`](/rest/api/storageservices/operations-on-shares--file-service-)</li><li>[Operations on `Directories`](/rest/api/storageservices/operations-on-directories)</li><li>[Operations on `Files`](/rest/api/storageservices/operations-on-files)</li></ul> |
-| Mandatory lock/advisory byte range lock | Not supported | Supported |
+| Mandatory byte range locks | Supported | Not supported |
+| Advisory byte range locks | Not supported | Supported |
| Extended/named attributes | Not supported | Not supported |
| Alternate data streams | Not supported | N/A |
| Object identifiers | Not supported | N/A |
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
For general destination limitations, see [Destination limitations](../../azure-m
If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
- If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](../../azure-monitor/logs/data-retention-archive.md).
## Analyzing metrics
The following example shows how to read metric data on the metric supporting mul
## Analyzing logs
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Queue Storage resource logs is found in [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Queue Storage resource logs is found in [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md).
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
Log entries are created only if there are requests made against the service endp
Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its queue endpoint but not in its table or blob endpoints, only logs that pertain to Queue Storage are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
### Log authenticated requests
All other failed anonymous requests aren't logged. For a full list of the logged
### Sample Kusto queries
-If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
+If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
Here are some queries that you can enter in the **Log search** bar to help you monitor your queues. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
Get started with any of these guides.
| [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md) | A reference of the logs and metrics created by Azure Queue Storage |
| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/queues/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/queues/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/queues/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/queues/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
storage Veeam Solution Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/veeam/veeam-solution-guide.md
This diagram provides an overview of these capabilities.
## Before you begin
-As you plan your Azure Storage strategy with Veeam, it's recommended to review the [Microsoft Cloud Adoption Framework](https://docs.microsoft.com/azure/cloud-adoption-framework/) for guidance on setting up your Azure environment. The [Azure Setup Guide](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-setup-guide/) includes step-by-step details to help you establish a foundation for operating efficiently and securely within Azure.
+As you plan your Azure Storage strategy with Veeam, it's recommended to review the [Microsoft Cloud Adoption Framework](/azure/cloud-adoption-framework/) for guidance on setting up your Azure environment. The [Azure Setup Guide](/azure/cloud-adoption-framework/ready/azure-setup-guide/) includes step-by-step details to help you establish a foundation for operating efficiently and securely within Azure.
## Using Azure Blob Storage with Veeam
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
For general destination limitations, see [Destination limitations](../../azure-m
If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
- If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](../../azure-monitor/logs/data-retention-archive.md).
## Analyzing metrics
The following example shows how to read metric data on the metric supporting mul
## Analyzing logs
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Table Storage resource logs is found in [Azure Table storage monitoring data reference](monitor-table-storage-reference.md).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Table Storage resource logs is found in [Azure Table storage monitoring data reference](monitor-table-storage-reference.md).
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages). Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure Blob Storage service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
### Log authenticated requests
All other failed anonymous requests aren't logged. For a full list of the logged
### Sample Kusto queries
-If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
+If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
Here are some queries that you can enter in the **Log search** bar to help you monitor your Blob storage. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
No. Azure Compute supports the metrics on disks. For more information, see [Per
| [Azure Table storage monitoring data reference](monitor-table-storage-reference.md)| A reference of the logs and metrics created by Azure Table Storage |
| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/tables/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/tables/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/tables/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/tables/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/7-beyond-data-warehouse-migration.md
A key reason to migrate your existing data warehouse to Azure Synapse Analytics
- [Azure HDInsight](../../../hdinsight/index.yml) to process large amounts of data, and to join big data with Azure Synapse data by creating a logical data warehouse using PolyBase. -- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/hdinsight/kafka/apache-kafka-introduction) to integrate live streaming data from Azure Synapse.
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](../../../hdinsight/kafka/apache-kafka-introduction.md) to integrate live streaming data from Azure Synapse.
The growth of big data has led to an acute demand for [machine learning](../../machine-learning/what-is-machine-learning.md) to enable custom-built, trained machine learning models for use in Azure Synapse. Machine learning models enable in-database analytics to run at scale in batch, on an event-driven basis and on-demand. The ability to take advantage of in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees consistent predictions and recommendations.
By migrating your data warehouse to Azure Synapse, you can take advantage of the
## Next steps
-To learn about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
+To learn about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
A *data warehouse snapshot* creates a restore point you can leverage to recover
A *data warehouse restore* is a new data warehouse that is created from a restore point of an existing or deleted data warehouse. Restoring your data warehouse is an essential part of any business continuity and disaster recovery strategy because it re-creates your data after accidental corruption or deletion. A data warehouse snapshot is also a powerful mechanism to create copies of your data warehouse for test or development purposes.
-> [!NOTE]Dedicated SQL pool Recovery Time Objective (RTO) rates can vary. Factors that may affect the recovery (restore) time:
+> [!NOTE]
+> Dedicated SQL pool Recovery Time Objective (RTO) rates can vary. Factors that may affect the recovery (restore) time:
> - The database size
> - The location of the source and target data warehouse (i.e., geo-restore)
To restore a deleted data warehouse, see [Restore a deleted database (formerly S
> [!NOTE]
> Table-level restore is not supported in dedicated SQL Pools. You can only recover an entire database from your backup, and then copy the required table(s) by using
-> - ETL tools activities such as [Copy Activity](/azure/data-factory/copy-activity-overview)
+> - ETL tools activities such as [Copy Activity](../../data-factory/copy-activity-overview.md)
> - Export and Import > - Export the data from the restored backup into your Data Lake by using CETAS [CETAS Example](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=sql-server-linux-ver16&preserve-view=true#d-use-create-external-table-as-select-exporting-data-as-parquet)
-> - Import the data by using [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest) or [Polybase](/azure/synapse-analytics/sql/load-data-overview#options-for-loading-with-polybase)
+> - Import the data by using [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest) or [Polybase](../sql/load-data-overview.md#options-for-loading-with-polybase)
## Cross subscription restore
You can [submit a support ticket](sql-data-warehouse-get-started-create-support-
## Next steps
-For more information about restore points, see [User-defined restore points](sql-data-warehouse-restore-points.md)
+For more information about restore points, see [User-defined restore points](sql-data-warehouse-restore-points.md)
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
All maintenance operations should finish within the specified maintenance window
Integration with Service Health notifications and the Resource Health Check Monitor allows customers to stay informed of impending maintenance activity. This automation takes advantage of Azure Monitor. You can decide how you want to be notified of impending maintenance events. Also, you can choose which automated flows will help you manage downtime and minimize operational impact.
-A 24-hour advance notification precedes all maintenance events that aren't for the DWC400c and lower tiers.
+A 24-hour advance notification precedes all maintenance events that aren't for the DW400c and lower tiers.
> [!NOTE] > In the event we are required to deploy a time critical update, advanced notification times may be significantly reduced. This could occur outside an identified maintenance window due to the critical nature of the update.
synapse-analytics Connect Synapse Link Sql Database Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database-vnet.md
Title: Configure Azure Synapse Link for Azure SQL Database with network security (preview)
-description: Learn how to configure Azure Synapse Link for Azure SQL Database with network security (preview).
+ Title: Configure Azure Synapse Link for Azure SQL Database with network security
+description: Learn how to configure Azure Synapse Link for Azure SQL Database with network security.
Previously updated : 09/28/2022 Last updated : 11/16/2022
-# Configure Azure Synapse Link for Azure SQL Database with network security (preview)
+# Configure Azure Synapse Link for Azure SQL Database with network security
This article is a guide for configuring Azure Synapse Link for Azure SQL Database with network security. Before you begin, you should know how to create and start Azure Synapse Link for Azure SQL Database from [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md).
-> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Create a managed workspace virtual network without data exfiltration In this section, you create an Azure Synapse workspace with a managed virtual network enabled. For **Managed virtual network**, you'll select **Enable**, and for **Allow outbound data traffic only to approved targets**, you'll select **No**. For an overview, see [Azure Synapse Analytics managed virtual network](../security/synapse-workspace-managed-vnet.md).
In this section, you create an Azure Synapse workspace with managed virtual netw
d. Go to the Azure portal for your SQL Server instance that hosts an Azure SQL database as a source store, and then approve the private endpoint connections. :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe3.png" alt-text="Screenshot of a new Azure SQL database linked service private endpoint 3.":::
-
+
1. Now you can create a link connection from the **Integrate** pane to replicate data from your Azure SQL database to an Azure Synapse SQL pool. :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot that shows how to create a link.":::
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
Title: Get started with Azure Synapse Link for Azure SQL Database (preview)
-description: Learn how to connect an Azure SQL database to an Azure Synapse workspace with Azure Synapse Link (preview).
+ Title: Get started with Azure Synapse Link for Azure SQL Database
+description: Learn how to connect an Azure SQL database to an Azure Synapse workspace with Azure Synapse Link.
Previously updated : 05/09/2022 Last updated : 11/16/2022
-# Get started with Azure Synapse Link for Azure SQL Database (preview)
+# Get started with Azure Synapse Link for Azure SQL Database
-This article is a step-by-step guide for getting started with Azure Synapse Link for Azure SQL Database. For an overview of this feature, see [Azure Synapse Link for Azure SQL Database (preview)](sql-database-synapse-link.md).
-
-> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+This article is a step-by-step guide for getting started with Azure Synapse Link for Azure SQL Database. For an overview of this feature, see [Azure Synapse Link for Azure SQL Database](sql-database-synapse-link.md).
## Prerequisites
This article is a step-by-step guide for getting started with Azure Synapse Link
1. On the left pane of the Azure portal, select **Integrate**.
-1. On the **Integrate** pane, select the plus sign (**+**), and then select **Link connection (Preview)**.
+1. On the **Integrate** pane, select the plus sign (**+**), and then select **Link connection**.
:::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-link-connection.png" alt-text="Screenshot that shows how to select a new link connection from Synapse Studio.":::
synapse-analytics Connect Synapse Link Sql Server 2022 Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022-vnet.md
Title: Configure Azure Synapse Link for SQL Server 2022 with network security (preview)
-description: Learn how to configure Azure Synapse Link for SQL Server 2022 with network security (preview).
+ Title: Configure Azure Synapse Link for SQL Server 2022 with network security
+description: Learn how to configure Azure Synapse Link for SQL Server 2022 with network security.
Previously updated : 09/28/2022 Last updated : 11/16/2022
-# Configure Azure Synapse Link for SQL Server 2022 with network security (preview)
+# Configure Azure Synapse Link for SQL Server 2022 with network security
This article is a guide for configuring Azure Synapse Link for SQL Server 2022 with network security. Before you begin this process, you should know how to create and start Azure Synapse Link for SQL Server 2022. For information, see [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md).
-> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Create a managed workspace virtual network without data exfiltration In this section, you create an Azure Synapse workspace with a managed virtual network enabled. You'll enable **managed virtual network**, and then select **No** to allow outbound traffic from the workspace to any target. For an overview, see [Azure Synapse Analytics managed virtual network](../security/synapse-workspace-managed-vnet.md).
In this section, you create an Azure Synapse workspace with managed virtual netw
1. Create a linked service that connects to your SQL Server 2022 instance.
- To learn how, see the "Create a linked service for your source SQL Server 2022 database" section of [Get started with Azure Synapse Link for SQL Server 2022 (preview)](connect-synapse-link-sql-server-2022.md#create-a-linked-service-for-your-source-sql-server-2022-database).
+ To learn how, see the "Create a linked service for your source SQL Server 2022 database" section of [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md#create-a-linked-service-for-your-source-sql-server-2022-database).
1. Add a role assignment to ensure that you've granted your Azure Synapse workspace managed identity permissions to your Azure Data Lake Storage Gen2 storage account that's used as the landing zone.
- To learn how, see the "Create a linked service to connect to your landing zone on Azure Data Lake Storage Gen2" section of [Get started with Azure Synapse Link for SQL Server 2022 (preview)](connect-synapse-link-sql-server-2022.md#create-a-linked-service-to-connect-to-your-landing-zone-on-azure-data-lake-storage-gen2).
+ To learn how, see the "Create a linked service to connect to your landing zone on Azure Data Lake Storage Gen2" section of [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md#create-a-linked-service-to-connect-to-your-landing-zone-on-azure-data-lake-storage-gen2).
1. Create a linked service that connects to your Azure Data Lake Storage Gen2 storage (landing zone) with managed private endpoint enabled.
In this section, you create an Azure Synapse workspace with managed virtual netw
d. Complete the creation of the linked service for Azure Data Lake Storage Gen2 storage. :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe4.png" alt-text="Screenshot of new sql db linked service pe4.":::
-
+
1. Now you can create a link connection from the **Integrate** pane to replicate data from your SQL Server 2022 instance to an Azure Synapse SQL pool. :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot that shows how to create a link.":::
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
Title: Create Azure Synapse Link for SQL Server 2022 (preview)
-description: Learn how to create and connect a SQL Server 2022 instance to an Azure Synapse workspace by using Azure Synapse Link (preview).
+ Title: Create Azure Synapse Link for SQL Server 2022
+description: Learn how to create and connect a SQL Server 2022 instance to an Azure Synapse workspace by using Azure Synapse Link.
Previously updated : 09/27/2022 Last updated : 11/16/2022
-# Get started with Azure Synapse Link for SQL Server 2022 (preview)
+# Get started with Azure Synapse Link for SQL Server 2022
This article is a step-by-step guide for getting started with Azure Synapse Link for SQL Server 2022. For an overview, see [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md).
-> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Prerequisites
-* Before you begin, see [Create a new Azure Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. The current tutorial is to create Azure Synapse Link for SQL in a public network. This article assumes that you selected **Disable Managed virtual network** and **Allow connections from all IP addresses** when you created an Azure Synapse workspace. If you want to configure Azure Synapse Link for SQL Server 2022 with network security, also see [Configure Azure Synapse Link for SQL Server 2022 with network security (preview)](connect-synapse-link-sql-server-2022-vnet.md).
+* Before you begin, see [Create a new Azure Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. This tutorial creates Azure Synapse Link for SQL in a public network. This article assumes that you selected **Disable Managed virtual network** and **Allow connections from all IP addresses** when you created an Azure Synapse workspace. If you want to configure Azure Synapse Link for SQL Server 2022 with network security, also see [Configure Azure Synapse Link for SQL Server 2022 with network security](connect-synapse-link-sql-server-2022-vnet.md).
* Create an Azure Data Lake Storage Gen2 account, which is different from the account you create with the Azure Synapse Analytics workspace. You'll use this account as the landing zone to stage the data submitted by SQL Server 2022. For more information, see [Create an Azure Data Lake Storage Gen2 account](../../storage/blobs/create-data-lake-storage-account.md).
This article is a step-by-step guide for getting started with Azure Synapse Link
1. From Synapse Studio, open the **Integrate** hub.
-1. On the **Integrate** pane, select the plus sign (**+**), and then select **Link connection (Preview)**.
+1. On the **Integrate** pane, select the plus sign (**+**), and then select **Link connection**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/new-link-connection.png" alt-text="Screenshot that shows the 'Link connection (Preview)' button.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/new-link-connection.png" alt-text="Screenshot that shows the 'Link connection' button.":::
1. Enter your source database:
synapse-analytics How To Monitor Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-database.md
You can monitor the status of your Azure Synapse Link connection, see which tabl
| Status | **Initial**, **Starting**, **Running**, **Stopping**, **Stopped**, **Pausing**, **Paused**, or **Resuming**. Details of what each status means can be found here: [Azure Synapse Link for Azure SQL Database](sql-database-synapse-link.md) |
| Start time | Start date and time for the link connection run (Month, Date, Year, HH:MM:SS AM/PM) |
| End time | End date and time for the link connection run (Month, Date, Year, HH:MM:SS AM/PM) |
- | Landing zone SAS token expire time | Expiration date/time for the SAS token that is used to access the landing zone storage. More details can be found here: [Configure an expiration policy for shared accessed signatures (SAS)](/azure/storage/common/sas-expiration-policy.md?context=/azure/synapse-analytics/context/context) |
+ | Landing zone SAS token expire time | Expiration date/time for the SAS token that is used to access the landing zone storage. More details can be found here: [Configure an expiration policy for shared accessed signatures (SAS)](../../storage/common/sas-expiration-policy.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext) |
| Continuous run ID | ID of the link connection run. *Helpful when troubleshooting any issues and contacting Microsoft support.* |
1. You need to manually select the **Refresh** button to refresh the list of link connections and their corresponding monitoring details. Autorefresh is currently not supported.
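Related to the SAS token expiration row above, a storage account's SAS expiration policy can be set from PowerShell. The following is a minimal sketch assuming the Az.Storage module; the resource group and account names are illustrative placeholders, and the one-day period is an example value.

```powershell
# Minimal sketch: set a one-day SAS expiration policy on the landing zone
# storage account. Assumes the Az.Storage module; names are placeholders.
Set-AzStorageAccount -ResourceGroupName 'rg-synapse' `
    -Name 'landingzonestorage' `
    -SasExpirationPeriod '1.00:00:00'
```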
If you're using a database other than an Azure SQL database, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
-* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
+* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
synapse-analytics How To Monitor Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-server-2022.md
You can monitor the status of your Azure Synapse Link connection, see which tabl
| Status | **Initial**, **Starting**, **Running**, **Stopping**, **Stopped**, **Pausing**, **Paused**, or **Resuming**. Details of what each status means can be found here: [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md) |
| Start time | Start date and time for the link connection run (Month, Date, Year, HH:MM:SS AM/PM) |
| End time | End date and time for the link connection run (Month, Date, Year, HH:MM:SS AM/PM) |
- | Landing zone SAS token expire time | Expiration date/time for the SAS token that is used to access the landing zone storage. More details can be found here: [Configure an expiration policy for shared accessed signatures (SAS)](/azure/storage/common/sas-expiration-policy.md?context=/azure/synapse-analytics/context/context) |
+ | Landing zone SAS token expire time | Expiration date/time for the SAS token that is used to access the landing zone storage. More details can be found here: [Configure an expiration policy for shared accessed signatures (SAS)](../../storage/common/sas-expiration-policy.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext) |
| Continuous run ID | ID of the link connection run. *Helpful when troubleshooting any issues and contacting Microsoft support.* |
1. You need to manually select the **Refresh** button to refresh the list of link connections and their corresponding monitoring details. Autorefresh is currently not supported.
If you're using a database other than a SQL Server 2022 instance, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
-* [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md)
+* [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md)
synapse-analytics Sql Database Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-database-synapse-link.md
Title: Azure Synapse Link for Azure SQL Database (Preview)
+ Title: Azure Synapse Link for Azure SQL Database
description: Learn about Azure Synapse Link for Azure SQL Database, the link connection, and monitoring the Synapse Link.
Previously updated : 05/09/2022 Last updated : 11/16/2022
-# Azure Synapse Link for Azure SQL Database (Preview)
+# Azure Synapse Link for Azure SQL Database
This article helps you to understand the functions of Azure Synapse Link for Azure SQL Database. You can use the Azure Synapse Link for SQL functionality to replicate your operational data into an Azure Synapse Analytics dedicated SQL pool from Azure SQL Database.
-> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
## Link connection
A link connection identifies a mapping relationship between an Azure SQL database and an Azure Synapse Analytics dedicated SQL pool. You can create, manage, monitor, and delete link connections in your Synapse workspace. When creating a link connection, you can select both the source database and a destination Synapse dedicated SQL pool so that the operational data from your source database will be automatically replicated to the specified destination Synapse dedicated SQL pool. You can also add or remove one or more tables from your source database to be replicated.
You can enable transactional consistency across tables for each link connection.
## <a name="known-issues"></a>Known limitations
-A consolidated list of known limitations and issues can be found at [Known limitations and issues with Azure Synapse Link for SQL (Preview)](synapse-link-for-sql-known-issues.md).
+A consolidated list of known limitations and issues can be found at [Known limitations and issues with Azure Synapse Link for SQL](synapse-link-for-sql-known-issues.md).
## Next steps
-* To learn more, see how to [Configure Synapse Link for Azure SQL Database (Preview)](connect-synapse-link-sql-database.md).
+* To learn more, see how to [Configure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md).
synapse-analytics Sql Server 2022 Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-server-2022-synapse-link.md
Title: Azure Synapse Link for SQL Server 2022 (Preview)
+ Title: Azure Synapse Link for SQL Server 2022
description: Learn about Azure Synapse Link for SQL Server 2022, the link connection, landing zone, Self-hosted integration runtime, and monitoring the Azure Synapse Link for SQL.
Previously updated : 05/09/2022 Last updated : 11/16/2022
-# Azure Synapse Link for SQL Server 2022 (Preview)
+# Azure Synapse Link for SQL Server 2022
This article helps you to understand the functions of Azure Synapse Link for SQL Server 2022. You can use the Azure Synapse Link for SQL functionality to replicate your operational data into an Azure Synapse Analytics dedicated SQL pool from SQL Server 2022.
-> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
## Link connection
A link connection identifies a mapping relationship between a SQL Server 2022 instance and an Azure Synapse Analytics dedicated SQL pool. You can create, manage, monitor, and delete link connections in your Synapse workspace. When creating a link connection, you can select both the source database and a destination Synapse dedicated SQL pool so that the operational data from your source database will be automatically replicated to the specified destination Synapse dedicated SQL pool. You can also add or remove one or more tables from your source database to be replicated.
You can enable transactional consistency across tables for each link connection.
## Known limitations
-A consolidated list of known limitations and issues can be found at [Known limitations and issues with Azure Synapse Link for SQL (Preview)](synapse-link-for-sql-known-issues.md).
+A consolidated list of known limitations and issues can be found at [Known limitations and issues with Azure Synapse Link for SQL](synapse-link-for-sql-known-issues.md).
## Next steps
-* To learn more, see how to [Configure Synapse Link for SQL Server 2022 (Preview)](connect-synapse-link-sql-server-2022.md).
+* To learn more, see how to [Configure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md).
synapse-analytics Sql Synapse Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-synapse-link-overview.md
Title: What is Azure Synapse Link for SQL? (Preview)
-description: Learn about Azure Synapse Link for SQL, the benefit it offers and price
+ Title: What is Azure Synapse Link for SQL?
+description: Learn about Azure Synapse Link for SQL, the benefits it offers, and price.
Previously updated : 04/18/2022 Last updated : 11/16/2022
-# What is Azure Synapse Link for SQL? (Preview)
+# What is Azure Synapse Link for SQL?
Azure Synapse Link for SQL enables near real-time analytics over operational data in Azure SQL Database or SQL Server 2022. With seamless integration between operational stores, including Azure SQL Database and SQL Server 2022, and Azure Synapse Analytics, Azure Synapse Link for SQL enables you to run analytics, business intelligence, and machine learning scenarios on your operational data with minimal impact on source databases, using new change feed technology.
-> [!IMPORTANT]
-> Azure Synapse Link for Azure SQL is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
The following image shows the Azure Synapse Link integration with Azure SQL DB, SQL Server 2022, and Azure Synapse Analytics:

:::image type="content" source="../media/sql-synapse-link-overview/synapse-link-sql-architecture.png" alt-text="Diagram of the Azure Synapse Link for SQL architecture.":::
You can now get rich insights by analyzing operational data in Azure SQL Databas
## Next steps
-* [Azure Synapse Link for Azure SQL Database (Preview)](sql-database-synapse-link.md).
-* [Azure Synapse Link for SQL Server 2022 (Preview)](sql-server-2022-synapse-link.md).
-* How to [Configure Azure Synapse Link for SQL Server 2022 (Preview)](connect-synapse-link-sql-server-2022.md).
-* How to [Configure Azure Synapse Link for Azure SQL Database (Preview)](connect-synapse-link-sql-database.md).
+* [Azure Synapse Link for Azure SQL Database](sql-database-synapse-link.md).
+* [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md).
+* How to [Configure Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md).
+* How to [Configure Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md).
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
Title: Known limitations and issues with Azure Synapse Link for SQL (Preview)
-description: Learn about known limitations and issues with Azure Synapse Link for SQL (Preview).
+ Title: Known limitations and issues with Azure Synapse Link for SQL
+description: Learn about known limitations and issues with Azure Synapse Link for SQL.
Previously updated : 11/09/2022 Last updated : 11/16/2022
This is the list of known limitations for Azure Synapse Link for SQL.
* System tables can't be replicated.
* The security configuration from the source database will **NOT** be reflected in the target dedicated SQL pool.
* Enabling Azure Synapse Link for SQL will create a new schema called `changefeed`. Don't use this schema, as it is reserved for system use (see the sketch after this list).
-* Azure Synapse Link for SQL will **NOT** work and can't be enabled if your database contains a schema or user named `changefeed`.
* Source tables with collations that are unsupported by dedicated SQL pools, such as UTF8 and certain Japanese collations, can't be replicated. Here's the [supported collations in Synapse SQL Pool](../sql/reference-collation-types.md).
* Additionally, some Thai language collations are currently not supported by Azure Synapse Link for SQL. These unsupported collations include:
  * Thai100CaseInsensitiveAccentInsensitiveKanaSensitive
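Because the `changefeed` schema name is reserved, it can help to confirm that nothing in your database already uses that name before enabling the link. The following is a minimal sketch using `Invoke-Sqlcmd` from the SqlServer PowerShell module; the server, database, and credential values are illustrative placeholders, not values from this article.

```powershell
# Minimal sketch: check for an existing schema or user named 'changefeed'
# before enabling Azure Synapse Link for SQL. Assumes the SqlServer module;
# the server, database, and credential values are placeholders.
Import-Module SqlServer

$query = @"
SELECT name, 'schema' AS object_type FROM sys.schemas WHERE name = 'changefeed'
UNION ALL
SELECT name, 'user' AS object_type FROM sys.database_principals WHERE name = 'changefeed';
"@

Invoke-Sqlcmd -ServerInstance 'contoso-sql.database.windows.net' `
    -Database 'SalesDb' `
    -Username 'sqladmin' `
    -Password $env:SQL_ADMIN_PASSWORD `
    -Query $query
```

If the query returns any rows, rename or remove the conflicting object before enabling the link.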
synapse-analytics Troubleshoot Sql Database Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/troubleshoot/troubleshoot-sql-database-failover.md
This article is a guide to troubleshoot and configure Azure Synapse Link for Azu
## Symptom
-For the safety of data, users may choose to set [auto-failover group](/sql/azure-sql/database/failover-group-add-single-database-tutorial) for Azure SQL Database. By setting failover group, users can group multiple geo-replicated databases that can protect a potential data loss. However, when Azure Synapse Link for Azure SQL Database has been started for the table in the Azure SQL Database and the database experiences failover, Synapse Link will be disabled in the backend even though its status is still displayed as running.
+For the safety of data, users may choose to set an [auto-failover group](/azure/azure-sql/database/failover-group-add-single-database-tutorial) for Azure SQL Database. By setting a failover group, users can group multiple geo-replicated databases to protect against potential data loss. However, when Azure Synapse Link for Azure SQL Database has been started for a table in the Azure SQL database and the database experiences a failover, Synapse Link will be disabled in the backend even though its status is still displayed as running.
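For context, an auto-failover group pairs a primary and a secondary logical server and adds databases to the group. The following is a minimal sketch using the Az.Sql module; it assumes both logical servers already exist, and all resource names are illustrative placeholders.

```powershell
# Minimal sketch: create an auto-failover group and add the database that
# Synapse Link replicates. Assumes the Az.Sql module and existing primary
# and secondary logical servers; all names are placeholders.
Connect-AzAccount

New-AzSqlDatabaseFailoverGroup -ResourceGroupName 'rg-sales' `
    -ServerName 'sql-sales-primary' `
    -PartnerServerName 'sql-sales-secondary' `
    -FailoverGroupName 'fg-sales'

# Add the replicated database to the failover group.
Get-AzSqlDatabase -ResourceGroupName 'rg-sales' `
    -ServerName 'sql-sales-primary' `
    -DatabaseName 'SalesDb' |
    Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName 'rg-sales' `
        -ServerName 'sql-sales-primary' `
        -FailoverGroupName 'fg-sales'
```

Keep in mind that after a failover, Synapse Link must be stopped and reconfigured, as described in the resolution below.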
## Resolution
You must stop Synapse Link manually and configure Synapse Link according to the
## Next steps
+ - [Tutorial: Add an Azure SQL Database to an auto-failover group](/azure/azure-sql/database/failover-group-add-single-database-tutorial)
- [Get started with Azure Synapse Link for Azure SQL Database](../connect-synapse-link-sql-database.md)
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
The following updates are new to Azure Synapse Analytics this month.
### Azure Synapse Link
-**Azure Synapse Link for SQL [Public Preview]** - At Microsoft Build 2022, we announced the Public Preview availability of Azure Synapse Link for SQL, for both SQL Server 2022 and Azure SQL Database. Data-driven, quality insights are critical for companies to stay competitive. The speed to achieve those insights can make all the difference. The costly and time-consuming nature of traditional ETL and ELT pipelines is no longer enough. With this release, you can now take advantage of low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. This makes it easier to run BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and [watch our YouTube video](https://www.youtube.com/embed/pgusZy34-Ek).
+**Azure Synapse Link for SQL Server** - At Microsoft Build 2022, we announced the Public Preview availability of Azure Synapse Link for SQL, for both SQL Server 2022 and Azure SQL Database. Data-driven, quality insights are critical for companies to stay competitive. The speed to achieve those insights can make all the difference. The costly and time-consuming nature of traditional ETL and ELT pipelines is no longer enough. With this release, you can now take advantage of low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. This makes it easier to run BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and [watch our YouTube video](https://www.youtube.com/embed/pgusZy34-Ek).
## Apr 2022 update
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
description: Learn about the new features and documentation improvements for Azu
Previously updated : 11/09/2022 Last updated : 11/16/2022
The following table lists the features of Azure Synapse Analytics that are curre
| **Apache Spark Optimized Write** | [Optimize Write](spark/optimize-write-for-apache-spark.md) is a Delta Lake on Azure Synapse feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase individual file size of the written data. |
| **Apache Spark R language support** | Built-in [R support for Apache Spark](spark/apache-spark-r-language.md) is now in preview. |
| **Azure Synapse Data Explorer** | The [Azure Synapse Data Explorer](./data-explorer/data-explorer-overview.md) provides an interactive query experience to unlock insights from log and telemetry data. Connectors for Azure Data Explorer are available for Synapse Data Explorer. |
-| **Azure Synapse Link for SQL** | Azure Synapse Link is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and [watch our YouTube video](https://www.youtube.com/embed/pgusZy34-Ek). |
| **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now browse an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio. To learn more, see [Browse an ADLS Gen2 folder with ACLs in Azure Synapse Analytics](how-to-access-container-with-access-control-lists.md). |
| **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries, by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](../cosmos-db/custom-partitioning-analytical-store.md). |
| **Data flow improvements to Data Preview** | To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
The following table lists the features of Azure Synapse Analytics that have tran
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
+| November 2022 | **Azure Synapse Link for SQL** | Azure Synapse Link for SQL is now generally available for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link for SQL feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, visit [What is Azure Synapse Link for SQL?](synapse-link/sql-synapse-link-overview.md)|
| October 2022 | **SAP CDC connector GA** | The data connector for SAP Change Data Capture (CDC) is now GA. For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-dat).|
| September 2022 | **MERGE T-SQL syntax** | [MERGE T-SQL syntax](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true) has been a highly requested addition to the Synapse T-SQL library. As in SQL Server, the MERGE syntax encapsulates INSERTs/UPDATEs/DELETEs into a single high-performance statement. Available in dedicated SQL pools in version 10.0.17829 and above. For more, see the [MERGE T-SQL announcement blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/merge-t-sql-for-dedicated-sql-pools-is-now-ga/ba-p/3634331). A short example follows this table.|
| July 2022 | **Apache Spark&trade; 3.2 for Synapse Analytics** | Apache Spark&trade; 3.2 for Synapse Analytics is now generally available. Review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). Highlights of what got better in Spark 3.2 in the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_1).|
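To make the MERGE entry above concrete, the following is a minimal sketch that runs a generic MERGE upsert against a dedicated SQL pool through `Invoke-Sqlcmd`; the endpoint, database, and table names are illustrative placeholders, and the statement is an example rather than code from the announcement.

```powershell
# Minimal sketch: run a MERGE upsert against a dedicated SQL pool.
# Assumes the SqlServer module; endpoint, database, and table names
# are placeholders.
$merge = @"
MERGE dbo.DimCustomer AS target
USING dbo.StageCustomer AS source
    ON target.CustomerId = source.CustomerId
WHEN MATCHED THEN
    UPDATE SET target.Email = source.Email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Email) VALUES (source.CustomerId, source.Email);
"@

Invoke-Sqlcmd -ServerInstance 'contoso.sql.azuresynapse.net' `
    -Database 'DedicatedPool01' `
    -Username 'sqladminuser' `
    -Password $env:SQL_ADMIN_PASSWORD `
    -Query $merge
```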
Azure Synapse Link is an automated system for replicating data from [SQL Server
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
+| November 2022 | **Azure Synapse Link for SQL** | Azure Synapse Link for SQL is now generally available for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link for SQL feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. For more information, see [What is Azure Synapse Link for SQL?](synapse-link/sql-synapse-link-overview.md)|
| July 2022 | **Batch mode** | Decide between cost and latency in Azure Synapse Link for SQL by selecting *continuous* or *batch* mode to replicate your data. Batch mode allows you to save even more on costs by only paying for ingestion service during the batch loads instead of it being continuously on. You can select between 20 and 60 minutes for batch processing.|
-| May 2022 | **Synapse Link for SQL preview** | Azure Synapse Link for SQL is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. The [Azure Synapse Link for SQL preview has been announced](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986). For more information, see [Blog: Azure Synapse Link for SQL Deep Dive](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-link-for-sql-deep-dive/ba-p/3567645).|
+| May 2022 | **Azure Synapse Link for SQL preview** | Azure Synapse Link for SQL is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. The [Azure Synapse Link for SQL preview has been announced](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986). For more information, see [Blog: Azure Synapse Link for SQL Deep Dive](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-link-for-sql-deep-dive/ba-p/3567645).|
## Synapse SQL
This section summarizes recent improvements and features in SQL pools in Azure S
- [Become an Azure Synapse Influencer](https://aka.ms/synapseinfluencers)
- [Azure Synapse Analytics terminology](overview-terminology.md)
- [Azure Synapse Analytics migration guides](migration-guides/index.yml)
-- [Azure Synapse Analytics frequently asked questions](overview-faq.yml)
+- [Azure Synapse Analytics frequently asked questions](overview-faq.yml)
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
The following table compares the features of each Remote Desktop client when con
| Smart sizing | X | X | | | X | | Remote Desktop in Windowed mode is dynamically scaled to the window's size. |
| Localization | X | X | English only | X | | X | Client user interface is available in multiple languages. |
| Multi-factor authentication | X | X | X | X | X | X | Supports multi-factor authentication for remote connections. |
-| Teams optimization for Azure Virtual Desktop | X | | | | X | | Media optimizations for Microsoft Teams to provide high quality calls and screen sharing experiences. Learn more at [Use Microsoft Teams on Azure Virtual Desktop](/azure/virtual-desktop/teams-on-avd). |
+| Teams optimization for Azure Virtual Desktop | X | | | | X | | Media optimizations for Microsoft Teams to provide high quality calls and screen sharing experiences. Learn more at [Use Microsoft Teams on Azure Virtual Desktop](./teams-on-avd.md). |
## Redirections comparison
When you enable USB port redirection, all USB devices attached to USB ports are
\* Limited to uploading and downloading files through the Remote Desktop Web client.
-\*\* For printer redirection, the macOS app supports the Publisher Imagesetter printer driver by default. The app doesn't support the native printer drivers.
+\*\* For printer redirection, the macOS app supports the Publisher Imagesetter printer driver by default. The app doesn't support the native printer drivers.
virtual-desktop Customize Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-rdp-properties.md
>[!IMPORTANT]
>This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/customize-rdp-properties-2019.md).
-You can customize a host pool's Remote Desktop Protocol (RDP) properties, such as multi-monitor experience and audio redirection, to deliver an optimal experience for your users based on their needs. If you'd like to change the default RDP file properties, you can customize RDP properties in Azure Virtual Desktop by either using the Azure portal or by using the *-CustomRdpProperty* parameter in the **Update-AzWvdHostPool** cmdlet.
+You can customize a host pool's Remote Desktop Protocol (RDP) properties, such as multi-monitor experience and audio redirection, to deliver an optimal experience for your users based on their needs. If you'd like to change the default RDP file properties, you can customize RDP properties in Azure Virtual Desktop by either using the Azure portal or by using the `-CustomRdpProperty` parameter in the `Update-AzWvdHostPool` cmdlet.
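For example, the following is a minimal sketch of setting custom RDP properties with that cmdlet; the resource group and host pool names are illustrative placeholders, and the property string shown (microphone plus camera redirection) is just one possible combination.

```powershell
# Minimal sketch: set custom RDP properties on a host pool.
# Assumes the Az.DesktopVirtualization module; names are placeholders.
Update-AzWvdHostPool -ResourceGroupName 'rg-avd' `
    -Name 'hp-pooled-01' `
    -CustomRdpProperty 'audiocapturemode:i:1;camerastoredirect:s:*;'
```

Each property uses the same `name:type:value` syntax as an RDP file, and multiple properties are joined into a single semicolon-separated string.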
-See [supported RDP file settings](/windows-server/remote/remote-desktop-services/clients/rdp-files?context=%2fazure%2fvirtual-desktop%2fcontext%2fcontext) for a full list of supported properties and their default values.
+See [Supported RDP properties with Azure Virtual Desktop](rdp-properties.md) for a full list of supported properties and their default values.
## Default RDP file properties
virtual-desktop Environment Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/environment-setup.md
Azure Virtual Desktop is a service that gives users easy and secure access to th
## Host pools
-A host pool is a collection of Azure virtual machines that register to Azure Virtual Desktop as session hosts when you run the Azure Virtual Desktop agent. All session host virtual machines in a host pool should be sourced from the same image for a consistent user experience.
+A host pool is a collection of Azure virtual machines that register to Azure Virtual Desktop as session hosts when you run the Azure Virtual Desktop agent. All session host virtual machines in a host pool should be sourced from the same image for a consistent user experience. You control the resources published to users through app groups.
A host pool can be one of two types:
- Personal, where each session host is assigned to an individual user. Personal host pools provide dedicated desktops to end-users that optimize environments for performance and data separation.
-- Pooled, where user sessions can be load balanced to any session host in the host pool. There can be multiple user sessions on a single session host. Pooled host pools provide a shared remote experience to end-users, which ensures lower costs and greater efficiency.
+- Pooled, where user sessions can be load balanced to any session host in the host pool. There can be multiple different users on a single session host at the same time. Pooled host pools provide a shared remote experience to end-users, which ensures lower costs and greater efficiency.
The following table goes into more detail about the features each type of host pool has:

|Feature|Personal host pools|Pooled host pools|
||||
|Load balancing| User sessions are always load balanced to the session host the user is assigned to. If the user isn't currently assigned to a session host, the user session is load balanced to the next available session host in the host pool. | User sessions are load balanced to session hosts in the host pool based on user session count. You can choose which [load balancing algorithm](host-pool-load-balancing.md) to use: breadth-first or depth-first. |
-|Maximum session limit| One. | As many as the user wants. |
-|User assignment process| Customers can either directly assign users to session hosts or choose to have users automatically assigned to the first available session host. Users always have sessions on the session hosts they are assigned to. | Users aren't assigned to session hosts. After a user signs out and signs back in, their user session might get load balanced to a different session host. |
+|Maximum session limit| One. | As configured by the **Max session limit** value of the properties of a host pool. |
+|User assignment process| Users can either be directly assigned to session hosts or be automatically assigned to the first available session host. Users always have sessions on the session hosts they are assigned to. | Users aren't assigned to session hosts. After a user signs out and signs back in, their user session might get load balanced to a different session host. |
|Scaling|None. | [Autoscale](autoscale-scaling-plan.md) for pooled host pools turns VMs on and off based on the capacity thresholds and schedules the customer defines. |
-|Updates|Updated with Windows Updates, [System Center Configuration Manager (SCCM)](configure-automatic-updates.md), or other software distribution configuration tools.|Updated by redeploying session hosts from updated images instead of traditional updates.|
-|User data| Each user only ever uses one session host, so they can store their user profile data in drive C on the operating system (OS) disk of the VM. | Users can connect to different session hosts every time they connect, so they should store their user profile data in FSLogix. |
-
-You can set additional properties on the host pool to change its load-balancing behavior, how many sessions each session host can take, and what the user can do to session hosts in the host pool while signed in to their Azure Virtual Desktop sessions. You control the resources published to users through app groups.
+|Windows Updates|Updated with Windows Updates, [System Center Configuration Manager (SCCM)](configure-automatic-updates.md), or other software distribution configuration tools.|Updated by redeploying session hosts from updated images instead of traditional updates.|
+|User data| Each user only ever uses one session host, so they can store their user profile data on the operating system (OS) disk of the VM. | Users can connect to different session hosts every time they connect, so they should store their user profile data in [FSLogix](/fslogix/configure-profile-container-tutorial). |
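Pooled host pool properties from the preceding table, such as the maximum session limit and the load balancing algorithm, can also be set from PowerShell. The following is a minimal sketch assuming the Az.DesktopVirtualization module; the names and values are illustrative placeholders.

```powershell
# Minimal sketch: configure a pooled host pool's session limit and
# load balancing algorithm. Assumes the Az.DesktopVirtualization module;
# names and values are placeholders.
Update-AzWvdHostPool -ResourceGroupName 'rg-avd' `
    -Name 'hp-pooled-01' `
    -LoadBalancerType 'DepthFirst' `
    -MaxSessionLimit 10
```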
## App groups
virtual-desktop Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-properties.md
Title: Supported RDP properties with Azure Virtual Desktop - Azure Virtual Deskt
description: Learn about the supported RDP properties you can use with Azure Virtual Desktop.
Previously updated : 09/26/2022 Last updated : 11/15/2022
# Supported RDP properties with Azure Virtual Desktop
-Organizations can configure RDP properties centrally in Azure Virtual Desktop to determine how a connection to Azure Virtual Desktop should behave. There are a wide range of RDP properties that can be be set, such as for device redirection, display settings, session behavior, and more.
+Organizations can configure Remote Desktop Protocol (RDP) properties centrally in Azure Virtual Desktop to determine how a connection to Azure Virtual Desktop should behave. There are a wide range of RDP properties that can be set, such as for device redirection, display settings, session behavior, and more. For more information, see [Customize RDP properties for a host pool](customize-rdp-properties.md).
-Supported RDP properties differ when using Azure Virtual Desktop compared to Remote Desktop Services. Use the following tables to understand each setting and whether it applies when connecting to Azure Virtual Desktop, Remote Desktop Services, or both.
+> [!NOTE]
+> Supported RDP properties differ when using Azure Virtual Desktop compared to Remote Desktop Services. Use the following tables to understand each setting and whether it applies when connecting to Azure Virtual Desktop, Remote Desktop Services, or both.
-## Connection information
-
-| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
-|--|--|:-:|:-:|--|--|:-:|
-| Azure AD authentication | enablerdsaadauth:i:*value* | ✔ | ✔ | Determines whether the client will use Azure AD to authenticate to the remote PC if it's available. | - 0: RDP won't use Azure AD authentication, even if the remote PC supports it.</br>- 1: RDP will use Azure AD authentication if the remote PC supports it. | 0 |
-| Credential Security Support Provider | enablecredsspsupport:i:*value* | ✔ | ✔ | Determines whether the client will use the Credential Security Support Provider (CredSSP) for authentication if it's available. | - 0: RDP won't use CredSSP, even if the operating system supports CredSSP.</br>- 1: RDP will use CredSSP if the operating system supports CredSSP. | 1 |
-| Alternate shell | alternate shell:s:*value* | ✔ | ✔ | Specifies a program to be started automatically in the remote session as the shell instead of explorer. | Valid path to an executable file, such as "C:\ProgramFiles\Office\word.exe". | None |
-| KDC proxy name | kdcproxyname:s:*value* | ✔ | ✗ | Specifies the fully qualified domain name of a KDC proxy. | Valid path to a KDC proxy server, such as `kdc.contoso.com`. | None |
-| Address | full address:s:value | ✗ | ✔ | This setting specifies the hostname or IP address of the remote computer that you want to connect to.</br></br>This is the only required setting in an RDP file. | A valid name, IPv4 address, or IPv6 address. | None |
-| Alternate address | alternate full address:s:value | ✗ | ✔ | Specifies an alternate name or IP address of the remote computer. | A valid name, IPv4 address, or IPv6 address. | None |
-| Username | username:s:value | ✗ | ✔ | Specifies the name of the user account that will be used to sign in to the remote computer. | Any valid username. | None |
-| Domain | domain:s:value | ✗ | ✔ | Specifies the name of the domain in which the user account that will be used to sign in to the remote computer is located. | A valid domain name, such as *CONTOSO*. | None |
-| RD Gateway hostname | gatewayhostname:s:value | ✗ | ✔ | Specifies the RD Gateway host name. | A valid name, IPv4 address, or IPv6 address. | None |
-| RD Gateway authentication | gatewaycredentialssource:i:value | ✗ | ✔ | Specifies the RD Gateway authentication method. | - 0: Ask for password (NTLM).</br>- 1: Use smart card.</br>- 2: Use the credentials for the currently signed in user.</br>- 3: Prompt the user for their credentials and use basic authentication.</br>- 4: Allow user to select later.</br>- 5: Use cookie-based authentication. | 0 |
-| Use RD Gateway | gatewayusagemethod:i:value | ✗ | ✔ | Specifies when to use an RD Gateway for the connection. | - 0: Don't use an RD Gateway.</br>- 1: Always use an RD Gateway.</br>- 2: Use an RD Gateway if a direct connection can't be made to the RD Session Host.</br>- 3: Use the default RD Gateway settings.</br>- 4: Don't use an RD Gateway, bypass gateway for local addresses.</br>Setting this property value to 0 or 4 is effectively equivalent, but setting this property to 4 enables the option to bypass local addresses. | 0 |
-| Save credentials | promptcredentialonce:i:value | ✗ | ✔ | Determines whether a user's credentials are saved and used for both the RD Gateway and the remote computer. | - 0: Remote session won't use the same credentials.</br>- 1: Remote session will use the same credentials. | 1 |
-| Server authentication | authentication level:i:value | ✗ | ✔ | Defines the server authentication level settings. | - 0: If server authentication fails, connect to the computer without warning (Connect and don't warn me).</br>- 1: If server authentication fails, don't establish a connection (Don't connect).</br>- 2: If server authentication fails, show a warning, and allow me to connect or refuse the connection (Warn me).</br>- 3: No authentication requirement specified. | 3 |
-| Connection sharing | disableconnectionsharing:i:value | ✗ | ✔ | Determines whether the client reconnects to any existing disconnected session or initiates a new connection when a new connection is launched. | - 0: Reconnect to any existing session.</br>- 1: Initiate new connection. | 0 |
-
-## Session behavior
-
-| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
-|--|--|:-:|:-:|--|--|:-:|
-| Reconnection | autoreconnection enabled:i:*value* | ✔ | ✔ | Determines whether the client will automatically try to reconnect to the remote computer if the connection is dropped, such as when there's a network connectivity interruption. | - 0: Client doesn't automatically try to reconnect.</br>- 1: Client automatically tries to reconnect. | 1 |
-| Bandwidth auto detect | bandwidthautodetect:i:*value* | ✔ | ✔ | Determines whether or not to use automatic network bandwidth detection. Requires networkautodetect to be set to 1. | - 0: Don't use automatic network bandwidth detection.</br>- 1: Use automatic network bandwidth detection. | 1 |
-| Network auto detect | networkautodetect:i:*value* | ✔ | ✔ | Determines whether automatic network type detection is enabled. | - 0: Disable automatic network type detection.</br>- 1: Enable automatic network type detection. | 1 |
-| Compression | compression:i:*value* | ✔ | ✔ | Determines whether bulk compression is enabled when it's transmitted by RDP to the local computer. | - 0: Disable RDP bulk compression.</br>- 1: Enable RDP bulk compression. | 1 |
-| Video playback | videoplaybackmode:i:*value* | ✔ | ✔ | Determines if the connection will use RDP-efficient multimedia streaming for video playback. | - 0: Don't use RDP-efficient multimedia streaming for video playback.</br>- 1: Use RDP-efficient multimedia streaming for video playback when possible. | 1 |
-
-## Device redirection
-
-> [!IMPORTANT]
-> You can only enable redirections with binary settings that apply both to and from the remote machine. The service doesn't currently support one-way blocking of redirections from only one side of the connection.
-
-| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
-|--|--|:-:|:-:|--|--|:-:|
-| Microphone redirection | audiocapturemode:i:*value* | ✔ | ✔ | Indicates whether audio input redirection is enabled. | - 0: Disable audio capture from the local device.</br>- 1: Enable audio capture from the local device and redirection to an audio application in the remote session. | 0 |
-| Redirect video encoding | encode redirected video capture:i:*value* | ✔ | ✔ | Enables or disables encoding of redirected video. | - 0: Disable encoding of redirected video.</br>- 1: Enable encoding of redirected video. | 1 |
-| Encoded video quality | redirected video capture encoding quality:i:*value* | ✔ | ✔ | Controls the quality of encoded video. | - 0: High compression video. Quality may suffer when there's a lot of motion.</br>- 1: Medium compression.</br>- 2: Low compression video with high picture quality. | 0 |
-| Audio output location | audiomode:i:*value* | ✔ | ✔ | Determines whether the local or remote machine plays audio. | - 0: Play sounds on the local computer (Play on this computer).</br>- 1: Play sounds on the remote computer (Play on remote computer).</br>- 2: Don't play sounds (Do not play). | 0 |
-| Camera redirection | camerastoredirect:s:*value* | ✔ | ✔ | Configures which cameras to redirect. This setting uses a semicolon-delimited list of KSCATEGORY_VIDEO_CAMERA interfaces of cameras enabled for redirection. | - * : Redirect all cameras.</br> - List of cameras, such as `\\?\usb#vid_0bda&pid_58b0&mi`.</br>- You can exclude a specific camera by prepending the symbolic link string with "-". | Don't redirect any cameras |
-| Media Transfer Protocol (MTP) and Picture Transfer Protocol (PTP) | devicestoredirect:s:*value* | ✔ | ✔ | Determines which devices on the local computer will be redirected and available in the remote session. | - *: Redirect all supported devices, including ones that are connected later.</br> - Valid hardware ID for one or more devices.</br> - DynamicDevices: Redirect all supported devices that are connected later. | Don't redirect any devices |
-| Drive/storage redirection | drivestoredirect:s:*value* | ✔ | ✔ | Determines which disk drives on the local computer will be redirected and available in the remote session. | - No value specified: don't redirect any drives.</br>- * : Redirect all disk drives, including drives that are connected later.</br>- DynamicDrives: redirect any drives that are connected later.</br>- The drive and labels for one or more drives, such as `drivestoredirect:s:C\:;E\:;`, redirect the specified drive(s). | Don't redirect any drives |
-| Windows key combinations | keyboardhook:i:*value* | ✔ | ✔ | Determines when Windows key combinations (Windows key, Alt+Tab) are applied to the remote session for desktop and RemoteApp connections. | - 0: Windows key combinations are applied on the local computer.</br>- 1: (Desktop only) Windows key combinations are applied on the remote computer when in focus.</br>- 2: (Desktop only) Windows key combinations are applied on the remote computer in full screen mode only.</br>- 3: (RemoteApp only) Windows key combinations are applied on the RemoteApp when in focus. We recommend you use this value only when publishing the Remote Desktop Connection app (*mstsc.exe*) from the host pool on Azure Virtual Desktop. This value is only supported when using the [Windows Desktop client](users/connect-windows.md). | 2 |
-| Clipboard redirection | redirectclipboard:i:*value* | ✔ | ✔ | Determines whether clipboard redirection is enabled. | - 0: Clipboard on local computer isn't available in remote session.</br>- 1: Clipboard on local computer is available in remote session. | 1 |
-| COM ports redirection | redirectcomports:i:*value* | ✔ | ✔ | Determines whether COM (serial) ports on the local computer will be redirected and available in the remote session. | - 0: COM ports on the local computer aren't available in the remote session.</br>- 1: COM ports on the local computer are available in the remote session. | 0 |
-| Location service redirection | redirectlocation:i:*value* | ✔ | ✔ | Determines whether the location of the local device will be redirected and available in the remote session. | - 0: The remote session uses the location of the remote computer or virtual machine.</br>- 1: The remote session uses the location of the local device. | 0 |
-| Printer redirection | redirectprinters:i:*value* | ✔ | ✔ | Determines whether printers configured on the local computer will be redirected and available in the remote session. | - 0: The printers on the local computer aren't available in the remote session.</br>- 1: The printers on the local computer are available in the remote session. | 1 |
-| Smart card redirection | redirectsmartcards:i:*value* | ✔ | ✔ | Determines whether smart card devices on the local computer will be redirected and available in the remote session. | - 0: The smart card device on the local computer isn't available in the remote session.</br>- 1: The smart card device on the local computer is available in the remote session. | 1 |
-| WebAuthn redirection | redirectwebauthn:i:*value* | ✔ | ✔ | Determines whether WebAuthn requests on the remote computer will be redirected to the local computer allowing the use of local authenticators (such as Windows Hello for Business, security key, and so on). | - 0: WebAuthn requests from the remote session aren't sent to the local computer for authentication and must be completed in the remote session.</br>- 1: WebAuthn requests from the remote session are sent to the local computer for authentication. | 1 |
-| USB device redirection | usbdevicestoredirect:s:*value* | ✔ | ✔ | Determines which supported RemoteFX USB devices on the client computer will be redirected and available in the remote session when you connect to a remote session that supports RemoteFX USB redirection. | - \*: Redirect all USB devices that aren't already redirected by another high-level redirection.</br> - {*Device Setup Class GUID*}: Redirect all devices that are members of the specified [device setup class](/windows-hardware/drivers/install/system-defined-device-setup-classes-available-to-vendors/).</br> - *USBInstanceID*: Redirect a specific USB device identified by the instance ID. | Don't redirect any USB devices |
-
-## Display settings
-
-| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
-|--|--|:-:|:-:|--|--|:-:|
-| Multiple displays | use multimon:i:*value* | ✔ | ✔ | Determines whether the remote session will use one or multiple displays from the local computer. | - 0: Don't enable multiple display support.</br>- 1: Enable multiple display support. | 0 |
-| Selected monitors | selectedmonitors:s:*value* | ✔ | ✔ | Specifies which local displays to use from the remote session. The selected displays must be contiguous. Requires use multimon to be set to 1.</br></br>Only available on the Windows Inbox (MSTSC) and Windows Desktop (MSRDC) clients. | Comma separated list of machine-specific display IDs. You can retrieve IDs by calling `mstsc.exe /l`. The first ID listed will be set as the primary display in the session. | All displays |
-| Maximize to current displays | maximizetocurrentdisplays:i:*value* | ✔ | ✔ | Determines which display the remote session goes full screen on when maximizing. Requires use multimon to be set to 1.</br></br>Only available on the Windows Desktop (MSRDC) client. | - 0: Session goes full screen on the displays initially selected when maximizing.</br>- 1: Session dynamically goes full screen on the displays touched by the session window when maximizing. | 0 |
-| Multi to single display switch | singlemoninwindowedmode:i:*value* | ✔ | ✔ | Determines whether a multi display remote session automatically switches to single display when exiting full screen. Requires use multimon to be set to 1.</br></br>Only available on the Windows Desktop (MSRDC) client. | - 0: Session retains all displays when exiting full screen.</br>- 1: Session switches to single display when exiting full screen. | 0 |
-| Screen mode | screen mode id:i:*value* | ✔ | ✔ | Determines whether the remote session window appears full screen when you launch the connection. | - 1: The remote session will appear in a window.</br>- 2: The remote session will appear full screen. | 2 |
-| Smart sizing | smart sizing:i:*value* | ✔ | ✔ | Determines whether or not the local computer scales the content of the remote session to fit the window size. | - 0: The local window content won't scale when resized.</br>- 1: The local window content will scale when resized. | 0 |
-| Dynamic resolution | dynamic resolution:i:*value* | ✔ | ✔ | Determines whether the resolution of the remote session is automatically updated when the local window is resized. | - 0: Session resolution remains static during the session.</br>- 1: Session resolution updates as the local window resizes. | 1 |
-| Desktop size | desktop size id:i:*value* | ✔ | ✔ | Specifies the dimensions of the remote session desktop from a set of predefined options. This setting is overridden if desktopheight and desktopwidth are specified. | - 0: 640×480</br>- 1: 800×600</br>- 2: 1024×768</br>- 3: 1280×1024</br>- 4: 1600×1200 | Match the local computer |
-| Desktop height | desktopheight:i:*value* | ✔ | ✔ | Specifies the resolution height (in pixels) of the remote session. | Numerical value between 200 and 8192. | Match the local computer |
-| Desktop width | desktopwidth:i:*value* | ✔ | ✔ | Specifies the resolution width (in pixels) of the remote session. | Numerical value between 200 and 8192. | Match the local computer |
-| Desktop scale factor | desktopscalefactor:i:*value* | ✔ | ✔ | Specifies the scale factor of the remote session to make the content appear larger. | Numerical value from the following list: 100, 125, 150, 175, 200, 250, 300, 400, 500. | Match the local computer |
-
-## RemoteApp
-
-| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
-|--|--|:-:|:-:|--|--|:-:|
-| Command-line parameters | remoteapplicationcmdline:s:value | ✗ | ✔ | Optional command-line parameters for the RemoteApp. | Valid command-line parameters. | N/A |
-| Command-line variables | remoteapplicationexpandcmdline:i:value | ✗ | ✔ | Determines whether environment variables contained in the RemoteApp command-line parameter should be expanded locally or remotely. | - 0: Environment variables should be expanded to the values of the local computer.</br>- 1: Environment variables should be expanded to the values of the remote computer. | 1 |
-| Working directory variables | remoteapplicationexpandworkingdir:i:value | ✗ | ✔ | Determines whether environment variables contained in the RemoteApp working directory parameter should be expanded locally or remotely. | - 0: Environment variables should be expanded to the values of the local computer.</br> - 1: Environment variables should be expanded to the values of the remote computer.</br>The RemoteApp working directory is specified through the shell working directory parameter. | 1 |
-| Open file | remoteapplicationfile:s:value | ✗ | ✔ | Specifies a file to be opened on the remote computer by the RemoteApp.</br>For local files to be opened, you must also enable drive redirection for the source drive. | Valid file path. | N/A |
-| Icon file | remoteapplicationicon:s:value | ✗ | ✔ | Specifies the icon file to be displayed in the client UI while launching a RemoteApp. If no file name is specified, the client will use the standard Remote Desktop icon. Only ".ico" files are supported. | Valid file path. | N/A |
-| Application mode | remoteapplicationmode:i:value | ✗ | ✔ | Determines whether a connection is launched as a RemoteApp session. | - 0: Don't launch a RemoteApp session.</br>- 1: Launch a RemoteApp session. | 1 |
-| Application display name | remoteapplicationname:s:value | ✗ | ✔ | Specifies the name of the RemoteApp in the client interface while starting the RemoteApp. | App display name. For example, "Excel 2016". | N/A |
-| Alias/executable name | remoteapplicationprogram:s:value | ✗ | ✔ | Specifies the alias or executable name of the RemoteApp. | Valid alias or name. For example, "EXCEL". | N/A |
virtual-desktop Client Features Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-web.md
There are several keyboard shortcuts you can use to help use some of the feature
#### Input Method Editor
-The web client supports Input Method Editor (IME) in the remote session. Before you can use the IME, you must install the language pack for the keyboard you want to use in the remote session must be installed on your session host by your admin. To learn more about setting up language packs in the remote session, see [Add language packs to a Windows 10 multi-session image](/azure/virtual-desktop/language-packs).
+The web client supports Input Method Editor (IME) in the remote session. Before you can use the IME, the language pack for the keyboard you want to use in the remote session must be installed on your session host by your admin. To learn more about setting up language packs in the remote session, see [Add language packs to a Windows 10 multi-session image](../language-packs.md).
To enable IME input using the web client:
If you want to provide feedback to us on the Remote Desktop Web client, you can
## Next steps
-If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md).
+If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md).
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
Before you can access your resources, you'll need to meet the prerequisites:
- .NET Framework 4.6.2 or later. You may need to install this on Windows 7, Windows Server 2012 R2, Windows Server 2016, and some versions of Windows 10. To download the latest version, see [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework).
-- You cannot sign-in using the built-in Administrator user account.
-
> [!IMPORTANT]
> Extended support for using Windows 7 to connect to Azure Virtual Desktop ends on January 10, 2023.
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
Previously updated : 11/03/2022 Last updated : 11/16/2022

# What's new in the Remote Desktop client for Windows
The client can be configured to enable Windows Insider releases. The following t
| Release | Latest version | Minimum supported version |
|--|--|--|
-| Public | 1.2.3577 | 1.2.1672 |
+| Public | 1.2.3577 | 1.2.1672 |
| Insider | 1.2.3667 | 1.2.1672 |

## Updates for version 1.2.3667 (Insider)
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
- Added page to installer warning users running client on Windows 7 that support for Windows 7 will end starting January 10, 2023.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
- Updates to multimedia redirection (MMR) for Azure Virtual Desktop, including the following:
- - MMR now works on remote app browser and supports up to 30 sites. For more information, see [Understanding multimedia redirection for Azure Virtual Desktop](/azure/virtual-desktop/multimedia-redirection-intro).
- - MMR introduces better diagnostic tools with the new status icon and one-click Tracelog. For more information, see [Multimedia redirection for Azure Virtual Desktop (preview)](/azure/virtual-desktop/multimedia-redirection).
+ - MMR now works on remote app browser and supports up to 30 sites. For more information, see [Understanding multimedia redirection for Azure Virtual Desktop](./multimedia-redirection-intro.md).
+ - MMR introduces better diagnostic tools with the new status icon and one-click Tracelog. For more information, see [Multimedia redirection for Azure Virtual Desktop (preview)](./multimedia-redirection.md).
## Updates for version 1.2.3497
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
- Updated the error message that appears when users are unable to subscribe to their feed.
- Updated the disconnect dialog boxes that appear when the user locks their remote session or puts their local computer in sleep mode to be only informational.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
-- [Multimedia redirection for Azure Virtual Desktop (preview)](/azure/virtual-desktop/multimedia-redirection) now has an update that gives it more site and media control compatibility.
+- [Multimedia redirection for Azure Virtual Desktop (preview)](./multimedia-redirection.md) now has an update that gives it more site and media control compatibility.
- Improved connection reliability for Teams on Azure Virtual Desktop.

## Updates for version 1.2.2927
Fixed an issue where the number pad didn't work on initial focus.
- Added updates to Teams on Azure Virtual Desktop, including:
  - Fixed an issue that caused the screen to turn black when Direct X wasn't available for hardware decoding.
  - Fixed a software decoding and camera preview issue that happened when falling back to software decode.
-- [Multimedia redirection for Azure Virtual Desktop](/azure/virtual-desktop/multimedia-redirection) is now in public preview.
+- [Multimedia redirection for Azure Virtual Desktop](./multimedia-redirection.md) is now in public preview.
## Updates for version 1.2.2223
Fixed an issue where the number pad didn't work on initial focus.
*Date published: 01/26/2021*

-- Added support for the screen capture protection feature for Windows 10 endpoints. To learn more, see [Session host security best practices](/azure/virtual-desktop/security-guide#session-host-security-best-practices).
+- Added support for the screen capture protection feature for Windows 10 endpoints. To learn more, see [Session host security best practices](./security-guide.md#session-host-security-best-practices).
- Added support for proxies that require authentication for feed subscription.
- The client now shows a notification with an option to retry if an update didn't successfully download.
- Addressed some accessibility issues with keyboard focus and high-contrast mode.
Fixed an issue where the number pad didn't work on initial focus.
- The client can now be used on Windows 10 in S mode.
- Fixed an issue that caused the update process to fail for users with a space in their username.
- Fixed a crash that happened when authenticating during a connection.
-- Fixed a crash that happened when closing the client.
+- Fixed a crash that happened when closing the client.
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
This article describes how to expand managed disks for a Linux virtual machine (
> [!WARNING]
> Always make sure that your filesystem is in a healthy state, your disk partition table type (GPT or MBR) will support the new size, and ensure your data is backed up before you perform disk expansion operations. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md).
-## Identify Azure data disk object within the operating system
+## <a id="identifyDisk"></a>Identify Azure data disk object within the operating system
In the case of expanding a data disk when there are several data disks present on the VM, it may be difficult to relate the Azure LUNs to the Linux devices. If the OS disk needs expansion, it will be clearly labeled in the Azure portal as the OS disk.
Filesystem Type Size Used Avail Use% Mounted on
/dev/sde1      ext4   32G   49M   30G   1% /opt/db/log
```
-Here we can see, for example, the ```/opt/db/data``` filesystem is nearly full, and is located on the ```/dev/sdd1``` partition. The output of ```df``` will show the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This will be important later.
+Here we can see, for example, the `/opt/db/data` filesystem is nearly full, and is located on the `/dev/sdd1` partition. The output of `df` will show the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This will be important later.
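For reference, a minimal sketch of the command that produces output like the sample above (the mount points are the example paths from this walkthrough; `-T` prints the filesystem type, `-h` prints human-readable sizes):

```bash
# Show device, filesystem type, size, and usage for the example mount points
df -Th /opt/db/data /opt/db/log
```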
-Now locate the LUN which correlates to ```/dev/sdd``` by examining the contents of ```/dev/disk/azure/scsi1```. The output of the following ```ls``` command will show that the device known as ```/dev/sdd``` within the Linux OS is located at LUN1 when looking in the Azure portal.
+Now locate the LUN which correlates to `/dev/sdd` by examining the contents of `/dev/disk/azure/scsi1`. The output of the following `ls` command will show that the device known as `/dev/sdd` within the Linux OS is located at LUN1 when looking in the Azure portal.
-```bash
+```output
linux:~ # ls -alF /dev/disk/azure/scsi1/
total 0
drwxr-xr-x. 2 root root 140 Sep  9 21:54 ./
In the following samples, replace example parameter names such as *myResourceGro
## Expand a disk partition and filesystem

> [!NOTE]
-> While there are several tools that may be used for performing the partition resizing, the tools selected here are the same tools used by certain automated processes such as cloud-init. Using the ```parted``` tool also provides more universal compatibility with GPT disks, as older versions of some tools such as ```fdisk``` did not support the GUID Partition Table (GPT).
+> While there are many tools that may be used for performing the partition resizing, the tools detailed in the remainder of this document are the same tools used by certain automated processes such as cloud-init. As described here, the `growpart` tool with the `gdisk` package provides universal compatibility with GUID Partition Table (GPT) disks, as older versions of some tools such as `fdisk` did not support GPT.
-The remainder of this article describes how to increase the size of a volume at the OS level, using the OS disk as the example. If the disk needing expansion is a data disk, the following procedures can be used as a guideline, substituting the disk device (for example ```/dev/sda```), volume names, mount points, and filesystem formats, as necessary.
+The remainder of this article uses the OS disk to illustrate the procedure for increasing the size of a volume at the OS level. If the disk that needs to be expanded is a data disk, use the [previous guidance for identifying the data disk device](#identifyDisk) and follow these instructions as a guideline, substituting the data disk device (for example `/dev/sda`), partition numbers, volume names, mount points, and filesystem formats, as necessary.
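As a hedged sketch of the flow described above (assuming the disk to expand is `/dev/sda`, the partition to grow is partition 1, and the filesystem is ext4; these names are illustrative, not prescriptive):

```bash
# Grow partition 1 to use the newly added space on the resized disk
sudo growpart /dev/sda 1

# Grow the filesystem to fill the enlarged partition
sudo resize2fs /dev/sda1      # ext4/ext3
# For XFS, pass the mount point instead:
# sudo xfs_growfs /opt/db/data
```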
### Increase the size of the OS disk
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
properties: {
If the `stagingResourceGroup` property is specified with a resource group that does exist, then the Image Builder service will check to make sure the resource group isn't associated with another image template, is empty (no resources inside), in the same region as the image template, and has either "Contributor" or "Owner" RBAC applied to the identity assigned to the Azure Image Builder image template resource. If any of the aforementioned requirements aren't met, an error will be thrown. The staging resource group will have the following tags added to it: `usedBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Pre-existing tags aren't deleted.

> [!IMPORTANT]
-> You will need to assign the contributor role to the resource group for the service principal corresponding to Azure Image Builder's first party app when trying to specify a pre-existing resource group and VNet to the Azure Image Builder service with a Windows source image. For the CLI command and portal instructions on how to assign the contributor role to the resource group see the following documentation [Troubleshoot VM Azure Image Builder: Authorization error creating disk](/azure/virtual-machines/linux/image-builder-troubleshoot#authorization-error-creating-disk)
+> You will need to assign the contributor role to the resource group for the service principal corresponding to Azure Image Builder's first party app when trying to specify a pre-existing resource group and VNet to the Azure Image Builder service with a Windows source image. For the CLI command and portal instructions on how to assign the contributor role to the resource group, see [Troubleshoot VM Azure Image Builder: Authorization error creating disk](./image-builder-troubleshoot.md#authorization-error-creating-disk).
- **The stagingResourceGroup property is specified with a resource group that doesn't exist**
az resource invoke-action \
## Next steps
-There are sample .json files for different scenarios in the [Azure Image Builder GitHub](https://github.com/azure/azvmimagebuilder).
+There are sample .json files for different scenarios in the [Azure Image Builder GitHub](https://github.com/azure/azvmimagebuilder).
virtual-machines Vm Naming Conventions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-naming-conventions.md
This page outlines the naming conventions used for Azure VMs. VMs use these nami
| *Sub-family | Used for specialized VM differentiations only |
| # of vCPUs | Denotes the number of vCPUs of the VM |
| *Constrained vCPUs | Used for certain VM sizes only. Denotes the number of vCPUs for the [constrained vCPU capable size](./constrained-vcpu.md) |
-| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> b = Block Storage performance <br> c = confidential <br> d = diskful (i.e., a local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.)<br> NP = node packing <br> p = ARM Cpu <br>|
+| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> b = Block Storage performance <br> d = diskful (i.e., a local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br> p = ARM CPU <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.) <br> C = Confidential <br> NP = node packing <br> |
| *Accelerator Type | Denotes the type of hardware accelerator in the specialized/GPU SKUs. Only the new specialized/GPU SKUs launched from Q3 2020 will have the hardware accelerator in the name. |
| Version | Denotes the version of the VM Family Series |
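As an illustration, the published constrained size name Standard_M8-4ms decomposes as: **M** (family), **8** (full vCPU count), **-4** (constrained vCPU count), **m** (memory intensive), and **s** (Premium Storage capable).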
virtual-machines Redhat Imagelist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-imagelist.md
RHEL | 7-RAW | RAW | Linux Agent | RHEL 7.x family of images. <br
| | 85-gen2 | LVM | Linux Agent | Hyper-V Generation 2 - Attached to regular repositories (EUS unavailable for RHEL 8.5)
| | 8.6 | LVM | Linux Agent | Attached to EUS repositories
| | 86-gen2 | LVM | Linux Agent | Hyper-V Generation 2 - Attached to EUS repositories
-| | 9.0 | LVM | Linux Agent | Attached to EUS repositories
-| | 90-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Hyper-V Generation 2 - Attached to EUS repositories
+| | 9.0 | LVM | Linux Agent | Currently attached to regular repositories. Will be attached to EUS repositories once they become available
+| | 90-gen2 | LVM | Linux Agent | Hyper-V Generation 2 - Currently attached to regular repositories. Will be attached to EUS repositories once they become available
RHEL-SAP-APPS | 6.8 | RAW | Linux Agent | RHEL 6.8 for SAP Business Applications. Outdated in favor of the RHEL-SAP images.
| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP Business Applications. Outdated in favor of the RHEL-SAP images.
| | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP Business Applications
rhel-byos |rhel-lvm74| LVM | Linux Agent | RHEL 7.4 BYOS images, not atta
| |rhel-lvm81-gen2 | LVM | Linux Agent | RHEL 8.1 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
| |rhel-lvm82 | LVM | Linux Agent | RHEL 8.2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
| |rhel-lvm82-gen2 | LVM | Linux Agent | RHEL 8.2 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
-| |rhel-lvm83 | LVM | Linux Agent | RHEL 8.2 BYOSimages, not attached to any source of updates, won't charge an RHEL premium
-| |rhel-lvm83-gen2 | LVM | Linux Agent | RHEL 8.2 Generation 2 BYOSimages, not attached to any source of updates, won't charge an RHEL premium
-| |rhel-lvm84 | LVM | Linux Agent | RHEL 8.2 BYOSimages, not attached to any source of updates, won't charge an RHEL premium
-| |rhel-lvm84-gen2 | LVM | Linux Agent | RHEL 8.2 Generation 2 BYOSimages, not attached to any source of updates, won't charge an RHEL premium
-| |rhel-lvm85 | LVM | Linux Agent | RHEL 8.2 BYOSimages, not attached to any source of updates, won't charge an RHEL premium
-| |rhel-lvm85-gen2 | LVM | Linux Agent | RHEL 8.2 Generation 2 BYOSimages, not attached to any source of updates, won't charge an RHEL premium
-| |rhel-lvm86 | LVM | Linux Agent | RHEL 8.2 BYOSimages, not attached to any source of updates, won't charge an RHEL premium
-| |rhel-lvm86-gen2 | LVM | Linux Agent | RHEL 8.2 Generation 2 BYOSimages, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm83 | LVM | Linux Agent | RHEL 8.3 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm83-gen2 | LVM | Linux Agent | RHEL 8.3 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm84 | LVM | Linux Agent | RHEL 8.4 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm84-gen2 | LVM | Linux Agent | RHEL 8.4 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm85 | LVM | Linux Agent | RHEL 8.5 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm85-gen2 | LVM | Linux Agent | RHEL 8.5 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm86 | LVM | Linux Agent | RHEL 8.6 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm86-gen2 | LVM | Linux Agent | RHEL 8.6 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm90 | LVM | Linux Agent | RHEL 9.0 BYOS images, not attached to any source of updates, won't charge an RHEL premium
+| |rhel-lvm90-gen2 | LVM | Linux Agent | RHEL 9.0 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium
RHEL-SAP (out of support) | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
| | 74sap-gen2 | LVM | Linux Agent | RHEL 7.4 for SAP HANA and Business Apps. Generation 2 image. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
virtual-machines Redhat Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-images.md
Title: Overview of Red Hat Enterprise Linux images in Azure description: Learn about Red Hat Enterprise Linux images in Microsoft Azure.-+
Details for RHEL 8 image types are below.
|RedHat | RHEL | RHEL-SAP-APPS | Concatenated values of the RHEL minor version and the date published (for example, 8.1.2021012201) | These images are RHEL for SAP Applications images. They're entitled to access SAP Applications repositories and base RHEL repositories.
|RedHat | RHEL | RHEL-SAP-HA | Concatenated values of the RHEL minor version and the date published (for example, 8.1.2021010602) | These images are RHEL for SAP with High Availability and Update Services images. They're entitled to access the SAP Solutions and Applications repositories and the High Availability repositories as well as RHEL E4S repositories. Billing includes the RHEL premium, SAP premium, and High Availability premium on top of the base compute fee.
+
+Details for RHEL 9 image types are below.
+
+|Publisher | Offer | SKU value | Version | Details
+|-|-|-|-|--
+|RedHat | RHEL | 9 | Concatenated values of the RHEL minor version and the date published (for example, 9.0.2022090613) | These images are RHEL 9 LVM-partitioned images connected to standard Red Hat repositories.
+|RedHat | RHEL | 9-gen2 | Concatenated values of the RHEL minor version and the date published (for example, 9.0.2022090613) | These images are Hyper-V Generation 2 RHEL 9 LVM-partitioned images connected to standard Red Hat repositories. For more information about Generation 2 VMs in Azure, see [Support for Generation 2 VMs on Azure](../../generation-2.md).
+|RedHat | RHEL | RHEL-SAP-APPS (Not yet published) | Concatenated values of the RHEL minor version and the date published | These images are RHEL for SAP Applications images. They're entitled to access SAP Applications repositories and base RHEL repositories.
+|RedHat | RHEL | RHEL-SAP-HA (Not yet published) | Concatenated values of the RHEL minor version and the date published | These images are RHEL for SAP with High Availability and Update Services images. They're entitled to access the SAP Solutions and Applications repositories and the High Availability repositories as well as RHEL E4S repositories. Billing includes the RHEL premium, SAP premium, and High Availability premium on top of the base compute fee.
+
## RHEL Extended Support add-ons

### Extended Life-cycle Support
virtual-machines Expose Sap Odata To Power Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-odata-to-power-query.md
For more information on which Microsoft products support Power Query in general,
End users have a choice between local desktop and web-based clients (for instance Excel or Power BI). The client execution environment needs to be considered for the network path between the client application and the target SAP workload. Network access solutions such as VPN aren't in scope for apps like Excel for the web.
-[Azure API Management](/azure/api-management/) reflects local and web-based environment needs with different deployment modes that can be applied to Azure landscapes ([internal](../../../api-management/api-management-using-with-internal-vnet.md?tabs=stv2)
+[Azure API Management](../../../api-management/index.yml) reflects local and web-based environment needs with different deployment modes that can be applied to Azure landscapes ([internal](../../../api-management/api-management-using-with-internal-vnet.md?tabs=stv2)
or [external](../../../api-management/api-management-using-with-vnet.md?tabs=stv2)). `Internal` refers to instances that are fully restricted to a private virtual network whereas `external` retains public access to Azure API Management. On-premises installations require a hybrid deployment to apply the approach as is using the Azure API Management [self-hosted Gateway](../../../api-management/self-hosted-gateway-overview.md). Power Query requires matching API service URL and Azure AD application ID URL. Configure a [custom domain for Azure API Management](../../../api-management/configure-custom-domain.md) to meet the requirement.
The highlighted button triggers a flow that forwards the OData PATCH request to
[Understand Azure Application Gateway and Web Application Firewall for SAP](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
-[Automate API deployments with APIOps](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops)
+[Automate API deployments with APIOps](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 11/14/2022 Last updated : 11/15/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log

-- November 14, 2022: Proveded more details about nconnect mount option in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
+- November 15, 2022: Change in [HA for SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add recommendation to use mount option `nconnect` for workloads with higher throughput requirements
+- November 15, 2022: Added a recommendation for the minimum required version of package resource-agents in [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md)
+- November 14, 2022: Provided more details about nconnect mount option in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
- November 14, 2022: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to update suggested timeouts for `FileSystem` Pacemaker cluster resources
- November 07, 2022: Added HANA hook susChkSrv for scale-up pacemaker cluster in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [High availability of SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md)
- November 07, 2022: Added monitor operation for azure-lb resource in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [SAP HANA scale-out with HSR and Pacemaker on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [Set up IBM Db2 HADR on Azure virtual machines (VMs)](dbms-guide-ha-ibm.md), [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md), [Azure VMs high availability for SAP NetWeaver on SLES](high-availability-guide-suse.md), [High availability for NFS on Azure VMs on SLES](high-availability-guide-suse-nfs.md), [Azure VMs high availability for SAP NetWeaver on SLES multi-SID guide](high-availability-guide-suse-multi-sid.md)
virtual-machines High Availability Guide Rhel Ibm Db2 Luw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-ibm-db2-luw.md
vm-linux Previously updated : 06/29/2022 Last updated : 11/15/2022
sudo pcs property set maintenance-mode=true
**[1]** Create IBM Db2 resources:
-If building a cluster on **RHEL 7.x**, use the following commands:
+If building a cluster on **RHEL 7.x**, make sure to update package **resource-agents** to version `resource-agents-4.1.1-61.el7_9.15` or higher. Use the following commands to create the cluster resources:
<pre><code># Replace <b>bold strings</b> with your instance name db2sid, database SID, and virtual IP address/Azure Load Balancer.
sudo pcs resource create Db2_HADR_<b>ID2</b> db2 instance='<b>db2id2</b>' dblist='<b>ID2</b>' master meta notify=true resource-stickiness=5000
sudo pcs constraint colocation add g_ipnc_<b>db2id2</b>_<b>ID2</b> with master D
sudo pcs constraint order promote Db2_HADR_<b>ID2</b>-master then g_ipnc_<b>db2id2</b>_<b>ID2</b>
</code></pre>
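Whichever RHEL release you're building on, you can confirm the installed `resource-agents` version before creating the resources; a minimal check (assuming a yum-based node with access to the High Availability repositories):

```bash
# Show the installed resource-agents version on this cluster node
rpm -q resource-agents

# Update the package if it's below the minimum noted for your release
sudo yum update -y resource-agents
```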
-If building a cluster on **RHEL 8.x**, use the following commands:
++
+If building a cluster on **RHEL 8.x**, make sure to update package **resource-agents** to version `resource-agents-4.1.1-93.el8` or higher. For details, see the Red Hat KB article [A `db2` resource with HADR fails promote with state `PRIMARY/REMOTE_CATCHUP_PENDING/CONNECTED`](https://access.redhat.com/solutions/6516791). Use the following commands to create the cluster resources:
<pre><code># Replace <b>bold strings</b> with your instance name db2sid, database SID, and virtual IP address/Azure Load Balancer.
sudo pcs resource create Db2_HADR_<b>ID2</b> db2 instance='<b>db2id2</b>' dblist='<b>ID2</b>' promotable meta notify=true resource-stickiness=5000
virtual-machines Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md
vm-linux Previously updated : 11/14/2022 Last updated : 11/15/2022
In this example each cluster node has its own HANA NFS filesystems /hana/shared,
The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals. For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf). The timeouts in the above configuration may need to be adapted to the specific SAP setup.
+ For workloads that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check if `nconnect` is [supported by Azure NetApp Files](../../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release.
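As an illustration only (the server IP address, volume path, and mount point below are placeholders, and `nconnect` requires a sufficiently recent Linux kernel):

```bash
# NFSv4.1 mount using 4 TCP connections per mount (placeholder names)
sudo mount -t nfs -o rw,vers=4.1,hard,timeo=600,nconnect=4 \
  10.32.0.4:/hanadb1-shared /hana/shared
```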
+ 4. **[1]** Configuring Location Constraints Configure location constraints to ensure that the resources that manage hanadb1 unique mounts can never run on hanadb2, and vice-versa.
virtual-machines Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md
vm-linux Previously updated : 05/10/2022 Last updated : 11/15/2022
For more information about the required ports for SAP HANA, read the chapter [Co
# Mount all volumes
sudo mount -a
```
+ For workloads that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check if `nconnect` is [supported by Azure NetApp Files](../../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release.
4. **[A]** Verify that all HANA volumes are mounted with NFS protocol version NFSv4.
virtual-machines Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel.md
vm-windows Previously updated : 05/09/2022 Last updated : 11/15/2022
Configure and prepare your OS by doing the following steps:
sudo mount -a
</code></pre>
+ For workloads that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check if `nconnect` is [supported by Azure NetApp Files](../../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release.
7. **[1]** Mount the node-specific volumes on **hanadb1**.

<pre><code>
virtual-machines Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md
vm-windows Previously updated : 10/31/2022 Last updated : 11/15/2022
Configure and prepare your OS by doing the following steps:
sudo mount -a
</code></pre>
+ For workloads that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check if `nconnect` is [supported by Azure NetApp Files](../../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release.
7. **[1]** Mount the node-specific volumes on **hanadb1**.

<pre><code>
virtual-network Create Custom Ip Address Prefix Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-powershell.md
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell.
- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-- Ensure your Az.Network module is 4.21.0 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary.
+- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`.
- A customer owned IP range to provision in Azure.
  - A sample customer range (2a05:f500:2::/48) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
It is possible to commission the global custom IPv6 prefix prior to the regional
- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md).

-- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell.
- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-- Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary.
+- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`.
- A customer owned IPv4 range to provision in Azure.
  - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
As before, the operation is asynchronous. Use [Get-AzCustomIpPrefix](/powershell
- To create a custom IP address prefix using the Azure CLI, see [Create custom IP address prefix using the Azure CLI](create-custom-ip-address-prefix-cli.md).

-- To create a custom IP address prefix using the Azure portal, see [Create a custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md).
+- To create a custom IP address prefix using the Azure portal, see [Create a custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md).
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
Previously updated : 12/27/2021 Last updated : 11/16/2022
Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses enable Azure resources to communicate to Internet and public-facing Azure services. The address is dedicated to the resource, until it's unassigned by you. A resource without a public IP assigned can communicate outbound. Azure dynamically assigns an available IP address that isn't dedicated to the resource. For more information about outbound connections in Azure, see [Understand outbound connections](../../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-In Azure Resource Manager, a [public IP](virtual-network-public-ip-address.md) address is a resource that has its own properties. Some of the resources you can associate a public IP address resource with:
+In Azure Resource Manager, a [public IP](virtual-network-public-ip-address.md) address is a resource that has its own properties.
+
+The following resources can be associated with a public IP address:
* Virtual machine network interfaces
-* Virtual machine scale sets
+
+* Virtual Machine Scale Sets
* Public Load Balancers
* Virtual Network Gateways (VPN/ER)
* NAT gateways
* Application Gateways
* Azure Firewall
* Bastion Host
* Route Server

For Virtual Machine Scale Sets, use [Public IP Prefixes](public-ip-address-prefix.md).

## At-a-glance
-The following table shows the property a public IP can be associated to a resource and the allocation methods. Note that public IPv6 support isn't available for all resource types at this time.
+The following table shows the property with which a public IP can be associated to each resource, and the possible allocation methods. Public IPv6 support isn't available for all resource types at this time.
| Top-level resource | IP Address association | Dynamic IPv4 | Static IPv4 | Dynamic IPv6 | Static IPv6 |
| --- | --- | --- | --- | --- | --- |
| Virtual machine | Network interface | Yes | Yes | Yes | Yes |
-| Public Load balancer |Front-end configuration |Yes | Yes | Yes |Yes |
-| Virtual Network gateway (VPN) |Gateway IP configuration |Yes (non-AZ only) |Yes | No |No |
-| Virtual Network gateway (ER) |Gateway IP configuration |Yes | No | Yes (preview) |No |
+| Public Load Balancer |Front-end configuration |Yes | Yes | Yes |Yes |
+| Virtual Network Gateway (VPN) |Gateway IP configuration |Yes (non-AZ only) |Yes | No |No |
+| Virtual Network Gateway (ER) |Gateway IP configuration |Yes | No | Yes (preview) |No |
| NAT gateway |Gateway IP configuration |No |Yes | No |No |
-| Application gateway |Front-end configuration |Yes (V1 only) |Yes (V2 only) | No | No |
+| Application Gateway |Front-end configuration |Yes (V1 only) |Yes (V2 only) | No | No |
| Azure Firewall | Front-end configuration | No | Yes | No | No |
| Bastion Host | Public IP configuration | No | Yes | No | No |
| Route Server | Front-end configuration | No | Yes | No | No |
Public IP addresses are created with one of the following SKUs:
> Basic SKU IPv4 addresses can be upgraded after creation to Standard SKU. To learn about SKU upgrade, refer to [Public IP upgrade](public-ip-upgrade-portal.md).

>[!IMPORTANT]
-> Matching SKUs are required for Load Balancer and Public IP resources. You can't have a mixture of Basic SKU resources and standard SKU resources. You can't attach standalone virtual machines, virtual machines in an availability set resource, or a virtual machine scale set resources to both SKUs simultaneously. New designs should consider using Standard SKU resources. Please review [Standard Load Balancer](../../load-balancer/load-balancer-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for details.
+> Matching SKUs are required for load balancer and public IP resources. You can't have a mixture of basic SKU resources and standard SKU resources. You can't attach standalone virtual machines, virtual machines in an availability set resource, or a virtual machine scale set resources to both SKUs simultaneously. New designs should consider using Standard SKU resources. For more information about a standard load balancer, see [Standard Load Balancer](../../load-balancer/load-balancer-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
## IP address assignment

Public IPs have two types of assignments:

- **Static** - The resource is assigned an IP address at the time it's created. The IP address is released when the resource is deleted.
-- **Dynamic** - The IP address *isn't* given to the resource at the time of creation when selecting dynamic. The IP is assigned when you associate the public IP address with a resource. The IP address is released when you stop, or delete the resource
+- **Dynamic** - The IP address **isn't** given to the resource at the time of creation when selecting dynamic. The IP is assigned when you associate the public IP address with a resource. The IP address is released when you stop, or delete the resource.
**Static public IP addresses** are commonly used in the following scenarios:

* When you must update firewall rules to communicate with your Azure resources.
* DNS name resolution, where a change in IP address would require updating A records.
* Your Azure resources communicate with other apps or services that use an IP address-based security model.
* You use TLS/SSL certificates linked to an IP address.

> [!NOTE]
> Even when you set the allocation method to **static**, you cannot specify the actual IP address assigned to the public IP address resource. Azure assigns the IP address from a pool of available IP addresses in the Azure location the resource is created in.
-**Basic public IP addresses** are commonly used for when there is no dependency on the IP address. For example, a public IP resource is released from a resource named **Resource A**. **Resource A** receives a different IP on start-up if the public IP resource is reassigned. Any associated IP address is released if the allocation method is changed from **static** to **dynamic**. Any associated IP address is unchanged if the allocation method is changed from **dynamic** to **static**. Set the allocation method to **static** to ensure the IP address remains the same.
+**Basic public IP addresses** are commonly used when there's no dependency on the IP address.
+
+For example, a public IP resource is released from a resource named **Resource A**. **Resource A** receives a different IP on start-up if the public IP resource is reassigned. Any associated IP address is released if the allocation method is changed from **static** to **dynamic**. Any associated IP address is unchanged if the allocation method is changed from **dynamic** to **static**. Set the allocation method to **static** to ensure the IP address remains the same.
| Resource | Static | Dynamic |
| --- | --- | --- |
Public IPs have two types of assignments:
## DNS Name Label
-Select this option to specify a DNS label for a public IP resource. This functionality works for both IPv4 addresses (32-bit A records) and IPv6 addresses (128-bit AAAA records). This selection creates a mapping for **domainnamelabel**.**location**.cloudapp.azure.com to the public IP in the Azure-managed DNS.
+Select this option to specify a DNS label for a public IP resource. This functionality works for both IPv4 addresses (32-bit A records) and IPv6 addresses (128-bit AAAA records). This selection creates a mapping for **domainnamelabel**.**location**.cloudapp.azure.com to the public IP in the Azure-managed DNS.
-For instance, creation of a public IP with:
+For instance, creation of a public IP with the following settings:
* **contoso** as a **domainnamelabel**
* **West US** Azure **location**

The fully qualified domain name (FQDN) **contoso.westus.cloudapp.azure.com** resolves to the public IP address of the resource.
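To confirm the mapping, you can query the FQDN directly; a quick check using the example name above:

```bash
# Returns the A (IPv4) and/or AAAA (IPv6) records registered for the label
nslookup contoso.westus.cloudapp.azure.com
```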
The fully qualified domain name (FQDN) **contoso.westus.cloudapp.azure.com** res
> [!IMPORTANT]
> Each domain name label created must be unique within its Azure location.
-If a custom domain is desired for services that use a Public IP, you can use [Azure DNS](../../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address) or an external DNS provider for your DNS Record.
+If a custom domain is desired for services that use a public IP, you can use [Azure DNS](../../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address) or an external DNS provider for your DNS Record.
## Availability Zone
-Public IP addresses with a Standard SKU can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md). A zone-redundant IP is created in all zones for a region and can survive any single zone failure. A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. A "non-zonal" public IP addresses is placed into a zone for you by Azure and does not give a guarantee of redundancy.
+Public IP addresses with a standard SKU can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md).
+
+A zone-redundant IP is created in all zones for a region and can survive any single zone failure. A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. A "non-zonal" public IP address is placed into a zone for you by Azure and doesn't give a guarantee of redundancy.
In regions without availability zones, all public IP addresses are created as non-zonal. Public IP addresses created in a region that is later upgraded to have availability zones remain non-zonal.

> [!NOTE]
-> All Basic SKU public IP addresses are created as non-zonal. Any IP that is upgraded from a Basic SKU to Standard SKU remains non-zonal.
+> All basic SKU public IP addresses are created as non-zonal. Any IP that is upgraded from a basic SKU to standard SKU remains non-zonal.
## Other public IP address features

There are other attributes that can be used for a public IP address.

* The Global **Tier** allows a public IP address to be used with cross-region load balancers.
* The Internet **Routing Preference** option minimizes the time that traffic spends on the Microsoft network, lowering the egress data transfer cost.

> [!NOTE]
-> At this time, both the **Tier** and **Routing Preference** feature are available for standard SKU IPv4 addresses only. They also cannot be utilized on the same IP address concurrently.
+> At this time, both the **Tier** and **Routing Preference** feature are available for standard SKU IPv4 addresses only. They can't be utilized on the same IP address concurrently.
> [!INCLUDE [ephemeral-ip-note.md](../../../includes/ephemeral-ip-note.md)]

## Limits
-The limits for IP addressing are listed in the full set of [limits for networking](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits) in Azure. The limits are per region and per subscription. [Contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to increase above the default limits based on your business needs.
+The limits for IP addressing are listed in the full set of [limits for networking](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits) in Azure. The limits are per region and per subscription.
+
+[Contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to increase above the default limits based on your business needs.
## Pricing
-Public IPv4 addresses have a nominal charge; Public IPv6 addresses have no charge. To learn more about IP address pricing in Azure, review the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page.
+Public IPv4 addresses have a nominal charge; Public IPv6 addresses have no charge.
+
+To learn more about IP address pricing in Azure, review the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page.
## Limitations for IPv6
-* VPN gateways cannot be used in a virtual network with IPv6 enabled, either directly or peered with "UseRemoteGateway".
+* VPN gateways can't be used in a virtual network with IPv6 enabled, either directly or peered with "UseRemoteGateway".
* Public IPv6 addresses are locked at an idle timeout of 4 minutes.
* Azure doesn't support IPv6 communication for containers.
* Use of IPv6-only virtual machines or virtual machine scale sets isn't supported. Each NIC must include at least one IPv4 IP configuration (dual-stack).
-* When adding IPv6 to existing IPv4 deployments, IPv6 ranges can't be added to a virtual network with existing resource navigation links.
+
+* IPv6 ranges can't be added to a virtual network with existing resource navigation links when adding IPv6 to existing IPv4 deployments.
+ * Forward DNS for IPv6 is supported for Azure public DNS. Reverse DNS isn't supported.
-* Routing Preference and cross-region load-balancing isn't supported.
+
+* Routing Preference and cross-region load balancer aren't supported.
For more information on IPv6 in Azure, see [here](ipv6-overview.md).

## Next steps

* Learn about [Private IP Addresses in Azure](private-ip-addresses.md)
* [Deploy a VM with a static public IP using the Azure portal](./virtual-network-deploy-static-pip-arm-portal.md)
virtual-network Virtual Network Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md
- Previously updated : 05/20/2021 Last updated : 11/16/2022

# Create, change, or delete an Azure public IP address
-Learn about a public IP address and how to create, change, and delete one. A public IP address is a resource with configurable settings. Assigning a public IP address to an Azure resource that supports public IP addresses enables:
+Learn about a public IP address and how to create, change, and delete one. A public IP address is a resource with configurable settings.
+
+When you assign a public IP address to an Azure resource, you enable the following operations:
- Inbound communication from the Internet to the resource, such as Azure Virtual Machines (VM), Azure Application Gateways, Azure Load Balancers, Azure VPN Gateways, and others.
- Outbound connectivity to the Internet using a predictable IP address.

[!INCLUDE [ephemeral-ip-note.md](../../../includes/ephemeral-ip-note.md)]

## Create a public IP address
-For instructions on how to create public IP addresses using the Portal, PowerShell, CLI, or Resource Manager templates, refer to the following pages:
+For instructions on how to create public IP addresses using the Azure portal, PowerShell, CLI, or Resource Manager templates, refer to the following pages:
- * [Create public IP addresses - Portal](./create-public-ip-portal.md?tabs=option-create-public-ip-standard-zones)
- * [Create public IP addresses - PowerShell](./create-public-ip-powershell.md?tabs=option-create-public-ip-standard-zones)
- * [Create public IP addresses - Azure CLI](./create-public-ip-cli.md?tabs=option-create-public-ip-standard-zones)
- * [Create public IP addresses - Template](./create-public-ip-template.md)
+* [Create a public IP address - Azure portal](./create-public-ip-portal.md?tabs=option-create-public-ip-standard-zones)
+
+* [Create a public IP address - PowerShell](./create-public-ip-powershell.md?tabs=option-create-public-ip-standard-zones)
+
+* [Create a public IP address - Azure CLI](./create-public-ip-cli.md?tabs=option-create-public-ip-standard-zones)
+
+* [Create a public IP address - Template](./create-public-ip-template.md)
>[!NOTE]
->Though the portal provides the option to create two public IP address resources (one IPv4 and one IPv6), the PowerShell and CLI commands create one resource with an address for one IP version or the other. If you want two public IP address resources, one for each IP version, you must run the command twice, specifying different names and IP versions for the public IP address resources.
+>The portal provides the option to create an IPv4 and IPv6 address concurrently during resource deployment. The PowerShell and Azure CLI commands create one resource, either IPv4 or IPv6. If you want an IPv4 and an IPv6 address, execute the PowerShell or CLI command twice. Specify different names and IP versions for the public IP address resources.
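As a minimal sketch of that note (the resource group and resource names are examples, not prescribed values):

```bash
# Create an IPv4 and an IPv6 public IP as two separate resources
az network public-ip create --resource-group myResourceGroup \
  --name myPublicIPv4 --sku Standard --version IPv4

az network public-ip create --resource-group myResourceGroup \
  --name myPublicIPv6 --sku Standard --version IPv6
```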
For more detail on the specific attributes of a public IP address during creation, see the following table:

|Setting|Required?|Details|
|---|---|---|
- |IP Version|Yes| Select IPv4 or IPv6 or Both. Selecting Both will result in 2 Public IP addresses being create- 1 IPv4 address and 1 IPv6 address. Learn more about [IPv6 in Azure VNETs](ipv6-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).|
- |SKU|Yes|All public IP addresses created before the introduction of SKUs are **Basic** SKU public IP addresses. You cannot change the SKU after the public IP address is created. A standalone virtual machine, virtual machines within an availability set, or virtual machine scale sets can use Basic or Standard SKUs. Mixing SKUs between virtual machines within availability sets or scale sets or standalone VMs is not allowed. **Basic** SKU: If you are creating a public IP address in a region that supports availability zones, the **Availability zone** setting is set to *None* by default. Basic Public IPs do not support Availability zones. **Standard** SKU: A Standard SKU public IP can be associated to a virtual machine or a load balancer front end. If you're creating a public IP address in a region that supports availability zones, the **Availability zone** setting is set to *Zone-redundant* by default. For more information about availability zones, see the **Availability zone** setting. The standard SKU is required if you associate the address to a Standard load balancer. To learn more about standard load balancers, see [Azure load balancer standard SKU](../../load-balancer/load-balancer-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). When you assign a standard SKU public IP address to a virtual machine's network interface, you must explicitly allow the intended traffic with a [network security group](../../virtual-network/network-security-groups-overview.md#network-security-groups). Communication with the resource fails until you create and associate a network security group and explicitly allow the desired traffic.|
- |Tier|Yes|Indicates if the IP address is associated with a region (**Regional**) or is "anycast" from multiple regions (**Global**). *Note that a "Global Tier" IP is preview functionality for Standard IPs, and currently only utilized for the Cross-Region Load Balancer*.|
+ |IP Version|Yes| Select **IPv4** or **IPv6** or **Both**. Selection of **Both** results in two public IP addresses created, one IPv4 and one IPv6. For more information, see [Overview of IPv6 for Azure Virtual Network](ipv6-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).|
+ |SKU|Yes|All public IP addresses created before the introduction of SKUs are **Basic** SKU public IP addresses. You can't change the SKU after the public IP address is created. </br></br>A standalone virtual machine, virtual machines within an availability set, or Virtual Machine Scale Sets can use Basic or Standard SKUs. Mixing SKUs between virtual machines within availability sets or scale sets or standalone VMs isn't allowed.</br></br> **Basic**: Basic public IP addresses don't support Availability zones. The **Availability zone** setting is set to **None** by default if the public IP address is created in a region that supports availability zones. </br></br> **Standard**: Standard public IP addresses can be associated to Azure resources that support public IPs, such as virtual machines, load balancers, and Azure Firewall. The **Availability zone** setting is set to **Zone-redundant** by default if the IP address is created in a region that supports availability zones. For more information about availability zones, see the **Availability zone** setting. </br></br>The standard SKU is required if you associate the address to a standard load balancer. For more information about standard load balancers, see [Azure load balancer standard SKU](../../load-balancer/load-balancer-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). </br></br> When you assign a standard SKU public IP address to a virtual machine's network interface, you must explicitly allow the intended traffic with a [network security group](../../virtual-network/network-security-groups-overview.md#network-security-groups). Communication with the resource fails until you create and associate a network security group and explicitly allow the desired traffic.|
+ |Tier|Yes|Indicates if the IP address is associated with a region **(Regional)** or is *"anycast"* from multiple regions **(Global)**. </br></br> *A **Global tier** IP is preview functionality for Standard SKU IP addresses, and currently only utilized for the Cross-region Azure Load Balancer*.|
|Name|Yes|The name must be unique within the resource group you select.|
- |IP address assignment|Yes|**Dynamic:** Dynamic addresses are assigned after a public IP address is associated to an Azure resource and is started for the first time. Dynamic addresses can change if a resource such as a virtual machine is stopped (deallocated) and then restarted through Azure. The address remains the same if a virtual machine is rebooted or stopped from within the guest OS. When a public IP address resource is removed from a resource, the dynamic address is released.<br> **Static:** Static addresses are assigned when a public IP address is created. Static addresses aren't released until a public IP address resource is deleted. <br>Note: If you select *IPv6* for the **IP version**, the assignment method must be *Dynamic* for Basic SKU. Standard SKU addresses are *Static* for both IPv4 and IPv6. |
- |Routing preference |Yes| By default, the routing preference for public IP addresses is set to "Microsoft network", which delivers traffic over Microsoft's global wide area network to the user. The selection of "Internet" minimizes travel on Microsoft's network, instead using the transit ISP network to deliver traffic at a cost-optimized rate. A public IP addresses routing preference canΓÇÖt be changed once created. For more information on routing preference, see [What is routing preference (preview)?](routing-preference-overview.md). |
- |Idle timeout (minutes)|No|How many minutes to keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. If you select IPv6 for **IP Version**, this value is set to 4 minutes and cannot be changed. |
- |DNS name label|No|Must be unique within the Azure location you create the name in (across all subscriptions and all customers). Azure automatically registers the name and IP address in its DNS so you can connect to a resource with the name. Azure appends a default subnet such as *location.cloudapp.azure.com* to the name you provide to create the fully qualified DNS name. If you choose to create both address versions, the same DNS name is assigned to both the IPv4 and IPv6 addresses. Azure's default DNS contains both IPv4 A and IPv6 AAAA name records. The default DNS responds with both records during DNS lookup. The client chooses which address (IPv4 or IPv6) to communicate with. You can use the Azure DNS service to configure a DNS name with a custom suffix that resolves to the public IP address. For more information, see [Use Azure DNS with an Azure public IP address](../../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address).|
- |Name (Only visible if you select IP Version of **Both**)|Yes, if you select IP Version of **Both**|The name must be different than the name you enter for the first **Name** in this list. If you choose to create both an IPv4 and an IPv6 address, the portal creates two separate public IP address resources, one with each IP address version assigned to it.|
+ |IP address assignment|Yes|**Dynamic:** Dynamic addresses are assigned after a public IP address is associated to an Azure resource and is started for the first time. Dynamic addresses can change if a resource such as a virtual machine is stopped (deallocated) and then restarted through Azure. The address remains the same if a virtual machine is rebooted or stopped from within the guest OS. When a public IP address resource is removed from a resource, the dynamic address is released.<br></br> **Static:** Static addresses are assigned when a public IP address is created. Static addresses aren't released until a public IP address resource is deleted. <br></br> *If you select **IPv6** for the **IP version**, the assignment method must be **Dynamic** for the Basic SKU. Standard SKU addresses are **Static** for both IPv4 and IPv6.* |
+ |Routing preference |Yes| By default, the routing preference for public IP addresses is set to **Microsoft network**. The **Microsoft network** setting delivers traffic over Microsoft's global wide area network to the user. </br></br> Selecting **Internet** minimizes travel on Microsoft's network. The **Internet** setting uses the transit ISP network to deliver traffic at a cost-optimized rate. A public IP address's routing preference can't be changed after the address is created. For more information on routing preference, see [What is routing preference (preview)?](routing-preference-overview.md). |
+ |Idle timeout (minutes)|No| The number of minutes to keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. If you select IPv6 for **IP Version**, this value is set to 4 minutes and can't be changed. |
+ |DNS name label|No|Must be unique within the Azure location you create the name in across all subscriptions and all customers. Azure automatically registers the name and IP address in its DNS so you can connect to a resource with the name. </br></br> Azure appends a default DNS suffix such as **location.cloudapp.azure.com** to the name you provide to create the fully qualified DNS name. If you choose to create both address versions, the same DNS name is assigned to both the IPv4 and IPv6 addresses. Azure's default DNS contains both IPv4 A and IPv6 AAAA name records. </br></br> The default DNS responds with both records during DNS lookup. The client chooses which address (IPv4 or IPv6) to communicate with. You can use the Azure DNS service to configure a DNS name with a custom suffix that resolves to the public IP address. </br></br>For more information, see [Use Azure DNS with an Azure public IP address](../../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address).|
+ |Name (Only visible if you select IP Version of **Both**)|Yes, if you select IP Version of **Both**|The name must be different than the name you entered previously for **Name** in this list. If you create both an IPv4 and an IPv6 address, the portal creates two separate public IP address resources. The deployment creates one IPv4 address and one IPv6 address.|
|IP address assignment (Only visible if you select IP Version of **Both**)|Yes, if you select IP Version of **Both**| Same restrictions as IP address assignment above. | |Subscription|Yes|Must exist in the same [subscription](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription) as the resource to which you'll associate the public IPs.| |Resource group|Yes|Can exist in the same, or different, [resource group](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) as the resource to which you'll associate the public IPs.|
- |Location|Yes|Must exist in the same [location](https://azure.microsoft.com/regions), also referred to as region, as the resource to which you'll associate the Public IPs.|
- |Availability zone| No | This setting only appears if you select a supported location and IP address type. **Basic** SKU public IPs and **Global** Tier public IPs don't support Availability Zones. You can select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. For a list of supported locations and more information about Availability Zones, see [Availability zones overview](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+ |Location|Yes|Must exist in the same [location](https://azure.microsoft.com/regions), also referred to as region, as the resource to which you'll associate the public IPs.|
+ |Availability zone| No | This setting only appears if you select a supported location and IP address type. **Basic** SKU public IPs and **Global** Tier public IPs don't support Availability Zones. You can select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements.</br></br>For a list of supported locations and more information about Availability Zones, see [Availability zones overview](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
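To see how these settings fit together outside the portal, here's a minimal Azure CLI sketch that creates a public IP using several of the values from the table above; the resource group, IP name, and DNS label are illustrative placeholders, not values from this article:

```
# Create a Standard, static, zone-redundant IPv4 public IP with a DNS name label.
# Resource group, IP name, and DNS label are illustrative placeholders.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myStandardPublicIP \
  --version IPv4 \
  --sku Standard \
  --tier Regional \
  --allocation-method Static \
  --zone 1 2 3 \
  --dns-name mydnslabel \
  --idle-timeout 4
```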
## View, modify settings for, or delete a public IP address
For more detail on the specific attributes of a public IP address during creatio
## Virtual Machine Scale Sets
-When using a virtual machine scale set with Public IPs, there are not separate Public IP objects associated with the individual virtual machine instances. However, a Public IP Prefix object [can be used to generate the instance IPs](https://azure.microsoft.com/resources/templates/vmss-with-public-ip-prefix/).
+There aren't separate public IP objects associated with the individual virtual machine instances for a Virtual Machine Scale Set with public IPs. A public IP prefix object [can be used to generate the instance IPs](https://azure.microsoft.com/resources/templates/vmss-with-public-ip-prefix/).
-To list the Public IPs on a virtual machine scale set, you can use PowerShell ([Get-AzPublicIpAddress -VirtualMachineScaleSetName](/powershell/module/az.network/get-azpublicipaddress)) or CLI ([az virtual machine scale set list-instance-public-ips](/cli/azure/vmss#az-vmss-list-instance-public-ips)).
+To list the public IPs on a Virtual Machine Scale Set, you can use PowerShell ([Get-AzPublicIpAddress -VirtualMachineScaleSetName](/powershell/module/az.network/get-azpublicipaddress)) or CLI ([az vmss list-instance-public-ips](/cli/azure/vmss#az-vmss-list-instance-public-ips)).
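For example, a minimal sketch of the CLI form, assuming a scale set named **myScaleSet** in resource group **myResourceGroup** (both names are placeholders):

```
# List the instance-level public IPs of a Virtual Machine Scale Set.
# "myResourceGroup" and "myScaleSet" are illustrative placeholders.
az vmss list-instance-public-ips \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --output table
```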
For more information, see [Networking for Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine).
virtual-network Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/manage-nat-gateway.md
This article explains how to manage the following aspects of NAT gateway:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An existing Azure Virtual Network. For information about creating an Azure Virtual Network, see [Quickstart: Create a virtual network using the Azure portal](/azure/virtual-network/quick-create-portal).
+- An existing Azure Virtual Network. For information about creating an Azure Virtual Network, see [Quickstart: Create a virtual network using the Azure portal](../quick-create-portal.md).
- The example virtual network used in this article is named **myVNet**. Replace the example value with the name of your virtual network.
To learn more about Azure Virtual Network NAT and its capabilities, see the foll
- [NAT gateway and availability zones](nat-availability-zones.md) -- [Design virtual networks with NAT gateway](nat-gateway-resource.md)-
+- [Design virtual networks with NAT gateway](nat-gateway-resource.md)
virtual-network Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-metrics.md
To create the alert, use the following steps:
11. Select **Create** to create the alert rule. >[!NOTE]
->SNAT port exhaustion on your NAT gateway resource is uncommon. If you see SNAT port exhaustion, your NAT gateway's idle timeout timer may be holding on to SNAT ports too long or your may need to scale with additional public IPs. To troubleshoot these kinds of issues, refer to the [NAT gateway connectivity troubleshooting guide](/azure/virtual-network/nat-gateway/troubleshoot-nat-connectivity#snat-exhaustion-due-to-nat-gateway-configuration).
+>SNAT port exhaustion on your NAT gateway resource is uncommon. If you see SNAT port exhaustion, your NAT gateway's idle timeout timer may be holding on to SNAT ports too long, or you may need to scale with additional public IPs. To troubleshoot these kinds of issues, refer to the [NAT gateway connectivity troubleshooting guide](./troubleshoot-nat-connectivity.md#snat-exhaustion-due-to-nat-gateway-configuration).
## Network Insights
For more information on what each metric is showing you and how to analyze these
* Learn about [Virtual Network NAT](nat-overview.md) * Learn about [NAT gateway resource](nat-gateway-resource.md) * Learn about [Azure Monitor](../../azure-monitor/overview.md)
-* Learn about [troubleshooting NAT gateway resources](troubleshoot-nat.md).
+* Learn about [troubleshooting NAT gateway resources](troubleshoot-nat.md).
virtual-network Quickstart Create Nat Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md
This quickstart shows you how to use the Azure Virtual Network NAT service. You'
Before you deploy the NAT gateway resource and the other resources, a resource group is required to contain the resources deployed. In the following steps, you'll create a resource group, NAT gateway resource, and a public IP address. You can use one or more public IP address resources, public IP prefixes, or both.
-For information about public IP prefixes and a NAT gateway, see [Manage NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway?tabs=manage-nat-portal#add-or-remove-a-public-ip-prefix).
+For information about public IP prefixes and a NAT gateway, see [Manage NAT gateway](./manage-nat-gateway.md?tabs=manage-nat-portal#add-or-remove-a-public-ip-prefix).
1. Sign in to the [Azure portal](https://portal.azure.com).
For information about public IP prefixes and a NAT gateway, see [Manage NAT gate
| Availability Zone | Select **No Zone**. | | Idle timeout (minutes) | Enter **10**. |
- For information about availability zones and NAT gateway, see [NAT gateway and availability zones](/azure/virtual-network/nat-gateway/nat-availability-zones).
+ For information about availability zones and NAT gateway, see [NAT gateway and availability zones](./nat-availability-zones.md).
5. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
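If you'd rather script these resources than use the portal steps above, the following Azure CLI sketch creates a roughly equivalent NAT gateway with the idle timeout from the table; all resource names are illustrative placeholders:

```
# Create a Standard static public IP, then a NAT gateway that uses it
# with a 10-minute idle timeout. All names are illustrative placeholders.
az network public-ip create \
  --resource-group myResourceGroupNAT \
  --name myPublicIP \
  --sku Standard \
  --allocation-method Static

az network nat gateway create \
  --resource-group myResourceGroupNAT \
  --name myNATgateway \
  --public-ip-addresses myPublicIP \
  --idle-timeout 10
```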
the virtual network, virtual machine, and NAT gateway with the following steps:
For more information on Azure Virtual Network NAT, see: > [!div class="nextstepaction"]
-> [Virtual Network NAT overview](nat-overview.md)
+> [Virtual Network NAT overview](nat-overview.md)
virtual-network Virtual Network Network Interface Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface-vm.md
Title: Add network interfaces to or remove from Azure VMs description: Learn how to add network interfaces to or remove network interfaces from virtual machines. tags: azure-resource-manager Previously updated : 03/13/2020 Last updated : 11/15/2022 + # Add network interfaces to or remove network interfaces from virtual machines
-Learn how to add an existing network interface when you create an Azure virtual machine (VM). Also learn to add or remove network interfaces from an existing VM in the stopped (deallocated) state. A network interface enables an Azure VM to communicate with internet, Azure, and on-premises resources. A VM has one or more network interfaces.
+Learn how to add an existing network interface when you create an Azure virtual machine (VM). Also learn how to add or remove network interfaces from an existing VM in the stopped (deallocated) state. A network interface enables an Azure VM to communicate with internet, Azure, and on-premises resources. A VM has one or more network interfaces.
-If you need to add, change, or remove IP addresses for a network interface, see [Manage network interface IP addresses](./ip-services/virtual-network-network-interface-addresses.md). To create, change, or delete network interfaces, see [Manage network interfaces](virtual-network-network-interface.md).
+If you need to add, change, or remove IP addresses for a network interface, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md). To manage network interfaces, see [Create, change, or delete a network interface](virtual-network-network-interface.md).
## Before you begin -
-If you don't have one, set up an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Complete one of these tasks before starting the remainder of this article:
+If you don't have one, set up an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article:
- **Portal users**: Sign in to the [Azure portal](https://portal.azure.com) with your Azure account. -- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
+- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
+
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to create a connection with Azure.
+- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
-- **Azure CLI users**: Run the commands via either the [Azure Cloud Shell](https://shell.azure.com/bash) the Azure CLI running locally. Use Azure CLI version 2.0.26 or later if you're running the Azure CLI locally. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to create a connection with Azure.
+ If you're running Azure CLI locally, use Azure CLI version 2.0.26 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to create a connection with Azure.
## Add existing network interfaces to a new VM
-When you create a virtual machine through the portal, the portal creates a network interface with default settings and attaches the network interface to the VM for you. You can't use the portal to add existing network interfaces to a new VM, or to create a VM with multiple network interfaces. You can do both by using the CLI or PowerShell. Be sure to familiarize yourself with the [constraints](#constraints). If you create a VM with multiple network interfaces, you must also configure the operating system to use them properly after you create the VM. Learn how to configure [Linux](../virtual-machines/linux/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json#configure-guest-os-for-multiple-nics) or [Windows](../virtual-machines/windows/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json#configure-guest-os-for-multiple-nics) for multiple network interfaces.
+When you create a virtual machine through the portal, the portal creates a network interface with default settings and attaches the network interface to the VM for you. You can't use the portal to add existing network interfaces to a new VM, or to create a VM with multiple network interfaces. You can do both by using the CLI or PowerShell. Be sure to familiarize yourself with the [constraints](#constraints). If you create a VM with multiple network interfaces, you must also configure the operating system to use them properly after you create the VM. Learn how to configure [Linux](../virtual-machines/linux/multiple-nics.md#configure-guest-os-for-multiple-nics) or [Windows](../virtual-machines/windows/multiple-nics.md#configure-guest-os-for-multiple-nics) for multiple network interfaces.
### Commands
-Before you create the VM, [Create a network interface](virtual-network-network-interface.md#create-a-network-interface).
+Before you create the VM, [create a network interface](virtual-network-network-interface.md#create-a-network-interface).
|Tool|Command| |||
-|CLI|[az network nic create](/cli/azure/network/nic?toc=%2fazure%2fvirtual-network%2ftoc.json#az-network-nic-create)|
-|PowerShell|[New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface?toc=%2fazure%2fvirtual-network%2ftoc.json)|
+|CLI|[az vm create](/cli/azure/vm#az-vm-create). See [example](../virtual-machines/linux/multiple-nics.md#create-a-vm-and-attach-the-nics)|
+|PowerShell|[New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) and [New-AzVM](/powershell/module/az.compute/new-azvm). See [example](../virtual-machines/windows/multiple-nics.md#create-a-vm-with-multiple-nics)|
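As an illustration of the CLI route, here's a minimal sketch that creates two network interfaces and attaches both to a new VM. All resource names, the image alias, and the VM size are placeholder assumptions; whichever size you pick must support at least two network interfaces:

```
# Create two NICs in an existing virtual network and subnet, then a VM that uses both.
# All resource names, the image alias, and the size are illustrative placeholders.
az network nic create --resource-group myResourceGroup --name myNic1 \
  --vnet-name myVNet --subnet mySubnet
az network nic create --resource-group myResourceGroup --name myNic2 \
  --vnet-name myVNet --subnet mySubnet

az vm create --resource-group myResourceGroup --name myVM \
  --nics myNic1 myNic2 \
  --image Ubuntu2204 \
  --size Standard_DS3_v2 \
  --generate-ssh-keys
```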
## Add a network interface to an existing VM
To add a network interface to your virtual machine:
1. Go to the [Azure portal](https://portal.azure.com) to find an existing virtual machine. Search for and select **Virtual machines**.
-2. Select the name of your VM. The VM must support the number of network interfaces you want to add. To find out how many network interfaces each VM size supports, see the sizes in Azure for [Linux VMs](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows VMs](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+2. Select the name of your VM. The VM must support the number of network interfaces you want to add. To find out how many network interfaces each VM size supports, see [Sizes for virtual machines in Azure](../virtual-machines/sizes.md).
+
+3. On the VM **Overview** page, select **Stop**, and then select **Yes**. Then wait until the **Status** of the VM changes to **Stopped (deallocated)**.
-3. In the VM command bar, select **Stop**, and then **OK** in the confirmation dialog box. Then wait until the **Status** of the VM changes to **Stopped (deallocated)**.
+ :::image type="content" source="./media/virtual-network-network-interface-vm/stop-virtual-machine.png" alt-text="Screenshot of stop a virtual machine in Azure portal.":::
-4. From the VM menu bar, choose **Networking** > **Attach network interface**. Then in **Attach existing network interface**, choose the network interface you'd like to attach, and select **OK**.
+4. Select **Networking** > **Attach network interface**. Then in **Attach existing network interface**, select the network interface you'd like to attach, and select **OK**.
+
+ :::image type="content" source="./media/virtual-network-network-interface-vm/attach-network-interface.png" alt-text="Screenshot of attach a network interface to a virtual machine in Azure portal.":::
>[!NOTE]
- >The network interface you select can't have accelerated networking enabled, can't have an IPv6 address assigned to it, and must exist in the same virtual network with the network interface currently attached to the VM.
+ >The network interface you select must exist in the same virtual network with the network interface currently attached to the VM.
If you don't have an existing network interface, you must first create one. To do so, select **Create network interface**. To learn more about how to create a network interface, see [Create a network interface](virtual-network-network-interface.md#create-a-network-interface). To learn more about additional constraints when adding network interfaces to virtual machines, see [Constraints](#constraints).
-5. From the VM menu bar, choose **Overview** > **Start** to restart the virtual machine.
+5. Select **Overview** > **Start** to start the virtual machine.
-Now you can configure the VM operating system to use multiple network interfaces properly. Learn how to configure [Linux](../virtual-machines/linux/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json#configure-guest-os-for-multiple-nics) or [Windows](../virtual-machines/windows/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json#configure-guest-os-for-multiple-nics) for multiple network interfaces.
+Now you can configure the VM operating system to use multiple network interfaces properly. Learn how to configure [Linux](../virtual-machines/linux/multiple-nics.md#configure-guest-os-for-multiple-nics) or [Windows](../virtual-machines/windows/multiple-nics.md#configure-guest-os-for-multiple-nics) for multiple network interfaces.
### Commands |Tool|Command| |||
-|CLI|[az vm nic add](/cli/azure/vm/nic?toc=%2fazure%2fvirtual-network%2ftoc.json#az-vm-nic-add) (reference); [detailed steps](../virtual-machines/linux/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-a-nic-to-a-vm)|
-|PowerShell|[Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface?toc=%2fazure%2fvirtual-network%2ftoc.json) (reference); [detailed steps](../virtual-machines/windows/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-a-nic-to-an-existing-vm)|
+|CLI|[az vm nic add](/cli/azure/vm/nic#az-vm-nic-add). See [example](../virtual-machines/linux/multiple-nics.md#add-a-nic-to-a-vm)|
+|PowerShell|[Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface). See [example](../virtual-machines/windows/multiple-nics.md#add-a-nic-to-an-existing-vm)|
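For example, a minimal CLI sketch of the same flow, with placeholder resource names; the VM must be stopped (deallocated) before the network interface is attached:

```
# Deallocate the VM, attach an existing NIC, then start the VM again.
# "myResourceGroup", "myVM", and "myNic2" are illustrative placeholders.
az vm deallocate --resource-group myResourceGroup --name myVM
az vm nic add --resource-group myResourceGroup --vm-name myVM --nics myNic2
az vm start --resource-group myResourceGroup --name myVM
```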
## View network interfaces for a VM
You can view the network interfaces currently attached to a VM to learn about ea
1. Go to the [Azure portal](https://portal.azure.com) to find an existing virtual machine. Search for and select **Virtual machines**. >[!NOTE]
- >Sign in using an account that is assigned the Owner, Contributor, or Network Contributor role for your subscription. To learn more about how to assign roles to accounts, see [Built-in roles for Azure role-based access control](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor).
+ >Sign in using an account that is assigned the Owner, Contributor, or Network Contributor role for your subscription. To learn more about how to assign roles to accounts, see [Built-in roles for Azure role-based access control](../role-based-access-control/built-in-roles.md#network-contributor).
2. Select the name of the VM for which you want to view attached network interfaces.
-3. In the VM menu bar, select **Networking**.
+3. Select **Networking** to see the network interfaces currently attached to the VM. Select a network interface to see its configuration.
+
+ :::image type="content" source="./media/virtual-network-network-interface-vm/network-interfaces.png" alt-text="Screenshot of network interface attached to a virtual machine in Azure portal.":::
To learn about network interface settings and how to change them, see [Manage network interfaces](virtual-network-network-interface.md). To learn about how to add, change, or remove IP addresses assigned to a network interface, see [Manage network interface IP addresses](./ip-services/virtual-network-network-interface-addresses.md).
To learn about network interface settings and how to change them, see [Manage ne
|Tool|Command| |||
-|CLI|[az vm nic list](/cli/azure/vm/nic?toc=%2fazure%2fvirtual-network%2ftoc.json#az-vm-nic-list)|
-|PowerShell|[Get-AzVM](/powershell/module/az.compute/get-azvm?toc=%2fazure%2fvirtual-network%2ftoc.json)|
+|CLI|[az vm nic list](/cli/azure/vm/nic#az-vm-nic-list)|
+|PowerShell|[Get-AzVM](/powershell/module/az.compute/get-azvm)|
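For instance, a quick sketch with placeholder names:

```
# List the network interfaces attached to a VM.
# "myResourceGroup" and "myVM" are illustrative placeholders.
az vm nic list --resource-group myResourceGroup --vm-name myVM --output table
```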
## Remove a network interface from a VM 1. Go to the [Azure portal](https://portal.azure.com) to find an existing virtual machine. Search for and select **Virtual machines**.
-2. Select the name of the VM for which you want to view attached network interfaces.
+2. Select the name of the VM from which you want to detach a network interface.
-3. In the VM toolbar, pick **Stop**.
+3. Select **Stop**.
4. Wait until the **Status** of the VM changes to **Stopped (deallocated)**.
-5. From the VM menu bar, choose **Networking** > **Detach network interface**.
+5. Select **Networking** > **Detach network interface**.
-6. In the **Detach network interface** dialog box, select the network interface you'd like to detach. Then select **OK**.
+6. In **Detach network interface**, select the network interface you'd like to detach. Then select **OK**.
>[!NOTE] >If only one network interface is listed, you can't detach it, because a virtual machine must always have at least one network interface attached to it.
To learn about network interface settings and how to change them, see [Manage ne
|Tool|Command| |||
-|CLI|[az vm nic remove](/cli/azure/vm/nic?toc=%2fazure%2fvirtual-network%2ftoc.json#az-vm-nic-remove) (reference); [detailed steps](../virtual-machines/linux/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json#remove-a-nic-from-a-vm)|
-|PowerShell|[Remove-AzVMNetworkInterface](/powershell/module/az.compute/remove-azvmnetworkinterface?toc=%2fazure%2fvirtual-network%2ftoc.json) (reference); [detailed steps](../virtual-machines/windows/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json#remove-a-nic-from-an-existing-vm)|
+|CLI|[az vm nic remove](/cli/azure/vm/nic#az-vm-nic-remove). See [example](../virtual-machines/linux/multiple-nics.md#remove-a-nic-from-a-vm)|
+|PowerShell|[Remove-AzVMNetworkInterface](/powershell/module/az.compute/remove-azvmnetworkinterface). See [example](../virtual-machines/windows/multiple-nics.md#remove-a-nic-from-an-existing-vm)|
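A minimal CLI sketch of the detach flow, again with placeholder names; remember that at least one network interface must stay attached to the VM:

```
# Deallocate the VM, detach a NIC, then start the VM again.
# "myResourceGroup", "myVM", and "myNic2" are illustrative placeholders.
az vm deallocate --resource-group myResourceGroup --name myVM
az vm nic remove --resource-group myResourceGroup --vm-name myVM --nics myNic2
az vm start --resource-group myResourceGroup --name myVM
```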
## Constraints - A VM must have at least one network interface attached to it. -- A VM can only have as many network interfaces attached to it as the VM size supports. To learn more about how many network interfaces each VM size supports, see the sizes in Azure for [Linux VMs](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows VMs](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json). All sizes support at least two network interfaces.
+- A VM can only have as many network interfaces attached to it as the VM size supports. To learn more about how many network interfaces each VM size supports, see [Sizes for virtual machines in Azure](../virtual-machines/sizes.md). All sizes support at least two network interfaces.
- The network interfaces you add to a VM can't currently be attached to another VM. To learn more about how to create network interfaces, see [Create a network interface](virtual-network-network-interface.md#create-a-network-interface).
To learn about network interface settings and how to change them, see [Manage ne
- You can control which network interface you send outbound traffic to. However, a VM by default sends all outbound traffic to the IP address that's assigned to the primary IP configuration of the primary network interface. -- In the past, all VMs within the same availability set were required to have a single, or multiple, network interfaces. VMs with any number of network interfaces can now exist in the same availability set, up to the number supported by the VM size. You can only add a VM to an availability set when it's created. To learn more about availability sets, see [Manage the availability of VMs in Azure](../virtual-machines/availability.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+- In the past, all VMs within the same availability set were required to have a single, or multiple, network interfaces. VMs with any number of network interfaces can now exist in the same availability set, up to the number supported by the VM size. You can only add a VM to an availability set when it's created. To learn more about availability sets, see [Availability options for Azure Virtual Machines](../virtual-machines/availability.md).
- You can connect network interfaces in the same VM to different subnets within a virtual network. However, the network interfaces must all be connected to the same virtual network. -- You can add any IP address for any IP configuration of any primary or secondary network interface to an Azure Load Balancer back-end pool. In the past, only the primary IP address for the primary network interface could be added to a back-end pool. To learn more about IP addresses and configurations, see [Add, change, or remove IP addresses](./ip-services/virtual-network-network-interface-addresses.md).
+- You can add any IP address for any IP configuration of any primary or secondary network interface to an Azure Load Balancer back-end pool. In the past, only the primary IP address for the primary network interface could be added to a back-end pool. To learn more about IP addresses and configurations, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md).
- Deleting a VM doesn't delete the network interfaces that are attached to it. When you delete a VM, the network interfaces are detached from the VM. You can add those network interfaces to different VMs or delete them.
To create a VM with multiple network interfaces or IP addresses, see:
|Task|Tool| |||
-|Create a VM with multiple NICs|[CLI](../virtual-machines/linux/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json), [PowerShell](../virtual-machines/windows/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json)|
+|Create a VM with multiple NICs|[CLI](../virtual-machines/linux/multiple-nics.md), [PowerShell](../virtual-machines/windows/multiple-nics.md)|
|Create a single NIC VM with multiple IPv4 addresses|[CLI](./ip-services/virtual-network-multiple-ip-addresses-cli.md), [PowerShell](./ip-services/virtual-network-multiple-ip-addresses-powershell.md)|
-|Create a single NIC VM with a private IPv6 address (behind an Azure Load Balancer)|[CLI](../load-balancer/load-balancer-ipv6-internet-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json), [PowerShell](../load-balancer/load-balancer-ipv6-internet-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json), [Azure Resource Manager template](../load-balancer/load-balancer-ipv6-internet-template.md?toc=%2fazure%2fvirtual-network%2ftoc.json)|
+|Create a single NIC VM with a private IPv6 address (behind an Azure Load Balancer)|[CLI](../load-balancer/load-balancer-ipv6-internet-cli.md), [PowerShell](../load-balancer/load-balancer-ipv6-internet-ps.md), [Azure Resource Manager template](../load-balancer/load-balancer-ipv6-internet-template.md)|
vpn-gateway Openvpn Azure Ad Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client-mac.md
You can remove the VPN connection profile from your computer.
1. On the **Remove VPN connection?** box, click **Remove**. :::image type="content" source="media/openvpn-azure-ad-client-mac/remove-2.png" alt-text="Screenshot of removing.":::
+## FAQ
+
+### How do I add DNS suffixes to the VPN client?
+
+You can modify the downloaded profile XML file and add the **\<dnssuffixes>\<dnssuffix> \</dnssuffix>\</dnssuffixes>** tags.
+
+```
+<azvpnprofile>
+<clientconfig>
+
+ <dnssuffixes>
+ <dnssuffix>.mycorp.com</dnssuffix>
+ <dnssuffix>.xyz.com</dnssuffix>
+ <dnssuffix>.etc.net</dnssuffix>
+ </dnssuffixes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+### How do I add custom DNS servers to the VPN client?
+
+You can modify the downloaded profile XML file and add the **\<dnsservers>\<dnsserver> \</dnsserver>\</dnsservers>** tags.
+
+```
+<azvpnprofile>
+<clientconfig>
+
+ <dnsservers>
+ <dnsserver>x.x.x.x</dnsserver>
+ <dnsserver>y.y.y.y</dnsserver>
+ </dnsservers>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+### <a name="split"></a>Can I configure split tunneling for the VPN client?
+
+Split tunneling is configured by default for the VPN client.
+
+### <a name="forced-tunnel"></a>How do I direct all traffic to the VPN tunnel (forced tunneling)?
+
+You can configure forced tunneling using two different methods: either by advertising custom routes, or by modifying the profile XML file.
+
+> [!NOTE]
+> Internet connectivity is not provided through the VPN gateway. As a result, all traffic bound for the Internet is dropped.
+>
+
+* **Advertise custom routes:** You can advertise custom routes 0.0.0.0/1 and 128.0.0.0/1. For more information, see [Advertise custom routes for P2S VPN clients](vpn-gateway-p2s-advertise-custom-routes.md).
+
+* **Profile XML:** You can modify the downloaded profile XML file to add the **\<includeroutes>\<route>\<destination> \</destination>\<mask> \</mask>\</route>\</includeroutes>** tags.
++
+ ```
+ <azvpnprofile>
+ <clientconfig>
+
+ <includeroutes>
+ <route>
+ <destination>0.0.0.0</destination><mask>1</mask>
+ </route>
+ <route>
+ <destination>128.0.0.0</destination><mask>1</mask>
+ </route>
+ </includeroutes>
+
+ </clientconfig>
+ </azvpnprofile>
+ ```
++
+### How do I add custom routes to the VPN client?
+
+You can modify the downloaded profile XML file and add the **\<includeroutes>\<route>\<destination> \</destination>\<mask> \</mask>\</route>\</includeroutes>** tags.
+
+```
+<azvpnprofile>
+<clientconfig>
+
+ <includeroutes>
+ <route>
+ <destination>x.x.x.x</destination><mask>24</mask>
+ </route>
+ </includeroutes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+### How do I block (exclude) routes from the VPN client?
+
+You can modify the downloaded profile XML file and add the **\<excluderoutes>\<route>\<destination> \</destination>\<mask> \</mask>\</route>\</excluderoutes>** tags.
+
+```
+<azvpnprofile>
+<clientconfig>
+
+ <excluderoutes>
+ <route>
+ <destination>x.x.x.x</destination><mask>24</mask>
+ </route>
+ </excluderoutes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+> [!NOTE]
> - The default state of the **clientconfig** tag is `<clientconfig i:nil="true" />`, which you can modify based on your requirements.
> - A duplicate **clientconfig** tag isn't supported on macOS, so make sure the **clientconfig** tag isn't duplicated in the XML file.
+>
+ ## Next steps For more information, see [Create an Azure AD tenant for P2S Open VPN connections that use Azure AD authentication](openvpn-azure-ad-tenant.md).
web-application-firewall Waf Front Door Configure Ip Restriction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-configure-ip-restriction.md
Previously updated : 12/22/2020 Last updated : 11/16/2022
Create an Azure Front Door profile by following the instructions described in [Q
### Create a WAF policy
-1. On the Azure portal, select **Create a resource**, type **Web application firewall** in the search box, and then select **Web Application Firewall (WAF)**.
+1. In the Azure portal, select **Create a resource**. Type **Web application firewall** in the **Search services and marketplace** search box, press **Enter**, and then select **Web Application Firewall (WAF)**.
2. Select **Create**. 3. On the **Create a WAF policy** page, use the following values to complete the **Basics** tab: |Setting |Value | ||| |Policy for |Global WAF (Front Door)|
+ |Front door tier| Select **Premium** or **Standard** to match your Front Door tier|
|Subscription |Select your subscription|
- |Resource group |Select the resource group where your Front Door is.|
+ |Resource group |Select the resource group where your Front Door is located.|
|Policy name |Type a name for your policy|
- |Policy state |Enabled|
+ |Policy state |Selected|
+ |Policy mode|Prevention|
- Select **Next: Policy settings**
+1. Select **Next: Managed rules**.
-1. On the **Policy settings** tab, select **Prevention**. For the **Block response body**, type *You've been blocked!* so you can see that your custom rule is in effect.
-2. Select **Next: Managed rules**.
+1. Select **Next: Policy settings**.
+
+1. On the **Policy settings** tab, type *You've been blocked!* for the **Block response body**, so you can see that your custom rule is in effect.
3. Select **Next: Custom rules**. 4. Select **Add custom rule**. 5. On the **Add custom rule** page, use the following test values to create a custom rule:
Create an Azure Front Door profile by following the instructions described in [Q
Select **Add**. 6. Select **Next: Association**.
-7. Select **Add frontend host**.
-8. For **Frontend host**, select your frontend host and select **Add**.
-9. Select **Review + create**.
-10. After your policy validation passes, select **Create**.
+7. Select **Associate a Front door profile**.
+8. For **Frontend profile**, select your frontend profile.
+1. For **Domain**, select the domain.
+1. Select **Add**.
+1. Select **Review + create**.
+1. After your policy validation passes, select **Create**.
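If you prefer scripting to the portal, the following Azure CLI sketch builds a roughly equivalent policy. The policy name, resource group, rule name, and IP range are illustrative placeholders, and the block response body is the base64 encoding of *You've been blocked!*:

```
# Create a Front Door WAF policy in Prevention mode with a custom block response.
# All names and the IP range are illustrative placeholders.
az network front-door waf-policy create \
  --resource-group myResourceGroup \
  --name IPRestrictionPolicy \
  --sku Premium_AzureFrontDoor \
  --mode Prevention \
  --custom-block-response-status-code 403 \
  --custom-block-response-body "WW91J3ZlIGJlZW4gYmxvY2tlZCE="

# Add a custom rule; --defer holds the rule locally until a match condition is added.
az network front-door waf-policy rule create \
  --resource-group myResourceGroup \
  --policy-name IPRestrictionPolicy \
  --name IPAllowRule \
  --priority 1 \
  --rule-type MatchRule \
  --action Block \
  --defer

# Block any socket address outside the allowed range (--negate inverts the match).
az network front-door waf-policy rule match-condition add \
  --resource-group myResourceGroup \
  --policy-name IPRestrictionPolicy \
  --name IPAllowRule \
  --match-variable SocketAddr \
  --operator IPMatch \
  --negate true \
  --values "192.168.1.0/24"
```

The policy still has to be associated with your Front Door profile and domain, as in the portal steps above.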
### Test your WAF policy
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
You can specify an exact request header, body, cookie, or query string attribute
- **Contains**: This operator matches all request fields that contain the specified selector value. - **Equals any**: This operator matches all request fields. * will be the selector value.
-In all cases matching is case insensitive. Regular expressions aren't allowed as selectors.
+When processing exclusions, the WAF performs a case-sensitive match for all fields other than request header keys. Depending on your application, the names and values of your headers, cookies, and query args can be case sensitive or insensitive. Regular expressions aren't allowed as selectors.
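As an illustration (not this article's own example), an exclusion for header keys can be added with the Azure CLI; the policy name, resource group, and selector below are illustrative placeholders:

```
# Exclude request headers whose key starts with "x-internal-" from managed-rule evaluation.
# Policy name, resource group, and selector are illustrative placeholders.
az network application-gateway waf-policy managed-rule exclusion add \
  --resource-group myResourceGroup \
  --policy-name myWafPolicy \
  --match-variable RequestHeaderNames \
  --selector-match-operator StartsWith \
  --selector "x-internal-"
```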
> [!NOTE] > For more information and troubleshooting help, see [WAF troubleshooting](web-application-firewall-troubleshoot.md).