Updates from: 02/19/2023 02:11:26
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Previously updated : 01/13/2022 Last updated : 02/17/2023
The following SAML authorization request contains the authentication context cla
## Include custom data in the authorization request
-You can optionally include protocol message extension elements that are agreed to by both Azure AD BC and your identity provider. The extension is presented in XML format. You include extension elements by adding XML data inside the CDATA element `<![CDATA[Your Custom XML]]>`. Check your identity provider's documentation to see if the extensions element is supported.
+You can optionally include protocol message extension elements that are agreed to by both Azure AD B2C and your identity provider. The extension is presented in XML format. You include extension elements by adding XML data inside the CDATA element `<![CDATA[Your Custom XML]]>`. Check your identity provider's documentation to see if the extensions element is supported.
The following example illustrates the use of extension data:
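The example itself isn't captured in this change summary. As an illustrative sketch only (the namespace and element names below are placeholders, not values defined by Azure AD B2C or any particular identity provider), custom XML wrapped in a CDATA section might look like this:

```xml
<!-- Illustrative only: placeholder namespace and elements agreed on with the identity provider -->
<![CDATA[
  <ext:ApplicationContext xmlns:ext="https://contoso.com/saml/extensions">
    <ext:CampaignId>spring-promo</ext:CampaignId>
    <ext:LoginHint>alpha</ext:LoginHint>
  </ext:ApplicationContext>
]]>
```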
active-directory Provisioning Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provisioning-workbook.md
+
+ Title: 'Provisioning insights workbook'
+description: This article describes the Azure Monitor workbook for provisioning.
++++++ Last updated : 02/17/2023+++++++
+# Provisioning insights workbook
+The Provisioning workbook provides a flexible canvas for data analysis. This workbook brings together all of the provisioning logs from various sources so you can gain insights in a single place. The workbook allows you to create rich visual reports within the Azure portal. To learn more, see Azure Monitor Workbooks overview.
+
+This workbook is intended for Hybrid Identity Admins who use provisioning to sync users from various data sources to various data repositories. It allows admins to gain insights into sync status and details.
+
+This workbook:
+
+- Provides a synchronization summary of users and groups synchronized from all of your provisioning sources to targets
+- Provides an aggregated and detailed view of information captured by the provisioning logs.
+- Allows you to customize the data to tailor it to your specific needs
+++
+## Enabling provisioning logs
+
+You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](../../azure-monitor/overview.md). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md) and [Provisioning Logs for troubleshooting cloud sync](../cloud-sync/how-to-troubleshoot.md).
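Once the logs are routed to a Log Analytics workspace, a quick query can confirm that provisioning events are arriving before you open the workbook. The sketch below assumes the logs land in the `AADProvisioningLogs` table; verify the table name and columns against your workspace schema:

```kusto
// Count provisioning events per hour for the last 24 hours
// (assumes provisioning logs are sent to the AADProvisioningLogs table)
AADProvisioningLogs
| where TimeGenerated > ago(24h)
| summarize Events = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```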
+
+## Source and Target
+At the top of the workbook, use the drop-down lists to specify the source and target identities.
+
+These fields are the source and target of identities. The rest of the filters that appear are based on the selection of source and target.
+You can use the additional fields to make your search more granular. Use the table below as a reference for queries.
+
+For example, if you wanted to see data from your cloud sync workflow, your source would be Active Directory and your target would be Azure AD.
++
+>[!NOTE]
+>Source and target are required. If you do not select a source and target, you won't see any data.
+++
+|Field|Description|
+|--|--|
+|Source|The provisioning source repository|
+|Target|The provisioning target repository|
+|Time Range|The range of provisioning information you want to view. This can be anywhere from 4 hours to 90 days. You can also set a custom value.|
+|Status|View the provisioning status such as Success or Skipped.|
+|Action|View the provisioning actions taken such as Create or Delete.|
+|App Name|Allows you to filter by the application name. In the case of Active Directory, you can filter by domains.|
+|Job Id|Allows you to target specific Job Ids.|
+|Sync type|Filter by type of synchronization such as object or password.|
+
+>[!NOTE]
+> All of the charts and grids in Sync Summary, Sync Details, and Sync Details by cycle change based on the source, target, and parameter selections.
++
+## Sync Summary
+The sync summary section provides a summary of your organization's synchronization activities. These activities include:
+ - Total synced objects by type
+ - Provisioning events by action
+ - Provisioning events by status
+ - Unique sync count by status
+ - Provisioning success rate
+ - Top provisioning errors
++
+ :::image type="content" source="media/provisioning-workbook/sync-summary-1.png" alt-text="Screenshot of the synchronization summary." lightbox="media/provisioning-workbook/sync-summary-1.png":::
+
+## Sync details
+The sync details tab allows you to drill into the synchronization data and get more information. This information includes:
+ - Objects synced by status
+ - Objects synced by action
+ - Sync log details
+
+ >[!NOTE]
+ >The grid can be filtered on any of the above filters, and you can also click the tiles under **Objects synced by Status** and **Action**.
+
+ :::image type="content" source="media/provisioning-workbook/sync-details-1.png" alt-text="Screenshot of the synchronization details." lightbox="media/provisioning-workbook/sync-details-1.png":::
+
+You can further drill into the sync log details for additional information.
+++
+>[!NOTE]
+>Clicking the Source ID drills deeper and provides more information on the synchronized object.
+
+## Sync details by cycle
+The sync details by cycle tab allows you to view more granular synchronization data. This information includes:
+ - Objects synced by status
+ - Objects synced by action
+ - Sync log details
+
+ :::image type="content" source="media/provisioning-workbook/sync-details-2.png" alt-text="Screenshot of the synchronization details by cycle tab." lightbox="media/provisioning-workbook/sync-details-2.png":::
+
+You can further drill into the sync log details for additional information.
+
+>[!NOTE]
+>The grid can be filtered on any of the above filters, and you can also click the tiles under **Objects synced by Status** and **Action**.
+
+## Single user view
+The user provisioning view tab allows you to get synchronization data on individual users.
+
+>[!NOTE]
+>This section does not involve using source and target.
+
+In this section, you enter a time range and select a specific user to see which applications a user has been provisioned or deprovisioned in.
+
+Once you select a time range, the workbook filters for users that have events in that time range.
++
+To target a specific user, add one of the following parameters for that user:
+ - UPN
+ - UserID
+
+
+## Details
+By clicking the Source ID in the **Sync details** or the **Sync details by cycle** views, you can see additional information on the synchronized object.
++
+## Custom queries
+
+You can create custom queries and show the data on Azure dashboards. To learn how, see [Create and share dashboards of Log Analytics data](../../azure-monitor/logs/get-started-queries.md). Also, be sure to check out [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
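As a starting point, the following sketch summarizes provisioning outcomes by status for the last seven days (it assumes the `AADProvisioningLogs` table and a `ResultType` column; adjust names to match your workspace schema):

```kusto
// Daily provisioning outcomes by status, suitable for pinning to an Azure dashboard
AADProvisioningLogs
| where TimeGenerated > ago(7d)
| summarize Count = count() by ResultType, bin(TimeGenerated, 1d)
| render columnchart
```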
+
+## Custom alerts
+
+Azure Monitor lets you configure custom alerts so that you can get notified about key events related to provisioning. For example, you might want to receive an alert on spikes in failures, disables, or deletes. Another useful alert is a complete lack of provisioning activity, which can indicate that something is wrong.
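A log alert rule built on a query like the following sketch could cover the failure-spike case (again assuming the `AADProvisioningLogs` table and a `ResultType` column that reports `Failure`; set the alert threshold to whatever failure count is abnormal for your environment):

```kusto
// Number of failed provisioning events in the last hour; alert when this exceeds your threshold
AADProvisioningLogs
| where TimeGenerated > ago(1h)
| where ResultType == "Failure"
| summarize Failures = count()
```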
+
+To learn more about alerts, see [Azure Monitor Log Alerts](../../azure-monitor/alerts/alerts-log.md).
+
+## Next steps
+
+- [What is provisioning?](../cloud-sync/what-is-provisioning.md)
+- [Error codes](../cloud-sync/reference-error-codes.md)
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 02/16/2023 Last updated : 02/17/2023
To help automate provisioning and deprovisioning, apps expose proprietary user a
To address these challenges, the System for Cross-domain Identity Management (SCIM) specification provides a common user schema to help users move into, out of, and around apps. SCIM is becoming the de facto standard for provisioning and, when used with federation standards like Security Assertions Markup Language (SAML) or OpenID Connect (OIDC), provides administrators an end-to-end standards-based solution for access management.
-For detailed guidance on developing a SCIM endpoint to automate the provisioning and deprovisioning of users and groups to an application, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md). For pre-integrated applications in the gallery, such as Slack, Azure Databricks, and Snowflake, you can skip the developer documentation and use the tutorials provided in [Tutorials for integrating SaaS applications with Azure Active Directory](../../active-directory/saas-apps/tutorial-list.md).
+For detailed guidance on developing a SCIM endpoint to automate the provisioning and deprovisioning of users and groups to an application, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md). Many applications integrate directly with Azure Active Directory. Some examples include Slack, Azure Databricks, and Snowflake. For these apps, skip the developer documentation and use the tutorials provided in [Tutorials for integrating SaaS applications with Azure Active Directory](../../active-directory/saas-apps/tutorial-list.md).
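For illustration, a SCIM-compliant endpoint accepts user objects shaped by the SCIM core schema, so a create-user request body looks roughly like the following sketch (attribute values are placeholders; the exact attributes your app needs depend on its attribute mappings):

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "kim.akers@contoso.com",
  "name": { "givenName": "Kim", "familyName": "Akers" },
  "emails": [ { "value": "kim.akers@contoso.com", "type": "work", "primary": true } ],
  "active": true
}
```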
## Manual vs. automatic provisioning Applications in the Azure AD gallery support one of two provisioning modes:
-* **Manual** provisioning means there's no automatic Azure AD provisioning connector for the app yet. User accounts must be created manually. Examples are adding users directly into the app's administrative portal or uploading a spreadsheet with user account detail. Consult the documentation provided by the app, or contact the app developer to determine what mechanisms are available.
-* **Automatic** means that an Azure AD provisioning connector has been developed for this application. Follow the setup tutorial specific to setting up provisioning for the application. App tutorials can be found in [Tutorials for integrating SaaS applications with Azure Active Directory](../../active-directory/saas-apps/tutorial-list.md).
+* **Manual** provisioning means there's no automatic Azure AD provisioning connector for the app yet. You must create user accounts manually. Examples are adding users directly into the app's administrative portal or uploading a spreadsheet with user account detail. Consult the documentation provided by the app, or contact the app developer to determine what mechanisms are available.
+* **Automatic** means that an Azure AD provisioning connector is available for this application. Follow the setup tutorial specific to setting up provisioning for the application. Find the app tutorials at [Tutorials for integrating SaaS applications with Azure Active Directory](../../active-directory/saas-apps/tutorial-list.md).
The provisioning mode supported by an application is also visible on the **Provisioning** tab after you've added the application to your enterprise apps. ## Benefits of automatic provisioning
-The number of applications used in modern organizations continues to grow. IT admins are tasked with access management at scale. Admins use standards such as SAML or OIDC for single sign-on (SSO), but access also requires users to be provisioned into the app. To many admins, provisioning means manually creating every user account or uploading CSV files each week. These processes are time-consuming, expensive, and error prone. Solutions such as SAML just-in-time (JIT) have been adopted to automate provisioning. Enterprises also need a solution to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
+The number of applications used in modern organizations continues to grow. IT admins must manage access at scale. Admins use standards such as SAML or OIDC for single sign-on (SSO), but access also requires users to be provisioned into the app. To many admins, provisioning means manually creating every user account or uploading CSV files each week. These processes are time-consuming, expensive, and error prone. Solutions such as SAML just-in-time (JIT) have been adopted to automate provisioning. Enterprises also need a solution to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
Some common motivations for using automatic provisioning include:
Azure AD features pre-integrated support for many popular SaaS apps and human re
## How do I set up automatic provisioning to an application?
-For pre-integrated applications listed in the gallery, step-by-step guidance is available for setting up automatic provisioning. See [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). The following video demonstrates how to set up automatic user provisioning for SalesForce.
+For pre-integrated applications listed in the gallery, step-by-step guidance for setting up automatic provisioning is available in [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). The following video shows you how to set up automatic user provisioning for Salesforce.
> [!VIDEO https://www.youtube.com/embed/pKzyts6kfrw]
active-directory Concept Certificate Based Authentication Smartcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-smartcard.md
Some customers may maintain different and sometimes may have non-routable UPN va
>[!NOTE] >In all cases, a user supplied username login hint (X509UserNameHint) will be sent if provided. For more information, see [User Name Hint](/windows/security/identity-protection/smart-cards/smart-card-group-policy-and-registry-settings#allow-user-name-hint)
+>[!IMPORTANT]
+> If a user supplies a username login hint (X509UserNameHint), the value provided **MUST** be in UPN format.
+ For more information about the Windows flow, see [Certificate Requirements and Enumeration (Windows)](/windows/security/identity-protection/smart-cards/smart-card-certificate-requirements-and-enumeration). ## Supported Windows platforms
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Smart lockout tracks the last three bad password hashes to avoid incrementing th
> [!NOTE] > Hash tracking functionality isn't available for customers with pass-through authentication enabled as authentication happens on-premises not in the cloud.
-Federated deployments that use AD FS 2016 and AD FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection).
+Federated deployments that use AD FS 2016 and AD FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). It is recommended to move to [managed authentication](https://www.microsoft.com/security/business/identity-access/upgrade-adfs).
Smart lockout is always on, for all Azure AD customers, with these default settings that offer the right mix of security and usability. Customization of the smart lockout settings, with values specific to your organization, requires Azure AD Premium P1 or higher licenses for your users. Using smart lockout doesn't guarantee that a genuine user is never locked out. When smart lockout locks a user account, we try our best to not lock out the genuine user. The lockout service attempts to ensure that bad actors can't gain access to a genuine user account. The following considerations apply:
-* Lockout state across Azure AD data centers are synchronized. The total number of failed sign-in attempts allowed before an account is locked out will also match the configured lockout threshold though there still may be some slight variance before a lockout. Once an account is locked out, they will be locked out everywhere across all Azure AD data centers.
-* Smart Lockout uses familiar location vs unfamiliar location to differentiate between a bad actor and the genuine user. Unfamiliar and familiar locations both have separate lockout counters.
+* Lockout state across Azure AD data centers is synchronized. However, the total number of failed sign-in attempts allowed before an account is locked out will have slight variance from the configured lockout threshold. Once an account is locked out, it will be locked out everywhere across all Azure AD data centers.
+* Smart Lockout uses familiar location vs unfamiliar location to differentiate between a bad actor and the genuine user. Both unfamiliar and familiar locations have separate lockout counters.
Smart lockout can be integrated with hybrid deployments that use password hash sync or pass-through authentication to protect on-premises Active Directory Domain Services (AD DS) accounts from being locked out by attackers. By setting smart lockout policies in Azure AD appropriately, attacks can be filtered out before they reach on-premises AD DS.
Based on your organizational requirements, you can customize the Azure AD smart
To check or modify the smart lockout values for your organization, complete the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Entra portal](https://entra.microsoft.com/#home).
1. Search for and select *Azure Active Directory*, then select **Security** > **Authentication methods** > **Password protection**. 1. Set the **Lockout threshold**, based on how many failed sign-ins are allowed on an account before its first lockout.
When the smart lockout threshold is triggered, you will get the following messag
*Your account is temporarily locked to prevent unauthorized use. Try again later, and if you still have trouble, contact your admin.*
-When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more than your defined lockout threshold number of attempts to cause a lockout. A user has a maximum of (*threshold_limit * datacenter_count*) number of bad attempts before being completely locked out.
+When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service.
Smart lockout tracks the last three bad password hashes to avoid incrementing the lockout counter for the same password. If someone enters the same bad password multiple times, this behavior won't cause the account to lock out.
In addition to Smart lockout, Azure AD also protects against attacks by analyzin
## Next steps
-To customize the experience further, you can [configure custom banned passwords for Azure AD password protection](tutorial-configure-custom-password-protection.md).
+- To customize the experience further, you can [configure custom banned passwords for Azure AD password protection](tutorial-configure-custom-password-protection.md).
-To help users reset or change their password from a web browser, you can [configure Azure AD self-service password reset](tutorial-enable-sspr.md).
+- To help users reset or change their password from a web browser, you can [configure Azure AD self-service password reset](tutorial-enable-sspr.md).
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Deploying the configuration change to enable SSPR from the login screen using Mi
1. Sign in to the [Azure portal](https://portal.azure.com) and select **Endpoint Manager**. 1. Create a new device configuration profile by going to **Device configuration** > **Profiles**, then select **+ Create Profile**
- - For **Platform** choose *Windows 11 and later*
- - For **Profile type**, choose *Custom*
+ - For **Platform** choose *Windows 10 and later*
+ - For **Profile type**, choose **Templates**, then select the **Custom** template
1. Select **Create**, then provide a meaningful name for the profile, such as *Windows 11 sign-in screen SSPR*. Optionally, provide a meaningful description of the profile, then select **Next**.
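The remaining steps of this procedure aren't shown in this change summary. For reference, the custom OMA-URI setting this profile typically carries is the `AllowAadPasswordReset` policy; confirm the exact values against the current article before deploying:

```
OMA-URI:    ./Vendor/MSFT/Policy/Config/Authentication/AllowAadPasswordReset
Data type:  Integer
Value:      1
```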
active-directory Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-autopilot.md
Previously updated : 02/23/2022 Last updated : 02/16/2023 # View rules in the Autopilot dashboard
-The **Autopilot** dashboard in Permissions Management provides a table of information about **Autopilot rules** for administrators.
+The **Autopilot** dashboard in Permissions Management provides a table of information about Autopilot rules for administrators. Creating Autopilot rules allows you to automate right-sizing policies so you can automatically remove unused roles and permissions assigned to identities in your authorization system.
> [!NOTE]
The **Autopilot** dashboard in Permissions Management provides a table of inform
The following information displays in the **Autopilot Rules** table: - **Rule Name**: The name of the rule.
- - **State**: The status of the rule: idle (not being use) or active (being used).
- - **Rule Type**: The type of rule being applied.
+ - **State**: The status of the rule: idle (not in use) or active (in use).
+ - **Rule Type**: The type of rule that's applied.
- **Mode**: The status of the mode: on-demand or not. - **Last Generated**: The date and time the rule was last generated. - **Created By**: The email address of the user who created the rule. - **Last Modified**: The date and time the rule was last modified.
- - **Subscription**: Provides an **On** or **Off** subscription that allows you to receive email notifications when recommendations have been generated, applied, or unapplied.
+ - **Subscription**: Provides an **On** or **Off** subscription that allows you to receive email notifications when recommendations are generated, applied, or unapplied.
## View other available options for rules
The **Autopilot** dashboard in Permissions Management provides a table of inform
- **Delete Rule**: Select to delete the rule. Only the user who created the selected rule can delete the rule. - **Generate Recommendations**: Creates recommendations for each user and the authorization system. Only the user who created the selected rule can create recommendations. - **View Recommendations**: Displays the recommendations for each user and authorization system.
- - **Notification Settings**: Displays the users subscribed to this rule. Only the user who created the selected rule can add other users to be notified.
+ - **Notification Settings**: Displays the users subscribed to this rule. Only the user who created the selected rule can add other users to receive notifications.
You can also select:
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-cloud-sync.md
The following table provides a comparison between Azure AD Connect and Azure AD
| Allow basic customization for attribute flows |● |● | | Synchronize Exchange online attributes |● |● | | Synchronize extension attributes 1-15 |● |● |
-| Synchronize customer defined AD attributes (directory extensions) |● | |
+| Synchronize customer defined AD attributes (directory extensions) |● |●|
| Support for Password Hash Sync |●|●| | Support for Pass-Through Authentication |●|| | Support for federation |●|●|
active-directory App Sign In Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-sign-in-flow.md
Previously updated : 05/18/2020 Last updated : 02/17/2023
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
Previously updated : 11/02/2022 Last updated : 02/17/2023
active-directory Howto Modify Supported Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-modify-supported-accounts.md
Previously updated : 11/02/2022 Last updated : 02/17/2023
active-directory Quickstart Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-create-new-tenant.md
Previously updated : 09/08/2022 Last updated : 02/17/2023
To check the tenant:
> [!TIP] > To find the tenant ID, you can: > * Hover over your account name to get the directory or tenant ID.
-> * Search and select **Azure Active Directory** > **Properties** > **Tenant ID** in the Azure portal.
+> * Search and select **Azure Active Directory** > **Overview** > **Tenant ID** in the Azure portal.
If you don't have a tenant associated with your account, you'll see a GUID under your account name. You won't be able to do actions like registering apps until you create an Azure AD tenant.
active-directory Single And Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-and-multi-tenant-apps.md
Previously updated : 11/02/2022 Last updated : 02/17/2023
active-directory Workload Identity Federation Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-considerations.md
Previously updated : 02/15/2022 Last updated : 02/17/2023
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
When configuring a Conditional Access policy, you have granular control over the
Learn more about [Conditional Access user assignments](../conditional-access/concept-conditional-access-users-groups.md).
+### Comparing External Identities Conditional Access policies
+
+The following table gives a detailed comparison of the security policy and compliance options in Azure AD External Identities. Security policy and compliance are managed by the host/inviting organization under Conditional Access policies.
+
+|**Policy** |**B2B collaboration users** |**B2B direct connect users**|
+| :--- | :--- | :--- |
+|**Grant controls - Block access** | Supported | Supported |
+|**Grant controls - Require multifactor authentication** | Supported | Supported, requires configuring your [inbound trust settings](cross-tenant-access-settings-b2b-direct-connect.md#to-change-inbound-trust-settings-for-mfa-and-device-state) to accept MFA claims from the external organization |
+|**Grant controls - Require compliant device** | Supported, requires configuring your [inbound trust settings](cross-tenant-access-settings-b2b-collaboration.md#to-change-inbound-trust-settings-for-mfa-and-device-claims) to accept compliant device claims from the external organization. | Supported, requires configuring your [inbound trust settings](cross-tenant-access-settings-b2b-direct-connect.md#to-change-inbound-trust-settings-for-mfa-and-device-state) to accept compliant device claims from the external organization. |
+|**Grant controls - Require Hybrid Azure AD joined device** | Supported, requires configuring your [inbound trust settings](cross-tenant-access-settings-b2b-collaboration.md#to-change-inbound-trust-settings-for-mfa-and-device-claims) to accept hybrid Azure AD joined device claims from the external organization | Supported, requires configuring your [inbound trust settings](cross-tenant-access-settings-b2b-direct-connect.md#to-change-inbound-trust-settings-for-mfa-and-device-state) to accept hybrid Azure AD joined device claims from the external organization |
+|**Grant controls - Require approved client app** | Not supported | Not supported |
+|**Grant controls - Require app protection policy** | Not supported | Not supported |
+|**Grant controls - Require password change** | Not supported | Not supported |
+|**Grant controls - Terms of Use** | Supported | Not supported |
+|**Session controls - Use app enforced restrictions** | Supported | Not supported |
+|**Session controls - Use Conditional Access App control** | Supported | Not supported |
+|**Session controls - Sign-in frequency** | Supported | Not supported |
+|**Session controls - Persistent browser session** | Supported | Not supported |
+ ### MFA for Azure AD external users In an Azure AD cross-tenant scenario, the resource organization can create Conditional Access policies that require MFA or device compliance for all guest and external users. Generally, a B2B collaboration user accessing a resource is then required to set up their Azure AD MFA with the resource tenant. However, Azure AD now offers the ability to trust MFA claims from other Azure AD tenants. Enabling MFA trust with another tenant streamlines the sign-in process for B2B collaboration users and enables access for B2B direct connect users.
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
Previously updated : 05/10/2022 Last updated : 02/16/2023
With Azure AD [B2B collaboration](what-is-b2b.md), you can invite anyone to collaborate with your organization using their own work, school, or social account.
-In this quickstart, you'll learn how to add a new guest user to your Azure AD directory in the Azure portal. You'll also send an invitation and see what the guest user's invitation redemption process looks like. In addition to this quickstart, you can learn more about adding guest users [in the Azure portal](add-users-administrator.md), via [PowerShell](b2b-quickstart-invite-powershell.md), or [in bulk](tutorial-bulk-invite.md).
+In this quickstart, you'll learn how to add a new guest user to your Azure AD directory in the Azure portal. You'll also send an invitation and see what the guest user's invitation redemption process looks like.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
When no longer needed, delete the test guest user.
## Next steps
-In this quickstart, you created a guest user in the Azure portal and sent an invitation to share apps. Then you viewed the redemption process from the guest user's perspective and verified that the guest user was able to access their My Apps page. To learn more about adding guest users for collaboration, see [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md).
+In this quickstart, you created a guest user in the Azure portal and sent an invitation to share apps. Then you viewed the redemption process from the guest user's perspective, and verified that the guest user was able to access their My Apps page.
+To learn more about adding guest users for collaboration, see [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md).
+To learn more about adding guest users with PowerShell, see [Add and invite guests with PowerShell](b2b-quickstart-invite-powershell.md).
+You can also bulk invite guest users [via the portal](tutorial-bulk-invite.md) or [via PowerShell](bulk-invite-powershell.md).
active-directory External Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md
The following table gives a detailed comparison of the scenarios you can enable
| **Identity providers supported** | External users can collaborate using work accounts, school accounts, any email address, SAML and WS-Fed based identity providers, and social identity providers like Gmail and Facebook. | External users collaborate using Azure AD work accounts or school accounts. | Consumer users with local application accounts (any email address, user name, or phone number), Azure AD, various supported social identities, and users with corporate and government-issued identities via SAML/WS-Fed-based identity provider federation. | | **Single sign-on (SSO)** | SSO to all Azure AD-connected apps is supported. For example, you can provide access to Microsoft 365 or on-premises apps, and to other SaaS apps such as Salesforce or Workday. | SSO to a Teams shared channel. | SSO to customer owned apps within the Azure AD B2C tenants is supported. SSO to Microsoft 365 or to other Microsoft SaaS apps isn't supported. | | **Licensing and billing** | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for B2B](external-identities-pricing.md). | Based on monthly active users (MAU), including B2B collaboration, B2B direct connect, and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for B2B](external-identities-pricing.md). | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for Azure AD B2C](../../active-directory-b2c/billing.md). |
-| **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). See also the [Teams documentation](/microsoftteams/security-compliance-overview). | Managed by the organization via Conditional Access and Identity Protection. |
+| **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). See also the [Teams documentation](/microsoftteams/security-compliance-overview). | Managed by the organization via [Conditional Access and Identity Protection](../../active-directory-b2c/conditional-access-identity-protection-overview.md). |
+| **Multi-factor Authentication (MFA)** | If inbound trust settings to accept MFA claims from the user's home tenant are configured, and MFA policies have already been met in the user's home tenant, the external user can sign in. If MFA trust isn't enabled, the user is presented with an MFA challenge from the resource organization. [Learn more](authentication-conditional-access.md#mfa-for-azure-ad-external-users) about MFA for Azure AD external users. | If inbound trust settings to accept MFA claims from the user's home tenant are configured, and MFA policies have already been met in the user's home tenant, the external user can sign in. If MFA trust isn't enabled, and Conditional Access policies require MFA, the user is blocked from accessing resources. You *must* configure your inbound trust settings to accept MFA claims from the organization. [Learn more](authentication-conditional-access.md#mfa-for-azure-ad-external-users) about MFA for Azure AD external users. | [Integrates directly](../../active-directory-b2c/multi-factor-authentication.md) with Azure AD Multi-Factor Authentication. |
+| **Microsoft cloud settings** | [Supported.](cross-cloud-settings.md) | [Not supported.](cross-cloud-settings.md) | Not applicable. |
+| **Entitlement management** | [Supported.](../governance/entitlement-management-overview.md) | Not supported. | Not applicable. |
+| **Line-of-business (LOB) apps** | Supported. | Not supported. Only B2B direct connect-enabled apps can be shared (currently, Teams Connect shared channels). | Works with [RESTful API](../../active-directory-b2c/technical-overview.md#add-your-own-business-logic-and-call-restful-apis). |
+| **Conditional Access** | Managed by the host/inviting organization. [Learn more](authentication-conditional-access.md) about Conditional Access policies. | Managed by the host/inviting organization. [Learn more](authentication-conditional-access.md) about Conditional Access policies. | Managed by the organization via [Conditional Access and Identity Protection](../../active-directory-b2c/conditional-access-identity-protection-overview.md). |
| **Branding** | Host/inviting organization's brand is used. | For sign-in screens, the userΓÇÖs home organization brand is used. In the shared channel, the resource organization's brand is used. | Fully customizable branding per application or organization. | | **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Documentation](b2b-direct-connect-overview.md) | [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) |
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
A user who has a guest account can't sign in, and is receiving the following err
The user has an Azure user account and is a viral tenant who has been abandoned or unmanaged. Additionally, there are no Global Administrators in the tenant.
-To resolve this problem, you must take over the abandoned tenant. Refer to [Take over an unmanaged directory as administrator in Azure Active Directory](../enterprise-users/domains-admin-takeover.md). You must also access the internet-facing DNS for the domain suffix in question in order to provide direct evidence that you are in control of the namespace. After the tenant is returned to a managed state, please discuss with the customer whether leaving the users and verified domain name is the best option for their organization.
+To resolve this problem, you must take over the abandoned tenant. Refer to [Take over an unmanaged directory as administrator in Azure Active Directory](../enterprise-users/domains-admin-takeover.md). You must also access the internet-facing DNS for the domain suffix in question in order to provide direct evidence that you are in control of the namespace. After the tenant is returned to a managed state, discuss with the customer whether leaving the users and verified domain name is the best option for their organization.
## A guest user with a just-in-time or "viral" tenant is unable to reset their password
If you need to collaborate with an Azure AD organization that's outside of the A
## Invitation is blocked due to cross-tenant access policies
-When you try to invite a B2B collaboration user in another Microsoft Azure cloud, this error message will appear if B2B collaboration is supported between the two clouds but is blocked by cross-tenant access settings. The settings that are blocking collaboration could be either in the B2B collaboration userΓÇÖs home tenant or in your tenant. Check your cross-tenant access settings to make sure youΓÇÖve added the B2B collaboration userΓÇÖs home tenant to your Organizational settings and that your settings allow B2B collaboration with the user. Then make sure an admin in the userΓÇÖs tenant does the same.
+When you try to invite a B2B collaboration user, you might see this error message: "This invitation is blocked by cross-tenant access settings. Admins in both your organization and the invited user's organization must configure cross-tenant access settings to allow the invitation." This error message appears when B2B collaboration is supported but is blocked by cross-tenant access settings. Check your cross-tenant access settings, and make sure that your settings allow B2B collaboration with the user.
+When you try to collaborate with another Azure AD organization in a separate Microsoft Azure cloud, you can use [Microsoft cloud settings](cross-cloud-settings.md) to enable Azure AD B2B collaboration.
## Invitation is blocked due to disabled Microsoft B2B Cross Cloud Worker application
-Rarely, you might see this message: ΓÇ£This action can't be completed because the Microsoft B2B Cross Cloud Worker application has been disabled in the invited userΓÇÖs tenant. Please ask the invited userΓÇÖs admin to re-enable it, then try again.ΓÇ¥ This error means that the Microsoft B2B Cross Cloud Worker application has been disabled in the B2B collaboration userΓÇÖs home tenant. This app is typically enabled, but it might have been disabled by an admin in the userΓÇÖs home tenant, either through PowerShell or the portal (see [Disable how a user signs in](../manage-apps/disable-user-sign-in-portal.md)). An admin in the userΓÇÖs home tenant can re-enable the app through PowerShell or the Azure portal. In the portal, search for ΓÇ£Microsoft B2B Cross Cloud WorkerΓÇ¥ to find the app, select it, and then choose to re-enable it.
-
-## Redemption is blocked due to cross-tenant access settings
-
-A B2B collaboration user could see this message when they try to redeem a B2B collaboration invitation: ΓÇ£This invitation is blocked by cross-tenant access settings. Admins in both your organization and the inviterΓÇÖs organization must configure cross-tenant access settings to allow the invitation.ΓÇ¥ This error can occur when cross-tenant policies are changed between the time the invitation was sent to the user and the time the user redeems it. Check your cross-tenant access settings to make sure B2B collaboration is properly configured, and make sure an admin in the userΓÇÖs tenant does the same.
+Rarely, you might see this message: ΓÇ£This action can't be completed because the Microsoft B2B Cross Cloud Worker application has been disabled in the invited userΓÇÖs tenant. Ask the invited userΓÇÖs admin to re-enable it, then try again.ΓÇ¥ This error means that the Microsoft B2B Cross Cloud Worker application has been disabled in the B2B collaboration userΓÇÖs home tenant. This app is typically enabled, but it might have been disabled by an admin in the userΓÇÖs home tenant, either through PowerShell or the portal (see [Disable how a user signs in](../manage-apps/disable-user-sign-in-portal.md)). An admin in the userΓÇÖs home tenant can re-enable the app through PowerShell or the Azure portal. In the portal, search for ΓÇ£Microsoft B2B Cross Cloud WorkerΓÇ¥ to find the app, select it, and then choose to re-enable it.
## I receive the error that Azure AD can't find the aad-extensions-app in my tenant
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
For delegated scenarios, the admin needs one of the following [Azure AD roles](/
- Global reader - Lifecycle workflows administrator
-## Restrictions
+## Limits
-|Column1 |Limit |
+|Category |Limit |
||| |Number of Workflows | 50 per tenant | |Number of Tasks | 25 per workflow |
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
If you have already installed Azure AD Connect by using the [express installatio
Follow these instructions to verify that you have enabled Pass-through Authentication correctly:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with the Hybrid Identity Administrator credentials for your tenant.
-2. Select **Azure Active Directory** in the left pane.
+1. Sign in to the [Entra admin center](https://entra.microsoft.com) with the Hybrid Identity Administrator credentials for your tenant.
+2. Select **Azure Active Directory**.
3. Select **Azure AD Connect**. 4. Verify that the **Pass-through authentication** feature appears as **Enabled**. 5. Select **Pass-through authentication**. The **Pass-through authentication** pane lists the servers where your Authentication Agents are installed.
-![Azure Active Directory admin center: Azure AD Connect pane](./media/how-to-connect-pta-quick-start/pta7.png)
+ ![Screenshot shows Entra admin center: Azure AD Connect pane.](./media/how-to-connect-pta-quick-start/azure-ad-connect-blade.png)
-![Azure Active Directory admin center: Pass-through Authentication pane](./media/how-to-connect-pta-quick-start/pta8.png)
+ ![Screenshot shows Entra admin center: Pass-through Authentication pane.](./media/how-to-connect-pta-quick-start/pta-server-list.png)
At this stage, users from all the managed domains in your tenant can sign in by using Pass-through Authentication. However, users from federated domains continue to sign in by using AD FS or another federation provider that you have previously configured. If you convert a domain from federated to managed, all users from that domain automatically start signing in by using Pass-through Authentication. The Pass-through Authentication feature does not affect cloud-only users.
For most customers, three Authentication Agents in total are sufficient for high
To begin, follow these instructions to download the Authentication Agent software:
-1. To download the latest version of the Authentication Agent (version 1.5.193.0 or later), sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with your tenant's Hybrid Identity Administrator credentials.
-2. Select **Azure Active Directory** in the left pane.
+1. To download the latest version of the Authentication Agent (version 1.5.193.0 or later), sign in to the [Entra admin center](https://entra.microsoft.com) with your tenant's Hybrid Identity Administrator credentials.
+2. Select **Azure Active Directory**.
3. Select **Azure AD Connect**, select **Pass-through authentication**, and then select **Download Agent**. 4. Select the **Accept terms & download** button.
-![Azure Active Directory admin center: Download Authentication Agent button](./media/how-to-connect-pta-quick-start/pta9.png)
-
-![Azure Active Directory admin center: Download Agent pane](./media/how-to-connect-pta-quick-start/pta10.png)
+ [![Screenshot shows Entra admin center: Download Authentication Agent button.](./media/how-to-connect-pta-quick-start/download-agent.png)](./media/how-to-connect-pta-quick-start/download-agent.png#lightbox)
>[!NOTE] >You can also directly [download the Authentication Agent software](https://aka.ms/getauthagent). Review and accept the Authentication Agent's [Terms of Service](https://aka.ms/authagenteula) _before_ installing it.
active-directory How To Connect Pta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta.md
This feature is an alternative to [Azure AD Password Hash Synchronization](how-t
![Azure AD Pass-through Authentication](./media/how-to-connect-pta/pta1.png)
-You can combine Pass-through Authentication with the [Seamless Single Sign-On](how-to-connect-sso.md) feature. If you have Windows 10 or later machines, use [Hybrid Azure AD Join (AADJ)](../devices/howto-hybrid-azure-ad-join.md). This way, when your users are accessing applications on their corporate machines inside your corporate network, they don't need to type in their passwords to sign in.
+You can combine Pass-through Authentication with the [Seamless single sign-on](how-to-connect-sso.md) feature. If you have Windows 10 or later machines, use [Hybrid Azure AD Join (AADJ)](../devices/howto-hybrid-azure-ad-join.md). This way, when your users are accessing applications on their corporate machines inside your corporate network, they don't need to type in their passwords to sign in.
## Key benefits of using Azure AD Pass-through Authentication
You can combine Pass-through Authentication with the [Seamless Single Sign-On](h
## Next steps - [Quickstart](how-to-connect-pta-quick-start.md) - Get up and running Azure AD Pass-through Authentication.-- [Migrate from AD FS to Pass-through Authentication](https://github.com/Identity-Deployment-Guides/Identity-Deployment-Guides/blob/master/Authentication/Migrating%20from%20Federated%20Authentication%20to%20Pass-through%20Authentication.docx?raw=true) - A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
+- [Migrate your apps to Azure AD](../manage-apps/migration-resources.md): Resources to help you migrate application access and authentication to Azure AD.
- [Smart Lockout](../authentication/howto-password-smart-lockout.md) - Configure Smart Lockout capability on your tenant to protect user accounts. - [Hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources. - [Current limitations](how-to-connect-pta-current-limitations.md) - Learn which scenarios are supported and which ones are not.
active-directory Tshoot Connect Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-pass-through-authentication.md
This article helps you find troubleshooting information about common issues rega
### Check status of the feature and Authentication Agents
-Ensure that the Pass-through Authentication feature is still **Enabled** on your tenant and the status of Authentication Agents shows **Active**, and not **Inactive**. You can check status by going to the **Azure AD Connect** blade on the [Azure Active Directory admin center](https://aad.portal.azure.com/).
+Ensure that the Pass-through Authentication feature is still **Enabled** on your tenant and the status of Authentication Agents shows **Active**, and not **Inactive**. You can check status by going to the **Azure AD Connect** blade on the [Entra admin center](https://entra.microsoft.com/).
-![Azure Active Directory admin center - Azure AD Connect blade](./media/tshoot-connect-pass-through-authentication/pta7.png)
+![Screenshot shows Entra admin center - Azure AD Connect blade.](./media/tshoot-connect-pass-through-authentication/azure-ad-connect-blade.png)
-![Azure Active Directory admin center - Pass-through Authentication blade](./media/tshoot-connect-pass-through-authentication/pta11.png)
+![Screenshot shows Entra admin center - Pass-through Authentication blade.](./media/tshoot-connect-pass-through-authentication/pta-server-list.png)
### User-facing sign-in error messages
If you get the same username/password error, this means that the Pass-through Au
### Sign-in failure reasons on the Azure Active Directory admin center (needs Premium license)
-If your tenant has an Azure AD Premium license associated with it, you can also look at the [sign-in activity report](../reports-monitoring/concept-sign-ins.md) on the [Azure Active Directory admin center](https://aad.portal.azure.com/).
+If your tenant has an Azure AD Premium license associated with it, you can also look at the [sign-in activity report](../reports-monitoring/concept-sign-ins.md) on the [Entra admin center](https://entra.microsoft.com/).
-![Azure Active Directory admin center - Sign-ins report](./media/tshoot-connect-pass-through-authentication/pta4.png)
+[![Screenshot shows Entra admin center - Sign-ins report.](./media/tshoot-connect-pass-through-authentication/sign-in-report.png)](./media/tshoot-connect-pass-through-authentication/sign-in-report.png#lightbox)
Navigate to **Azure Active Directory** -> **Sign-ins** on the [Azure Active Directory admin center](https://aad.portal.azure.com/) and click a specific user's sign-in activity. Look for the **SIGN-IN ERROR CODE** field. Map the value of that field to a failure reason and resolution using the following table:
active-directory Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migration-resources.md
Resources to help you migrate application access and authentication to Azure Act
| [Deployment plan: Migrating from AD FS to password hash sync](https://aka.ms/ADFSTOPHSDPDownload) | With password hash synchronization, hashes of user passwords are synchronized from on-premises Active Directory to Azure AD. This allows Azure AD to authenticate users without interacting with the on-premises Active Directory.| | [Deployment plan: Migrating from AD FS to pass-through authentication](https://aka.ms/ADFSTOPTADPDownload)|Azure AD pass-through authentication helps users sign in to both on-premises and cloud-based applications by using the same password. This feature provides your users with a better experience since they have one less password to remember. It also reduces IT helpdesk costs because users are less likely to forget how to sign in when they only need to remember one password. When people sign in using Azure AD, this feature validates users' passwords directly against your on-premises Active Directory.| | [Deployment plan: Enabling single sign-on to a SaaS app with Azure AD](https://aka.ms/SSODPDownload) | Single sign-on (SSO) helps you access all the apps and resources you need to do business, while signing in only once, using a single user account. For example, after a user has signed in, the user can move from Microsoft Office, to SalesForce, to Box without authenticating (for example, typing a password) a second time.
-| [Deployment plan: Extending apps to Azure AD with Application Proxy](https://aka.ms/AppProxyDPDownload)| Providing access from employee laptops and other devices to on-premises applications has traditionally involved virtual private networks (VPNs) or demilitarized zones (DMZs). Not only are these solutions complex and hard to make secure, but they are costly to set up and manage. Azure AD Application Proxy makes it easier to access on-premises applications. |
+| [Deployment plan: Extending apps to Azure AD with Application Proxy](../app-proxy/application-proxy-deployment-plan.md)| Providing access from employee laptops and other devices to on-premises applications has traditionally involved virtual private networks (VPNs) or demilitarized zones (DMZs). Not only are these solutions complex and hard to make secure, but they are costly to set up and manage. Azure AD Application Proxy makes it easier to access on-premises applications. |
| [Deployment plans](../fundamentals/active-directory-deployment-plans.md) | Find more deployment plans for deploying features such as Azure AD multi-factor authentication, Conditional Access, user provisioning, seamless SSO, self-service password reset, and more! | | [Migrating apps from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/mediahandler/files/resourcefiles/migrating-applications-from-symantec-siteminder-to-azure-active-directory/Migrating-applications-from-Symantec-SiteMinder-to-Azure-Active-Directory.pdf) | Get step by step guidance on application migration and integration options with an example that walks you through migrating applications from Symantec SiteMinder to Azure AD. | | [Identity governance for applications](../governance/identity-governance-applications-prepare.md)| This guide outlines what you need to do if you're migrating identity governance for an application from a previous identity governance technology, to connect Azure AD to that application.|
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This administrator manages federation between Azure AD organizations and externa
## Global Administrator
-Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators.
+Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators. A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has zero Global Administrators.
> [!NOTE] > As a best practice, Microsoft recommends that you assign the Global Administrator role to fewer than five people in your organization. For more information, see [Best practices for Azure AD roles](best-practices.md).
Assign the Organizational Messages Writer role to users who need to do the follo
Do not use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a small number of Microsoft resale partners, and is not intended for general use. > [!IMPORTANT]
-> This role can reset passwords and invalidate refresh tokens for only non-administrators. This role should not be used as it is deprecated and it will no longer be returned in API.
+> This role can reset passwords and invalidate refresh tokens for only non-administrators. This role should not be used because it is deprecated.
> [!div class="mx-tableFixed"] > | Actions | Description |
Do not use. This role has been deprecated and will be removed from Azure AD in t
Do not use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a small number of Microsoft resale partners, and is not intended for general use. > [!IMPORTANT]
-> This role can reset passwords and invalidate refresh tokens for all non-administrators and administrators (including Global Administrators). This role should not be used as it is deprecated and it will no longer be returned in API.
+> This role can reset passwords and invalidate refresh tokens for all non-administrators and administrators (including Global Administrators). This role should not be used because it is deprecated.
> [!div class="mx-tableFixed"] > | Actions | Description |
User Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-\* A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has 0 Global Administrators.
+> [!IMPORTANT]
+> The [Partner Tier2 Support](#partner-tier2-support) role can reset passwords and invalidate refresh tokens for all non-administrators and administrators (including Global Administrators). The [Partner Tier1 Support](#partner-tier1-support) role can reset passwords and invalidate refresh tokens for only non-administrators. These roles should not be used because they are deprecated.
-> [!NOTE]
-> The ability to reset a password includes the ability to update the following sensitive properties required for [self-service password reset](../authentication/concept-sspr-howitworks.md):
-> - businessPhones
-> - mobilePhone
-> - otherMails
+The ability to reset a password includes the ability to update the following sensitive properties required for [self-service password reset](../authentication/concept-sspr-howitworks.md):
+- businessPhones
+- mobilePhone
+- otherMails
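
For illustration only, the following minimal Python sketch shows what updating one of these properties can look like through Microsoft Graph. The user ID, phone number, and token are hypothetical placeholders, and it assumes the caller already holds a role that is permitted to update the target user.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
access_token = "<token for a caller that holds a password-reset-capable role>"  # placeholder
user_id = "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"                                # placeholder

# mobilePhone is one of the sensitive SSPR properties listed above.
resp = requests.patch(
    f"{GRAPH}/users/{user_id}",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    },
    json={"mobilePhone": "+1 425 555 0100"},
    timeout=30,
)
resp.raise_for_status()  # Microsoft Graph returns 204 No Content on success.
```
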
## Who can perform sensitive actions
active-directory Anaplan Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/anaplan-tutorial.md
Previously updated : 11/21/2022 Last updated : 02/16/2023 # Tutorial: Azure AD SSO integration with Anaplan
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Anaplan** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy icon to copy the **App Federation Metadata URL** and save it for use in the Anaplan SSO configuration. You can optionally verify the copied URL with the sketch that follows this step.
+
+ ![The Certificate download link.](common/copy-metadataurl.png)
+
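
If you want to confirm the metadata URL before pasting it into Anaplan, the following minimal Python sketch (not part of the official procedure; the URL shown is a placeholder) downloads the federation metadata and prints the Azure AD entity ID and the single sign-on endpoints it advertises.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder: replace with the App Federation Metadata URL copied from the portal.
metadata_url = (
    "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/"
    "federationmetadata.xml?appid=<app-id>"
)

with urllib.request.urlopen(metadata_url) as resp:
    root = ET.fromstring(resp.read())

md = "{urn:oasis:names:tc:SAML:2.0:metadata}"
print("Entity ID:", root.attrib.get("entityID"))
for sso in root.iter(f"{md}SingleSignOnService"):
    print(sso.attrib.get("Binding"), "->", sso.attrib.get("Location"))
```
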
+## Configure Anaplan SSO
+
+1. Log in to the Anaplan website as an administrator.
+
+1. On the Administration page, navigate to **Security > Single Sign-On**.
+
+1. Click **New**.
+
+1. Perform the following steps in the **Metadata** tab:
+
+ ![Screenshot for the security page.](./media/anaplan-tutorial/security.png)
+
+    a. Enter a **Connection Name**. It should match the name of your connection in the identity provider interface.
+
+    b. Select **Load from XML file** and paste the App Federation Metadata URL you copied from the Azure portal into the **Metadata URL** textbox.
+
+ c. Click **Save** to create the connection.
+
+ d. Enable the connection by setting the **Enabled** toggle.
+
+1. From the **Config** tab, copy the following values; you'll enter them in the Azure portal:
+
+ a. **Service Provider URL**.
+ b. **Assertion Consumer Service URL**.
+ c. **Entity ID**.
+
+### Complete the Azure AD SSO Configuration
+ 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration.](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, perform the following steps:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://sdp.anaplan.com/frontdoor/saml/<tenant name>`
+ a. In the **Identifier (Entity ID)** text box, paste the Entity ID that you copied from above, in the format:
+ `https://sdp.anaplan.com/<optional extension>`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<subdomain>.anaplan.com`
+ b. In the **Sign on URL** text box, paste the Service Provider URL that you copied from above, in the format:
+ `https://us1a.app.anaplan.com/samlsp/<connection name>`
+
+ c. In the **Reply URL (Assertion Consumer Service URL)** text box, paste the Assertion Consumer Service URL that you copied from above, in the format:
+ `https://us1a.app.anaplan.com/samlsp/login/callback?connection=<connection name>`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Anaplan Client support team](mailto:support@anaplan.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+### Complete the Anaplan SSO Configuration
+
+1. Perform the following steps in the **Advanced** tab:
+
+ ![Screenshot for the Advanced page.](./media/anaplan-tutorial/advanced.png)
+
+    a. In the **Name ID Format** dropdown, select **Email Address**, and keep the default values for the remaining fields.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+ b. Click **Save**.
- ![The Certificate download link](common/metadataxml.png)
+1. In the **Workspaces** tab, use the dropdown to specify the workspaces that will use the identity provider, and then click **Save**.
-6. On the **Set up Anaplan** section, copy the appropriate URL(s) as per your requirement.
+ ![Screenshot for the Workspaces page.](./media/anaplan-tutorial/workspaces.png)
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ > [!NOTE]
+ > Workspace connections are unique. If you have another connection already configured with a workspace, you cannot associate that workspace with a new connection.
+    > To access the original connection and update it, remove the workspace from the connection and then reassociate it with the new connection.
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Anaplan SSO
-
-1. Login to Anaplan website as an administrator.
-
-1. In Administration page, navigate to **Security > Single Sign-On**.
-
-1. Click **New**.
-
-1. Perform the following steps in the **Metadata** tab:
-
- ![Screenshot for the security page](./media/anaplan-tutorial/security.png)
-
- a. Enter a **Connection Name**, should match the name of your connection in the identity provider interface.
-
- b. Select **Load from XML file** and enter the URL of the metadata XML file with your configuration information in the **Metadata URL** textbox.
-
- C. Enabled the **Signed** toggle.
-
- d. Click **Save** to create the connection.
-
-1. When you upload a **metadata XML** file in the **Metadata** tab, the values in **Config** tab pre-populate with the information from that upload. You can skip this tab in your connection setup and click **Save**.
-
- ![Screenshot for the configuration page](./media/anaplan-tutorial/configuration.png)
-
-1. Perform the following steps in the **Advanced** tab:
-
- ![Screenshot for the Advanced page](./media/anaplan-tutorial/advanced.png)
-
- a. Select **Name ID Format** as Email Address from the dropdown and keep the remaining values as default.
-
- b. Click **Save**.
-
-1. In the **Workspaces** tab, specify the workspaces that will use the identity provider from the dropdown and Click **Save**.
-
- ![Screenshot for the Workspaces page](./media/anaplan-tutorial/Workspaces.png)
-
- > [!NOTE]
- > Workspace connections are unique. If you have another connection already configured with a workspace, you cannot associate that workspace with a new connection.
-To access the original connection and update it, remove the workspace from the connection and then reassociate it with the new connection.
- ### Create Anaplan test user
-In this section, you create a user called Britta Simon in Anaplan. Work with [Anaplan support team](mailto:support@anaplan.com) to add the users in the Anaplan platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Anaplan. Work with [Anaplan support team](mailto:support@anaplan.com) to add the users in the Anaplan platform. Users must be created and activated before you use single sign-on.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Anaplan you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Anaplan, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Atmos Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atmos-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Atmos for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Atmos.
++
+writer: twimmers
++
+ms.assetid: 769b98d7-009f-44ed-8569-a5acc52d7552
++++ Last updated : 02/14/2023+++
+# Tutorial: Configure Atmos for automatic user provisioning
+
+This tutorial describes the steps you need to do in both Atmos and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Atmos](https://www.axissecurity.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Atmos.
+> * Remove users in Atmos when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Atmos.
+> * Provision groups and group memberships in Atmos.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in [Axis Security](https://www.axissecurity.com) with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Atmos](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Atmos to support provisioning with Azure AD
+
+1. Log in to the [Management Console](https://auth.axissecurity.com/).
+1. Navigate to the **Settings** > **Identity Providers** screen.
+1. Hover over the **Azure Identity Provider** and select **edit**.
+1. Navigate to **Advanced Settings**.
+1. Navigate to **User Auto-Provisioning (SCIM)**.
+1. Click **Generate new token**.
+1. Copy the **SCIM Service Provider Endpoint** and **SCIM Provisioning Token** and paste them into a text editor. You need them for Step 5.
+
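
Optionally, you can confirm that the endpoint and token work before continuing. The following is a minimal, hypothetical Python sketch: the endpoint and token values are placeholders for the ones you copied, and it simply issues a standard SCIM 2.0 request for a single user.

```python
import requests

# Placeholders for the values copied from the Axis Management Console.
scim_endpoint = "https://<your-atmos-scim-endpoint>/scim/v2"
scim_token = "<SCIM Provisioning Token>"

resp = requests.get(
    f"{scim_endpoint}/Users",
    headers={"Authorization": f"Bearer {scim_token}"},
    params={"startIndex": 1, "count": 1},  # standard SCIM 2.0 pagination
    timeout=30,
)
resp.raise_for_status()
print("totalResults:", resp.json().get("totalResults"))
```

A 401 or 403 response usually means the token was pasted incorrectly or has since been regenerated in the Management Console.
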
+## Step 3. Add Atmos from the Azure AD application gallery
+
+Add Atmos from the Azure AD application gallery to start managing provisioning to Atmos. If you have previously set up Atmos for SSO, you can use the same application. However, the recommendation is to create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope provisioning based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope provisioning to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope provisioning based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Atmos
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Atmos based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Atmos in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Atmos**.
+
+ ![Screenshot of the Atmos link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, paste the **SCIM Service Provider Endpoint** obtained from the Axis SCIM configuration (step 2) in **Tenant URL**, and paste the **SCIM Provisioning Token** obtained from the Axis SCIM configuration (step 2) in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Atmos. If the connection fails, contact Axis to check your account setup.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Atmos**.
+
+1. Review the synchronized user attributes from Azure AD to Atmos, in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Atmos for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Atmos API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Atmos|
+ |||||
+ |userName|String|&check;|&check;|
+ |active|Boolean|||
+ |displayName|String||&check;|
+ |emails[type eq "work"].value|String|||
+ |name.givenName|String|||
+ |name.familyName|String|||
+ |externalId|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|||
+
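
To make the mappings in the previous table concrete, the following hypothetical Python dictionary sketches the SCIM user resource that such a mapping could produce; every value is illustrative and not taken from a real tenant.

```python
# Hypothetical SCIM user resource shaped by the attribute mappings above;
# all values are illustrative and not taken from a real tenant.
scim_user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
    ],
    "userName": "b.simon@contoso.com",  # matching attribute
    "active": True,
    "displayName": "B. Simon",
    "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
    "name": {"givenName": "B.", "familyName": "Simon"},
    "externalId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
        "department": "Finance",
        "division": "EMEA",
    },
}
```
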
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Atmos**.
+
+1. Review the synchronized group attributes from Azure AD to Atmos, in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Atmos for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Atmos|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+ |externalId|String||&check;
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Atmos, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and groups that you would like to provision to Atmos by choosing the appropriate values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to execute than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
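
If you prefer to check the cycle programmatically rather than in the portal, Microsoft Graph exposes the provisioning job under the application's service principal. The following is a rough Python sketch with placeholder values; it assumes you already have a token that is allowed to read the synchronization job.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<Graph token allowed to read the synchronization job>"      # placeholder
service_principal_id = "<object ID of the Atmos service principal>"  # placeholder

resp = requests.get(
    f"{GRAPH}/servicePrincipals/{service_principal_id}/synchronization/jobs",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
for job in resp.json().get("value", []):
    # status.code reports states such as Active, Paused, or Quarantine.
    print(job.get("id"), job.get("status", {}).get("code"))
```
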
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users were provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
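
As a supplement to the portal views above, provisioning events can also be retrieved through the Microsoft Graph provisioning logs endpoint. The following minimal Python sketch uses a placeholder token and only lists the most recent entries; adjust it to your own reporting needs.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<Graph token allowed to read audit logs>"  # placeholder

resp = requests.get(
    f"{GRAPH}/auditLogs/provisioning",
    headers={"Authorization": f"Bearer {token}"},
    params={"$top": 10},  # only the most recent entries
    timeout=30,
)
resp.raise_for_status()
for entry in resp.json().get("value", []):
    print(entry.get("activityDateTime"), entry.get("jobId"))
```
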
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Courseswork Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/courseswork-tutorial.md
+
+ Title: Azure Active Directory SSO integration with courses.work
+description: Learn how to configure single sign-on between Azure Active Directory and courses.work.
++++++++ Last updated : 02/16/2023++++
+# Azure Active Directory SSO integration with courses.work
+
+In this article, you learn how to integrate courses.work with Azure Active Directory (Azure AD). courses.work is a product of Succeed Technologies®, an ISO 27001:2013 company with rich experience in developing engaging and interactive eLearning. When you integrate courses.work with Azure AD, you can:
+
+* Control in Azure AD who has access to courses.work.
+* Enable your users to be automatically signed-in to courses.work with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for courses.work in a test environment. courses.work supports **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with courses.work, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* courses.work single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the courses.work application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add courses.work from the Azure AD gallery
+
+Add courses.work from the Azure AD application gallery to configure single sign-on with courses.work. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **courses.work** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already pre-integrated with Azure.
+
+1. The courses.work application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the courses.work application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | firstname | user.givenname |
+ | lastname | user.surname |
+ | username | user.userprincipalname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
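
If you want to confirm that the attributes in the table are actually emitted, you can capture the base64-encoded SAMLResponse that Azure AD posts (for example, with your browser's developer tools) and decode it locally. The following minimal Python sketch assumes you have pasted the captured value into the placeholder variable.

```python
import base64
import xml.etree.ElementTree as ET

saml_response_b64 = "<paste the captured SAMLResponse value here>"  # placeholder

root = ET.fromstring(base64.b64decode(saml_response_b64))
ns = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

for attr in root.findall(".//saml:Attribute", ns):
    values = [v.text for v in attr.findall("saml:AttributeValue", ns)]
    print(attr.attrib.get("Name"), "=", values)
```

The printed attribute names should include email, firstname, lastname, and username from the table above.
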
+## Configure courses.work SSO
+
+To configure single sign-on on **courses.work** side, you need to send the **App Federation Metadata Url** to [courses.work support team](mailto:support@succeedtech.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create courses.work test user
+
+In this section, a user called B.Simon is created in courses.work. courses.work supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in courses.work, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the courses.work for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the courses.work tile in the My Apps, you should be automatically signed in to the courses.work for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure courses.work, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Kallidus Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kallidus-tutorial.md
Previously updated : 11/21/2022 Last updated : 02/15/2023
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Kallidus supports **IDP** initiated SSO.
+* Kallidus supports **SP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, perform the following step:
+4. On the **Basic SAML Configuration** section, enter the values for the following fields:
- In the **Reply URL** text box, type a URL using the following pattern:
- `https://login.kallidus-suite.com/core/<ID>/Acs`
+    a. In the **Identifier** box, enter the URL: `https://login.kallidus-suite.com/core/saml`.
+
+ b. In the **Reply URL** box, type a URL using the following pattern: `https://login.kallidus-suite.com/core/<SCHEME>/acs`
+
+ c. In the **Sign on URL** box, type a URL using the following pattern: `https://login.kallidus-suite.com/core/<SCHEME>/acs`
> [!NOTE]
- > The value is not real. Update the value with the actual Reply URL. Contact [Kallidus Client support team](https://kallidus.zendesk.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign on URL and Reply URL. Contact [Kallidus Client support team](https://kallidus.zendesk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
In this section, you create a user called Britta Simon in Kallidus. Work with [
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on Test this application in Azure portal and you should be automatically signed in to the Kallidus for which you set up the SSO.
+* Click on **Test this application** in Azure portal. This will redirect to Kallidus Sign-on URL where you can initiate the login flow.
+
+* Go to Kallidus Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Kallidus tile in the My Apps, you should be automatically signed in to the Kallidus for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Kallidus tile in the My Apps, you should be automatically signed in to the Kallidus for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Leandna Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/leandna-tutorial.md
+
+ Title: Azure Active Directory SSO integration with LeanDNA
+description: Learn how to configure single sign-on between Azure Active Directory and LeanDNA.
++++++++ Last updated : 02/16/2023++++
+# Azure Active Directory SSO integration with LeanDNA
+
+In this article, you learn how to integrate LeanDNA with Azure Active Directory (Azure AD). Connect to the LeanDNA app via SAML 2.0 SSO using Azure. When you integrate LeanDNA with Azure AD, you can:
+
+* Control in Azure AD who has access to LeanDNA.
+* Enable your users to be automatically signed-in to LeanDNA with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for LeanDNA in a test environment. LeanDNA supports only **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with LeanDNA, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* LeanDNA single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the LeanDNA application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add LeanDNA from the Azure AD gallery
+
+Add LeanDNA from the Azure AD application gallery to configure single sign-on with LeanDNA. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **LeanDNA** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using one of the patterns:
+
+ | **Identifier** |
+ |-|
+ |`https://www.leandna.com/auth/1/saml2/metadata/customer/<ID>`|
+ | `https://app.leandna.com/auth/1/saml2/metadata/customer/<ID>` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the patterns:
+
+ | **Reply URL** |
+ |-|
+ | `https://www.leandna.com/auth/1/saml2/login/customer/<ID>` |
+ | `https://app.leandna.com/auth/1/saml2/login/customer/<ID>` |
+
+ c. In the **Sign on URL** textbox, type the URL:
+ `https://www.leandna.com/application/sso.html`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [LeanDNA Client support team](mailto:it@leandna.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up LeanDNA** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
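
Optionally, you can inspect the downloaded Federation Metadata XML locally to confirm it contains the Azure AD entity ID and a signing certificate before you share it. The following minimal Python sketch assumes the file was saved as federationmetadata.xml; the file name is only an example.

```python
import xml.etree.ElementTree as ET

# Assumes the downloaded file was saved locally with this name.
root = ET.parse("federationmetadata.xml").getroot()

ds = "{http://www.w3.org/2000/09/xmldsig#}"
print("Entity ID:", root.attrib.get("entityID"))

cert = root.find(f".//{ds}X509Certificate")
if cert is not None:
    print("Signing certificate present, base64 length:", len(cert.text.strip()))
else:
    print("No signing certificate found in the metadata.")
```
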
+## Configure LeanDNA SSO
+
+To configure single sign-on on **LeanDNA** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [LeanDNA support team](mailto:it@leandna.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create LeanDNA test user
+
+In this section, you create a user called Britta Simon at LeanDNA. Work with [LeanDNA support team](mailto:it@leandna.com) to add the users in the LeanDNA platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to LeanDNA Sign-on URL where you can initiate the login flow.
+
+* Go to LeanDNA Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you select the LeanDNA tile in the My Apps, this will redirect to LeanDNA Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure LeanDNA, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Oracle Idcs For Jd Edwards Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-idcs-for-jd-edwards-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Oracle IDCS for JD Edwards
+description: Learn how to configure single sign-on between Azure Active Directory and Oracle IDCS for JD Edwards.
++++++++ Last updated : 02/07/2023++++
+# Azure Active Directory SSO integration with Oracle IDCS for JD Edwards
+
+In this article, you'll learn how to integrate Oracle IDCS for JD Edwards with Azure Active Directory (Azure AD). When you integrate Oracle IDCS for JD Edwards with Azure AD, you can:
+
+* Control in Azure AD who has access to Oracle IDCS for JD Edwards.
+* Enable your users to be automatically signed-in to Oracle IDCS for JD Edwards with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Oracle IDCS for JD Edwards in a test environment. Oracle IDCS for JD Edwards supports only **SP** initiated single sign-on.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Oracle IDCS for JD Edwards, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Oracle IDCS for JD Edwards single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Oracle IDCS for JD Edwards application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Oracle IDCS for JD Edwards from the Azure AD gallery
+
+Add Oracle IDCS for JD Edwards from the Azure AD application gallery to configure single sign-on with Oracle IDCS for JD Edwards. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Oracle IDCS for JD Edwards** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+    a. In the **Identifier** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/v1/saml/<UNIQUEID>`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+    `https://<SUBDOMAIN>.oraclecloud.com/`
+
+ >[!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Oracle IDCS for JD Edwards support team](https://www.oracle.com/support/advanced-customer-services/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your Oracle IDCS for JD Edwards application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Oracle IDCS for JD Edwards expects this to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list, or use the appropriate attribute value based on your organization's configuration.
+
+ ![Screenshot shows image of default attributes.](common/default-attributes.png)
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows The Certificate download link.](common/metadataxml.png)
+
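
Because Oracle IDCS for JD Edwards expects the Unique User Identifier (NameID) to carry the user's email address, you may want to confirm the value in a real response. The following minimal Python sketch decodes a captured, base64-encoded SAMLResponse; the captured value is a placeholder you supply yourself.

```python
import base64
import xml.etree.ElementTree as ET

saml_response_b64 = "<paste the captured SAMLResponse value here>"  # placeholder

root = ET.fromstring(base64.b64decode(saml_response_b64))
ns = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

name_id = root.find(".//saml:Subject/saml:NameID", ns)
if name_id is not None:
    # Oracle IDCS for JD Edwards expects this value to be the user's email address.
    print("NameID:", name_id.text)
    print("Format:", name_id.attrib.get("Format"))
```
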
+## Configure Oracle IDCS for JD Edwards SSO
+
+To configure single sign-on on Oracle IDCS for JD Edwards side, you need to send the downloaded Federation Metadata XML file from Azure portal to [Oracle IDCS for JD Edwards support team](https://www.oracle.com/support/advanced-customer-services/). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Oracle IDCS for JD Edwards test user
+
+In this section, you create a user called Britta Simon at Oracle IDCS for JD Edwards. Work with [Oracle IDCS for JD Edwards support team](https://www.oracle.com/support/advanced-customer-services/) to add the users in the Oracle IDCS for JD Edwards platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Oracle IDCS for JD Edwards Sign-on URL where you can initiate the login flow.
+
+* Go to Oracle IDCS for JD Edwards Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you select the Oracle IDCS for JD Edwards tile in the My Apps, this will redirect to Oracle IDCS for JD Edwards Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Oracle IDCS for JD Edwards, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tanium Cloud Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-cloud-sso-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Tanium Cloud SSO
+description: Learn how to configure single sign-on between Azure Active Directory and Tanium Cloud SSO.
++++++++ Last updated : 02/16/2023++++
+# Azure Active Directory SSO integration with Tanium Cloud SSO
+
+In this article, you learn how to integrate Tanium Cloud SSO with Azure Active Directory (Azure AD). Tanium, the industry's only provider of converged endpoint management (XEM), leads the paradigm shift in legacy approaches to managing complex security and technology environments. When you integrate Tanium Cloud SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to Tanium Cloud SSO.
+* Enable your users to be automatically signed-in to Tanium Cloud SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Tanium Cloud SSO in a test environment. Tanium Cloud SSO supports both **SP** and **IDP** initiated single sign-on and also **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Tanium Cloud SSO, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Tanium Cloud SSO single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Tanium Cloud SSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Tanium Cloud SSO from the Azure AD gallery
+
+Add Tanium Cloud SSO from the Azure AD application gallery to configure single sign-on with Tanium Cloud SSO. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Tanium Cloud SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `urn:amazon:cognito:sp:InstanceName`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://InstanceName-tanium.auth.<SUBDOMAIN>.amazoncognito.com/saml2/idpresponse`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+    `https://InstanceName.cloud.tanium.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tanium Cloud SSO Client support team](mailto:integrations@tanium.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
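
Because the Identifier and Reply URL follow fixed, Cognito-style patterns, a quick check can catch typos before you send anything to Tanium. The following minimal Python sketch uses hypothetical values and patterns that mirror the examples above; it is not part of the official procedure.

```python
import re

# Hypothetical values; replace with the ones for your own Tanium instance.
identifier = "urn:amazon:cognito:sp:MyInstance"
reply_url = "https://MyInstance-tanium.auth.us-east-1.amazoncognito.com/saml2/idpresponse"

checks = {
    identifier: r"^urn:amazon:cognito:sp:[\w-]+$",
    reply_url: r"^https://[\w-]+-tanium\.auth\.[\w.-]+\.amazoncognito\.com/saml2/idpresponse$",
}
for value, pattern in checks.items():
    status = "matches the expected pattern" if re.match(pattern, value) else "does NOT match"
    print(f"{value}: {status}")
```
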
+## Configure Tanium Cloud SSO
+
+To configure single sign-on on **Tanium Cloud SSO** side, you need to send the **App Federation Metadata Url** to [Tanium Cloud SSO support team](mailto:integrations@tanium.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Tanium Cloud SSO test user
+
+In this section, a user called B.Simon is created in Tanium Cloud SSO. Tanium Cloud SSO supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Tanium Cloud SSO, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Tanium Cloud SSO Sign-on URL where you can initiate the login flow.
+
+* Go to Tanium Cloud SSO Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Tanium Cloud SSO for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Tanium Cloud SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Tanium Cloud SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Tanium Cloud SSO, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Trello Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trello-tutorial.md
- Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Trello | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Trello.
-------- Previously updated : 11/21/2022---
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Trello
-
-In this tutorial, you'll learn how to integrate Trello with Azure Active Directory (Azure AD). When you integrate Trello with Azure AD, you can:
-
-* Control in Azure AD who has access to Trello.
-* Enable your users to be automatically signed-in to Trello with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Trello single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* Trello supports **SP and IDP** initiated SSO
-* Trello supports **Just In Time** user provisioning
-
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-
-## Add Trello from the gallery
-
-To configure the integration of Trello into Azure AD, you need to add Trello from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Trello** in the search box.
-1. Select **Trello** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-
-## Configure and test Azure AD SSO for Trello
-
-Configure and test Azure AD SSO with Trello using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Trello.
-
-To configure and test Azure AD SSO with Trello, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Trello SSO](#configure-trello-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Trello test user](#create-trello-test-user)** - to have a counterpart of B.Simon in Trello that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **Trello** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
-
- a. In the **Identifier** text box, type the URL:
- `https://trello.com/auth/saml/metadata`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://trello.com/auth/saml/consume/<enterprise>`
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://trello.com/auth/saml/login/<enterprise>`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Trello Client support team](https://trello.com/sso-configuration) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
-
- ![The Certificate download link](common/certificatebase64.png)
-
-1. On the **Set up Trello** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Trello.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Trello**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Trello SSO
-
-To configure single sign-on on **Trello** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Trello support team](https://trello.com/sso-configuration). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create Trello test user
-
-In this section, you create a user called Britta Simon in Trello. Trello supports Just in Time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Trello, a new one is created after authentication.
-
-> [!NOTE]
-> If you need to create a user manually, contact the [Trello support team](mailto:support@trello.com).
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to Trello Sign on URL where you can initiate the login flow.
-
-* Go to Trello Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Trello for which you set up the SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the Trello tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Trello for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Next steps
-
-Once you configure Trello you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Udemy Business Saml Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/udemy-business-saml-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Udemy Business SAML
+description: Learn how to configure single sign-on between Azure Active Directory and Udemy Business SAML.
++++++++ Last updated : 02/16/2023++++
+# Azure Active Directory SSO integration with Udemy Business SAML
+
+In this article, you learn how to integrate Udemy Business SAML with Azure Active Directory (Azure AD). Udemy for Business helps employees do whatever comes next - whether that's the next project to tackle, skill to learn, or role to master. When you integrate Udemy Business SAML with Azure AD, you can:
+
+* Control in Azure AD who has access to Udemy Business SAML.
+* Enable your users to be automatically signed-in to Udemy Business SAML with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Udemy Business SAML in a test environment. Udemy Business SAML supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Udemy Business SAML, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Udemy Business SAML single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Udemy Business SAML application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Udemy Business SAML from the Azure AD gallery
+
+Add Udemy Business SAML from the Azure AD application gallery to configure single sign-on with Udemy Business SAML. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Udemy Business SAML** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://www.udemy.com/sso/saml`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://sso.connect.pingidentity.com/sso/sp/ACS.saml2`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.udemy.com`
+
+ > [!Note]
+ > This value is not real. Update this value with the actual Sign-on URL. Contact [Udemy Business SAML Client support team](mailto:ufbsupport@udemy.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Udemy Business SAML application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Udemy Business SAML application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them according to your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Udemy Business SAML SSO
+
+To configure single sign-on on **Udemy Business SAML** side, you need to send the **App Federation Metadata Url** to [Udemy Business SAML support team](mailto:ufbsupport@udemy.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Udemy Business SAML test user
+
+In this section, a user called B.Simon is created in Udemy Business SAML. Udemy Business SAML supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Udemy Business SAML, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Udemy Business SAML Sign-on URL, where you can initiate the login flow.
+
+* Go to the Udemy Business SAML Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the Udemy Business SAML tile in My Apps, you're redirected to the Udemy Business SAML Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Udemy Business SAML, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
> > Auto-upgrade will first upgrade the control plane, and then proceed to upgrade agent pools one by one.
-## Why use auto-upgrade
+## Why use cluster auto-upgrade
-Auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS and upstream Kubernetes.
+Cluster auto-upgrade provides a set-once-and-forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS and upstream Kubernetes.
AKS follows a strict versioning window with regard to supportability. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Alias minor versions][supported-kubernetes-versions].
+## Cluster auto-upgrade limitations
-Even if using node image auto upgrade (which won't change the Kubernetes version), it still requires MC to be in a supported version
+If you're using cluster auto-upgrade, you can no longer upgrade the control plane first and then upgrade the individual node pools. Cluster auto-upgrade always upgrades the control plane and the node pools together. There's no way to upgrade the control plane only, and trying to run the command `az aks upgrade --control-plane-only` raises the error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
-## Using auto-upgrade
+If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.
+
+## Using cluster auto-upgrade
Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel. When making changes to auto-upgrade, allow 24 hours for the changes to take effect.
The following upgrade channels are available:
| `patch`| automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.17.9*| | `stable`| automatically upgrade the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.18.6*. | `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*.
-| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes won't get the new images unless you do a node image upgrade. Turning on the node-image channel will automatically update your node images whenever a new version is available. |
+| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes won't get the new images unless you do a node image upgrade. Turning on the node-image channel will automatically update your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.|
> [!NOTE] > Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions.
The following upgrade channels are available:
> [!NOTE] > Auto-upgrade requires the cluster's Kubernetes version to be within the [AKS support window][supported-kubernetes-versions], even if using the `node-image` channel.
+> [!NOTE]
+> If you're using the preview API `11-02-preview` or later and you select the `node-image` cluster auto-upgrade channel, the [node image auto-upgrade channel][node-image-auto-upgrade] will automatically be set to `NodeImage`.
+ Automatically upgrading a cluster follows the same process as manually upgrading a cluster. For more information, see [Upgrade an AKS cluster][upgrade-aks-cluster]. To set the auto-upgrade channel when creating a cluster, use the *auto-upgrade-channel* parameter, similar to the following example.
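A minimal sketch of that command, assuming the `stable` channel and placeholder resource names; an existing cluster can be switched to a different channel later with `az aks update` and the same parameter:

```azurecli-interactive
# Create a cluster that follows the stable auto-upgrade channel (names are placeholders).
az aks create --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable

# Change the channel on an existing cluster.
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel patch
```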
The Azure portal also highlights all the deprecated APIs between your current ve
## Using auto-upgrade with Planned Maintenance
-If youΓÇÖre using Planned Maintenance and Auto-Upgrade, your upgrade will start during your specified maintenance window.
+If you're using Planned Maintenance and cluster auto-upgrade, your upgrade will start during your specified maintenance window.
> [!NOTE] > To ensure proper functionality, use a maintenance window of four hours or more. For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
-## Auto upgrade limitations
-
-If you're using Auto-Upgrade you cannot anymore upgrade the control plane first, and then upgrade the individual node pools. Auto-Upgrade will always upgrade the control plane and the node pools together. In Auto-Upgrade there is no concept of upgrading the control plane only, and trying to run the command `az aks upgrade --control-plane-only` will raise the error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
-
-## Best practices for auto-upgrade
+## Best practices for cluster auto-upgrade
The following best practices will help maximize your success when using auto-upgrade: - In order to keep your cluster always in a supported version (i.e within the N-2 rule), choose either `stable` or `rapid` channels. - If you're interested in getting the latest patches as soon as possible, use the `patch` channel. The `node-image` channel is a good fit if you want your agent pools to always be running the most recent node images.
+- To automatically upgrade node images while using a different cluster upgrade channel, consider using the [node image auto-upgrade][node-image-auto-upgrade] `NodeImage` channel.
- Follow [Operator best practices][operator-best-practices-scheduler]. - Follow [PDB best practices][pdb-best-practices].
The following best practices will help maximize your success when using auto-upg
[upgrade-aks-cluster]: upgrade-cluster.md [planned-maintenance]: planned-maintenance.md [operator-best-practices-scheduler]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets-
+[node-image-auto-upgrade]: auto-upgrade-node-image.md
<!-- EXTERNAL LINKS --> [pdb-best-practices]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ [release-tracker]: release-tracker.md
-[k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement
+[k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement
+[unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
+
+ Title: Automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images
+description: Learn how to automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images.
++++ Last updated : 02/03/2023++
+# Automatically upgrade Azure Kubernetes Service cluster node operating system images
+
+AKS supports upgrading the images on a node so your cluster is up to date with the newest operating system (OS) and runtime updates. AKS regularly provides new node OS images with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest AKS features and to maintain security. Before learning about auto-upgrade, make sure you understand upgrade fundamentals by reading [Upgrade an AKS cluster][upgrade-aks-cluster].
+
+The latest AKS node image information can be found by visiting the [AKS release tracker][release-tracker].
+
+## Why use node OS auto-upgrade
+
+Node OS auto-upgrade provides a set-once-and-forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS.
+
+## Prerequisites
+
+- Must be using API version `11-02-preview` or later
+
+- If using Azure CLI, the `aks-preview` CLI extension version `0.5.127` or later must be installed
+
+- If using the `SecurityPatch` channel, the `NodeOsUpgradeChannelPreview` feature flag must be enabled on your subscription
+
+### Register the 'NodeOsUpgradeChannelPreview' feature flag
+
+Register the `NodeOsUpgradeChannelPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "NodeOsUpgradeChannelPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "NodeOsUpgradeChannelPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
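As an optional check, you can confirm that the resource provider registration completed; a quick sketch, not required for the feature to work:

```azurecli-interactive
# Shows "Registered" once the provider refresh has completed.
az provider show --namespace Microsoft.ContainerService --query "registrationState"
```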
+
+## Limitations
+
+If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.
+
+## Using node OS auto-upgrade
+
+Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel. When making changes to auto-upgrade, allow 24 hours for the changes to take effect. By default, a cluster's node OS auto-upgrade channel is set to `Unmanaged`.
+
+> [!NOTE]
+> Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it still requires the cluster to be in a supported version to function properly.
+
+The following upgrade channels are available:
+
+|Channel|Description|OS-specific behavior|
+|---|---|---|
+| `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates|N/A|
+| `Unmanaged`|OS updates will be applied automatically through the OS built-in patching infrastructure. Newly allocated machines will be unpatched initially and will be patched at some point by the OS's infrastructure|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows and Mariner don't apply security patches automatically, so this option behaves equivalently to `None`|
+| `SecurityPatch`|AKS will update the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. Where possible, patches will also be applied without disruption to existing nodes. Some patches, such as kernel patches, can't be applied to existing nodes without disruption. For such patches, the VHD will be updated and existing machines will be upgraded to that VHD following maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group.|N/A|
+| `NodeImage`|AKS will update the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.|N/A|
+
+To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
+
+```azurecli-interactive
+az aks create --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch
+```
+
+To set the auto-upgrade channel on existing cluster, update the *node-os-upgrade-channel* parameter, similar to the following example.
+
+```azurecli-interactive
+az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch
+```
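To confirm which channels a cluster is using afterwards, you can inspect its auto-upgrade profile; a sketch, noting that the exact property names returned depend on the API version in use:

```azurecli-interactive
# Returns the cluster upgrade channel and node OS upgrade channel settings.
az aks show --resource-group myResourceGroup --name myAKSCluster --query "autoUpgradeProfile"
```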
+
+## Using node OS auto-upgrade with Planned Maintenance
+
+If youΓÇÖre using Planned Maintenance and node OS auto-upgrade, your upgrade will start during your specified maintenance window.
+
+> [!NOTE]
+> To ensure proper functionality, use a maintenance window of four hours or more.
+
+For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
+
+<!-- LINKS -->
+[planned-maintenance]: planned-maintenance.md
+[release-tracker]: release-tracker.md
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[upgrade-aks-cluster]: upgrade-cluster.md
+[unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows to perform updates and minimize workload impact. Once scheduled, upgrades occur only during the window you selected.
-There are currently two available configuration types: `default` and `aksManagedAutoUpgradeSchedule`:
+There are currently three available configuration types: `default`, `aksManagedAutoUpgradeSchedule`, and `aksManagedNodeOSUpgradeSchedule`:
-- `default` corresponds to a basic configuration that updates your control plane and your kube-system pods on a Virtual Machine Scale Sets instance. It is a legacy configuration that is mostly suitable for basic scheduling of [weekly releases][release-tracker]. Another way of accomplishing this behavior, using pre-configured windows, is detailed at [use Planned Maintenance to schedule weekly releases][pm-weekly]
+- `default` corresponds to a basic configuration that will update your control plane and your kube-system pods on a Virtual Machine Scale Sets instance. It's a legacy configuration that is mostly suitable for basic scheduling of [weekly releases][release-tracker].
-- `aksManagedAutoUpgradeSchedule` is a more complex configuration that controls when upgrades scheduled by your designated auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible. For more information on cluster auto-upgrade, see [Automatically an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
+- `aksManagedAutoUpgradeSchedule` controls when cluster upgrades scheduled by your designated auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on cluster auto-upgrade, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
-We recommend using `aksManagedAutoUpgradeSchedule` for all maintenance and upgrade scenarios, while `default` is meant exclusively for weekly releases. You can port `default` configurations to `aksManagedAutoUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
+- `aksManagedNodeOSUpgradeSchedule` controls when node operating system upgrades scheduled by your node OS auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on node OS auto-upgrade, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade].
+
+We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node image upgrade scenarios, while `default` is meant exclusively for weekly releases. You can port `default` configurations to `aksManagedAutoUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
## Before you begin
This article assumes that you have an existing AKS cluster. If you need an AKS c
When you use Planned Maintenance, the following restrictions apply: - AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical.-- Currently, performing maintenance operations are considered *best-effort only* and are not guaranteed to occur within a specified window.-- Updates cannot be blocked for more than seven days.
+- Currently, performing maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.
+- Updates can't be blocked for more than seven days.
### Install aks-preview CLI extension
az extension update --name aks-preview
## Creating a maintenance window
-To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command using the `--name` value `default` or `aksManagedAutoUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name will cause your maintenance window not to run.
+To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command using the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name will cause your maintenance window not to run.
> [!NOTE] > When using auto-upgrade, to ensure proper functionality, use a maintenance window with a duration of four hours or more.
A `default` maintenance window has the following properties:
|`timeInWeek`|In a `default` configuration, this property contains the `day` and `hourSlots` values defining a maintenance window|N/A| |`timeInWeek.day`|The day of the week to perform maintenance in a `default` configuration|N/A| |`timeInWeek.hourSlots`|A list of hour-long time slots to perform maintenance on a given day in a `default` configuration|N/A|
-|`notAllowedTime`|Specifies a range of dates that maintenance cannot run, determined by `start` and `end` child properties. Only applicable when creating the maintenance window using a config file|N/A|
+|`notAllowedTime`|Specifies a range of dates that maintenance can't run, determined by `start` and `end` child properties. Only applicable when creating the maintenance window using a config file|N/A|
-An `aksManagedAutoUpgradeSchedule` has the following properties:
+An `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` maintenance window has the following properties:
|Name|Description|Default value| |--|--|--|
An `aksManagedAutoUpgradeSchedule` has the following properties:
|`startDate`|The date on which the maintenance window begins to take effect|The current date at creation time| |`startTime`|The time for maintenance to begin, based on the timezone determined by `utcOffset`|N/A| |`schedule`|Used to determine frequency. Three types are available: `Weekly`, `AbsoluteMonthly`, and `RelativeMonthly`|N/A|
+|`intervalDays`|The interval in days for maintenance runs. Only applicable to `aksManagedNodeOSUpgradeSchedule`|N/A|
|`intervalWeeks`|The interval in weeks for maintenance runs|N/A| |`intervalMonths`|The interval in months for maintenance runs|N/A| |`dayOfWeek`|The specified day of the week for maintenance to begin|N/A|
An `aksManagedAutoUpgradeSchedule` has the following properties:
### Understanding schedule types
-There are currently three available schedule types: `Weekly`, `AbsoluteMonthly`, and `RelativeMonthly`. These schedule types are only applicable to `aksManagedClusterAutoUpgrade` configurations.
+There are currently four available schedule types: `Daily`, `Weekly`, `AbsoluteMonthly`, and `RelativeMonthly`. These schedule types are only applicable to `aksManagedAutoUpgradeSchedule` and `aksManagedNodeOSUpgradeSchedule` configurations. `Daily` schedules are only applicable to `aksManagedNodeOSUpgradeSchedule` types.
> [!NOTE] > All of the fields shown for each respective schedule type are required.
+#### Daily schedule
+
+> [!NOTE]
+> Daily schedules are only applicable to `aksManagedNodeOSUpgradeSchedule` configuration types.
+
+A `Daily` schedule may look like *"every three days"*:
+
+```json
+"schedule": {
+ "daily": {
+        "intervalDays": 3
+ }
+}
+```
+ #### Weekly schedule A `Weekly` schedule may look like *"every two weeks on Friday"*:
Create a `default.json` file with the following contents:
} ```
-The above JSON file specifies maintenance windows every Tuesday at 1:00am - 3:00am and every Wednesday at 1:00am - 2:00am and at 6:00am - 7:00am in the `UTC` timezone. There is also an exception from *2021-05-26T03:00:00Z* to *2021-05-30T12:00:00Z* where maintenance isn't allowed even if it overlaps with a maintenance window.
+The above JSON file specifies maintenance windows every Tuesday at 1:00am - 3:00am and every Wednesday at 1:00am - 2:00am and at 6:00am - 7:00am in the `UTC` timezone. There's also an exception from *2021-05-26T03:00:00Z* to *2021-05-30T12:00:00Z* where maintenance isn't allowed even if it overlaps with a maintenance window.
Create an `autoUpgradeWindow.json` file with the following contents:
Create an `autoUpgradeWindow.json` file with the following contents:
} ```
-The above JSON file specifies maintenance windows every three months on the first of the month between 9:00 AM - 1:00 PM in the `UTC-08` timezone. There is also an exception from *2023-12-23* to *2024-01-05* where maintenance isn't allowed even if it overlaps with a maintenance window.
+The above JSON file specifies maintenance windows every three months on the first of the month between 9:00 AM - 1:00 PM in the `UTC-08` timezone. There's also an exception from *2023-12-23* to *2024-01-05* where maintenance isn't allowed even if it overlaps with a maintenance window.
The following command adds the maintenance windows from `default.json` and `autoUpgradeWindow.json`:
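A sketch of those commands, assuming the JSON files are in the working directory and the cluster and resource group names used elsewhere in this article:

```azurecli-interactive
az aks maintenanceconfiguration add -g myResourceGroup --cluster-name myAKSCluster --name default --config-file ./default.json
az aks maintenanceconfiguration add -g myResourceGroup --cluster-name myAKSCluster --name aksManagedAutoUpgradeSchedule --config-file ./autoUpgradeWindow.json
```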
The following example output shows the maintenance window for *aksManagedAutoUpg
To delete a certain maintenance configuration window in your AKS Cluster, use the `az aks maintenanceconfiguration delete` command. ```azurecli-interactive
-az aks maintenanceconfiguration delete -g MyResourceGroup --cluster-name myAKSCluster --name autoUpgradeSchedule
+az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCluster --name autoUpgradeSchedule
``` ## Next steps
az aks maintenanceconfiguration delete -g MyResourceGroup --cluster-name myAKSCl
[aks-upgrade]: upgrade-cluster.md [release-tracker]: release-tracker.md [auto-upgrade]: auto-upgrade-cluster.md
+[node-image-auto-upgrade]: auto-upgrade-node-image.md
[pm-weekly]: ./aks-planned-maintenance-weekly-releases.md+
app-service Configure Authentication Api Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md
Title: Manage AuthN/AuthZ API versions description: Upgrade your App Service authentication API to V2 or pin it to a specific version, if needed. Previously updated : 03/29/2021 Last updated : 02/17/2023
There are two versions of the management API for App Service authentication. The
## Update the configuration version > [!WARNING]
-> Migration to V2 will disable management of the App Service Authentication / Authorization feature for your application through some clients, such as its existing experience in the Azure portal, Azure CLI, and Azure PowerShell. This cannot be reversed.
+> Migration to V2 will disable management of the App Service Authentication/Authorization feature for your application through some clients, such as its existing experience in the Azure portal, Azure CLI, and Azure PowerShell. This cannot be reversed.
-The V2 API does not support creation or editing of Microsoft Account as a distinct provider as was done in V1. Rather, it leverages the converged [Microsoft Identity Platform](../active-directory/develop/v2-overview.md) to sign-in users with both Azure AD and personal Microsoft accounts. When switching to the V2 API, the V1 Azure Active Directory configuration is used to configure the Microsoft Identity Platform provider. The V1 Microsoft Account provider will be carried forward in the migration process and continue to operate as normal, but it is recommended that you move to the newer Microsoft Identity Platform model. See [Support for Microsoft Account provider registrations](#support-for-microsoft-account-provider-registrations) to learn more.
+The V2 API does not support creation or editing of Microsoft Account as a distinct provider as was done in V1. Rather, it leverages the converged [Microsoft identity platform](../active-directory/develop/v2-overview.md) to sign in users with both Azure AD and personal Microsoft accounts. When switching to the V2 API, the V1 Azure Active Directory (Azure AD) configuration is used to configure the Microsoft identity platform provider. The V1 Microsoft Account provider will be carried forward in the migration process and continue to operate as normal, but it is recommended that you move to the newer Microsoft identity platform model. See [Support for Microsoft Account provider registrations](#support-for-microsoft-account-provider-registrations) to learn more.
The automated migration process will move provider secrets into application settings and then convert the rest of the configuration into the new format. To use the automatic migration: 1. Navigate to your app in the portal and select the **Authentication** menu option.
-1. If the app is configured using the V1 model, you will see an **Upgrade** button.
-1. Review the description in the confirmation prompt. If you are ready to perform the migration, click **Upgrade** in the prompt.
+1. If the app is configured using the V1 model, you'll see an **Upgrade** button.
+1. Review the description in the confirmation prompt. If you're ready to perform the migration, click **Upgrade** in the prompt.
### Manually managing the migration
The following steps will allow you to manually migrate the application to the V2
az webapp auth show -g <group_name> -n <site_name> ```
- In the resulting JSON payload, make note of the secret value used for each provider you have configured:
+ In the resulting JSON payload, make note of the secret value used for each provider you've configured:
- * AAD: `clientSecret`
+ * Azure AD: `clientSecret`
* Google: `googleClientSecret` * Facebook: `facebookAppSecret` * Twitter: `twitterConsumerSecret`
The following steps will allow you to manually migrate the application to the V2
> [!IMPORTANT] > The secret values are important security credentials and should be handled carefully. Do not share these values or persist them on a local machine.
-1. Create slot-sticky application settings for each secret value. You may choose the name of each application setting. It's value should match what you obtained in the previous step or [reference a Key Vault secret](./app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json) that you have created with that value.
+1. Create slot-sticky application settings for each secret value. You may choose the name of each application setting. Its value should match what you obtained in the previous step or [reference a Key Vault secret](./app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json) that you've created with that value.
To create the setting, you can use the Azure portal or run a variation of the following for each provider:
The following steps will allow you to manually migrate the application to the V2
> [!NOTE] > The application settings for this configuration should be marked as slot-sticky, meaning that they will not move between environments during a [slot swap operation](./deploy-staging-slots.md). This is because your authentication configuration itself is tied to the environment.
-1. Create a new JSON file named `authsettings.json`.Take the output that you received previously and remove each secret value from it. Write the remaining output to the file, making sure that no secret is included. In some cases, the configuration may have arrays containing empty strings. Make sure that `microsoftAccountOAuthScopes` does not, and if it does, switch that value to `null`.
+1. Create a new JSON file named `authsettings.json`. Take the output that you received previously and remove each secret value from it. Write the remaining output to the file, making sure that no secret is included. In some cases, the configuration may have arrays containing empty strings. Make sure that `microsoftAccountOAuthScopes` does not, and if it does, switch that value to `null`.
1. Add a property to `authsettings.json` which points to the application setting name you created earlier for each provider:
- * AAD: `clientSecretSettingName`
+ * Azure AD: `clientSecretSettingName`
* Google: `googleClientSecretSettingName` * Facebook: `facebookAppSecretSettingName` * Twitter: `twitterConsumerSecretSettingName` * Microsoft Account: `microsoftAccountClientSecretSettingName`
- An example file after this operation might look similar to the following, in this case only configured for AAD:
+ An example file after this operation might look similar to the following, in this case only configured for Azure AD:
```json {
The following steps will allow you to manually migrate the application to the V2
1. Delete the file used in the previous steps.
-You have now migrated the app to store identity provider secrets as application settings.
+You've now migrated the app to store identity provider secrets as application settings.
#### Support for Microsoft Account provider registrations
-If your existing configuration contains a Microsoft Account provider and does not contain an Azure Active Directory provider, you can switch the configuration over to the Azure Active Directory provider and then perform the migration. To do this:
+If your existing configuration contains a Microsoft Account provider and does not contain an Azure AD provider, you can switch the configuration over to the Azure AD provider and then perform the migration. To do this:
1. Go to [**App registrations**](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) in the Azure portal and find the registration associated with your Microsoft Account provider. It may be under the "Applications from personal account" heading. 1. Navigate to the "Authentication" page for the registration. Under "Redirect URIs" you should see an entry ending in `/.auth/login/microsoftaccount/callback`. Copy this URI. 1. Add a new URI that matches the one you just copied, except instead have it end in `/.auth/login/aad/callback`. This will allow the registration to be used by the App Service Authentication / Authorization configuration. 1. Navigate to the App Service Authentication / Authorization configuration for your app. 1. Collect the configuration for the Microsoft Account provider.
-1. Configure the Azure Active Directory provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
-1. Once you have saved the configuration, test the login flow by navigating in your browser to the `/.auth/login/aad` endpoint on your site and complete the sign-in flow.
-1. At this point, you have successfully copied the configuration over, but the existing Microsoft Account provider configuration remains. Before you remove it, make sure that all parts of your app reference the Azure Active Directory provider through login links, etc. Verify that all parts of your app work as expected.
-1. Once you have validated that things work against the AAD Azure Active Directory provider, you may remove the Microsoft Account provider configuration.
+1. Configure the Azure AD provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
+1. Once you've saved the configuration, test the login flow by navigating in your browser to the `/.auth/login/aad` endpoint on your site and complete the sign-in flow.
+1. At this point, you've successfully copied the configuration over, but the existing Microsoft Account provider configuration remains. Before you remove it, make sure that all parts of your app reference the Azure AD provider through login links, etc. Verify that all parts of your app work as expected.
+1. Once you've validated that things work against the Azure AD provider, you may remove the Microsoft Account provider configuration.
> [!WARNING]
-> It is possible to converge the two registrations by modifying the [supported account types](../active-directory/develop/supported-accounts-validation.md) for the AAD app registration. However, this would force a new consent prompt for Microsoft Account users, and those users' identity claims may be different in structure, `sub` notably changing values since a new App ID is being used. This approach is not recommended unless thoroughly understood. You should instead wait for support for the two registrations in the V2 API surface.
+> It is possible to converge the two registrations by modifying the [supported account types](../active-directory/develop/supported-accounts-validation.md) for the Azure AD app registration. However, this would force a new consent prompt for Microsoft Account users, and those users' identity claims may be different in structure, `sub` notably changing values since a new App ID is being used. This approach is not recommended unless thoroughly understood. You should instead wait for support for the two registrations in the V2 API surface.
#### Switching to V2
Alternatively, you may make a PUT request against the `config/authsettingsv2` re
## Pin your app to a specific authentication runtime version
-When you enable Authentication / Authorization, platform middleware is injected into your HTTP request pipeline as described in the [feature overview](overview-authentication-authorization.md#how-it-works). This platform middleware is periodically updated with new features and improvements as part of routine platform updates. By default, your web or function app will run on the latest version of this platform middleware. These automatic updates are always backwards compatible. However, in the rare event that this automatic update introduces a runtime issue for your web or function app, you can temporarily roll back to the previous middleware version. This article explains how to temporarily pin an app to a specific version of the authentication middleware.
+When you enable authentication/authorization, platform middleware is injected into your HTTP request pipeline as described in the [feature overview](overview-authentication-authorization.md#how-it-works). This platform middleware is periodically updated with new features and improvements as part of routine platform updates. By default, your web or function app will run on the latest version of this platform middleware. These automatic updates are always backwards compatible. However, in the rare event that this automatic update introduces a runtime issue for your web or function app, you can temporarily roll back to the previous middleware version. This article explains how to temporarily pin an app to a specific version of the authentication middleware.
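A pin is ultimately expressed through the `runtimeVersion` property shown later in this article; the following is a sketch of applying one with the CLI, where the version string is a placeholder rather than a recommendation:

```azurecli-interactive
# Pin the authentication middleware to a specific version (placeholder value).
az webapp auth update --name <my_app_name> --resource-group <my_resource_group> --runtime-version <pinned_version>
```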
### Automatic and manual version updates
az webapp auth show --name <my_app_name> \
In this code, replace `<my_app_name>` with the name of your app. Also replace `<my_resource_group>` with the name of the resource group for your app.
-You will see the `runtimeVersion` field in the CLI output. It will resemble the following example output, which has been truncated for clarity:
+You'll see the `runtimeVersion` field in the CLI output. It will resemble the following example output, which has been truncated for clarity:
```output { "additionalLoginParams": null,
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 02/15/2023 Last updated : 02/16/2023
App Service can now automate migration of your App Service Environment v1 and v2
## Supported scenarios
-At this time, App Service Environment migrations to v3 using the migration feature are supported in the following regions:
+At this time, the migration feature doesn't support migrations to App Service Environment v3 in the following regions:
### Azure Public: -- Australia East-- Australia Central-- Australia Central 2-- Australia Southeast-- Brazil South-- Canada Central-- Canada East-- Central India-- Central US-- East Asia-- East US-- East US 2-- France Central-- France South-- Germany North-- Germany West Central-- Japan East-- Korea Central-- Korea South-- North Central US-- North Europe-- Norway East-- Norway West-- South Africa North-- South Africa West-- South Central US-- South India-- Southeast Asia-- Switzerland North-- Switzerland West-- UAE North-- UK South-- UK West-- West Central US-- West Europe-- West India-- West US-- West US 2-- West US 3
+- Japan West
+- Jio India West
+- UAE Central
### Azure Government: -- US Gov Virginia
+- US DoD Central
+- US Gov Arizona
+
+### Azure China:
+
+- China East 2
+- China North 2
The following App Service Environment configurations can be migrated using the migration feature. The table gives the App Service Environment v3 configuration you'll end up with when using the migration feature based on your existing App Service Environment. All supported App Service Environments can be migrated to a [zone redundant App Service Environment v3](../../availability-zones/migrate-app-service-environment.md) using the migration feature as long as the environment is [in a region that supports zone redundancy](./overview.md#regions). You can [configure zone redundancy](#choose-your-app-service-environment-v3-configurations) during the migration process.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
App Service Environment v3 is available in the following regions:
| Australia East | ✅ | ✅ | ✅ | | Australia Southeast | ✅ | | ✅ | | Brazil South | ✅ | ✅ | ✅ |
-| Brazil Southeast | | | ✅ |
+| Brazil Southeast | ✅ | | ✅ |
| Canada Central | ✅ | ✅ | ✅ | | Canada East | ✅ | | ✅ | | Central India | ✅ | ✅ | ✅ |
app-service Tutorial Multi Region App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-region-app.md
az afd origin create --resource-group myresourcegroup --host-name <web-app-east-
|http-port |80 |The port used for HTTP requests to the origin. | |https-port |443 |The port used for HTTPS requests to the origin. |
-Repeat this step to add your second origin. Pay attention to the `--priority` parameter. For this origin, it's set to "2". This priority setting tells Azure Front Door to direct all traffic to the primary origin unless the primary goes down. Be sure to replace both instances of the placeholder for `<web-app-west-us>` with the name of that web app.
+Repeat this step to add your second origin. Pay attention to the `--priority` parameter. For this origin, it's set to "2". This priority setting tells Azure Front Door to direct all traffic to the primary origin unless the primary goes down. If you set the priority for this origin to "1", Azure Front Door will treat both origins as active and direct traffic to both regions. Be sure to replace both instances of the placeholder for `<web-app-west-us>` with the name of that web app.
```azurecli-interactive az afd origin create --resource-group myresourcegroup --host-name <web-app-west-us>.azurewebsites.net --profile-name myfrontdoorprofile --origin-group-name myorigingroup --origin-name secondaryapp --origin-host-header <web-app-west-us>.azurewebsites.net --priority 2 --weight 1000 --enabled-state Enabled --http-port 80 --https-port 443
With Azure App service, the SCM/advanced tools site is used to manage your apps
> [How to deploy a highly available multi-region web app](https://azure.github.io/AppService/2022/12/02/multi-region-web-app.html) > [!div class="nextstepaction"]
-> [Highly available zone-redundant web application](/azure/architecture/reference-architectures/app-service-web-app/zone-redundant)
+> [Highly available zone-redundant web application](/azure/architecture/reference-architectures/app-service-web-app/zone-redundant)
application-gateway Tutorial Url Route Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md
az network application-gateway create \
--frontend-port 80 \ --http-settings-port 80 \ --http-settings-protocol Http \
- --public-ip-address myAGPublicIPAddress
+ --public-ip-address myAGPublicIPAddress \
+ --priority 100
``` It may take several minutes to create the application gateway. After the application gateway is created, you can see these new features:
az network application-gateway url-path-map rule create \
--resource-group myResourceGroupAG \ --path-map-name myPathMap \ --paths /video/* \
- --address-pool videoBackendPool
+ --address-pool videoBackendPool \
+ --http-settings appGatewayBackendHttpSettings
``` ### Add a routing rule
az network application-gateway rule create \
--http-listener backendListener \ --rule-type PathBasedRouting \ --url-path-map myPathMap \
- --address-pool appGatewayBackendPool
+ --address-pool appGatewayBackendPool \
+ --priority 200
``` ## Create virtual machine scale sets
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
You can use the following PowerShell cmdlets to manage Hybrid Runbook Worker and
After creating a new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker.
+**Hybrid Worker extension settings**
+
+```powershell-interactive
+$settings = @{
+ "AutomationAccountURL" = "<registrationurl>";
+};
+```
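+
+As a minimal sketch, these settings could also be passed when installing the extension on an Azure VM with the Azure CLI; the extension publisher and name shown here (`Microsoft.Azure.Automation.HybridWorker`, `HybridWorkerForWindows`) are assumptions for illustration rather than values confirmed by this article:
+
+```azurecli
+# Minimal sketch: install the Hybrid Worker extension on an Azure VM.
+# The publisher and extension name below are assumptions for illustration.
+az vm extension set \
+  --resource-group <resource-group> \
+  --vm-name <vm-name> \
+  --publisher Microsoft.Azure.Automation.HybridWorker \
+  --name HybridWorkerForWindows \
+  --settings '{"AutomationAccountURL": "<registrationurl>"}'
+```
+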
**Azure VMs** ```powershell
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md
The following JSON document is an example of the SQL Server database - Azure Arc
| Last uploaded date from on-premises cluster | LastUploadedDate | System.DateTime | | Data controller state | ProvisioningState | string |
-#### Data controller
--- Location information
- - `public OnPremiseProperty OnPremiseProperty`
-- The raw Kubernetes information (`kubectl get datacontroller`)
- - `object: K8sRaw` [Details](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/crds)
-- Last uploaded date from on-premises cluster.
- - `System.DateTime: LastUploadedDate`
-- Data controller state-- `string: ProvisioningState`
-
- The following JSON document is an example of the Azure Arc Data Controller resource. ++ ```json { "id": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/dataControllers/contosodc",
The following JSON document is an example of the Azure Arc Data Controller resou
``` + ### PostgreSQL server - Azure Arc | Description | Property name | Property type|
The following JSON document is an example of the Azure Arc Data Controller resou
| Last uploaded date from on premises cluster | LastUploadedDate | System.DateTime | | Group provisioning state | ProvisioningState | string |
-#### Azure Arc-enabled PostgreSQL
--- The data controller ID
- - `string: DataControllerId`
-- The instance admin name
- - `string: Admin`
-- Username and password for basic authentication
- - `public: BasicLoginInformation BasicLoginInformation`
-- The raw Kubernetes information (`kubectl get postgres12`) -- `object: K8sRaw` [Details](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/crds)-- Last uploaded date from on premises cluster.
- - `System.DateTime: LastUploadedDate`
-- Group provisioning state
- - `string: ProvisioningState`
- ### SQL managed instance - Azure Arc | Description | Property name | Property type|
The following JSON document is an example of the Azure Arc Data Controller resou
| Last uploaded date from on-premises cluster | LastUploadedDate | System.DateTime | | SQL managed instance provisioning state | ProvisioningState | string |
-The following JSON document is an example of the SQL managed instance - Azure Arc resource.
-
-#### SQL managed instance
--- The managed instance ID
- - `public string: DataControllerId`
-- The instance admin username
- - `string: Admin`
-- The instance start time
- - `string: StartTime`
-- The instance end time
- - `string: EndTime`
-- The raw kubernetes information (`kubectl get sqlmi`)
- - `object: K8sRaw` [Details](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/crds)
-- Username and password for basic authentication.
- - `public: BasicLoginInformation BasicLoginInformation`
-- Last uploaded date from on-premises cluster.
- - `public: System.DateTime LastUploadedDate`
-- SQL managed instance provisioning state-- `public string: ProvisioningState`
-
+ The following JSON document is an example of the SQL Managed Instance - Azure Arc resource. ++ ```json {
In support situations, you may be asked to provide database instance logs, Kuber
[Upload usage data to Azure Monitor](upload-usage-data.md) ++
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Title: 'Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster' description: This tutorial demonstrates applying configurations on an Azure Arc-enabled Kubernetes cluster. Previously updated : 05/24/2022 Last updated : 02/16/2023
In this tutorial, you will apply configurations using GitOps on an Azure Arc-ena
>[!TIP] > If the `k8s-configuration` extension is already installed, you can update it to the latest version using the following command - `az extension update --name k8s-configuration` -- If your Git repository is located outside the firewall and git protocol is being used with the configuration repository parameter, then TCP on port 9418 (`git://:9418`) needs to be enabled for egress access on firewall.- ## Create a configuration The [example repository](https://github.com/Azure/arc-k8s-demo) used in this article is structured around the persona of a cluster operator. The manifests in this repository provision a few namespaces, deploy workloads, and provide some team-specific configuration. Using this repository with GitOps creates the following resources on your cluster:
Use the Azure CLI extension for `k8s-configuration` to link a connected cluster
| Parameter | Format | | - | - |
-| `--repository-url` | http[s]://server/repo[.git] or git://server/repo[.git]
+| `--repository-url` | http[s]://server/repo[.git]
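+
+For example, linking a connected cluster to the sample repository over HTTPS might look like the following sketch. The cluster and configuration names are placeholders, and the exact parameter set is an assumption rather than this tutorial's verbatim command:
+
+```azurecli
+# Minimal sketch: create a GitOps configuration that points at an HTTPS repository URL.
+az k8s-configuration create \
+    --name cluster-config \
+    --cluster-name AzureArcTest1 \
+    --resource-group AzureArcTest \
+    --operator-instance-name cluster-config \
+    --operator-namespace cluster-config \
+    --repository-url https://github.com/Azure/arc-k8s-demo \
+    --scope cluster \
+    --cluster-type connectedClusters
+```
+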
### Use a private Git repository with SSH and Flux-created keys
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 01/06/2023 Last updated : 02/15/2023
Arc resource bridge delivers the following benefits:
* Designed to recover from software failures. * Supports deployment to any private cloud hosted on Hyper-V or VMware from the Azure portal or using the Azure Command-Line Interface (CLI). - ## Overview Azure Arc resource bridge (preview) hosts other components such as [custom locations](..\platform\conceptual-custom-locations.md), cluster extensions, and other Azure Arc agents in order to deliver the level of functionality with the private cloud infrastructures it supports. This complex system is composed of three layers:
You can connect an SCVMM management server to Azure by deploying Azure Arc resou
* Add, remove, and update network interfaces * Add, remove, and update disks and update VM size (CPU cores and memory)
-## Prerequisites
-
-[Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge on supported private cloud environments.
-
-If you are deploying on VMware, a x64 Python environment is required. The [pip](https://pypi.org/project/pip/) package installer for Python is also required.
-
-If you are deploying on Azure Stack HCI, the x32 Azure CLI installer can be used to install Azure CLI.
- ### Supported regions In order to use Arc resource bridge in a region, Arc resource bridge and the private cloud product must be supported in the region. For example, to use Arc resource bridge with Azure Stack HCI in East US, Arc resource bridge and Azure Stack HCI must be supported in East US. Please check with the private cloud product for their region availability - it is typically called out in their deployment instructions of Arc resource bridge. There are instances where Arc Resource Bridge may be available in a region where private cloud support is not yet available.
Arc resource bridge communicates outbound securely to Azure Arc over TCP port 44
You may need to allow specific URLs to [ensure outbound connectivity is not blocked](troubleshoot-resource-bridge.md#restricted-outbound-connectivity) by your firewall or proxy server.
+For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md).
+ ## Next steps * Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md). * Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
+
+ Title: Azure Arc resource bridge (preview) system requirements
+description: Learn about system requirements for Azure Arc resource bridge (preview).
+ Last updated : 02/15/2023++
+# Azure Arc resource bridge (preview) system requirements
+
+This article describes the system requirements for deploying Azure Arc resource bridge (preview).
+
+Arc resource bridge is used with other partner products, such as [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), [Arc-enabled VMware vSphere](../vmware-vsphere/index.yml), and [Arc-enabled System Center Virtual Machine Manager (SCVMM)](../system-center-virtual-machine-manager/index.yml). These products may have additional requirements.
+
+## Management tool requirements
+
+[Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge on supported private cloud environments.
+
+If you're deploying on VMware, an x64 Python environment is required. The [pip](https://pypi.org/project/pip/) package installer for Python is also required.
+
+If you're deploying on Azure Stack HCI, the x32 Azure CLI installer can be used to install Azure CLI.
+
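+As a minimal sketch (assuming the resource bridge commands are delivered through an Azure CLI extension named `arcappliance`), the extension can be installed or updated like this:
+
+```azurecli
+# Minimal sketch: add (or upgrade) the Azure CLI extension that provides
+# the `az arcappliance` commands used to deploy and manage Arc resource bridge.
+az extension add --upgrade --name arcappliance
+```
+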
+## Minimum resource requirements
+
+Arc resource bridge has the following minimum resource requirements:
+
+- 50 GB disk space
+- 4 vCPUs
+- 8 GB memory
+
+These minimum requirements enable most scenarios. However, a partner product may connect a larger number of resources to Arc resource bridge, which means the bridge needs more resources. Failure to provide sufficient resources may cause errors during deployment, such as disk copy errors. Review the partner product's documentation for specific resource requirements.
+
+> [!NOTE]
+> To [use Arc resource bridge with Azure Kubernetes Service (AKS) on Azure Stack HCI](#aks-and-arc-resource-bridge-on-azure-stack-hci), the AKS clusters must be deployed prior to deploying Arc resource bridge. If Arc resource bridge has already been deployed, AKS clusters can't be installed unless you delete Arc resource bridge first. Once your AKS clusters are deployed to Azure Stack HCI, you can deploy Arc resource bridge again.
+
+## Management machine requirements
+
+The machine used to run the commands that deploy and maintain Arc resource bridge is called the *management machine*. The management machine should be considered part of the Arc resource bridge ecosystem, because it has specific requirements and is necessary for managing the appliance VM.
+
+Because the management machine must meet these requirements to manage Arc resource bridge, once it's set up, it should remain the primary machine used to maintain Arc resource bridge.
+
+The management machine has the following requirements:
+
+- [Azure CLI x64](/cli/azure/install-azure-cli-windows?tabs=azure-cli) installed.
+- Open communication to Control Plane IP (`controlplaneendpoint` parameter in `createconfig` command).
+- Open communication to Appliance VM IP (`k8snodeippoolstart` parameter in `createconfig` command).
+- Open communication to the reserved Appliance VM IP for upgrade (`k8snodeippoolend` parameter in `createconfig` command).
+- Internal and external DNS resolution. The DNS server must resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses that are [required URLs](network-requirements.md#outbound-connectivity) for deployment.
+- If using a proxy, the proxy server configuration on the management machine must allow the machine to have internet access and to connect to [required URLs](network-requirements.md#outbound-connectivity) needed for deployment, such as the URL to download OS images.
+
+## Appliance VM requirements
+
+Arc resource bridge consists of an appliance VM that is deployed on-premises. The appliance VM has visibility into the on-premises infrastructure and can tag on-premises resources (guest management) for availability in Azure Resource Manager (ARM). The appliance VM is assigned an IP address from the `k8snodeippoolstart` parameter in the `createconfig` command.
+
+The appliance VM has the following requirements:
+
+- Open communication with the management machine, vCenter endpoint (for VMware), MOC cloud agent service endpoint (for Azure Stack HCI), or other control center for the on-premises environment.
+- The appliance VM needs to be able to resolve the management machine and vice versa.
+- Internet access.
+- Connectivity to [required URLs](network-requirements.md#outbound-connectivity) enabled in proxy and firewall.
+- Static IP assigned, used for the `k8snodeippoolstart` parameter in the configuration command. (If using DHCP, the address must be reserved.)
+- Ability to reach a DNS server that can resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses, container registry names, and other [required URLs](network-requirements.md#outbound-connectivity).
+- If using a proxy, the proxy server configuration is provided when running the `createconfig` command, which is used to create the configuration files of the appliance VM. The proxy should allow internet access on the appliance VM to connect to [required URLs](network-requirements.md#outbound-connectivity) needed for deployment, such as the URL to download OS images.
+
+## Reserved appliance VM IP requirements
+
+Arc resource bridge reserves an additional IP address to be used for the appliance VM upgrade. During upgrade, a new appliance VM is created with the reserved appliance VM IP. Once the new appliance VM is created, the old appliance VM is deleted, and its IP address becomes reserved for a future upgrade. The reserved appliance VM IP is assigned an IP address from the `k8snodeippoolend` parameter in the `az arcappliance createconfig` command.
+
+The reserved appliance VM IP has the following requirements:
+
+- Open communication with the management machine, vCenter endpoint (for VMware), MOC cloud agent service endpoint (for Azure Stack HCI), or other control center for the on-premises environment.
+- The appliance VM needs to be able to resolve the management machine and vice versa.
+- Internet access.
+- Connectivity to [required URLs](network-requirements.md#outbound-connectivity) enabled in proxy and firewall.
+- Static IP assigned, used for the `k8snodeippoolend` parameter in the configuration command. (If using DHCP, the address must be reserved.)
+- Ability to reach a DNS server that can resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses, container registry names, and other [required URLs](network-requirements.md#outbound-connectivity).
+
+## Control plane IP requirements
+
+The appliance VM hosts a management Kubernetes cluster with a control plane that should be given a static IP. This IP is assigned from the `controlplaneendpoint` parameter in the `createconfig` command.
+
+The control plane IP has the following requirements:
+
+- Open communication with the management machine.
+- The control plane needs to be able to resolve the management machine and vice versa.
+- Static IP address assigned; the IP should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network. If you're using Azure Kubernetes Service on Azure Stack HCI (AKS hybrid) and installing resource bridge, then the control plane IP for the resource bridge can't be used by the AKS hybrid cluster. For specific instructions on deploying Arc resource bridge with AKS on Azure Stack HCI, see [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
+
+## User account and credentials
+
+Arc resource bridge may require a separate user account with the necessary roles to view and manage resources in the on-premises infrastructure (such as Arc-enabled VMware vSphere or Arc-enabled SCVMM). If so, during creation of the configuration files, the `username` and `password` parameters will be required. The account credentials are then stored in a configuration file locally within the appliance VM.
+
+If the user account is set to periodically change passwords, the credentials must be immediately updated on the resource bridge. This user account may also be set with a lockout policy to protect the on-premises infrastructure, in case the credentials aren't updated and the resource bridge makes multiple attempts to use expired credentials to access the on-premises control center.
+
+For example, with Arc-enabled VMware, Arc resource bridge needs a separate user account for vCenter with the necessary roles. If the [credentials for the user account change](troubleshoot-resource-bridge.md#insufficient-permissions), then the credentials stored in Arc resource bridge must be immediately updated by running `az arcappliance update-infracredentials` from the [management machine](#management-machine-requirements). Otherwise, the appliance will make repeated attempts to use the expired credentials to access vCenter, which will result in a lockout of the account.
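+
+As a minimal sketch (assuming a VMware deployment and the kubeconfig file generated at deployment in the current directory), the stored credentials might be refreshed like this:
+
+```azurecli
+# Minimal sketch: refresh the on-premises credentials stored in the appliance VM.
+# Run from the management machine; assumes the kubeconfig is in the current directory.
+az arcappliance update-infracredentials vmware --kubeconfig ./kubeconfig
+```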
+
+## Configuration files
+
+Arc resource bridge consists of an appliance VM that is deployed in the on-premises infrastructure. To maintain the appliance VM, the configuration files generated during deployment must be saved in a secure location and made available on the management machine.
+
+There are several different types of configuration files, based on the on-premises infrastructure.
+
+### Appliance configuration files
+
+Three configuration files are created when the `createconfig` command completes (or the equivalent commands used by Azure Stack HCI and AKS hybrid): resource.yaml, appliance.yaml and infra.yaml.
+
+By default, these files are generated in the current CLI directory when `createconfig` completes. These files should be saved in a secure location on the management machine, because they're required for maintaining the appliance VM. Because the configuration files reference each other, all three files must be stored in the same location. If the files are moved from their original location at deployment, open the files to check that the reference paths to the configuration files are accurate.
+
+### Kubeconfig
+
+The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-privilege Kubernetes configuration file that is used to maintain the appliance VM. By default, it's generated in the current CLI directory when the `deploy` command completes. The kubeconfig should be saved in a secure location on the management machine, because it's required for maintaining the appliance VM.
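+
+For example, a quick way to confirm the kubeconfig is usable from the management machine (assuming it was generated in the current directory; the exact permissions it grants aren't specified here) is a simple `kubectl` call against the management cluster:
+
+```bash
+# Minimal sketch: verify the low-privilege kubeconfig can reach the
+# appliance VM's management Kubernetes cluster.
+kubectl version --kubeconfig ./kubeconfig
+```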
+
+### HCI login configuration file (Azure Stack HCI only)
+
+Arc resource bridge uses a MOC login credential called [KVA token](/azure-stack/hci/manage/deploy-arc-resource-bridge-using-command-line#set-up-arc-vm-management) (kvatoken.tok) to interact with Azure Stack HCI. The KVA token is generated with the appliance configuration files when deploying Arc resource bridge. This token is also used when collecting logs for Arc resource bridge, so it should be saved in a secure location with the rest of the appliance configuration files. This file is saved in the directory provided during configuration file creation or the default CLI directory.
+
+## AKS and Arc Resource Bridge on Azure Stack HCI
+
+To use AKS and Arc resource bridge together on Azure Stack HCI, the AKS cluster must be deployed prior to deploying Arc resource bridge. If Arc resource bridge has already been deployed, AKS can't be deployed unless you delete Arc resource bridge first. Once your AKS cluster is deployed to Azure Stack HCI, you can deploy Arc resource bridge.
+
+The following example shows a network configuration setup for Arc resource bridge and AKS clusters when deployed on Azure Stack HCI. Key details are that Arc resource bridge and AKS share the same switch and `ipaddressprefix`, but require different IP addresses for `vippoolstart/end` and `k8snodeippoolstart/end`.
+
+### AKS hybrid
+
+```
+azurestackhciprovider:
+  virtualnetwork:
+    name: "mgmtvnet"
+    vswitchname: "Default Switch"
+    type: "Transparent"
+    macpoolname:
+    vlanid: 0
+    ipaddressprefix: 172.16.0.0/16
+    gateway: 172.16.1.1
+    dnsservers: 172.16.1.1
+    vippoolstart: 172.16.255.0
+    vippoolend: 172.16.255.254
+    k8snodeippoolstart: 172.16.10.0
+    k8snodeippoolend: 172.16.10.254
+```
+
+### Arc resource bridge
+
+```
+azurestackhciprovider:
+  virtualnetwork:
+    name: "mgmtvnet"
+    vswitchname: "Default Switch"
+    type: "Transparent"
+    macpoolname:
+    vlanid: 0
+    ipaddressprefix: 172.16.0.0/16
+    gateway: 172.16.1.1
+    dnsservers: 172.16.0.1
+    vippoolstart: 172.16.250.0
+    vippoolend: 172.16.250.254
+    k8snodeippoolstart: 172.16.30.0
+    k8snodeippoolend: 172.16.30.254
+```
+
+For instructions for how to deploy Arc resource bridge on Hybrid AKS, see [How to install Azure Arc Resource Bridge on Windows Server - AKS hybrid](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
+
+## Next steps
+
+- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details.
+- Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).
azure-arc Concept Log Analytics Extension Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/concept-log-analytics-extension-deployment.md
Title: Deploy Azure Monitor agent on Arc-enabled servers description: This article reviews the different methods to deploy the Azure Monitor agent on Windows and Linux-based machines registered with Azure Arc-enabled servers in your local datacenter or other cloud environment. Previously updated : 09/14/2022 Last updated : 02/17/2023
-# Understand deployment options for the Azure Monitor agent on Azure Arc-enabled servers
+# Deployment options for Azure Monitor agent on Azure Arc-enabled servers
Azure Monitor supports multiple methods to install the Azure Monitor agent and connect your machine or server registered with Azure Arc-enabled servers to the service. Azure Arc-enabled servers support the Azure VM extension framework, which provides post-deployment configuration and automation tasks, enabling you to simplify management of your hybrid machines like you can with Azure VMs. The Azure Monitor agent is required if you want to:
-* Monitor the operating system and any workloads running on the machine or server using [VM insights](../../azure-monitor/vm/vminsights-overview.md).
-* Analyze and alert using [Azure Monitor](../../azure-monitor/overview.md).
-* Perform security monitoring in Azure by using [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
-* Collect inventory and track changes by using [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md).
+* Monitor the operating system and any workloads running on the machine or server using [VM insights](../../azure-monitor/vm/vminsights-overview.md)
+* Analyze and alert using [Azure Monitor](../../azure-monitor/overview.md)
+* Perform security monitoring in Azure by using [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md)
+* Collect inventory and track changes by using [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md)
This article reviews the deployment methods for the Azure Monitor agent VM extension, across multiple production physical servers or virtual machines in your environment, to help you determine which works best for your organization. If you are interested in the new Azure Monitor agent and want to see a detailed comparison, see [Azure Monitor agents overview](../../azure-monitor/agents/agents-overview.md).
This method supports managing the installation, management, and removal of VM ex
#### Advantages
-* Can be useful for testing purposes.
-* Useful if you have a few machines to manage.
+* Can be useful for testing purposes
+* Useful if you have a few machines to manage
#### Disadvantages
-* Limited automation when using an Azure Resource Manager template.
-* Can only focus on a single Arc-enabled server, and not multiple instances.
-* Only supports specifying a single workspace to report to. Requires using PowerShell or the Azure CLI to configure the Log Analytics Windows agent VM extension to report to up to four workspaces.
-* Doesn't support deploying the Dependency agent from the portal. You can only use PowerShell, the Azure CLI, or ARM template.
+* Limited automation when using an Azure Resource Manager template
+* Can only focus on a single Arc-enabled server, and not multiple instances
+* Only supports specifying a single workspace to report to; requires using PowerShell or the Azure CLI to configure the Log Analytics Windows agent VM extension to report to up to four workspaces
+* Doesn't support deploying the Dependency agent from the portal; you can only use PowerShell, the Azure CLI, or ARM template
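+
+For the single-server method described in this section, a minimal Azure CLI sketch looks like the following; it assumes the `connectedmachine` CLI extension and the Windows agent (adjust the extension name for Linux), and the angle-bracket values are placeholders:
+
+```azurecli
+# Minimal sketch: install the Azure Monitor agent VM extension on one
+# Azure Arc-enabled server.
+az connectedmachine extension create \
+  --machine-name <arc-server-name> \
+  --resource-group <resource-group> \
+  --location <region> \
+  --name AzureMonitorWindowsAgent \
+  --publisher Microsoft.Azure.Monitor \
+  --type AzureMonitorWindowsAgent
+```
+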
### Use Azure Policy
Azure Policy includes several prebuilt definitions related to Azure Monitor. For
#### Advantages
-* If the VM extension is removed, after policy evaluation it reinstalls it.
-* Identifies and installs the VM extension when a new Azure Arc-enabled server is registered with Azure.
+* Reinstalls the VM extension if removed (after policy evaluation)
+* Identifies and installs the VM extension when a new Azure Arc-enabled server is registered with Azure
#### Disadvantages
The process automation operating environment in Azure Automation and its support
#### Advantages
-* Can use a scripted method to automate its deployment and configuration using scripting languages you're familiar with.
-* Runs on a schedule that you define and control.
-* Authenticate securely to Arc-enabled servers from the Automation account using a managed identity.
+* Can use a scripted method to automate its deployment and configuration using scripting languages you're familiar with
+* Runs on a schedule that you define and control
+* Authenticate securely to Arc-enabled servers from the Automation account using a managed identity
#### Disadvantages
-* Requires an Azure Automation account.
-* Experience authoring and managing runbooks in Azure Automation.
-* Must create a runbook based on PowerShell or Python, depending on the target operating system.
+* Requires an Azure Automation account
+* Requires experience authoring and managing runbooks in Azure Automation
+* Must create a runbook based on PowerShell or Python, depending on the target operating system
+
+### Use Azure portal
+
+The Azure Monitor agent VM extension can be installed using the Azure portal. See [Automatic extension upgrade for Azure Arc-enabled servers](manage-automatic-vm-extension-upgrade.md) for more information about installing extensions from the Azure portal.
+
+#### Advantages
+
+* Point and click directly from the Azure portal
+* Useful for testing with a small set of servers
+* Immediate deployment of the extension
+
+#### Disadvantages
+
+* Not scalable to many servers
+* Limited automation
## Next steps
azure-functions Functions Bindings Kafka Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md
An [isolated worker process class library](dotnet-isolated-process-guide.md) com
The attributes you use depend on the specific event provider.
-# [Confluent](#tab/confluent/in-process)
+# [Confluent (in-process)](#tab/confluent/in-process)
The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
In the following function, an instance of `UserRecord` is available in the `Kafk
For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet/).
-# [Event Hubs](#tab/event-hubs/in-process)
+# [Event Hubs (in-process)](#tab/event-hubs/in-process)
The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
In the following function, an instance of `UserRecord` is available in the `Kafk
For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet/).
-# [Confluent](#tab/confluent/isolated-process)
+# [Confluent (isolated process)](#tab/confluent/isolated-process)
The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
The following function logs the message and headers for the Kafka Event:
For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet-isolated/).
-# [Event Hubs](#tab/event-hubs/isolated-process)
+# [Event Hubs (isolated process)](#tab/event-hubs/isolated-process)
The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
For a complete set of supported host.json settings for the Kafka trigger, see [h
- [Write to an Apache Kafka stream from a function](./functions-bindings-kafka-output.md)
-[Avro schema]: http://avro.apache.org/docs/current/
+[Avro schema]: http://avro.apache.org/docs/current/
azure-functions Functions Compare Logic Apps Ms Flow Webjobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
You can mix and match services when you build an orchestration, such as calling
| **Development** | Code-first (imperative) | Designer-first (declarative) | | **Connectivity** | [About a dozen built-in binding types](functions-triggers-bindings.md#supported-bindings), write code for custom bindings | [Large collection of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors), [Enterprise Integration Pack for B2B scenarios](../logic-apps/logic-apps-enterprise-integration-overview.md), [build custom connectors](/connectors/custom-connectors/) | | **Actions** | Each activity is an Azure function; write code for activity functions |[Large collection of ready-made actions](/connectors/connector-reference/connector-reference-logicapps-connectors)|
-| **Monitoring** | [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [Azure Monitor logs](../logic-apps/monitor-logic-apps-log-analytics.md), [Microsoft Defender for Cloud](../logic-apps/healthy-unhealthy-resource.md) |
+| **Monitoring** | [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [Azure Monitor Logs](../logic-apps/monitor-workflows-collect-diagnostic-data.md), [Microsoft Defender for Cloud](../logic-apps/healthy-unhealthy-resource.md) |
| **Management** | [REST API](durable/durable-functions-http-api.md), [Visual Studio](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [REST API](/rest/api/logic/), [PowerShell](/powershell/module/az.logicapp), [Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md) | | **Execution context** | Can run [locally](./functions-kubernetes-keda.md) or in the cloud | Runs in Azure, locally, or on premises. For more information, see [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md#resource-environment-differences). |
azure-functions Functions Consumption Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-consumption-costs.md
When estimating the overall costs of your function app and related services, use
| | -- | | **Storage account** | Each function app requires that you have an associated General Purpose [Azure Storage account](../storage/common/storage-introduction.md#types-of-storage-accounts), which is [billed separately](https://azure.microsoft.com/pricing/details/storage/). This account is used internally by the Functions runtime, but you can also use it for Storage triggers and bindings. If you don't have a storage account, one is created for you when the function app is created. To learn more, see [Storage account requirements](storage-considerations.md#storage-account-requirements).| | **Application Insights** | Functions relies on [Application Insights](../azure-monitor/app/app-insights-overview.md) to provide a high-performance monitoring experience for your function apps. While not required, you should [enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration). A free grant of telemetry data is included every month. To learn more, see [the Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). |
-| **Network bandwidth** | You don't pay for data transfer between Azure services in the same region. However, you can incur costs for outbound data transfers to another region or outside of Azure. To learn more, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/). |
+| **Network bandwidth** | You can incur costs for data transfer depending on the direction and scenario of the data movement. To learn more, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/). |
## Behaviors affecting execution time
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
See the complete regional availability of Functions on the [Azure web site](http
|North Europe| 100 | 100 |
|Norway East| 100 | 20 |
|South Africa North| 100 | 20 |
+|South Africa West| 20 | 20 |
|South Central US| 100 | 100 |
|South India | 100 | Not Available |
|Southeast Asia| 100 | 20 |
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[ClearShark](https://clearshark.com/)| |[CloudFit Software, LLC](https://www.cloudfitsoftware.com/)| |[Cloud Navigator, Inc - formerly ISC](https://www.cloudnav.com)|
+|[Cloud Unity LLC](https://cloudunity.com)|
|[CNSS - Cherokee Nation System Solutions LLC](https://cherokee-federal.com/about/cherokee-nation-system-solutions)| |[Cobalt](https://www.cobalt.net/)| |[CodeLynx, LLC](http://www.codelynx.com/)|
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Title: Facility Ontology in Microsoft Azure Maps Creator description: Facility Ontology that describes the feature class definitions for Azure Maps Creator-- Previously updated : 11/08/2022++ Last updated : 02/17/2023
The `category` class feature defines category names. For example: "room.conferen
:::zone-end
+## Next steps
+
+Learn more about Creator for indoor maps by reading:
+
+> [!div class="nextstepaction"]
+> [Creator for indoor maps](creator-indoor-maps.md)
+ [conversion]: /rest/api/maps/v2/conversion [geojsonpoint]: /rest/api/maps/v2/wfs/get-features#geojsonpoint [GeoJsonPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonpolygon
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
Title: Use Azure Maps Drawing Error Visualizer description: In this article, you'll learn about how to visualize warnings and errors returned by the Creator Conversion API.-- Previously updated : 05/26/2021++ Last updated : 02/17/2023
Once the _ConversionWarningsAndErrors.json_ file loads, you'll see a list of you
## Next steps
-Once your [Drawing package meets the requirements](drawing-requirements.md), you can use the [Azure Maps Dataset service](/rest/api/maps/v2/conversion) to convert the Drawing package to a dataset. Then, you can use the Indoor Maps web module to develop your application. Learn more by reading the following articles:
-
-> [!div class="nextstepaction"]
-> [Drawing Conversion error codes](drawing-conversion-error-codes.md)
-
-> [!div class="nextstepaction"]
-> [Drawing Package Guide](drawing-package-guide.md)
+Learn more by reading:
> [!div class="nextstepaction"] > [Creator for indoor maps](creator-indoor-maps.md)-
-> [!div class="nextstepaction"]
-> [Use the Indoor Maps module](how-to-use-indoor-module.md)
-
-> [!div class="nextstepaction"]
-> [Implement indoor map dynamic styling](indoor-map-dynamic-styling.md)
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
Title: Drawing package requirements in Microsoft Azure Maps Creator description: Learn about the Drawing package requirements to convert your facility design files to map data-- Previously updated : 03/18/2022++ Last updated : 02/17/2023
Below is the manifest file for the sample Drawing package. Go to the [Sample Dra
## Next steps
-When your Drawing package meets the requirements, you can use the [Azure Maps Conversion service](/rest/api/maps/v2/conversion) to convert the package to a map dataset. Then, you can use the dataset to generate an indoor map by using the indoor maps module.
-
-> [!div class="nextstepaction"]
-> [Creator Facility Ontology](creator-facility-ontology.md)
-
-> [!div class="nextstepaction"]
-> [Creator for indoor maps](creator-indoor-maps.md)
-
-> [!div class="nextstepaction"]
-> [Drawing Package Guide](drawing-package-guide.md)
-
-> [!div class="nextstepaction"]
-> [Creator for indoor maps](creator-indoor-maps.md)
- > [!div class="nextstepaction"] > [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md)
+Learn more by reading:
+ > [!div class="nextstepaction"]
-> [Indoor maps dynamic styling](indoor-map-dynamic-styling.md)
+> [Creator for indoor maps](creator-indoor-maps.md)
azure-maps Schema Stateset Stylesobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/schema-stateset-stylesobject.md
Title: StylesObject Schema reference guide for Dynamic Azure Maps description: Reference guide to the dynamic Azure Maps StylesObject schema and syntax.-- Previously updated : 12/07/2020++ Last updated : 02/17/2023
The following JSON illustrates a `BooleanTypeStyleRule` *state* named `occupied`
} ] }
-```
+```
+
+## Next steps
+
+Learn more about Creator for indoor maps by reading:
+
+> [!div class="nextstepaction"]
+> [Creator for indoor maps](creator-indoor-maps.md)
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
A preview [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net)
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-You can also use the Microsoft.Extensions.Logging.ApplicationInsights package to capture logs. For more information, see [Application Insights logging with .NET](ilogger.md). For an example, see [Console application](ilogger.md#console-application).
+> [!NOTE]
+> If you want to use the standalone ILogger provider, use [Microsoft.Extensions.Logging.ApplicationInsights](./ilogger.md).
## Supported scenarios
It's important to note that the following example doesn't cause the Application
} ```
-For more information, see [ILogger configuration](ilogger.md#logging-level).
+For more information, see [ILogger configuration](/dotnet/core/extensions/logging#configure-logging).
### Some Visual Studio templates used the UseApplicationInsights() extension method on IWebHostBuilder to enable Application Insights. Is this usage still valid?
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
# Application Insights logging with .NET
-In this article, you'll learn how to capture logs with Application Insights in .NET apps by using several NuGet packages:
--- **Core package:**
- - [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai]
-- **Workload packages:**
- - [`Microsoft.ApplicationInsights.AspNetCore`][nuget-ai-anc]
- - [`Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel`][nuget-ai-ws-tc]
+In this article, you'll learn how to capture logs with Application Insights in .NET apps by using the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] provider package. If you use this provider, you can query and analyze your logs by using the Application Insights tools.
[nuget-ai]: https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights
-[nuget-ai-anc]: https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore
[nuget-ai-ws]: https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService
-[nuget-ai-ws-tc]: https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel
+
+> [!NOTE]
+> If you want to implement the full range of Application Insights telemetry along with logging, see [Configure Application Insights for your ASP.NET websites](./asp-net.md) or [Application Insights for ASP.NET Core applications](./asp-net-core.md).
> [!TIP] > The [`Microsoft.ApplicationInsights.WorkerService`][nuget-ai-ws] NuGet package, used to enable Application Insights for background services, is out of scope. For more information, see [Application Insights for Worker Service apps](./worker-service.md).
-Depending on the Application Insights logging package that you use, there will be various ways to register `ApplicationInsightsLoggerProvider`. `ApplicationInsightsLoggerProvider` is an implementation of <xref:Microsoft.Extensions.Logging.ILoggerProvider>, which is responsible for providing <xref:Microsoft.Extensions.Logging.ILogger> and <xref:Microsoft.Extensions.Logging.ILogger%601> implementations.
- ## ASP.NET Core applications
-To add Application Insights telemetry to ASP.NET Core applications, use the `Microsoft.ApplicationInsights.AspNetCore` NuGet package. You can configure this telemetry through [Visual Studio as a connected service](/visualstudio/azure/azure-app-insights-add-connected-service), or manually.
+To add Application Insights logging to ASP.NET Core applications, use the `Microsoft.Extensions.Logging.ApplicationInsights` NuGet provider package.
-By default, ASP.NET Core applications have an Application Insights logging provider registered when they're configured through the [code](./asp-net-core.md) or [codeless](./azure-web-apps-net-core.md#enable-auto-instrumentation-monitoring) approach. The registered provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. You can customize severity and categories. For more information, see [Logging level](#logging-level).
+1. Install the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] NuGet package.
-1. Ensure that the NuGet package is installed:
-
- ```xml
- <ItemGroup>
- <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.19.0" />
- </ItemGroup>
- ```
-
-1. Ensure that the `Startup.ConfigureServices` method calls `services.AddApplicationInsightsTelemetry`:
+1. Add `ApplicationInsightsLoggerProvider`:
```csharp
- using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
- using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Hosting;
- using Microsoft.Extensions.Configuration;
+ using Microsoft.Extensions.Logging;
+ using Microsoft.Extensions.Logging.ApplicationInsights;
namespace WebApplication {
- public class Startup
+ public class Program
{
- public Startup(IConfiguration configuration)
- {
- Configuration = configuration;
- }
-
- public IConfiguration Configuration { get; }
-
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddApplicationInsightsTelemetry();
- // Configure the Connection String in appsettings.json
- }
-
- public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
+ public static void Main(string[] args)
{
- // omitted for brevity
+ var host = CreateHostBuilder(args).Build();
+
+ var logger = host.Services.GetRequiredService<ILogger<Program>>();
+ logger.LogInformation("From Program, running the host now.");
+
+ host.Run();
}
+
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureWebHostDefaults(webBuilder =>
+ {
+ webBuilder.UseStartup<Startup>();
+ })
+ .ConfigureLogging((context, builder) =>
+ {
+ builder.AddApplicationInsights(
+ configureTelemetryConfiguration: (config) => config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"],
+ configureApplicationInsightsLoggerOptions: (options) => { }
+ );
+
+ // Capture all log-level entries from Startup
+ builder.AddFilter<ApplicationInsightsLoggerProvider>(
+ typeof(Startup).FullName, LogLevel.Trace);
+ });
} } ```
public class ValuesController : ControllerBase
For more information, see [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging).
-### Capture logs within ASP.NET Core startup code
-
-Some scenarios require capturing logs as part of the app startup routine, before the request-response pipeline is ready to accept requests. However, `ILogger` implementations aren't easily available from dependency injection in *Program.cs* and *Startup.cs*. For more information, see [Logging in .NET: Create logs in Main](/dotnet/core/extensions/logging?tabs=command-line#create-logs-in-main).
-
-There are several limitations when you're logging from *Program.cs* and *Startup.cs*:
-
-* Telemetry is sent through the [InMemoryChannel](./telemetry-channels.md) telemetry channel.
-* No [sampling](./sampling.md) is applied to telemetry.
-* Standard [telemetry initializers or processors](./api-filtering-sampling.md) aren't available.
-
-The following examples provide a demonstration by explicitly instantiating and configuring *Program.cs* and *Startup.cs*.
-
-#### Example Program.cs
-
-```csharp
-using Microsoft.AspNetCore.Hosting;
-using Microsoft.Extensions.DependencyInjection;
-using Microsoft.Extensions.Hosting;
-using Microsoft.Extensions.Logging;
-using Microsoft.Extensions.Logging.ApplicationInsights;
-
-namespace WebApplication
-{
- public class Program
- {
- public static void Main(string[] args)
- {
- var host = CreateHostBuilder(args).Build();
-
- var logger = host.Services.GetRequiredService<ILogger<Program>>();
- logger.LogInformation("From Program, running the host now.");
-
- host.Run();
- }
-
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- {
- webBuilder.UseStartup<Startup>();
- })
- .ConfigureLogging((context, builder) =>
- {
- // Providing a connection string is required if you're using the
- // standalone Microsoft.Extensions.Logging.ApplicationInsights package,
- // or when you need to capture logs during application startup, such as
- // in Program.cs or Startup.cs itself.
- builder.AddApplicationInsights(
- configureTelemetryConfiguration: (config) => config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"],
- configureApplicationInsightsLoggerOptions: (options) => { }
- );
-
- // Capture all log-level entries from Program
- builder.AddFilter<ApplicationInsightsLoggerProvider>(
- typeof(Program).FullName, LogLevel.Trace);
-
- // Capture all log-level entries from Startup
- builder.AddFilter<ApplicationInsightsLoggerProvider>(
- typeof(Startup).FullName, LogLevel.Trace);
- });
- }
-}
-```
--
-#### Example Startup.cs
-
-```csharp
-using Microsoft.AspNetCore.Builder;
-using Microsoft.AspNetCore.Hosting;
-using Microsoft.AspNetCore.Http;
-using Microsoft.Extensions.DependencyInjection;
-using Microsoft.Extensions.Hosting;
-using Microsoft.Extensions.Configuration;
-using Microsoft.Extensions.Logging;
-
-namespace WebApplication
-{
- public class Startup
- {
- public Startup(IConfiguration configuration)
- {
- Configuration = configuration;
- }
-
- public IConfiguration Configuration { get; }
-
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddApplicationInsightsTelemetry();
- // Configure the Connection String in appsettings.json
- }
-
- // The ILogger<Startup> is resolved by dependency injection
- // and available in Startup.Configure.
- public void Configure(
- IApplicationBuilder app, IWebHostEnvironment env, ILogger<Startup> logger)
- {
- logger.LogInformation(
- "Configuring for {Environment} environment",
- env.EnvironmentName);
-
- if (env.IsDevelopment())
- {
- app.UseDeveloperExceptionPage();
- }
-
- app.UseRouting();
- app.UseEndpoints(endpoints =>
- {
- endpoints.MapGet("/", async context =>
- {
- await context.Response.WriteAsync("Hello World!");
- });
- });
- }
- }
-}
-```
- ## Console application
-The following example uses the Microsoft.Extensions.Logging.ApplicationInsights package. The Microsoft.Extensions.Logging.ApplicationInsights package should be used in a console application or whenever you want a bare minimum implementation of Application Insights without the full feature set such as metrics, distributed tracing, sampling, and telemetry initializers.
+To add Application Insights logging to console applications, first install the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] NuGet provider package.
+
+The following example uses the Microsoft.Extensions.Logging.ApplicationInsights package and demonstrates the default behavior for a console application. Use this package in a console application, or whenever you want a bare-minimum implementation of Application Insights without the full feature set, such as metrics, distributed tracing, sampling, and telemetry initializers.
Here are the installed packages:
namespace ConsoleApp
```
-The previous example demonstrates the default behavior for a console application. As the following example shows, you can override this default behavior.
-
-Also install this package:
-
-```xml
-<PackageReference Include="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel" Version="2.17.0" />
-```
-
-The following section shows how to override the default `TelemetryConfiguration` setup by using the <xref:Microsoft.Extensions.Options.ConfigureOptions%601.Configure(%600)> method. This example sets up `ServerTelemetryChannel` and sampling. It adds a custom `TelemetryInitializer` instance to `TelemetryConfiguration`.
-
-```csharp
-using Microsoft.ApplicationInsights.Extensibility;
-using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
-using Microsoft.Extensions.DependencyInjection;
-using Microsoft.Extensions.Logging;
-using Microsoft.Extensions.Logging.ApplicationInsights;
-using System;
-using System.Threading.Tasks;
-
-namespace ConsoleApp
-{
- class Program
- {
- static async Task Main(string[] args)
- {
- using var channel = new ServerTelemetryChannel();
-
- try
- {
- IServiceCollection services = new ServiceCollection();
- services.Configure<TelemetryConfiguration>(
- config =>
- {
- config.TelemetryChannel = channel;
-
- // Optional: implement your own TelemetryInitializer instance and configure it here
- // config.TelemetryInitializers.Add(new MyTelemetryInitializer());
-
- config.DefaultTelemetrySink.TelemetryProcessorChainBuilder.UseSampling(5);
- channel.Initialize(config);
- });
-
- services.AddLogging(builder =>
- {
- // Only Application Insights is registered as a logger provider
- builder.AddApplicationInsights(
- configureTelemetryConfiguration: (config) => config.ConnectionString = "<YourConnectionString>",
- configureApplicationInsightsLoggerOptions: (options) => { }
- );
- });
-
- IServiceProvider serviceProvider = services.BuildServiceProvider();
- ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
-
- logger.LogInformation("Logger is working...");
- }
- finally
- {
- // Explicitly call Flush() followed by Delay, as required in console apps.
- // This ensures that even if the application terminates, telemetry is sent to the back end.
- channel.Flush();
-
- await Task.Delay(TimeSpan.FromMilliseconds(1000));
- }
- }
- }
-}
-```
-
-## Logging level
-
-`ILogger` implementations have a built-in mechanism to apply [log filtering](/dotnet/core/extensions/logging#how-filtering-rules-are-applied). This filtering lets you control the logs that are sent to each registered provider, including the Application Insights provider. You can use the filtering either in configuration (for example, by using an *appsettings.json* file) or in code.
-
-The following examples show how to apply filter rules to `ApplicationInsightsLoggerProvider`.
-
-### Create filter rules in configuration with appsettings.json
-
-`ApplicationInsightsLoggerProvider` is aliased as "ApplicationInsights". The following section of *appsettings.json* overrides the default <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> log level of Application Insights to log categories that start with "Microsoft" at level <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher.
-
-```json
-{
- "Logging": {
- "LogLevel": {
- "Default": "Warning"
- },
- "ApplicationInsights": {
- "LogLevel": {
- "Microsoft": "Error"
- }
- }
- }
-}
-```
-
-### Create filter rules in code
-
-The following code snippet configures logs to be sent to `ApplicationInsightsLoggerProvider` for these items:
--- <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> and higher from all categories-- <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher from categories that start with "Microsoft"-
-```csharp
-Host.CreateDefaultBuilder(args)
- .UseStartup<Startup>()
- .ConfigureLogging(builder =>
- {
- builder.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Warning);
- builder.AddFilter<ApplicationInsightsLoggerProvider>("Microsoft", LogLevel.Error);
- });
-```
--
-## Logging scopes
-
-`ApplicationInsightsLoggingProvider` supports [log scopes](/dotnet/core/extensions/logging#log-scopes). Scopes are enabled by default.
-
-If the scope is of type `IReadOnlyCollection<KeyValuePair<string,object>>`, then each key/value pair in the collection is added to the Application Insights telemetry as custom properties. In the following example, logs will be captured as `TraceTelemetry` and will have `("MyKey", "MyValue")` in properties.
-
-```csharp
-using (_logger.BeginScope(new Dictionary<string, object> { ["MyKey"] = "MyValue" }))
-{
- _logger.LogError("An example of an Error level message");
-}
-```
-
-If any other type is used as a scope, it will be stored under the property `Scope` in Application Insights telemetry. In the following example, `TraceTelemetry` will have a property called `Scope` that contains the scope.
-
-```csharp
- using (_logger.BeginScope("hello scope"))
- {
- _logger.LogError("An example of an Error level message");
- }
-```
- ## Frequently asked questions
-### What are the old and new versions of ApplicationInsightsLoggerProvider?
-
-The [Microsoft.ApplicationInsights.AspNet SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) included a built-in `ApplicationInsightsLoggerProvider` (`Microsoft.ApplicationInsights.AspNetCore.Logging.ApplicationInsightsLoggerProvider`) instance, which was enabled through `ILoggerFactory` extension methods. This provider is marked obsolete from version 2.7.1. It's slated to be removed completely in the next major version change.
-
-The [Microsoft.ApplicationInsights.AspNetCore 2.6.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) package itself isn't obsolete. It's required to enable monitoring of items like requests and dependencies.
-
-The suggested alternative is the new standalone package [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights), which contains an improved `ApplicationInsightsLoggerProvider` instance (`Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider`) and extension methods on `ILoggingBuilder` for enabling it.
-
-[Microsoft.ApplicationInsights.AspNet SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) version 2.7.1 takes a dependency on the new package and enables `ILogger` capture automatically.
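If you reference the standalone package directly (for example, from a console or background service), the following is only a minimal sketch of enabling the new provider. It assumes a package version that supports the connection string overload, and the connection string shown is a placeholder.

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

using var loggerFactory = LoggerFactory.Create(loggingBuilder =>
{
    // Registers the standalone ApplicationInsightsLoggerProvider.
    loggingBuilder.AddApplicationInsights(
        config => config.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000",
        options => { });

    // Optional: capture Information and higher for this provider only.
    loggingBuilder.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Information);
});

ILogger logger = loggerFactory.CreateLogger("Example");
logger.LogInformation("Captured by the standalone Application Insights provider");
```

For short-lived processes, remember to flush the telemetry channel before exit, as shown earlier in this article.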
-
-### Why are some ILogger logs shown twice in Application Insights?
-
-Duplication can occur if you have the older (now obsolete) version of `ApplicationInsightsLoggerProvider` enabled by calling `AddApplicationInsights` on `ILoggerFactory`. Check if your `Configure` method has the following code, and remove it:
-
-```csharp
- public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
- {
- loggerFactory.AddApplicationInsights(app.ApplicationServices, LogLevel.Warning);
- // ..other code.
- }
-```
-
-If you experience double logging when you debug from Visual Studio, set `EnableDebugLogger` to `false` in the code that enables Application Insights, as follows. This duplication and fix are relevant only when you're debugging the application.
-
-```csharp
-public void ConfigureServices(IServiceCollection services)
-{
- var options = new ApplicationInsightsServiceOptions
- {
- EnableDebugLogger = false
-    };
- services.AddApplicationInsightsTelemetry(options);
- // ...
-}
-```
-
-### I updated to Microsoft.ApplicationInsights.AspNet SDK version 2.7.1, and logs from ILogger are captured automatically. How do I turn off this feature completely?
-
-See the [Logging level](#logging-level) section to see how to filter logs in general. To turn off `ApplicationInsightsLoggerProvider`, use `LogLevel.None` in your call for configuring logging. In the following command, `builder` is <xref:Microsoft.Extensions.Logging.ILoggingBuilder>.
-
-```csharp
-builder.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.None);
-```
-
-Here's the change in the *appsettings.json* file:
-
-```json
-{
- "Logging": {
- "ApplicationInsights": {
- "LogLevel": {
- "Default": "None"
- }
- }
- }
-}
-```
### Why do some ILogger logs not have the same properties as others?

Application Insights captures and sends `ILogger` logs by using the same `TelemetryConfiguration` information that's used for every other telemetry. But there's an exception. By default, `TelemetryConfiguration` isn't fully set up when you log from *Program.cs* or *Startup.cs*. Logs from these places won't have the default configuration, so they won't be running all `TelemetryInitializer` instances and `TelemetryProcessor` instances.
builder.AddApplicationInsights(
The Application Insights extension in Azure Web Apps uses the new provider. You can modify the filtering rules in the *appsettings.json* file for your application.
-### I can't see some of the logs from my application in the workspace.
-
-Missing data can occur due to adaptive sampling. Adaptive sampling is enabled by default in all the latest versions of the Application Insights ASP.NET and ASP.NET Core Software Development Kits (SDKs). For more information, see [Sampling in Application Insights](./sampling.md).
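If adaptive sampling turns out to be the cause and you need every `ILogger` item while you investigate, one option is to disable it when you register the SDK. The following is only a sketch that assumes the ASP.NET Core SDK's `ApplicationInsightsServiceOptions`; disabling sampling increases ingestion volume and cost.

```csharp
public void ConfigureServices(IServiceCollection services)
{
    var aiOptions = new Microsoft.ApplicationInsights.AspNetCore.Extensions.ApplicationInsightsServiceOptions
    {
        // Send every telemetry item instead of letting adaptive sampling drop some.
        EnableAdaptiveSampling = false
    };
    services.AddApplicationInsightsTelemetry(aiOptions);
}
```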
- ## Next steps * [Logging in .NET](/dotnet/core/extensions/logging)
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
OpenCensus.stats supports four aggregation methods but provides partial support
# TODO: replace the all-zero GUID with your instrumentation key. exporter = metrics_exporter.new_metrics_exporter(
- connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000')
+ connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000',
+ export_interval=60, # Application Insights backend assumes aggregation on a 60s interval
+ )
# You can also instantiate the exporter directly if you have the environment variable # `APPLICATIONINSIGHTS_CONNECTION_STRING` configured # exporter = metrics_exporter.new_metrics_exporter()
OpenCensus.stats supports four aggregation methods but provides partial support
main() ```
-1. The exporter sends metric data to Azure Monitor at a fixed interval. The default is every 15 seconds. To modify the export interval, pass in `export_interval` as a parameter in seconds to `new_metrics_exporter()`. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The value is cumulative, can only increase, and resets to 0 on restart.
+1. The exporter sends metric data to Azure Monitor at a fixed interval. You must set this value to 60 seconds because the Application Insights back end assumes that metric points are aggregated over a 60-second interval. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The data is cumulative, can only increase, and resets to 0 on restart.
You can find the data under `customMetrics`, but the `customMetrics` properties `valueCount`, `valueSum`, `valueMin`, `valueMax`, and `valueStdDev` aren't effectively used.
Each exporter accepts the same arguments for configuration, passed through the c
`connection_string`| The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.|
`credential`| Credential class used by Azure Active Directory authentication. See the "Authentication" section that follows.|
`enable_standard_metrics`| Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.|
-`export_interval`| Used to specify the frequency in seconds of exporting. Defaults to `15s`.|
+`export_interval`| Used to specify the frequency in seconds of exporting. Defaults to `15s`. For metrics, you must set this to 60 seconds; otherwise, your metric aggregations won't make sense in metrics explorer.|
`grace_period`| Used to specify the timeout for shutdown of exporters in seconds. Defaults to `5s`.|
`instrumentation_key`| The instrumentation key used to connect to your Azure Monitor resource.|
`logging_sampling_rate`| Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to `1.0`.|
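As a minimal sketch of how several of these arguments are typically combined (the connection string is a placeholder; pick only the options your scenario needs):

```python
from opencensus.ext.azure import metrics_exporter

exporter = metrics_exporter.new_metrics_exporter(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000',
    export_interval=60,            # required for metrics; matches the 60s backend aggregation
    enable_standard_metrics=True,  # also send performance counter metrics
)
```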
azure-monitor Status Monitor V2 Detailed Instructions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-detailed-instructions.md
See the [API reference](./status-monitor-v2-api-reference.md#enable-applicationi
Add more telemetry: -- [Create web tests](monitor-web-app-availability.md) to make sure your site stays live.
+- [Availability overview](availability-overview.md)
- [Add web client telemetry](./javascript.md) to see exceptions from web page code and to enable trace calls. - [Add the Application Insights SDK to your code](./asp-net.md) so you can insert trace and log calls.
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-get-started.md
Enable-ApplicationInsightsMonitoring -ConnectionString 'InstrumentationKey=00000
Add more telemetry: -- [Create web tests](monitor-web-app-availability.md) to make sure your site stays live.
+- [Availability overview](availability-overview.md)
- [Add web client telemetry](./javascript.md) to see exceptions from webpage code and to enable trace calls. - [Add the Application Insights SDK to your code](./asp-net.md) so that you can insert trace and log calls.
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
View your telemetry:
Add more telemetry:
-* [Create web tests](monitor-web-app-availability.md) to make sure your site stays live.
+- [Availability overview](availability-overview.md)
* [Add web client telemetry](./javascript.md) to see exceptions from webpage code and to enable trace calls. * [Add the Application Insights SDK to your code](./asp-net.md) so that you can insert trace and log calls.
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
For the latest updates and bug fixes, see the [release notes](./release-notes.md
* [Explore user flows](./usage-flows.md) to understand how users navigate through your app. * [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown. * [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage.
-* Use [availability tests](./monitor-web-app-availability.md) to check your app constantly from around the world.
+* [Availability overview](availability-overview.md)
* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection) * [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging) * [.NET trace logs in Application Insights](./asp-net-trace-logs.md)
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
The following sections list the full telemetry automatically collected by Applic
### ILogger logs
-Logs emitted via `ILogger` with the severity Warning or greater are automatically captured. Follow [ILogger docs](ilogger.md#logging-level) to customize which log levels are captured by Application Insights.
+Logs emitted via `ILogger` with severity Warning or greater are automatically captured. To change this behavior, explicitly override the logging configuration for the `ApplicationInsights` provider, as shown in the following example. This configuration allows Application Insights to capture all `Information` and more severe logs.
+
+```json
+{
+ "Logging": {
+ "LogLevel": {
+ "Default": "Warning"
+ },
+ "ApplicationInsights": {
+ "LogLevel": {
+ "Default": "Information"
+ }
+ }
+ }
+}
+```
+
+Note that the following example doesn't cause the Application Insights provider to capture `Information` logs, because the SDK adds a default logging filter that instructs `ApplicationInsights` to capture only `Warning` and more severe logs. Application Insights requires an explicit override.
+
+```json
+{
+ "Logging": {
+ "LogLevel": {
+ "Default": "Information"
+ }
+ }
+}
+```
+
+For more information, follow [ILogger docs](/dotnet/core/extensions/logging#configure-logging) to customize which log levels are captured by Application Insights.
### Dependencies
azure-monitor Create Pipeline Datacollector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-pipeline-datacollector-api.md
We are using a classic ETL-type logic to design our pipeline. The architecture w
![Data collection pipeline architecture](./media/create-pipeline-datacollector-api/data-pipeline-dataflow-architecture.png)
-This article will not cover how to create data or [upload it to an Azure Blob Storage account](../../storage/blobs/storage-upload-process-images.md). Rather, we pick the flow up as soon as a new file is uploaded to the blob. From here:
+This article will not cover how to create data or [upload it to an Azure Blob Storage account](../../storage/blobs/blob-upload-function-trigger.md). Rather, we pick the flow up as soon as a new file is uploaded to the blob. From here:
-1. A process will detect that new data has been uploaded. Our example uses an [Azure Logic App](../../logic-apps/logic-apps-overview.md), which has available a trigger to detect new data being uploaded to a blob.
+1. A process will detect that new data has been uploaded. Our example uses a [logic app workflow](../../logic-apps/logic-apps-overview.md), which provides a trigger to detect new data being uploaded to a blob.
-2. A processor reads this new data and converts it to JSON, the format required by Azure Monitor In this example, we use an [Azure Function](../../azure-functions/functions-overview.md) as a lightweight, cost-efficient way of executing our processing code. The function is kicked off by the same Logic App that we used to detect the new data.
+2. A processor reads this new data and converts it to JSON, the format required by Azure Monitor. In this example, we use an [Azure Function](../../azure-functions/functions-overview.md) as a lightweight, cost-efficient way of executing our processing code. The function is kicked off by the same logic app workflow that we used to detect the new data.
-3. Finally, once the JSON object is available, it is sent to Azure Monitor. The same Logic App sends the data to Azure Monitor using the built in Log Analytics Data Collector activity.
+3. Finally, once the JSON object is available, it is sent to Azure Monitor. The same logic app workflow sends the data to Azure Monitor using the built in Log Analytics Data Collector activity.
-While the detailed setup of the blob storage, Logic App, or Azure Function is not outlined in this article, detailed instructions are available on the specific products' pages.
+While the detailed setup of the blob storage, logic app workflow, or Azure Function is not outlined in this article, detailed instructions are available on the specific products' pages.
-To monitor this pipeline, we use Application Insights to monitor our Azure Function [details here](../../azure-functions/functions-monitoring.md), and Azure Monitor to monitor our Logic App [details here](../../logic-apps/monitor-logic-apps-log-analytics.md).
+To monitor this pipeline, we use Application Insights to [monitor our Azure Function](../../azure-functions/functions-monitoring.md), and Azure Monitor to [monitor our logic app workflow](../../logic-apps/monitor-workflows-collect-diagnostic-data.md).
## Setting up the pipeline

To set up the pipeline, first make sure you have your blob container created and configured. Likewise, make sure that the Log Analytics workspace where you'd like to send the data is created.

## Ingesting JSON data
-Ingesting JSON data is trivial with Logic Apps, and since no transformation needs to take place, we can encase the entire pipeline in a single Logic App. Once both the blob container and the Log Analytics workspace have been configured, create a new Logic App and configure it as follows:
+Ingesting JSON data is trivial with Azure Logic Apps, and since no transformation needs to take place, we can encase the entire pipeline in a single logic app workflow. Once both the blob container and the Log Analytics workspace have been configured, create a new logic app workflow and configure it as follows:
![Logic apps workflow example](./media/create-pipeline-datacollector-api/logic-apps-workflow-example-01.png)
-Save your Logic App and proceed to test it.
+Save your logic app workflow and proceed to test it.
## Ingesting XML, CSV, or other formats of data
-Logic Apps today does not have built-in capabilities to easily transform XML, CSV, or other types into JSON format. Therefore, we need to use another means to complete this transformation. For this article, we use the serverless compute capabilities of Azure Functions as a very lightweight and cost-friendly way of doing so.
+Azure Logic Apps today doesn't have built-in capabilities to easily transform XML, CSV, or other formats into JSON. Therefore, we need to use another means to complete this transformation. For this article, we use the serverless compute capabilities of Azure Functions as a very lightweight and cost-friendly way of doing so.
In this example, we parse a CSV file, but any other file type can be similarly processed. Simply modify the deserializing portion of the Azure Function to reflect the correct logic for your specific data type.
In this example, we parse a CSV file, but any other file type can be similarly p
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log) {
- string filePath = await req.Content.ReadAsStringAsync(); //get the CSV URI being passed from Logic App
+    string filePath = await req.Content.ReadAsStringAsync(); //get the CSV URI being passed from the logic app workflow
string response = ""; //get a stream from blob
In this example, we parse a CSV file, but any other file type can be similarly p
![Function Apps test code](./media/create-pipeline-datacollector-api/functions-test-01.png)
-Now we need to go back and modify the Logic App we started building earlier to include the data ingested and converted to JSON format. Using View Designer, configure as follows and then save your Logic App:
+Now we need to go back and modify the logic app we started building earlier to include the data ingested and converted to JSON format. Using View Designer, configure as follows and then save your logic app:
-![Logic Apps workflow complete example](./media/create-pipeline-datacollector-api/logic-apps-workflow-example-02.png)
+![Azure Logic Apps workflow complete example](./media/create-pipeline-datacollector-api/logic-apps-workflow-example-02.png)
## Testing the pipeline
-Now you can upload a new file to the blob configured earlier and have it monitored by your Logic App. Soon, you should see a new instance of the Logic App kick off, call out to your Azure Function, and then successfully send the data to Azure Monitor.
+Now you can upload a new file to the blob configured earlier and have it monitored by your logic app workflow. Soon, you should see a new instance of the logic app workflow kick off, call out to your Azure Function, and then successfully send the data to Azure Monitor.
>[!NOTE] >It can take up to 30 minutes for the data to appear in Azure Monitor the first time you send a new data type.
The output should show the two data sources now joined.
## Suggested improvements for a production pipeline This article presented a working prototype, the logic behind which can be applied towards a true production-quality solution. For such a production-quality solution, the following improvements are recommended:
-* Add error handling and retry logic in your Logic App and Function.
+* Add error handling and retry logic in your logic app workflow and Function.
* Add logic to ensure that the 30MB/single Log Analytics Ingestion API call limit is not exceeded. Split the data into smaller segments if needed.
* Set up a clean-up policy on your blob storage. Once successfully sent to the Log Analytics workspace, unless you'd like to keep the raw data available for archival purposes, there is no reason to continue storing it.
* Verify monitoring is enabled across the full pipeline, adding trace points and alerts as appropriate.
-* Leverage source control to manage the code for your function and Logic App.
-* Ensure that a proper change management policy is followed, such that if the schema changes, the function and Logic Apps are modified accordingly.
+* Leverage source control to manage the code for your function and logic app workflow.
+* Ensure that a proper change management policy is followed, such that if the schema changes, the function and logic app are modified accordingly.
* If you are uploading multiple different data types, segregate them into individual folders within your blob container, and create logic to fan the logic out based on the data type.
azure-monitor Logicapp Flow Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logicapp-flow-connector.md
Last updated 03/22/2022
# Azure Monitor Logs connector for Logic Apps and Power Automate

[Azure Logic Apps](../../logic-apps/index.yml) and [Power Automate](https://make.powerautomate.com) allow you to create automated workflows using hundreds of actions for various services. The Azure Monitor Logs connector allows you to build workflows that retrieve data from a Log Analytics workspace or an Application Insights application in Azure Monitor. This article describes the actions included with the connector and provides a walkthrough to build a workflow using this data.
-For example, you can create a logic app to use Azure Monitor log data in an email notification from Office 365, create a bug in Azure DevOps, or post a Slack message. You can trigger a workflow by a simple schedule or from some action in a connected service such as when a mail or a tweet is received.
+For example, you can create a logic app workflow to use Azure Monitor log data in an email notification from Office 365, create a bug in Azure DevOps, or post a Slack message. You can trigger a workflow by a simple schedule or from some action in a connected service such as when a mail or a tweet is received.
## Connector limits The Azure Monitor Logs connector has these limits:
The following tutorial illustrates the use of the Azure Monitor Logs connector i
### Create a Logic App 1. Go to **Logic Apps** in the Azure portal and select **Add**.
-1. Select a **Subscription**, **Resource group**, and **Region** to store the new logic app and then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
-
- ![Screenshot that shows the Basics tab on the Logic App creation screen.](media/logicapp-flow-connector/create-logic-app.png)
+1. Select a **Subscription**, **Resource group**, and **Region** to store the new logic app and then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-workflows-collect-diagnostic-data.md). This setting isn't required for using the Azure Monitor Logs connector.
+ ![Screenshot that shows the Basics tab on the logic app creation screen.](media/logicapp-flow-connector/create-logic-app.png)
1. Select **Review + create** > **Create**. 1. When the deployment is complete, select **Go to resource** to open the **Logic Apps Designer**.
-### Create a trigger for the logic app
+### Create a trigger for the logic app workflow
1. Under **Start with a common trigger**, select **Recurrence**.
- This creates a logic app that automatically runs at a regular interval.
+ This creates a logic app workflow that automatically runs at a regular interval.
1. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day. ![Screenshot that shows the Logic Apps Designer "Recurrence" window on which you can set the interval and frequency at which the logic app runs.](media/logicapp-flow-connector/recurrence-action.png) ## Walkthrough: Mail visualized results
-This tutorial shows how to create a logic app that sends the results of an Azure Monitor log query by email.
+This tutorial shows how to create a logic app workflow that sends the results of an Azure Monitor log query by email.
### Add Azure Monitor Logs action 1. Select **+ New step** to add an action that runs after the recurrence action.
This tutorial shows how to create a logic app that sends the results of an Azure
1. Specify the email address of a recipient in the **To** window and a subject for the email in **Subject**.
- ![Screenshot of the settings for the new Send an email (V2) action, showing the subject line and email recepients being defined.](media/logicapp-flow-connector/mail-action.png)
+ ![Screenshot of the settings for the new Send an email (V2) action, showing the subject line and email recipients being defined.](media/logicapp-flow-connector/mail-action.png)
-### Save and test your logic app
-1. Select **Save** and then **Run** to perform a test run of the logic app.
+### Save and test your workflow
+1. Select **Save** and then **Run** to perform a test run of the workflow.
![Save and run](media/logicapp-flow-connector/save-run.png)
- When the logic app completes, check the mail of the recipient that you specified. You should receive a mail with a body similar to the following:
+ When the workflow completes, check the mail of the recipient that you specified. You should receive a mail with a body similar to the following:
![An image of a sample email.](media/logicapp-flow-connector/sample-mail.png) > [!NOTE]
- > The log app generates an email with a JPG file that depicts the query result set. If your query doesn't return results, the logic app won't create a JPG file.
+ > The workflow generates an email with a JPG file that depicts the query result set. If your query doesn't return results, the workflow won't create a JPG file.
## Next steps - Learn more about [log queries in Azure Monitor](./log-query-overview.md).-- Learn more about [Logic Apps](../../logic-apps/index.yml)
+- Learn more about [Azure Logic Apps](../../logic-apps/index.yml)
- Learn more about [Power Automate](https://make.powerautomate.com).
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
The following sections walk you through the procedure.
Use the procedure in [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to add a container to your storage account to hold the exported data. The name used for the container in this article is **loganalytics-data**, but you can use any name.
-### Create a logic app
+### Create a logic app workflow
-1. Go to **Logic Apps** in the Azure portal and select **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new Logic App. Then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor Logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
+1. Go to **Logic Apps** in the Azure portal and select **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new logic app. Then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor Logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-workflows-collect-diagnostic-data.md). This setting isn't required for using the Azure Monitor Logs connector.
[![Screenshot that shows creating a logic app.](media/logs-export-logic-app/create-logic-app.png "Screenshot that shows creating a Logic Apps resource.")](media/logs-export-logic-app/create-logic-app.png#lightbox) 1. Select **Review + create** and then select **Create**. After the deployment is finished, select **Go to resource** to open the **Logic Apps Designer**.
-### Create a trigger for the logic app
+### Create a trigger for the workflow
-Under **Start with a common trigger**, select **Recurrence**. This setting creates a logic app that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day**. In the **Interval** box, enter **1** to run the workflow once per day.
+Under **Start with a common trigger**, select **Recurrence**. This setting creates a logic app workflow that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day**. In the **Interval** box, enter **1** to run the workflow once per day.
[![Screenshot that shows a Recurrence action.](media/logs-export-logic-app/recurrence-action.png "Screenshot that shows creating a recurrence action.")](media/logs-export-logic-app/recurrence-action.png#lightbox)
The **Create blob** action writes the composed JSON to storage.
[![Screenshot that shows creating a blob expression.](media/logs-export-logic-app/create-blob.png "Screenshot that shows a Blob action output configuration.")](media/logs-export-logic-app/create-blob.png#lightbox)
-### Test the logic app
+### Test the workflow
To test the workflow, select **Run**. If the workflow has errors, they're indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md), if necessary.
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
na Previously updated : 01/27/2023 Last updated : 02/17/2023
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
| Azure NetApp Files features | Azure public cloud availability | Azure Government availability |
|: |: |: |
| Azure NetApp Files backup | Public preview | No |
-| Cross-zone replication | Public preview | No |
| Standard network features | Generally available (GA) | No | ## Portal access
azure-netapp-files Cross Zone Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-introduction.md
na Previously updated : 12/16/2022 Last updated : 02/17/2023
The preview of cross-zone replication is available in the following regions:
* France Central * Germany West Central * Japan East
+* Korea Central
* North Europe * Norway East
+* Norway West
* South Africa North * Southeast Asia * South Central US
+* Sweden Central
+* Switzerland North
* UK South
+* US Gov Virginia
* West Europe * West US 2 * West US 3
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 02/09/2023- Last updated : 02/17/2023+ # Attach Azure NetApp Files datastores to Azure VMware Solution hosts
By using NFS datastores backed by Azure NetApp Files, you can expand your storage instead of scaling the clusters. You can also use Azure NetApp Files volumes to replicate data from on-premises or primary VMware environments for the secondary site.
-Create your Azure VMware Solution and create Azure NetApp Files NFS volumes in the virtual network connected to it using an ExpressRoute. Ensure there's connectivity from the private cloud to the NFS volumes created. Use those volumes to create NFS datastores and attach the datastores to clusters of your choice in a private cloud. As a native integration, no other permissions configured via vSphere are needed.
+Create your Azure VMware Solution and create Azure NetApp Files NFS volumes in the virtual network connected to it using an ExpressRoute. Ensure there's connectivity from the private cloud to the NFS volumes created. Use those volumes to create NFS datastores and attach the datastores to clusters of your choice in a private cloud. As a native integration, you need no other permissions configured via vSphere.
The following diagram demonstrates a typical architecture of Azure NetApp Files backed NFS datastores attached to an Azure VMware Solution private cloud via ExpressRoute.
Before you begin the prerequisites, review the [Performance best practices](#per
1. Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For optimal performance, it's recommended to use the Ultra tier. Select option **Azure VMware Solution Datastore** listed under the **Protocol** section. 1. Create a volume with **Standard** [network features](../azure-netapp-files/configure-network-features.md) if available for ExpressRoute FastPath connectivity. 1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud.
- 1. If you're using [export policies](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced so if the IP isn't enabled, connectivity to datastore will be impacted.
+    1. If you're using [export policies](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced, so if the IP isn't enabled, connectivity to the datastore will be impacted.
>[!NOTE]
->Azure NetApp Files datastores for Azure VMware Solution are generally available. You must register Azure NetApp Files datastores for Azure VMware Solution before using it.
+>Azure NetApp Files datastores for Azure VMware Solution are generally available. To use them, you must first register Azure NetApp Files datastores for Azure VMware Solution.
## Supported regions
There are some important best practices to follow for optimal performance of NFS
- Create Azure NetApp Files volumes using **Standard** network features to enable optimized connectivity from Azure VMware Solution private cloud via ExpressRoute FastPath connectivity.
- For optimized performance, choose either **UltraPerformance** gateway or **ErGw3Az** gateway, and enable [FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
-- Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. See [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md) to understand the throughput allowed per provisioned TiB for each service level.
-- Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-- Ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones). Information regarding your AVS private cloud's availability zone can be viewed from the overview pane within the AVS private cloud.
+- Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. See [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md) to understand the throughput allowed per provisioned TiB for each service level.
-For performance benchmarks that Azure NetApp Files datastores deliver for virtual machines on Azure VMware Solution, see [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md).
+ >[!IMPORTANT]
+   > If you've changed the Azure NetApp Files volume's performance tier after creating the volume and datastore, see [Service level change for Azure NetApp Files datastore](#service-level-change-for-azure-netapp-files-datastore) to ensure that volume/datastore metadata stays in sync and to avoid unexpected behavior in the portal or the API due to a metadata mismatch.
-> [!IMPORTANT]
->Changing the Azure NetApp Files volumes tier after creating the datastore will result in unexpected behavior in portal and API due to metadata mismatch. Set your performance tier of the Azure NetApp Files volume when creating the datastore. If you need to change tier during run time, detach the datastore, change the performance tier of the volume and attach the datastore. We are working on improvements to make this seamless.
+- Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+- Ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones) using [availability zone volume placement](../azure-netapp-files/manage-availability-zone-volume-placement.md) in the same subscription. Information regarding your AVS private cloud's availability zone can be viewed from the overview pane within the AVS private cloud.
+
+For performance benchmarks that Azure NetApp Files datastores deliver for virtual machines on Azure VMware Solution, see [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md).
## Attach an Azure NetApp Files volume to your private cloud
Under **Manage**, select **Storage**.
:::image type="content" source="media/attach-netapp-files-to-cloud/connect-netapp-files-portal-experience-1.png" alt-text="Image shows the navigation to Connect Azure NetApp Files volume pop-up window." lightbox="media/attach-netapp-files-to-cloud/connect-netapp-files-portal-experience-1.png":::
-1. Verify the protocol is NFS. You'll need to verify the virtual network and subnet to ensure connectivity to the Azure VMware Solution private cloud.
+1. Verify the protocol is NFS. You need to verify the virtual network and subnet to ensure connectivity to the Azure VMware Solution private cloud.
1. Under **Associated cluster**, in the **Client cluster** field, select one or more clusters to associate the volume as a datastore. 1. Under **Data store**, create a personalized name for your **Datastore name**. 1. When the datastore is created, you should see all of your datastores in the **Storage**. 2. You'll also notice that the NFS datastores are added in vCenter. - ### [Azure CLI](#tab/azure-cli) To attach an Azure NetApp Files volume to your private cloud using Azure CLI, follow these steps:
To attach an Azure NetApp Files volume to your private cloud using Azure CLI, fo
`az vmware datastore list --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud` -
+## Service level change for Azure NetApp Files datastore
+
+Based on the performance requirements of the datastore, you can change the service level of the Azure NetApp Files volume used for the datastore by following the instructions to [dynamically change the service level of a volume for Azure NetApp Files](../azure-netapp-files/dynamic-change-volume-service-level.md).
+This change has no impact on the datastore or private cloud because there's no downtime involved, and the IP address/mount path remain unchanged. However, the volume Resource Id changes due to the capacity pool change. Therefore, to avoid any metadata mismatch, reissue the datastore create command via Azure CLI as follows: `az vmware datastore netapp-volume create`.
+>[!IMPORTANT]
+> The input values for **cluster** name, datastore **name**, **private-cloud** (SDDC) name, and **resource-group** must be **exactly the same as the current one**, and the **volume-id** is the new Resource Id of the volume.
+
+ - **cluster**
+ - **name**
+ - **private-cloud**
+ - **resource-group**
+ - **volume-id**
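For illustration, a re-issued create command might look like the following sketch. Every value is a placeholder in the style of the earlier examples in this article, and `--volume-id` must be the new Resource Id of the volume after the capacity pool change.

```azurecli
az vmware datastore netapp-volume create \
  --name MyDatastore \
  --resource-group MyResourceGroup \
  --cluster Cluster-1 \
  --private-cloud MyPrivateCloud \
  --volume-id "/subscriptions/<subscription-id>/resourceGroups/<anf-resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<new-pool>/volumes/<volume>"
```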
## Disconnect an Azure NetApp Files-based datastore from your private cloud
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **Can a single Azure NetApp Files datastore be added to multiple clusters within the same Azure VMware Solution SDDC?**
- Yes, you can select multiple clusters at the time of datastore creation. Additional clusters may be added or removed after the initial creation as well.
+ Yes, you can select multiple clusters at the time of creating the datastore. Additional clusters may be added or removed after the initial creation as well.
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity
description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 2/7/2023 Last updated : 2/16/2023
The diagram below shows the basic network interconnectivity established at the t
> [!IMPORTANT] > When connecting **production** Azure VMware Solution private clouds to an Azure virtual network, an ExpressRoute virtual network gateway with the Ultra Performance Gateway SKU should be used with FastPath enabled to achieve 10Gbps connectivity. Less critical environments can use the Standard or High Performance Gateway SKUs for slower network performance.
+> [!NOTE]
+> If connecting more than four Azure VMware Solution private clouds in the same Azure region to the same Azure virtual network is a requirement, use [Azure VMware Solution Interconnect](connect-multiple-private-clouds-same-region.md) to aggregate private cloud connectivity within the Azure region.
+ :::image type="content" source="media/concepts/adjacency-overview-drawing-single.png" alt-text="Diagram showing the basic network interconnectivity established at the time of an Azure VMware Solution private cloud deployment." border="false"::: ## On-premises interconnectivity
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
When the workload extension or MARS agent is installed for Recovery Services vau
>- [Germany](../germany/germany-developer-guide.md#endpoint-mapping) >- [US Gov](../azure-government/documentation-government-developer-guide.md)
-The storage FQDNs hit in both the scenarios are same. However, for a Recovery Services vault with private endpoint setup, the name resolution for these should return a private IP address. This can be achieved by
+For a Recovery Services vault with private endpoint setup, the name resolution for the FQDNs (`privatelink.<geo>.backup.windowsazure.com`, `*.blob.core.windows.net`, `*.queue.core.windows.net`, `*.blob.storage.azure.net`) should return a private IP address. This can be achieved by using:
- Azure Private DNS zones - Custom DNS
If you've configured a DNS proxy server, using third-party proxy servers or fire
The following example shows Azure firewall used as DNS proxy to redirect the domain name queries for Recovery Services vault, blob, queues and Azure AD to 168.63.129.16. For more information, see [Creating and using private endpoints](private-endpoints.md).
The private endpoint for Recovery Services is associated with a network interfac
When the workload backup extensions are installed on the virtual machine registered to a Recovery Services vault with a private endpoint, the extension attempts connection on the private URL of the Azure Backup services `<vault_id>.<azure_backup_svc>.privatelink.<geo>.backup.windowsazure.com`.
-If the private URL isn't resolving, it tries the public URL `<azure_backup_svc>.<geo>.backup.windowsazure.com`. If the public network access for Recovery Services vault is configured to *Allow from all networks*, the Recovery Services vault allows the requests coming from the extension over public URLs. If the public network access for Recovery Services vault is configured to *Deny*, the recovery services vault denies the requests coming from the extension over public URLs.
+If the private URL doesn't resolve, it tries the public URL `<azure_backup_svc>.<geo>.backup.windowsazure.com`. If the public network access for Recovery Services vault is configured to *Allow from all networks*, the Recovery Services vault allows the requests coming from the extension over public URLs. If the public network access for Recovery Services vault is configured to *Deny*, the recovery services vault denies the requests coming from the extension over public URLs.
>[!Note] >In the above domain names, `<geo>` determines the region code (for example, eus for East US and ne for North Europe). For more information on the region codes, see the following list:
If the private URL isn't resolving, it tries the public URL `<azure_backup_svc>.
>- [Germany](/azure/germany/germany-developer-guide#endpoint-mapping) >- [US Gov](/azure/azure-government/documentation-government-developer-guide)
-These private URLs are specific for the vault. Only extensions and agents registered to the vault can communicate with the Azure Backup service over these endpoints. If the public network access for Recovery Services vault is configured to *Deny*, this restricts the clients that aren't running in the VNet from requesting the backup and restore operations on the vault. We recommend that public network access is set to *Deny* along with private endpoint setup. As the extension and agent attempt the private URL first, the `*.privatelink.<geo>.backup.windowsazure.com` URL should resolve the corresponding private IP associated with the private endpoint.
+These private URLs are specific for the vault. Only extensions and agents registered to the vault can communicate with the Azure Backup service over these endpoints. If the public network access for Recovery Services vault is configured to *Deny*, this restricts the clients that aren't running in the VNet from requesting the backup and restore operations on the vault. We recommend that public network access is set to *Deny* along with private endpoint setup. As the extension and agent attempt the private URL first, the `*.privatelink.<geo>.backup.windowsazure.com` DNS resolution of the URL should return the corresponding private IP associated with the private endpoint.
-There are multiple solutions for DNS resolution
+There are multiple solutions for DNS resolution:
- Azure Private DNS zones - Custom DNS
There are multiple solutions for DNS resolution
When the private endpoint for Recovery Services vaults is created via the Azure portal with the *Integrate with private DNS zone* option, the required DNS entries for private IP addresses for the Azure Backup services (`*.privatelink.<geo>backup.windowsazure.com`) are created automatically when the resource is allocated. In other solutions, you need to create the DNS entries manually for these FQDNs in the custom DNS or in the host files.
-For the manual management of DNS records after the VM discovery for communication channel - blob or queue, see [DNS records for blobs and queues (only for custom DNS servers/host files) after the first registration](private-endpoints.md#dns-records-for-blobs-and-queues-only-for-custom-dns-servershost-files-after-the-first-registration). For the manual management of DNS records after the first backup for backup storage account blob, see [DNS records for blobs (only for custom DNS servers/host files) after the first backup](private-endpoints.md#dns-records-for-blobs-only-for-custom-dns-servershost-files-after-the-first-backup).
+For the manual management of DNS records after the VM discovery for communication channel - blob or queue, see [DNS records for blobs and queues (only for custom DNS servers/host files) after the first registration](backup-azure-private-endpoints-configure-manage.md#dns-records-for-blobs-and-queues-only-for-custom-dns-servershost-files-after-the-first-registration). For the manual management of DNS records after the first backup for backup storage account blob, see [DNS records for blobs (only for custom DNS servers/host files) after the first backup](backup-azure-private-endpoints-configure-manage.md#dns-records-for-blobs-only-for-custom-dns-servershost-files-after-the-first-backup).
-The private IP addresses for the FQDNs can be found in the private endpoint pane for the private endpoint created for the Recovery Services vault.
+The private IP addresses for the FQDNs can be found in **DNS configuration** pane for the private endpoint created for the Recovery Services vault.
The following diagram shows how the resolution works when using a private DNS zone to resolve these private service FQDNs. The workload extension running on Azure VM requires connection to at least two storage accounts endpoints - the first one is used as communication channel (via queue messages) and second one for storing backup data. The MARS agent requires access to at least one storage account endpoint that is used for storing backup data. For a private endpoint enabled vault, the Azure Backup service creates private endpoint for these storage accounts. This prevents any network traffic related to Azure Backup (control plane traffic to service and backup data to storage blob) from leaving the virtual network.
-In addition to the Azure Backup cloud services, the workload extension and agent require connectivity to the Azure Storage accounts and Azure Active Directory.
+In addition to the Azure Backup cloud services, the workload extension and agent require connectivity to the Azure Storage accounts and Azure Active Directory (Azure AD).
+
+As a pre-requisite, Recovery Services vault requires permissions for creating additional private endpoints in the same Resource Group. We also recommend providing the Recovery Services vault the permissions to create DNS entries in the private DNS zones (`privatelink.blob.core.windows.net`, `privatelink.queue.core.windows.net`). Recovery Services vault searches for private DNS zones in the resource groups where VNet and private endpoint are created. If it has the permissions to add DNS entries in these zones, they'll be created by the vault; otherwise, you must create them manually.
+
+>[!Note]
+>Integration with private DNS zone present in different subscriptions is unsupported in this experience.
+
+The following diagram shows how the name resolution works for storage accounts using a private DNS zone.
+ ## Next steps
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 01/05/2023 Last updated : 02/17/2023
The following table lists the various alternatives you can use for establishing
| Private endpoints | Allow backups over private IPs inside the virtual network <br><br> Provide granular control on the network and vault side | Incurs standard private endpoint [costs](https://azure.microsoft.com/pricing/details/private-link/) |
| NSG service tags | Easier to manage as range changes are automatically merged <br><br> No additional costs | Can be used with NSGs only <br><br> Provides access to the entire service |
| Azure Firewall FQDN tags | Easier to manage since the required FQDNs are automatically managed | Can be used with Azure Firewall only |
-| Allow access to service FQDNs/IPs | No additional costs. <br><br> Works with all network security appliances and firewalls. <br><br> You can also use service endpoints for *Storage* and *Azure Active Directory*. However, for Azure Backup, you need to assign the access to the corresponding IPs/FQDNs. | A broad set of IPs or FQDNs may be required to be accessed. |
-| [Virtual Network Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) | Can be used for Azure Storage (= Recovery Services vault). <br><br> Provides large benefit to optimize performance of data plane traffic. | CanΓÇÖt be used for Azure AD, Azure Backup service. |
+| Allow access to service FQDNs/IPs | No additional costs. <br><br> Works with all network security appliances and firewalls. <br><br> You can also use service endpoints for *Storage*. However, for *Azure Backup* and *Azure Active Directory*, you need to assign the access to the corresponding IPs/FQDNs. | A broad set of IPs or FQDNs may be required to be accessed. |
+| [Virtual Network Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) | Can be used for Azure Storage. <br><br> Provides large benefit to optimize performance of data plane traffic. | Can't be used for Azure AD, Azure Backup service. |
| Network Virtual Appliance | Can be used for Azure Storage, Azure AD, Azure Backup service. <br><br> **Data plane** <ul><li> Azure Storage: `*.blob.core.windows.net`, `*.queue.core.windows.net`, `*.blob.storage.azure.net` </li></ul> <br><br> **Management plane** <ul><li> Azure AD: Allow access to FQDNs mentioned in sections 56 and 59 of [Microsoft 365 Common and Office Online](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). </li><li> Azure Backup service: `.backup.windowsazure.com` </li></ul> <br>Learn more about [Azure Firewall service tags](../firewall/fqdn-tags.md). | Adds overhead to data plane traffic and decreases throughput/performance. |

More details around using these options are shared below:
backup Private Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md
When the workload extension or MARS agent is installed for Recovery Services vau
>- [Germany](../germany/germany-developer-guide.md#endpoint-mapping) >- [US Gov](../azure-government/documentation-government-developer-guide.md)
-The storage FQDNs hit in both the scenarios are same. However, for a Recovery Services vault with private endpoint setup, the name resolution for these should return a private IP address. This can be achieved by using:
+For a Recovery Services vault with private endpoint setup, the name resolution for the FQDNs (`privatelink.<geo>.backup.windowsazure.com`, `*.blob.core.windows.net`, `*.queue.core.windows.net`, `*.blob.storage.azure.net`) should return a private IP address. This can be achieved by using:
- Azure Private DNS zones - Custom DNS
When workload backup extensions are installed on the virtual machine registered
>- [Germany](../germany/germany-developer-guide.md#endpoint-mapping) >- [US Gov](../azure-government/documentation-government-developer-guide.md)
-These private URLs are specific for the vault. Only extensions and agents registered to the vault can communicate with Azure Backup over these endpoints. If the public network access for Recovery Services vault is configured to *Deny*, this restricts the clients that aren't running in the VNet from requesting backup and restore on the vault. We recommend setting the public network access to *Deny*, along with private endpoint setup. As the extension and agent attempt the private URL initially, the `*.privatelink.<geo>.backup.windowsazure.com` URL should resolve to the corresponding private IP associated with the private endpoint.
+These private URLs are specific for the vault. Only extensions and agents registered to the vault can communicate with Azure Backup over these endpoints. If the public network access for Recovery Services vault is configured to *Deny*, this restricts the clients that aren't running in the VNet from requesting backup and restore on the vault. We recommend setting the public network access to *Deny*, along with private endpoint setup. As the extension and agent attempt the private URL initially, the `*.privatelink.<geo>.backup.windowsazure.com` DNS resolution of the URL should return the corresponding private IP associated with the private endpoint.
The solutions for DNS resolution are:
The following diagram shows how the resolution works when using a private DNS zo
The workload extension running on an Azure VM requires a connection to at least two storage accounts - the first is used as a communication channel (via queue messages) and the second for storing backup data. The MARS agent requires access to one storage account used for storing backup data.
-For a private endpoint enabled vault, Azure Backup creates private endpoint for these storage accounts. This prevents any network traffic related to Azure Backup (control plane traffic to service and backup data to storage blob) from leaving the virtual network. In addition to Azure Backup cloud services, the workload extension and agent require connectivity to Azure Storage accounts and Azure Active Directory (Azure AD).
+For a private endpoint-enabled vault, the Azure Backup service creates private endpoints for these storage accounts. This prevents any network traffic related to Azure Backup (control plane traffic to the service and backup data to the storage blob) from leaving the virtual network. In addition to Azure Backup cloud services, the workload extension and agent require connectivity to Azure Storage accounts and Azure Active Directory (Azure AD).
As a prerequisite, the Recovery Services vault requires permissions for creating additional private endpoints in the same resource group. We also recommend giving the Recovery Services vault permissions to create DNS entries in the private DNS zones (`privatelink.blob.core.windows.net`, `privatelink.queue.core.windows.net`). The Recovery Services vault searches for private DNS zones in the resource groups where the VNet and private endpoint are created. If it has the permissions to add DNS entries in these zones, they'll be created by the vault; otherwise, you must create them manually.
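Once the private DNS zones and records are in place, you can verify from a VM inside the VNet that the vault and storage FQDNs resolve to private addresses. The following is a minimal sketch, assuming placeholder FQDNs that you replace with the values for your own vault and storage accounts:

```python
import ipaddress
import socket

# Placeholder FQDNs - replace with the private URLs used by your vault and storage accounts
fqdns = [
    "contoso.privatelink.eus.backup.windowsazure.com",
    "contosobackupstore.blob.core.windows.net",
]

for fqdn in fqdns:
    # With the private endpoint and DNS zones configured, resolution from inside the VNet
    # should return a private IP address rather than a public one.
    ip = socket.gethostbyname(fqdn)
    kind = "private" if ipaddress.ip_address(ip).is_private else "public"
    print(f"{fqdn} -> {ip} ({kind})")
```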
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
Nutanix Clusters on Azure supports:
## Supported regions
-NC2 on Azure supports the following region using AN36:
+NC2 on Azure supports the following regions using AN36:
* East US * West US 2
-NC2 on Azure supports the following region using AN36P:
+NC2 on Azure supports the following regions using AN36P:
* North Central US * East US 2
+* Southeast Asia
+* Australia East
## Next steps
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
Here are some property options that you can use to configure a transcription whe
|`contentContainerUrl`| You can submit individual audio files, or a whole storage container. You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.| |`contentUrls`| You can submit individual audio files, or a whole storage container. You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.| |`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information, see [Destination container URL](#destination-container-url).|
-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The feature isn't available with stereo recordings.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.|
-|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.|
+|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers, setting the `diarizationEnabled` property to `true` is enough. See an example of the property usage in the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech-to-text REST API version 3.1.|
+|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices, you also need to use the `diarization` property (only with Speech-to-text REST API version 3.1).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
|`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models).| |`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. | |`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.|
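As an illustration of how the diarization properties fit together, the following sketch creates a transcription for a recording that may contain up to five speakers. The region, key, and audio URL are placeholders; confirm the exact property shape against the Speech-to-text REST API version 3.1 reference.

```python
import requests

# Placeholders - replace with your Speech resource region, key, and audio location
region = "eastus"
speech_key = "YOUR_SPEECH_KEY"

body = {
    "displayName": "Diarization example",
    "locale": "en-US",
    "contentUrls": ["https://example.com/audio/meeting.wav"],
    "properties": {
        "diarizationEnabled": True,  # required for any speaker separation
        # Only needed when you expect three or more speakers (REST API version 3.1)
        "diarization": {"speakers": {"minCount": 1, "maxCount": 5}},
    },
}

response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": speech_key},
    json=body,
)
response.raise_for_status()
print(response.json()["self"])  # URL of the created transcription job
```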
cognitive-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md
Title: Azure OpenAI content filtering
+ Title: Azure OpenAI Service content filtering
-description: Learn about the content filtering capabilities of the OpenAI service in Azure Cognitive Services
+description: Learn about the content filtering capabilities of Azure OpenAI in Azure Cognitive Services
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
Title: Azure OpenAI models
+ Title: Azure OpenAI Service models
description: Learn about the different models that are available in Azure OpenAI.
recommendations: false
keywords:
-# Azure OpenAI models
+# Azure OpenAI Service models
The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Please refer to the capability table at the bottom for a full breakdown.
cognitive-services Understand Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/understand-embeddings.md
Title: Azure OpenAI embeddings
+ Title: Azure OpenAI Service embeddings
description: Learn more about Azure OpenAI embeddings API for document search and cosine similarity
recommendations: false
-# Understanding embeddings in Azure OpenAI
+# Understanding embeddings in Azure OpenAI Service
An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar.
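To make the notion of distance concrete, the following sketch compares two embedding vectors with cosine similarity, the measure typically used with these embeddings. The short vectors are made up for illustration; real embeddings returned by the API are much longer.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: values close to 1.0 indicate semantically similar inputs."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tiny made-up vectors standing in for real embeddings
doc_embedding = np.array([0.12, -0.03, 0.45, 0.30])
query_embedding = np.array([0.10, -0.01, 0.40, 0.33])

print(f"similarity: {cosine_similarity(doc_embedding, query_embedding):.4f}")
```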
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/encrypt-data-at-rest.md
Title: Azure OpenAI encryption of data at rest
+ Title: Azure OpenAI Service encryption of data at rest
description: Learn how Azure OpenAI encrypts your data when it's persisted to the cloud.
Last updated 11/14/2022
-# Azure OpenAI encryption of data at rest
+# Azure OpenAI Service encryption of data at rest
Azure OpenAI automatically encrypts your data when it's persisted to the cloud. The encryption protects your data and helps you meet your organizational security and compliance commitments. This article covers how Azure OpenAI handles encryption of data at rest, specifically training data and fine-tuned models. For information on how data provided by you to the service is processed, used, and stored, consult the [data, privacy, and security article](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
cognitive-services Business Continuity Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/business-continuity-disaster-recovery.md
Title: 'Business Continuity and Disaster Recovery (BCDR) with Azure OpenAI'
+ Title: 'Business Continuity and Disaster Recovery (BCDR) with Azure OpenAI Service'
description: Considerations for implementing Business Continuity and Disaster Recovery (BCDR) with Azure OpenAI
keywords:
-# Business Continuity and Disaster Recovery (BCDR) considerations with Azure OpenAI
+# Business Continuity and Disaster Recovery (BCDR) considerations with Azure OpenAI Service
-The Azure OpenAI service is available in two regions. Since subscription keys are region bound, when a customer acquires a key, they select the region in which their deployments will reside and from then on, all operations stay associated with that Azure server region.
+Azure OpenAI is available in two regions. Since subscription keys are region bound, when a customer acquires a key, they select the region in which their deployments will reside and from then on, all operations stay associated with that Azure server region.
It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail over to another region or split the workload between two or more regions. Both approaches require at least two OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
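One simple pattern is active-passive failover: send every request to a primary resource and retry against a secondary resource in another region only when the primary call fails. The following is a minimal sketch, assuming two deployed resources with the same deployment name; the resource endpoints, keys, deployment name, and API version are placeholders.

```python
import requests

# Placeholders - two Azure OpenAI resources in different regions with identical deployments
RESOURCES = [
    {"endpoint": "https://contoso-eastus.openai.azure.com", "key": "PRIMARY_KEY"},
    {"endpoint": "https://contoso-westeurope.openai.azure.com", "key": "SECONDARY_KEY"},
]
DEPLOYMENT = "text-davinci-003"
API_VERSION = "2022-12-01"

def complete_with_failover(prompt: str) -> str:
    last_error = None
    for resource in RESOURCES:
        try:
            response = requests.post(
                f"{resource['endpoint']}/openai/deployments/{DEPLOYMENT}/completions",
                params={"api-version": API_VERSION},
                headers={"api-key": resource["key"]},
                json={"prompt": prompt, "max_tokens": 50},
                timeout=30,
            )
            response.raise_for_status()
            return response.json()["choices"][0]["text"]
        except requests.RequestException as error:
            last_error = error  # fall through and try the next region
    raise last_error

print(complete_with_failover("Summarize why regional failover matters in one sentence."))
```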
cognitive-services Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/completions.md
Title: 'How to generate text with Azure OpenAI'
+ Title: 'How to generate text with Azure OpenAI Service'
description: Learn how to generate or manipulate text, including code with Azure OpenAI
keywords:
The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful text-in, text-out interface to any of our [models](../concepts/models.md). You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, "As Descartes said, I think, therefore", it will return the completion " I am" with high probability.
-The best way to start exploring completions is through our playground in the [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you can submit a prompt to generate a completion. You can start with a simple example like the following:
+The best way to start exploring completions is through our playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you can submit a prompt to generate a completion. You can start with a simple example like the following:
`write a tagline for an ice cream shop`
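The same prompt can be sent outside the playground with the OpenAI Python library pointed at an Azure resource. This is a minimal sketch; the endpoint, key, deployment name, and API version are placeholders for your own values.

```python
import openai

# Placeholder connection details for an Azure OpenAI resource
openai.api_type = "azure"
openai.api_base = "https://docs-test-001.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="text-davinci-003",  # name of your model deployment
    prompt="write a tagline for an ice cream shop",
    max_tokens=20,
)

print(response["choices"][0]["text"].strip())
```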
cognitive-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/create-resource.md
Title: 'How-to - Create a resource and deploy a model using Azure OpenAI'
+ Title: 'How-to - Create a resource and deploy a model using Azure OpenAI Service'
description: Walkthrough on how to get started with Azure OpenAI and make your first resource and deploy your first model.
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/embeddings.md
Title: 'How to generate embeddings with Azure OpenAI'
+ Title: 'How to generate embeddings with Azure OpenAI Service'
description: Learn how to generate embeddings with Azure OpenAI
cognitive-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/fine-tuning.md
Title: 'How to customize a model with Azure OpenAI'
+ Title: 'How to customize a model with Azure OpenAI Service'
description: Learn how to create your own customized model with Azure OpenAI
keywords:
# Learn how to customize a model for your application
-The Azure OpenAI Service lets you tailor our models to your personal datasets using a process known as *fine-tuning*. This customization step will let you get more out of the service by providing:
+Azure OpenAI Service lets you tailor our models to your personal datasets using a process known as *fine-tuning*. This customization step will let you get more out of the service by providing:
- Higher quality results than what you can get just from prompt design - The ability to train on more examples than can fit into a prompt
cognitive-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/integrate-synapseml.md
Title: 'How-to - Use Azure OpenAI with large datasets'
+ Title: 'How-to - Use Azure OpenAI Service with large datasets'
description: Walkthrough on how to integrate Azure OpenAI with SynapseML and Apache Spark to apply large language models at a distributed scale.
recommendations: false
# Use Azure OpenAI with large datasets
-The Azure OpenAI service can be used to solve a large number of natural language tasks through prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, we have integrated the Azure OpenAI service with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with the OpenAI service. This tutorial shows how to apply large language models at a distributed scale using Azure Open AI and Azure Synapse Analytics.
+Azure OpenAI can be used to solve a large number of natural language tasks through prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, we have integrated Azure OpenAI with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with the OpenAI service. This tutorial shows how to apply large language models at a distributed scale using Azure OpenAI and Azure Synapse Analytics.
## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>-- Access granted to the Azure OpenAI service in the desired Azure subscription
+- Access granted to Azure OpenAI in the desired Azure subscription
- Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
+ Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- An Azure OpenAI resource ΓÇô [create a resource](create-resource.md?pivots=web-portal#create-a-resource) - An Apache Spark cluster with SynapseML installed - create a serverless Apache Spark pool [here](../../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool)
The next step is to add this code into your Spark cluster. You can either create
## Fill in your service information
-Next, edit the cell in the notebook to point to your service. In particular, set the `resource_name`, `deployment_name`, `location`, and `key` variables to the corresponding values for your Azure OpenAI service.
+Next, edit the cell in the notebook to point to your service. In particular, set the `resource_name`, `deployment_name`, `location`, and `key` variables to the corresponding values for your Azure OpenAI resource.
> [!IMPORTANT] > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
Next, edit the cell in the notebook to point to your service. In particular, set
```python import os
-# Replace the following values with your Azure OpenAI service information
+# Replace the following values with your Azure OpenAI resource information
resource_name = "RESOURCE_NAME" # The name of your Azure OpenAI resource. deployment_name = "DEPLOYMENT_NAME" # The name of your Azure OpenAI deployment. location = "RESOURCE_LOCATION" # The location or region ID for your resource.
display(completed_autobatch_df)
### Prompt engineering for translation
-The Azure OpenAI service can solve many different natural language tasks through [prompt engineering](completions.md). Here, we show an example of prompting for language translation:
+Azure OpenAI can solve many different natural language tasks through [prompt engineering](completions.md). Here, we show an example of prompting for language translation:
```python translate_df = spark.createDataFrame(
cognitive-services Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/manage-costs.md
Title: Plan to manage costs for Azure OpenAI
-description: Learn how to plan for and manage costs for Azure OpenAI Service by using cost analysis in the Azure portal.
+ Title: Plan to manage costs for Azure OpenAI Service
+description: Learn how to plan for and manage costs for Azure OpenAI by using cost analysis in the Azure portal.
This article describes how you plan for and manage costs for Azure OpenAI Servic
Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../../../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-## Estimate costs before using Azure OpenAI Service
+## Estimate costs before using Azure OpenAI
Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the costs of using Azure OpenAI.
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
Title: How to Configure Azure OpenAI with Managed Identities
+ Title: How to Configure Azure OpenAI Service with Managed Identities
description: Provides guidance on how to set managed identity with Azure Active Directory
recommendations: false
-# How to Configure Azure OpenAI with Managed Identities
+# How to Configure Azure OpenAI Service with Managed Identities
More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your OpenAI resource using Azure Active Directory (Azure AD).
In the following sections, you'll use the Azure CLI to assign roles, and obtain
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a> - Access granted to the Azure OpenAI service in the desired Azure subscription
- Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
+ Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- Azure CLI - [Installation Guide](/cli/azure/install-azure-cli) - The following Python libraries: os, requests, json
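Once a role has been assigned as described in the following sections, the resource can be called with an Azure AD token instead of an API key. The sketch below uses `DefaultAzureCredential`, which picks up a managed identity or an Azure CLI sign-in; the endpoint, deployment name, and API version are placeholders.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an Azure AD token for Cognitive Services
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

# Placeholder endpoint, deployment, and API version - replace with your own values
endpoint = "https://docs-test-001.openai.azure.com"
deployment = "text-davinci-003"

response = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/completions",
    params={"api-version": "2022-12-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    json={"prompt": "Hello,", "max_tokens": 5},
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```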
cognitive-services Prepare Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/prepare-dataset.md
Title: 'How to prepare a dataset for custom model training'-+ description: Learn how to prepare your dataset for fine-tuning
Generative tasks have a potential to leak training data when requesting completi
## Next steps * Fine tune your model with our [How-to guide](fine-tuning.md)
-* Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md)
+* Learn more about the [underlying models that power Azure OpenAI Service](../concepts/models.md)
cognitive-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/work-with-code.md
Title: 'How to use the Codex models to work with code'-
-description: Learn how to use the Codex models on the Azure OpenAI Service to handle a variety of coding tasks
+
+description: Learn how to use the Codex models on Azure OpenAI to handle a variety of coding tasks
keywords:
-# Codex models and Azure OpenAI
+# Codex models and Azure OpenAI Service
The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
You can use Codex for a variety of tasks including:
## How to use the Codex models
-Here are a few examples of using Codex that can be tested in the [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
+Here are a few examples of using Codex that can be tested in [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
### Saying "Hello" (Python)
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
# Azure OpenAI Service quotas and limits
-This article contains a quick reference and a detailed description of the quotas and limits for the Azure OpenAI Service in Azure Cognitive Services.
+This article contains a quick reference and a detailed description of the quotas and limits for Azure OpenAI in Azure Cognitive Services.
## Quotas and limits reference
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
Title: Azure OpenAI Service REST API reference
-description: Learn how to use the Azure OpenAI REST API. In this article, you'll learn about authorization options, how to structure a request and receive a response.
+description: Learn how to use Azure OpenAI's REST API. In this article, you'll learn about authorization options, how to structure a request and receive a response.
# Azure OpenAI Service REST API reference
-This article provides details on the REST API endpoints for the Azure OpenAI Service, a service in the Azure Cognitive Services suite. The REST APIs are broken up into two categories:
+This article provides details on the REST API endpoints for Azure OpenAI, a service in the Azure Cognitive Services suite. The REST APIs are broken up into two categories:
* **Management APIs**: The Azure Resource Manager (ARM) provides the management layer in Azure that allows you to create, update, and delete resources in Azure. All services use a common structure for these operations. [Learn More](../../azure-resource-manager/management/overview.md)
-* **Service APIs**: The Azure OpenAI service provides you with a set of REST APIs for interacting with the resources & models you deploy via the Management APIs.
+* **Service APIs**: Azure OpenAI provides you with a set of REST APIs for interacting with the resources & models you deploy via the Management APIs.
## Management APIs
-The Azure OpenAI Service is deployed as a part of the Azure Cognitive Services. All Cognitive Services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
+Azure OpenAI is deployed as a part of the Azure Cognitive Services. All Cognitive Services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
[**Management APIs reference documentation**](/rest/api/cognitiveservices/) ## Authentication
-The Azure OpenAI service provides two methods for authentication. you can use either API Keys or Azure Active Directory.
+Azure OpenAI provides two methods for authentication. You can use either API Keys or Azure Active Directory.
- **API Key authentication**: For this type of authentication, all API requests must include the API Key in the ```api-key``` HTTP header. The [Quickstart](./quickstart.md) provides a tutorial for how to make calls with this type of authentication
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
Title: Azure OpenAI embeddings tutorial
+ Title: Azure OpenAI Service embeddings tutorial
-description: Learn how to use the Azure OpenAI embeddings API for document search with the BillSum dataset
+description: Learn how to use Azure OpenAI's embeddings API for document search with the BillSum dataset
-# Tutorial: Explore Azure OpenAI embeddings and document search
+# Tutorial: Explore Azure OpenAI Service embeddings and document search
This tutorial will walk you through using the Azure OpenAI [embeddings](../concepts/understand-embeddings.md) API to perform **document search** where you'll query a knowledge base to find the most relevant document.
In this tutorial, you learn how to:
## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
-* Access granted to the Azure OpenAI service in the desired Azure subscription
- Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
+* Access granted to Azure OpenAI in the desired Azure subscription
+ Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
* <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a> * The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn, transformers.
-* An Azure OpenAI Service resource with **text-search-curie-doc-001** and **text-search-curie-query-001** models deployed. These models are currently only available in [certain regions](../concepts/models.md#model-summary-table-and-region-availability). If you don't have a resource the process is documented in our [resource deployment guide](../how-to/create-resource.md).
+* An Azure OpenAI resource with **text-search-curie-doc-001** and **text-search-curie-query-001** models deployed. These models are currently only available in [certain regions](../concepts/models.md#model-summary-table-and-region-availability). If you don't have a resource, the process is documented in our [resource deployment guide](../how-to/create-resource.md).
> [!NOTE] > If you have never worked with the Hugging Face transformers library it has its own specific [prerequisites](https://huggingface.co/docs/transformers/installation) that are required before you can successfully run `pip install transformers`.
curl "https://raw.githubusercontent.com/Azure-Samples/Azure-OpenAI-Docs-Samples/
### Retrieve key and endpoint
-To successfully make a call against the Azure OpenAI service, you'll need an **endpoint** and a **key**.
+To successfully make a call against Azure OpenAI, you'll need an **endpoint** and a **key**.
|Variable name | Value | |--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in the **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com/`.|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com/`.|
| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.| Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
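With the endpoint and key in hand, a document and a query can be embedded against the two deployments from the prerequisites. The following is a minimal sketch; the deployment names and API version are placeholders that should match your own resource.

```python
import openai

# Placeholder values from the Keys & Endpoint section of your resource
openai.api_type = "azure"
openai.api_base = "https://docs-test-001.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = "YOUR_API_KEY"

# Deployment names for the doc and query models created in the prerequisites
doc_embedding = openai.Embedding.create(
    input="A bill to increase funding for school lunch programs.",
    engine="text-search-curie-doc-001",
)["data"][0]["embedding"]

query_embedding = openai.Embedding.create(
    input="school lunch funding",
    engine="text-search-curie-query-001",
)["data"][0]["embedding"]

print(len(doc_embedding), len(query_embedding))
```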
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
keywords:
* **Process for requesting modifications to the abuse & misuse data logging & human review.** Today, the service logs request/response data for the purposes of abuse and misuse detection to ensure that these powerful models aren't abused. However, many customers have strict data privacy and security requirements that require greater control over their data. To support these use cases, we're releasing a new process for customers to modify the content filtering policies or turn off the abuse logging for low-risk use cases. This process follows the established Limited Access process within Azure Cognitive Services and [existing OpenAI customers can apply here](https://aka.ms/oai/modifiedaccess).
-* **Customer managed key (CMK) encryption.** CMK provides customers greater control over managing their data in the Azure OpenAI Service by providing their own encryption keys used for storing training data and customized models. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. [Learn more from our encryption at rest documentation](encrypt-data-at-rest.md).
+* **Customer managed key (CMK) encryption.** CMK provides customers greater control over managing their data in Azure OpenAI by providing their own encryption keys used for storing training data and customized models. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. [Learn more from our encryption at rest documentation](encrypt-data-at-rest.md).
+* **Lockbox support**
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services, cognitive understanding, cognitive features Previously updated : 02/28/2022 Last updated : 02/17/2023
Azure Cognitive Services are cloud-based artificial intelligence (AI) services t
## Categories of Cognitive Services
-Cognitive Services can be categorized into four main pillars:
+Cognitive Services can be categorized into five main areas:
* Vision * Speech * Language * Decision
+* Azure OpenAI Service
See the tables below to learn about the services offered within those categories.
See the tables below to learn about the services offered within those categories
|[Content Moderator](./content-moderator/overview.md "Content Moderator")|Content Moderator provides monitoring for possible offensive, undesirable, and risky content. | [Content Moderator quickstart](./content-moderator/client-libraries.md)| |[Personalizer](./personalizer/index.yml "Personalizer")|Personalizer allows you to choose the best experience to show to your users, learning from their real-time behavior. |[Personalizer quickstart](./personalizer/quickstart-personalizer-sdk.md)|
+## Azure OpenAI
+
+|Service Name | Service Description| Quickstart|
+|:|:-|--|
+|[Azure OpenAI](./openai/index.yml "Azure OpenAI") |Powerful language models including the GPT-3, Codex and Embeddings model series for content generation, summarization, semantic search, and natural language to code translation. | [Azure OpenAI quickstart](./openai/quickstart.md) |
+ ## Create a Cognitive Services resource You can create a Cognitive Services resource with hands-on quickstarts using any of the following methods:
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
Additional details on eligible subscription types are as follows:
| Number Type | Eligible Azure Agreement Type | | :- | :-- |
-| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement* |
-| Short-Codes | Modern Customer Agreement (Field Led) and Enterprise Agreement** |
+| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
\* Allowing the purchase of Italian phone numbers for CSP and LSP customers is planned only for General Availability launch.
confidential-ledger Manage Azure Ad Token Based Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/manage-azure-ad-token-based-users.md
+
+ Title: Manage Azure AD token-based users in Azure confidential ledger
+description: Learn how to manage Azure AD token-based users in Azure confidential ledger
++ Last updated : 02/09/2023++++
+# Manage Azure AD token-based users in Azure confidential ledger
+
+Azure AD-based users are identified by their Azure AD object ID.
+
+Users with Administrator privileges can manage users of the confidential ledger. Available roles are Reader (read-only), Contributor (read and write), and Administrator (read, write, and manage users).
+
+The following client libraries are available to manage users:
+
+- [Python](#python-client-library)
+- [.NET](#net-client-library)
+- [Java](#java-client-library)
+- [TypeScript](#typescript-client-library)
+
+## Sign in to Azure
++
+Get the confidential ledger's name and the identity service URI from the Azure portal, as they're needed to create a client to manage the users. This image shows the appropriate properties in the Azure portal.
++
+Replace instances of `contoso` and `https://contoso.confidential-ledger.azure.com` in the following code snippets with the respective values from the Azure portal.
+
+## Python Client Library
+
+### Install the packages
+
+```Python
+pip install azure-identity azure-confidentialledger
+```
+
+### Create a confidential ledger client
+
+```Python
+from azure.identity import DefaultAzureCredential
+from azure.confidentialledger import ConfidentialLedgerClient
+from azure.confidentialledger.certificate import ConfidentialLedgerCertificateClient
+
+identity_client = ConfidentialLedgerCertificateClient()
+network_identity = identity_client.get_ledger_identity(
+ ledger_id="contoso"
+)
+
+ledger_tls_cert_file_name = "ledger_certificate.pem"
+with open(ledger_tls_cert_file_name, "w") as cert_file:
+ cert_file.write(network_identity["ledgerTlsCertificate"])
+
+# The DefaultAzureCredential will use the current Azure context to authenticate to Azure
+credential = DefaultAzureCredential()
+
+ledger_client = ConfidentialLedgerClient(
+ endpoint="https://contoso.confidential-ledger.azure.com",
+ credential=credential,
+ ledger_certificate_path=ledger_tls_cert_file_name
+)
+
+# Add a user with the Contributor role
+# Other supported roles are Reader and Administrator
+user_id = "Azure AD object id of the user"
+user = ledger_client.create_or_update_user(
+ user_id, {"assignedRole": "Contributor"}
+)
+
+# Get the user and check their properties
+user = ledger_client.get_user(user_id)
+assert user["userId"] == user_id
+assert user["assignedRole"] == "Contributor"
+
+# Delete the user
+ledger_client.delete_user(user_id)
+```
+
+## .NET Client Library
+
+### Install the packages
++
+```
+dotnet add package Azure.Security.ConfidentialLedger
+dotnet add package Azure.Identity
+```
+
+### Create a client and manage the users
+
+```Dotnet
+using Azure.Core;
+using Azure.Identity;
+using Azure.Security.ConfidentialLedger;
+
+internal class ACLUserManagement
+{
+ static void Main(string[] args)
+ {
+ // Create a ConfidentialLedgerClient instance
+ // The DefaultAzureCredential will use the current Azure context to authenticate to Azure
+ var ledgerClient = new ConfidentialLedgerClient(new Uri("https://contoso.confidential-ledger.azure.com"), new DefaultAzureCredential());
+
+ string userId = "Azure AD object id of the user";
+
+ // Add the user with the Reader role
+ // Other supported roles are Contributor and Administrator
+ ledgerClient.CreateOrUpdateUser(
+ userId,
+ RequestContent.Create(new { assignedRole = "Reader" }));
+
+ // Get the user and print their properties
+ Azure.Response response = ledgerClient.GetUser(userId);
+ var aclUser = System.Text.Json.JsonDocument.Parse(response.Content.ToString());
+
+ Console.WriteLine($"Assigned Role is = {aclUser.RootElement.GetProperty("assignedRole").ToString()}");
+ Console.WriteLine($"User id is = {aclUser.RootElement.GetProperty("userId").ToString()}");
+
+ // Delete the user
+ ledgerClient.DeleteUser(userId);
+ }
+}
+```
+
+## Java Client Library
+
+### Install the packages
+
+```Java
+<!-- https://mvnrepository.com/artifact/com.azure/azure-security-confidentialledger -->
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-security-confidentialledger</artifactId>
+ <version>1.0.6</version>
+</dependency>
+<!-- https://mvnrepository.com/artifact/com.azure/azure-identity -->
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.8.0</version>
+</dependency>
+<!-- https://mvnrepository.com/artifact/com.azure/azure-core -->
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-core</artifactId>
+ <version>1.36.0</version>
+</dependency>
+```
+
+### Create a client and manage the users
+
+```Java
+ import java.io.IOException;
+import com.azure.core.http.HttpClient;
+import java.io.ByteArrayInputStream;
+import java.nio.charset.StandardCharsets;
+
+import com.azure.security.confidentialledger.*;
+import com.azure.core.http.rest.RequestOptions;
+import com.azure.core.http.netty.NettyAsyncHttpClientBuilder;
+import com.azure.core.http.rest.Response;
+import com.azure.core.util.BinaryData;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.azure.security.confidentialledger.certificate.ConfidentialLedgerCertificateClient;
+import com.azure.security.confidentialledger.certificate.ConfidentialLedgerCertificateClientBuilder;
+
+import io.netty.handler.ssl.SslContext;
+import io.netty.handler.ssl.SslContextBuilder;
+
+public class CreateOrUpdateUserSample {
+ public static void main(String[] args) {
+ try {
+ // Download the service identity certificate of the ledger from the well-known identity service endpoint.
+ // Do not change the identity endpoint.
+ ConfidentialLedgerCertificateClientBuilder confidentialLedgerCertificateClientbuilder = new ConfidentialLedgerCertificateClientBuilder()
+ .certificateEndpoint("https://identity.confidential-ledger.core.azure.com")
+ .credential(new DefaultAzureCredentialBuilder().build()).httpClient(HttpClient.createDefault());
+
+ ConfidentialLedgerCertificateClient confidentialLedgerCertificateClient = confidentialLedgerCertificateClientbuilder
+ .buildClient();
+
+ String ledgerId = "contoso";
+ Response<BinaryData> ledgerCertificateWithResponse = confidentialLedgerCertificateClient
+ .getLedgerIdentityWithResponse(ledgerId, null);
+ BinaryData certificateResponse = ledgerCertificateWithResponse.getValue();
+ ObjectMapper mapper = new ObjectMapper();
+ JsonNode jsonNode = mapper.readTree(certificateResponse.toBytes());
+ String ledgerTlsCertificate = jsonNode.get("ledgerTlsCertificate").asText();
+
+ SslContext sslContext = SslContextBuilder.forClient()
+ .trustManager(new ByteArrayInputStream(ledgerTlsCertificate.getBytes(StandardCharsets.UTF_8)))
+ .build();
+ reactor.netty.http.client.HttpClient reactorClient = reactor.netty.http.client.HttpClient.create()
+ .secure(sslContextSpec -> sslContextSpec.sslContext(sslContext));
+ HttpClient httpClient = new NettyAsyncHttpClientBuilder(reactorClient).wiretap(true).build();
+
+ // The DefaultAzureCredentialBuilder will use the current Azure context to authenticate to Azure
+ ConfidentialLedgerClient confidentialLedgerClient = new ConfidentialLedgerClientBuilder()
+ .credential(new DefaultAzureCredentialBuilder().build()).httpClient(httpClient)
+ .ledgerEndpoint("https://contoso.confidential-ledger.azure.com").buildClient();
+
+ // Add a user
+ // Other supported roles are Contributor and Administrator
+ BinaryData userDetails = BinaryData.fromString("{\"assignedRole\":\"Reader\"}");
+ RequestOptions requestOptions = new RequestOptions();
+ String userId = "Azure AD object id of the user";
+ Response<BinaryData> response = confidentialLedgerClient.createOrUpdateUserWithResponse(userId,
+ userDetails, requestOptions);
+
+ BinaryData parsedResponse = response.getValue();
+
+ ObjectMapper objectMapper = new ObjectMapper();
+ JsonNode responseBodyJson = null;
+
+ try {
+ responseBodyJson = objectMapper.readTree(parsedResponse.toBytes());
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+
+ System.out.println("Assigned role for user is " + responseBodyJson.get("assignedRole"));
+
+ // Get the user and print the details
+ response = confidentialLedgerClient.getUserWithResponse(userId, requestOptions);
+
+ parsedResponse = response.getValue();
+
+ try {
+ responseBodyJson = objectMapper.readTree(parsedResponse.toBytes());
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+
+ System.out.println("Assigned role for user is " + responseBodyJson.get("assignedRole"));
+
+ // Delete the user
+ confidentialLedgerClient.deleteUserWithResponse(userId, requestOptions);
+ } catch (Exception ex) {
+ System.out.println("Caught exception" + ex);
+ }
+ }
+}
+```
+
+## TypeScript Client Library
+
+### Install the packages
+
+```
+ "dependencies": {
+ "@azure-rest/confidential-ledger": "^1.0.0",
+ "@azure/identity": "^3.1.3",
+ "typescript": "^4.9.5"
+ }
+```
+### Create a client and manage the users
+
+```TypeScript
+import ConfidentialLedger, { getLedgerIdentity, CreateOrUpdateUserParameters } from "@azure-rest/confidential-ledger";
+import { DefaultAzureCredential } from "@azure/identity";
+
+export async function main() {
+ // Get the signing certificate from the confidential ledger Identity Service
+ const ledgerIdentity = await getLedgerIdentity("contoso");
+
+ // Create the confidential ledger Client
+ const confidentialLedger = ConfidentialLedger(
+ "https://contoso.confidential-ledger.azure.com",
+ ledgerIdentity.ledgerIdentityCertificate,
+ new DefaultAzureCredential()
+ );
+
+ // Azure AD object id of the user
+ const userId = "Azure AD Object id"
+
+  // Other supported roles are Reader and Administrator
+ const createUserParams: CreateOrUpdateUserParameters = {
+ contentType: "application/merge-patch+json",
+ body: {
+ assignedRole: "Contributor",
+ userId: `${userId}`
+ }
+ }
+
+ // Add the user
+ var response = await confidentialLedger.path("/app/users/{userId}", userId).patch(createUserParams)
+
+ // Check for a non-success response
+ if (response.status !== "200") {
+ throw response.body.error;
+ }
+
+ // Print the response
+ console.log(response.body);
+
+ // Get the user
+ response = await confidentialLedger.path("/app/users/{userId}", userId).get()
+
+ // Check for a non-success response
+ if (response.status !== "200") {
+ throw response.body.error;
+ }
+
+ // Print the response
+ console.log(response.body);
+
+ // Set the user role to Reader
+ const updateUserParams: CreateOrUpdateUserParameters = {
+ contentType: "application/merge-patch+json",
+ body: {
+ assignedRole: "Reader",
+ userId: `${userId}`
+ }
+ }
+
+ // Update the user
+ response = await confidentialLedger.path("/app/users/{userId}", userId).patch(updateUserParams)
+
+ // Check for a non-success response
+ if (response.status !== "200") {
+ throw response.body.error;
+ }
+
+ // Print the response
+ console.log(response.body);
+
+ // Delete the user
+ await confidentialLedger.path("/app/users/{userId}", userId).delete()
+
+  // Get the user to make sure it is deleted
+  response = await confidentialLedger.path("/app/users/{userId}", userId).get()
+
+  // A success response here means the user still exists, so the delete did not take effect
+  if (response.status === "200") {
+    throw new Error("User was not deleted");
+  }
+}
+
+main().catch((err) => {
+ console.error(err);
+});
+```
+
+## Next steps
+
+- [Register an ACL app with Azure AD](register-application.md)
+- [Manage certificate-based users](manage-certificate-based-users.md)
confidential-ledger Manage Certificate Based Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/manage-certificate-based-users.md
+
+ Title: Manage certificate-based users in Azure confidential ledger
+description: Learn how to manage certificate-based users in Azure confidential ledger
++ Last updated : 02/09/2023++++
+# Manage certificate-based users in Azure confidential ledger
+
+For certificate-based users, the user ID is the fingerprint of their PEM certificate. Users with Administrator privileges can manage users of the confidential ledger. Available roles are Reader (read-only), Contributor (read and write), and Administrator (read, write, and manage users).
+
+The following client libraries are available to manage users:
+
+- [Python](#python-client-library)
+- [.NET](#net-client-library)
+- [Java](#java-client-library)
+- [TypeScript](#typescript-client-library)
+
+## Sign in to Azure
++
+Get the confidential ledger's name and the identity service URI from the Azure portal; they'll be needed to create a client to manage the users. The image shows the appropriate properties in the Azure portal.
++
+Replace instances of `contoso` and `https://contoso.confidential-ledger.azure.com` in the following code snippets with the respective values from the Azure portal.
+
+## Python Client Library
+
+### Install the packages
+
+```Python
+pip install azure-identity azure-confidentialledger
+```
+
+### Create a confidential ledger client and manage the users
+
+```Python
+from azure.identity import DefaultAzureCredential
+from azure.confidentialledger import ConfidentialLedgerClient
+from azure.confidentialledger.certificate import ConfidentialLedgerCertificateClient
+
+identity_client = ConfidentialLedgerCertificateClient()
+network_identity = identity_client.get_ledger_identity(
+ ledger_id="contoso"
+ )
+
+ledger_tls_cert_file_name = "ledger_certificate.pem"
+with open(ledger_tls_cert_file_name, "w") as cert_file:
+ cert_file.write(network_identity["ledgerTlsCertificate"])
+
+credential = DefaultAzureCredential()
+ledger_client = ConfidentialLedgerClient(
+ endpoint="https://contoso.confidential-ledger.azure.com",
+ credential=credential,
+ ledger_certificate_path=ledger_tls_cert_file_name
+)
+
+# Add a user with the Contributor role
+# Other supported roles are Reader and Administrator
+user_id = "PEM certificate fingerprint"
+user = ledger_client.create_or_update_user(
+ user_id, {"assignedRole": "Contributor"}
+)
+
+# Get the user and check their properties
+user = ledger_client.get_user(user_id)
+assert user["userId"] == user_id
+assert user["assignedRole"] == "Contributor"
+
+# Delete the user
+ledger_client.delete_user(user_id)
+```
+
+## .NET Client Library
+
+### Install the packages
+
+```
+dotnet add package Azure.Security.ConfidentialLedger
+dotnet add package Azure.Identity
+```
+
+### Create a client and manage the users
+
+```Dotnet
+using Azure.Core;
+using Azure.Identity;
+using Azure.Security.ConfidentialLedger;
+
+internal class ACLUserManagement
+{
+ static void Main(string[] args)
+ {
+ // The DefaultAzureCredential will use the current Azure context to authenticate to Azure
+ var ledgerClient = new ConfidentialLedgerClient(new Uri("https://contoso.confidential-ledger.azure.com"), new DefaultAzureCredential());
+
+ // User id is the fingerprint of the PEM certificate
+        string userId = "PEM certificate fingerprint";
+
+ // Add the user with the Reader role
+ // Other supported roles are Contributor and Administrator
+ ledgerClient.CreateOrUpdateUser(
+ userId,
+ RequestContent.Create(new { assignedRole = "Reader" }));
+
+ // Get the user and print their properties
+ Azure.Response response = ledgerClient.GetUser(userId);
+ var aclUser = System.Text.Json.JsonDocument.Parse(response.Content.ToString());
+ Console.WriteLine($"Assigned Role is = {aclUser.RootElement.GetProperty("assignedRole").ToString()}");
+ Console.WriteLine($"User id is = {aclUser.RootElement.GetProperty("userId").ToString()}");
+
+ // Delete the user
+ ledgerClient.DeleteUser(userId);
+ }
+}
+```
+
+## Java Client Library
+
+### Install the packages
+
+```Java
+<!-- https://mvnrepository.com/artifact/com.azure/azure-security-confidentialledger -->
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-security-confidentialledger</artifactId>
+ <version>1.0.6</version>
+</dependency>
+<!-- https://mvnrepository.com/artifact/com.azure/azure-identity -->
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.8.0</version>
+</dependency>
+<!-- https://mvnrepository.com/artifact/com.azure/azure-core -->
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-core</artifactId>
+ <version>1.36.0</version>
+</dependency>
+```
+
+### Create a client and manage the users
+
+```Java
+import java.io.IOException;
+import com.azure.core.http.HttpClient;
+import java.io.ByteArrayInputStream;
+import java.nio.charset.StandardCharsets;
+
+import com.azure.security.confidentialledger.*;
+import com.azure.core.http.rest.RequestOptions;
+import com.azure.core.http.netty.NettyAsyncHttpClientBuilder;
+import com.azure.core.http.rest.Response;
+import com.azure.core.util.BinaryData;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.azure.security.confidentialledger.certificate.ConfidentialLedgerCertificateClient;
+import com.azure.security.confidentialledger.certificate.ConfidentialLedgerCertificateClientBuilder;
+
+import io.netty.handler.ssl.SslContext;
+import io.netty.handler.ssl.SslContextBuilder;
+
+public class CreateOrUpdateUserSample {
+ public static void main(String[] args) {
+ try {
+ // Download the service identity certificate of the ledger from the well-known identity service endpoint.
+ // Do not change the identity endpoint.
+ ConfidentialLedgerCertificateClientBuilder confidentialLedgerCertificateClientbuilder = new ConfidentialLedgerCertificateClientBuilder()
+ .certificateEndpoint("https://identity.confidential-ledger.core.azure.com")
+ .credential(new DefaultAzureCredentialBuilder().build()).httpClient(HttpClient.createDefault());
+
+ ConfidentialLedgerCertificateClient confidentialLedgerCertificateClient = confidentialLedgerCertificateClientbuilder
+ .buildClient();
+
+ String ledgerId = "contoso";
+ Response<BinaryData> ledgerCertificateWithResponse = confidentialLedgerCertificateClient
+ .getLedgerIdentityWithResponse(ledgerId, null);
+ BinaryData certificateResponse = ledgerCertificateWithResponse.getValue();
+ ObjectMapper mapper = new ObjectMapper();
+ JsonNode jsonNode = mapper.readTree(certificateResponse.toBytes());
+ String ledgerTlsCertificate = jsonNode.get("ledgerTlsCertificate").asText();
+
+ SslContext sslContext = SslContextBuilder.forClient()
+ .trustManager(new ByteArrayInputStream(ledgerTlsCertificate.getBytes(StandardCharsets.UTF_8)))
+ .build();
+ reactor.netty.http.client.HttpClient reactorClient = reactor.netty.http.client.HttpClient.create()
+ .secure(sslContextSpec -> sslContextSpec.sslContext(sslContext));
+ HttpClient httpClient = new NettyAsyncHttpClientBuilder(reactorClient).wiretap(true).build();
+
+ // The DefaultAzureCredentialBuilder will use the current Azure context to authenticate to Azure.
+ ConfidentialLedgerClient confidentialLedgerClient = new ConfidentialLedgerClientBuilder()
+ .credential(new DefaultAzureCredentialBuilder().build()).httpClient(httpClient)
+ .ledgerEndpoint("https://contoso.confidential-ledger.azure.com").buildClient();
+
+ // Add a user using their certificate fingerprint as the user id
+ // Other supported roles are Contributor and Administrator
+ BinaryData userDetails = BinaryData.fromString("{\"assignedRole\":\"Reader\"}");
+ RequestOptions requestOptions = new RequestOptions();
+
+ String userId = "PEM certificate fingerprint";
+ Response<BinaryData> response = confidentialLedgerClient.createOrUpdateUserWithResponse(userId,
+ userDetails, requestOptions);
+
+ BinaryData parsedResponse = response.getValue();
+
+ ObjectMapper objectMapper = new ObjectMapper();
+ JsonNode responseBodyJson = null;
+
+ try {
+ responseBodyJson = objectMapper.readTree(parsedResponse.toBytes());
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+
+ System.out.println("Assigned role for user is " + responseBodyJson.get("assignedRole"));
+
+ // Get the user and print the details
+ response = confidentialLedgerClient.getUserWithResponse(userId, requestOptions);
+
+ parsedResponse = response.getValue();
+
+ try {
+ responseBodyJson = objectMapper.readTree(parsedResponse.toBytes());
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+
+ System.out.println("Assigned role for user is " + responseBodyJson.get("assignedRole"));
+
+ // Delete the user
+ confidentialLedgerClient.deleteUserWithResponse(userId, requestOptions);
+ } catch (Exception ex) {
+ System.out.println("Caught exception: " + ex);
+ }
+ }
+}
+```
+
+## TypeScript Client Library
+
+### Install the packages
+
+```json
+ "dependencies": {
+ "@azure-rest/confidential-ledger": "^1.0.0",
+ "@azure/identity": "^3.1.3",
+ "typescript": "^4.9.5"
+ }
+```
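+
+You can install these packages with npm; for example:
+
+```
+npm install @azure-rest/confidential-ledger @azure/identity typescript
+```
+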
+### Create a client and manage the users
+
+```TypeScript
+import ConfidentialLedger, { CreateOrUpdateUserParameters, getLedgerIdentity } from "@azure-rest/confidential-ledger";
+import { DefaultAzureCredential } from "@azure/identity";
+
+export async function main() {
+ // Get the signing certificate from the confidential ledger Identity Service
+ const ledgerIdentity = await getLedgerIdentity("contoso");
+
+ // Create the confidential ledger Client
+ const confidentialLedger = ConfidentialLedger(
+ "https://contoso.confidential-ledger.azure.com",
+ ledgerIdentity.ledgerIdentityCertificate,
+ new DefaultAzureCredential()
+ );
+
+ // User id is the PEM certificate fingerprint
+ const userId = "PEM certificate fingerprint"
+
+ // Other supported roles are Reader and Contributor
+ const createUserParams: CreateOrUpdateUserParameters = {
+ contentType: "application/merge-patch+json",
+ body: {
+ assignedRole: "Contributor",
+ userId: `${userId}`
+ }
+ }
+
+ // Add the user
+ var response = await confidentialLedger.path("/app/users/{userId}", userId).patch(createUserParams)
+
+ // Check for a non-success response
+ if (response.status !== "200") {
+ throw response.body.error;
+ }
+
+ // Print the response
+ console.log(response.body);
+
+ // Get the user
+ response = await confidentialLedger.path("/app/users/{userId}", userId).get()
+
+ // Check for a non-success response
+ if (response.status !== "200") {
+ throw response.body.error;
+ }
+
+ // Print the response
+ console.log(response.body);
+
+ // Set the user role to Reader
+ const updateUserParams: CreateOrUpdateUserParameters = {
+ contentType: "application/merge-patch+json",
+ body: {
+ assignedRole: "Reader",
+ userId: `${userId}`
+ }
+ }
+
+ // Update the user
+ response = await confidentialLedger.path("/app/users/{userId}", userId).patch(updateUserParams)
+
+ // Check for a non-success response
+ if (response.status !== "200") {
+ throw response.body.error;
+ }
+
+ // Print the response
+ console.log(response.body);
+
+ // Delete the user
+ await confidentialLedger.path("/app/users/{userId}", userId).delete()
+
+  // Get the user to make sure it is deleted
+  response = await confidentialLedger.path("/app/users/{userId}", userId).get()
+
+  // A success response here means the user still exists
+  if (response.status === "200") {
+    throw new Error("User was not deleted");
+  }
+}
+
+main().catch((err) => {
+ console.error(err);
+});
+```
+
+## Next steps
+
+- [Create a client certificate](create-client-certificate.md)
+- [Manage Azure AD token-based users](manage-azure-ad-token-based-users.md)
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
This guide provides insight into core Dapr concepts and details regarding the Da
| [**Secrets**][dapr-secrets] | Access secrets from your application code or reference secure values in your Dapr components. | > [!NOTE]
-> The above table covers stable Dapr APIs. To learn more about using alpha APIs and components, [see limitations](#unsupported-dapr-capabilities).
+> The above table covers stable Dapr APIs. To learn more about using alpha APIs and features, [see limitations](#unsupported-dapr-capabilities).
## Dapr concepts overview
This resource defines a Dapr component called `dapr-pubsub` via ARM.
- **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec. - **Declarative pub/sub subscriptions** - **Any Dapr sidecar annotations not listed above**-- **Alpha APIs and components**: Dapr alpha APIs and components are available to use on a self-service, opt-in basis. Alpha APIs and components are provided "as is" and "as available," and are continually evolving as they move toward stable status. Alpha APIs and components are not covered by customer support.
+- **Alpha APIs and components**: Azure Container Apps doesn't guarantee the availability of Dapr alpha APIs and features. Where available, they're offered on a self-service, opt-in basis. Alpha APIs and components are provided "as is" and "as available," and are continually evolving as they move toward stable status. Alpha APIs and components aren't covered by customer support.
### Known limitations
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
Previously updated : 11/21/2022 Last updated : 02/17/2023
To request an increase in quota amounts for your container app, learn [how to re
| Feature | Scope | Default | Is Configurable<sup>1</sup> | Remarks | |--|--|--|--|--|
-| Environments | Region | Up to 5 | Yes | Limit up to five environments per subscription, per region.<br><br>For example, if you deploy to three regions you can get up to 15 environments for a single subscription. |
-| Container Apps | Environment | 20 | Yes | |
+| Environments | Region | Up to 15 | Yes | Limit up to 15 environments per subscription, per region.<br><br>For example, if you deploy to three regions you can get up to 45 environments for a single subscription. |
+| Container Apps | Environment | Unlimited | Yes | |
| Revisions | Container app | 100 | No | |
| Replicas | Revision | 30 | Yes | |
| Cores | Replica | 2 | No | Maximum number of cores that can be requested by a revision replica. |
cosmos-db Migration Choices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migration-choices.md
Last updated 04/02/2022
You can load data from various data sources to Azure Cosmos DB. Since Azure Cosmos DB supports multiple APIs, the targets can be any of the existing APIs. The following are some scenarios where you migrate data to Azure Cosmos DB: * Move data from one Azure Cosmos DB container to another container within the Azure Cosmos DB account (could be in the same database or a different database).
-* Move data from one Azure Cosmos DB account to another Azure Cosmos DB account (could be in the same region or a different regions, same subscription or a different one).
+* Move data from one Azure Cosmos DB account to another Azure Cosmos DB account (could be in the same region or a different region, same subscription or a different one).
* Move data from a source such as Azure blob storage, a JSON file, Oracle database, Couchbase, DynamoDB to Azure Cosmos DB. In order to support migration paths from the various sources to the different Azure Cosmos DB APIs, there are multiple solutions that provide specialized handling for each migration path. This document lists the available solutions and describes their advantages and limitations.
In order to support migration paths from the various sources to the different Az
The following factors determine the choice of the migration tool:
-* **Online vs offline migration**: Many migration tools provide a path to do a one-time migration only. This means that the applications accessing the database might experience a period of downtime. Some migration solutions provide a way to do a live migration where there is a replication pipeline set up between the source and the target.
+* **Online vs offline migration**: Many migration tools provide a path to do a one-time migration only. This means that the applications accessing the database might experience a period of downtime. Some migration solutions provide a way to do a live migration where there's a replication pipeline set up between the source and the target.
* **Data source**: The existing data can be in various data sources like Oracle DB2, Datastax Cassandra, Azure SQL Database, PostgreSQL, etc. The data can also be in an existing Azure Cosmos DB account and the intent of migration can be to change the data model or repartition the data in a container with a different partition key.
The following factors determine the choice of the migration tool:
## Azure Cosmos DB API for NoSQL If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
-* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](estimate-ru-with-capacity-planner.md).
-
->[!IMPORTANT]
-> The [Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator) is an open-source tool for live container migrations that implements change feed and bulk support. However, please note that the user interface application code for this tool is not supported or actively maintained by Microsoft. For Azure Cosmos DB API for NoSQL live container migrations, we recommend using the Spark Connector + Change Feed as illustrated in the [sample](https://github.com/Azure/azure-sdk-for-jav) is fully supported by Microsoft.
+* If you're migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](estimate-ru-with-capacity-planner.md).
|Migration type|Solution|Supported sources|Supported targets|Considerations| |||||| |Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for NoSQL|Azure Cosmos DB for NoSQL|&bull; CLI-based; No set up needed. <br/>&bull; Supports large datasets.| |Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;MongoDB <br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |&bull; Easy to set up and supports multiple sources.<br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>&bull; Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.| |Offline|[Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)|Azure Cosmos DB for NoSQL. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
-|Online|[Azure Cosmos DB Spark connector + Change Feed](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration)|Azure Cosmos DB for NoSQL. <br/><br/>Uses Azure Cosmos DB Change Feed to stream all historic data as well as live updates.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
+|Online|[Azure Cosmos DB Spark connector + Change Feed sample](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration)|Azure Cosmos DB for NoSQL. <br/><br/>Uses Azure Cosmos DB Change Feed to stream all historic data as well as live updates.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
|Offline|[Custom tool with Azure Cosmos DB bulk executor library](migrate.md)| The source depends on your custom code | Azure Cosmos DB for NoSQL| &bull; Provides checkpointing, dead-lettering capabilities which increases migration resiliency. <br/>&bull; Suitable for very large datasets (10 TB+). <br/>&bull; Requires custom setup of this tool running as an App Service. |
-|Online|[Azure Cosmos DB Functions + ChangeFeed API](change-feed-functions.md)| Azure Cosmos DB for NoSQL | Azure Cosmos DB for NoSQL| &bull; Easy to set up. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Not suitable for large datasets. <br/>&bull; Does not capture deletes from the source container. |
-|Online|[Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator)| Azure Cosmos DB for NoSQL | Azure Cosmos DB for NoSQL| &bull; Provides progress tracking. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Works for larger datasets as well.<br/>&bull; Requires the user to set up an App Service to host the Change feed processor. <br/>&bull; Does not capture deletes from the source container.|
+|Online|[Azure Cosmos DB Functions + ChangeFeed API](change-feed-functions.md)| Azure Cosmos DB for NoSQL | Azure Cosmos DB for NoSQL| &bull; Easy to set up. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Not suitable for large datasets. <br/>&bull; Doesn't capture deletes from the source container. |
|Online|[Striim](cosmosdb-sql-api-migrate-data-striim.md)| &bull;Oracle <br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources. |&bull;Azure Cosmos DB for NoSQL <br/>&bull; Azure Cosmos DB for Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets. | &bull; Works with a large variety of sources like Oracle, DB2, SQL Server.<br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.| ## Azure Cosmos DB API for MongoDB Follow the [pre-migration guide](mongodb/pre-migration-steps.md) to plan your migration. * If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
-* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md).
+* If you're migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md).
-When you are ready to migrate, you can find detailed guidance on migration tools below
+When you're ready to migrate, you can find detailed guidance on migration tools below
* [Offline migration using MongoDB native tools](mongodb/tutorial-mongotools-cosmos-db.md) * [Offline migration using Azure database migration service (DMS)](../dms/tutorial-mongodb-cosmos-db.md) * [Online migration using Azure database migration service (DMS)](../dms/tutorial-mongodb-cosmos-db-online.md) * [Offline/online migration using Azure Databricks and Spark](mongodb/migrate-databricks.md)
-Then, follow our [post-migration guide](mongodb/post-migration-optimization.md) to optimize your Azure Cosmos DB data estate once you have migrated.
+Then, follow our [post-migration guide](mongodb/post-migration-optimization.md) to optimize your Azure Cosmos DB data estate once you've migrated.
A summary of migration pathways from your current solution to Azure Cosmos DB for MongoDB is provided below:
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
The [``Database.GetContainer``](/dotnet/api/microsoft.azure.cosmos.database.getc
:::code language="csharp" source="~/cosmos-db-nosql-dotnet-samples/002-quickstart-passwordless/Program.cs" id="new_container" highlight="2,4":::
-### Create an item
-
-The easiest way to create a new item in a container is to first build a C# [class](/dotnet/csharp/language-reference/keywords/class) or [record](/dotnet/csharp/language-reference/builtin-types/record) type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a *categoryId* field for the partition key, and extra *categoryName*, *name*, *quantity*, and *sale* fields.
--
-Create an item in the container by calling [``Container.CreateItemAsync``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync).
--
-For more information on creating, upserting, or replacing items, see [Create an item in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-item.md).
-
-### Get an item
-
-In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) passing in both values to return a deserialized instance of your C# type.
--
-For more information about reading items and parsing the response, see [Read an item in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-read-item.md).
-
-### Query items
-
-After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: ``SELECT * FROM products p WHERE p.categoryId = "61dba35b-4f02-45c5-b648-c6badc0cbd79"``. This example uses the **QueryDefinition** type and a parameterized query expression for the partition key filter. Once the query is defined, call [``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) to get a result iterator that will manage the pages of results. Then, use a combination of ``while`` and ``foreach`` loops to retrieve pages of results and then iterate over the individual items.
-- ## [Connection String](#tab/connection-string) ### Create a database
Use the [``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.
For more information on creating a database, see [Create a database in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-database.md). + ### Create a container The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) will create a new container if it doesn't already exist. This method will also return a reference to the container.
The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.c
For more information on creating a container, see [Create a container in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-container.md). ++ ### Create an item The easiest way to create a new item in a container is to first build a C# [class](/dotnet/csharp/language-reference/keywords/class) or [record](/dotnet/csharp/language-reference/builtin-types/record) type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a *categoryId* field for the partition key, and extra *categoryName*, *name*, *quantity*, and *sale* fields. Create an item in the container by calling [``Container.CreateItemAsync``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync). For more information on creating, upserting, or replacing items, see [Create an item in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-item.md).
For more information on creating, upserting, or replacing items, see [Create an
In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) passing in both values to return a deserialized instance of your C# type. For more information about reading items and parsing the response, see [Read an item in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-read-item.md).
For more information about reading items and parsing the response, see [Read an
After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: ``SELECT * FROM products p WHERE p.categoryId = "61dba35b-4f02-45c5-b648-c6badc0cbd79"``. This example uses the **QueryDefinition** type and a parameterized query expression for the partition key filter. Once the query is defined, call [``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) to get a result iterator that will manage the pages of results. Then, use a combination of ``while`` and ``foreach`` loops to retrieve pages of results and then iterate over the individual items. -- ## Run the code
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
Previously updated : 01/20/2023 Last updated : 02/17/2023 # Change data capture resource overview
The new Change Data Capture resource in ADF allows for full fidelity change data
* Avro * Azure Cosmos DB (SQL API) * Azure SQL Database
+* Azure SQL Managed Instance
* Delimited Text * JSON * ORC
The new Change Data Capture resource in ADF allows for full fidelity change data
* Avro * Azure SQL Database
+* SQL Managed Instance
* Delimited Text * Delta * JSON
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-set-variable-activity.md
To use a Set Variable activity in a pipeline, complete the following steps:
2. Search for _Set Variable_ in the pipeline Activities pane, and drag a Set Variable activity to the pipeline canvas.
-3. Select the Set Variable activity on the canvas if it isn't already selected, and then click the **Settings** tab to edit its details.
+3. Select the Set Variable activity on the canvas if it isn't already selected, and then select the **Settings** tab to edit its details.
-4. Select the variable for the Name property.
+4. Select **Pipeline variable** for your **Variable type**.
-5. Enter an expression to set the value for the variables. This expression can be a literal string expression, or any combination of dynamic [expressions, functions](control-flow-expression-language-functions.md), [system variables](control-flow-system-variables.md), or [outputs from other activities](how-to-expression-language-functions.md#examples-of-using-parameters-in-expressions).
+5. Select the variable for the Name property.
+
+6. Enter an expression to set the value for the variables. This expression can be a literal string expression, or any combination of dynamic [expressions, functions](control-flow-expression-language-functions.md), [system variables](control-flow-system-variables.md), or [outputs from other activities](how-to-expression-language-functions.md#examples-of-using-parameters-in-expressions).
:::image type="content" source="media/control-flow-set-variable-activity/set-variable-activity.png" alt-text="Screenshot of the UI for a Set variable activity.":::
-## Setting a pipeline return value in the Set Variable activity with UI
-
-The Set Variable activity now allows you to set a pipeline return value (preview). The pipeline return value is a system variable that allows you to customize a value that can be consumed by a parent pipeline and used downstream in your pipeline.
-
-To set a pipeline return value, complete the following steps:
-
-1. Search for _Set Variable_ in the pipeline Activities pane, and drag a Set Variable activity to the pipeline canvas.
-
-2. Select the Set Variable activity on the canvas if it isn't already selected, and then click the **Settings** tab to edit its details.
+## Setting a pipeline return value with UI
-3. Select **Pipeline return value (preview)** for your **Variable type**.
+We have expanded the Set Variable activity to include a special system variable named _Pipeline Return Value_, which allows communication from the child pipeline to the calling pipeline.
-4. Enter a **Name** for your variable and select the **Type** from the drop-down menu.
+You don't need to define the variable before using it. For more information, see [Pipeline Return Value](tutorial-pipeline-return-value.md).
-5. Enter an expression to set the value for the pipeline return value. This expression can be a literal string expression, or any combination of dynamic [expressions, functions](control-flow-expression-language-functions.md), [system variables](control-flow-system-variables.md), or [outputs from other activities](how-to-expression-language-functions.md#examples-of-using-parameters-in-expressions).
## Type properties
value | String literal or expression object value that the variable is assigned
## Incrementing a variable
-A common scenario involving variables is to use a variable as an iterator within an **Until** or **ForEach** activity. In a **Set variable** activity, you can't reference the variable being set in the `value` field. To work around this limitation, set a temporary variable and then create a second **Set variable** activity. The second **Set variable** activity sets the value of the iterator to the temporary variable.
+A common scenario involving variables is to use a variable as an iterator within an **Until** or **ForEach** activity. In a **Set variable** activity, you can't reference the variable being set in the `value` field. To work around this limitation, set a temporary variable and then create a second **Set variable** activity. The second **Set variable** activity sets the value of the iterator to the temporary variable.
Below is an example of this pattern:
Below is an example of this pattern:
} ```
-Variables are currently scoped at the pipeline level. This means that they're not thread safe and can cause unexpected and undesired behavior if they are accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity.
+Variables are currently scoped at the pipeline level. This means that they're not thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity.
## Next steps
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 11/30/2022 Last updated : 02/16/2023 # Manage Azure Data Factory studio preview experience
Find the error icon in the pipeline monitoring page and in the pipeline **Output
#### Container view
+> [!NOTE]
+> This feature is now generally available in the ADF studio.
+ When monitoring your pipeline run, you have the option to enable the container view, which will provide a consolidated view of the activities that ran. This view is available in the output of your pipeline debug run and in the detailed monitoring view found in the monitoring tab.
Click the button next to the iteration or conditional activity to collapse the n
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-40.png" alt-text="Screenshot of the collapsed container monitoring view.":::
-#### Simplified default monitoring view
+#### Simplified default monitoring view
The default monitoring view has been simplified with fewer default columns. You can add/remove columns if you'd like to personalize your monitoring view. Changes to the default will be cached.
data-factory Tutorial Pipeline Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-return-value.md
+
+ Title: Set Pipeline Return Value
+
+description: Learn how to use the Set Variable activity to send information from child pipeline to main pipeline
++++ Last updated : 2/12/2022+++++
+# Set Pipeline Return Value in Azure Data Factory and Azure Synapse Analytics
+
+In the calling pipeline-child pipeline paradigm, you can use the [Set Variable activity](control-flow-set-variable-activity.md) to return values from the child pipeline to the calling pipeline. In the following scenario, a child pipeline is invoked through the [Execute Pipeline Activity](control-flow-execute-pipeline-activity.md), and we want to __retrieve information from the child pipeline__ to use in the calling pipeline.
++
+The pipeline return value is a dictionary of key-value pairs that allows communication between child pipelines and the parent pipeline.
+
+## Prerequisite - Calling a Child Pipeline
+
+The prerequisite for this use case is that you have an [Execute Pipeline Activity](control-flow-execute-pipeline-activity.md) calling a child pipeline. It's important that _Wait on Completion_ is enabled for the activity.
+++
+## Configure Pipeline Return Value in Child Pipeline
+
+We've expanded the [Set Variable activity](control-flow-set-variable-activity.md) to include the system variable _Pipeline Return Value_. Unlike other variables you use in the pipeline, you don't need to define it at the pipeline level.
+
+1. Search for _Set Variable_ in the pipeline Activities pane, and drag a Set Variable activity to the pipeline canvas.
+1. Select the Set Variable activity on the canvas if it isn't already selected, and then select its **Variables** tab to edit its details.
+1. Choose _Pipeline return value_ for the variable type.
+1. Select _New_ to add a new key-value pair.
+1. You can add any reasonable number of key-value pairs, bounded by the size limit of the returned JSON.
++
+There are a few options for value types, including
+
+Type Name | Description
+-- | --
+String | The most straightforward of all. It expects a string value.
+Expression | It allows you to reference output from previous activities.
+Array | It expects an array of _string values_. Press the "Enter" key to separate values in the array.
+Boolean | True or False.
+Null | Signals a placeholder status; the value is the constant _null_.
+Int | It expects a numerical value of integer type.
+Float | It expects a numerical value of float type.
+Object | __Warning__: for complicated use cases only. It allows you to embed a list of key-value pairs as the value.
+
+Value of object type is defined as follows:
+
+``` json
+[{"key": "myKey1", "value": {"type": "String", "content": "hello world"}},
+ {"key": "myKey2", "value": {"type": "String", "content": "hi"}}
+]
+```
+
+## Retrieving Value in Calling Pipeline
+
+The pipeline return value of the child pipeline becomes the activity output of the Execute Pipeline Activity. You can retrieve the information with _@activity('Execute Pipeline1').output.pipelineReturnValue.keyName_. The use cases are limitless. For instance, you may use:
+* An _int_ value from the child pipeline to define the wait period for a [wait activity](control-flow-wait-activity.md)
+* A _string_ value to define the URL for the [Web activity](control-flow-web-activity.md)
+* An _expression_ value payload for a [script activity](transform-data-using-script.md) for logging purposes.
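+
+For illustration, here's a minimal sketch of the Execute Pipeline activity output, assuming hypothetical keys named _waitSeconds_ (Int), _targetUrl_ (String), and _fileNames_ (Array) were set in the child pipeline:
+
+``` json
+{
+  "pipelineReturnValue": {
+    "waitSeconds": 30,
+    "targetUrl": "https://contoso.com/api/status",
+    "fileNames": ["file1.csv", "file2.csv"]
+  }
+}
+```
+
+With this payload, _@activity('Execute Pipeline1').output.pipelineReturnValue.waitSeconds_ would return 30, which you could pass to a Wait activity, for example.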
++
+There are two notable callouts when referencing pipeline return values.
+
+1. With the _Object_ type, you may further expand into the nested JSON object, such as _@activity('Execute Pipeline1').output.pipelineReturnValue.keyName.nextLevelKey_
+2. With the _Array_ type, you can specify an index in the list, with _@activity('Execute Pipeline1').output.pipelineReturnValue.keyName[0]_. The index is zero-based, meaning that it starts at 0.
+
+> [!NOTE]
+> Make sure that the _keyName_ you're referencing exists in your child pipeline. The ADF expression builder can't perform the referential check for you.
+> The pipeline fails if the referenced key is missing from the payload.
+
+## Special Considerations
+
+You may have multiple Set Variable activities that set the pipeline return value in a pipeline. However, ensure that only one of them runs in a given pipeline run.
++
+To avoid the missing-key situation in the calling pipeline described above, we encourage you to use the same list of keys for all branches in the child pipeline. Consider using the _null_ type for keys that don't have values in a specific branch.
+
+## Next steps
+Learn about another related control flow activity:
+- [Set Variable Activity](control-flow-set-variable-activity.md)
+- [Append Variable Activity](control-flow-append-variable-activity.md)
+
databox-online Azure Stack Edge Deploy Aks On Azure Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md
Previously updated : 02/01/2023 Last updated : 02/16/2023 # Customer intent: As an IT admin, I need to understand how to deploy and configure Azure Kubernetes service on Azure Stack Edge.
Follow these steps to deploy the AKS cluster.
1. Select **Add** to configure AKS.
-1. On the **Create Kubernetes service** dialog, select the Kubernetes **Node size** for the infrastructure VM. In this example, we have selected VM size **Standard_F16s_HPN ΓÇô 16 vCPUs, 32.77 GB memory**.
+1. On the **Create Kubernetes service** dialog, select the Kubernetes **Node size** for the infrastructure VM. Select a VM node size that's appropriate for the workload size you're deploying. In this example, we've selected VM size **Standard_F16s_HPN – 16 vCPUs, 32.77 GB memory**.
> [!NOTE] > If the node size dropdown menu isn't populated, wait a few minutes so that it's synchronized after VMs are enabled in the preceding step.
dev-box Cli Reference Subset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/cli-reference-subset.md
Install the Azure CLI and the Dev Box CLI extension as described here: [Microsof
#### Create an image definition that meets all requirements ```azurecli
-az sig image-definition create --resource-group {resourceGroupName} `
gallery-name {galleryName} --gallery-image-definition {definitionName} `publisher {publisherName} --offer {offerName} --sku {skuName} `os-type windows --os-state Generalized `hyper-v-generation v2 `features SecurityType=TrustedLaunch `
+az sig image-definition create --resource-group {resourceGroupName}
+--gallery-name {galleryName} --gallery-image-definition {definitionName}
+--publisher {publisherName} --offer {offerName} --sku {skuName}
+--os-type windows --os-state Generalized
+--hyper-v-generation v2
+--features SecurityType=TrustedLaunch
``` #### Attach a Gallery to the DevCenter ```azurecli
-az devcenter admin gallery create -g demo-rg `
devcenter-name contoso-devcenter -n SharedGallery `gallery-resource-id "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/galleries/{computeGalleryName}" `
+az devcenter admin gallery create -g demo-rg
+--dev-center-name contoso-devcenter -n SharedGallery
+--gallery-resource-id "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/galleries/{computeGalleryName}"
``` ### DevCenter
az devcenter admin gallery create -g demo-rg `
#### Create a DevCenter ```azurecli
-az devcenter admin devcenter create -g demo-rg `
--n contoso-devcenter --identity-type UserAssigned `user-assigned-identity ` "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{managedIdentityName}" `location {regionName} `
+az devcenter admin devcenter create -g demo-rg
+-n contoso-devcenter --identity-type UserAssigned
+--user-assigned-identity "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{managedIdentityName}"
+--location {regionName}
``` ### Project
az devcenter admin devcenter create -g demo-rg `
#### Create a Project ```azurecli
-az devcenter admin project create -g demo-rg `
--n ContosoProject `description "project description" `devcenter-id /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevCenter/devcenters/{devCenterName} `
+az devcenter admin project create -g demo-rg
+-n ContosoProject
+--description "project description"
+--devcenter-id /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevCenter/devcenters/{devCenterName}
``` #### Delete a Project ```azurecli
-az devcenter admin project delete `
--g {resourceGroupName} `project {projectName} `
+az devcenter admin project delete
+-g {resourceGroupName}
+--project {projectName}
``` ### Network Connection
az devcenter admin project delete `
#### Create a native AADJ Network Connection ```azurecli
-az devcenter admin network-connection create --location "centralus" `
domain-join-type "AzureADJoin" `subnet-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ExampleRG/providers/Microsoft.Network/virtualNetworks/ExampleVNet/subnets/default" `name "{networkConnectionName}" --resource-group "rg1" `
+az devcenter admin network-connection create --location "centralus"
+--domain-join-type "AzureADJoin"
+--subnet-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ExampleRG/providers/Microsoft.Network/virtualNetworks/ExampleVNet/subnets/default"
+--name "{networkConnectionName}" --resource-group "rg1"
``` #### Create a hybrid AADJ Network Connection ```azurecli
-az devcenter admin network-connection create --location "centralus" `
domain-join-type "HybridAzureADJoin" --domain-name "mydomaincontroller.local" `domain-password "Password value for user" --domain-username "testuser@mydomaincontroller.local" `subnet-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ExampleRG/providers/Microsoft.Network/virtualNetworks/ExampleVNet/subnets/default" `name "{networkConnectionName}" --resource-group "rg1" `
+az devcenter admin network-connection create --location "centralus"
+--domain-join-type "HybridAzureADJoin" --domain-name "mydomaincontroller.local"
+--domain-password "Password value for user" --domain-username "testuser@mydomaincontroller.local"
+--subnet-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ExampleRG/providers/Microsoft.Network/virtualNetworks/ExampleVNet/subnets/default"
+--name "{networkConnectionName}" --resource-group "rg1"
``` #### Attach a Network Connection to the DevCenter ```azurecli
-az devcenter admin attached-network create --attached-network-connection-name westus3network `
devcenter-name contoso-devcenter -g demo-rg `network-connection-id /subscriptions/f141e9f2-4778-45a4-9aa0-8b31e6469454/resourceGroups/demo-rg/providers/Microsoft.DevCenter/networkConnections/netset99 `
+az devcenter admin attached-network create --attached-network-connection-name westus3network
+--dev-center-name contoso-devcenter -g demo-rg
+--network-connection-id /subscriptions/f141e9f2-4778-45a4-9aa0-8b31e6469454/resourceGroups/demo-rg/providers/Microsoft.DevCenter/networkConnections/netset99
``` ### Dev Box Definition
az devcenter admin attached-network create --attached-network-connection-name we
#### List Dev Box Definitions in a DevCenter ```azurecli
-az devcenter admin devbox-definition list `
devcenter-name "Contoso" --resource-group "rg1" `
+az devcenter admin devbox-definition list
+--dev-center-name "Contoso" --resource-group "rg1"
``` #### List skus available in your subscription
az devcenter admin sku list
#### Create a Dev Box Definition with a marketplace image ```azurecli
-az devcenter admin devbox-definition create -g demo-rg `
devcenter-name contoso-devcenter -n BaseImageDefinition `image-reference id="/subscriptions/{subscriptionId}/resourceGroups/demo-rg/providers/Microsoft.DevCenter/devcenters/contoso-devcenter/galleries/Default/images/MicrosoftWindowsDesktop_windows-ent-cpc_win11-21h2-ent-cpc-m365" `sku name="general_a_8c32gb_v1" `
+az devcenter admin devbox-definition create -g demo-rg
+--dev-center-name contoso-devcenter -n BaseImageDefinition
+--image-reference id="/subscriptions/{subscriptionId}/resourceGroups/demo-rg/providers/Microsoft.DevCenter/devcenters/contoso-devcenter/galleries/Default/images/MicrosoftWindowsDesktop_windows-ent-cpc_win11-21h2-ent-cpc-m365"
+--sku name="general_a_8c32gb_v1"
``` #### Create a Dev Box Definition with a custom image ```azurecli
-az devcenter admin devbox-definition create -g demo-rg `
devcenter-name contoso-devcenter -n CustomDefinition `image-reference id="/subscriptions/{subscriptionId}/resourceGroups/demo-rg/providers/Microsoft.DevCenter/devcenters/contoso-devcenter/galleries/SharedGallery/images/CustomImageName" `
+az devcenter admin devbox-definition create -g demo-rg
+--dev-center-name contoso-devcenter -n CustomDefinition
+--image-reference id="/subscriptions/{subscriptionId}/resourceGroups/demo-rg/providers/Microsoft.DevCenter/devcenters/contoso-devcenter/galleries/SharedGallery/images/CustomImageName"
--os-storage-type "ssd_1024gb" --sku name=general_a_8c32gb_v1 ```
az devcenter admin devbox-definition create -g demo-rg `
#### Create a Pool ```azurecli
-az devcenter admin pool create -g demo-rg `
project-name ContosoProject -n MarketplacePool `devbox-definition-name Definition `network-connection-name westus3network `license-type Windows_Client --local-administrator Enabled `
+az devcenter admin pool create -g demo-rg
+--project-name ContosoProject -n MarketplacePool
+--devbox-definition-name Definition
+--network-connection-name westus3network
+--license-type Windows_Client --local-administrator Enabled
``` #### Get Pool ```azurecli
-az devcenter admin pool show --resource-group "{resourceGroupName}" `
project-name {projectName} --name "{poolName}" `
+az devcenter admin pool show --resource-group "{resourceGroupName}"
+--project-name {projectName} --name "{poolName}"
``` #### List Pools ```azurecli
-az devcenter admin pool list --resource-group "{resourceGroupName}" `
project-name {projectName} `
+az devcenter admin pool list --resource-group "{resourceGroupName}"
+--project-name {projectName}
``` #### Update Pool
az devcenter admin pool list --resource-group "{resourceGroupName}" `
Update Network Connection ```azurecli
-az devcenter admin pool update `
resource-group "{resourceGroupName}" `project-name {projectName} `name "{poolName}" `
+az devcenter admin pool update
+--resource-group "{resourceGroupName}"
+--project-name {projectName}
+--name "{poolName}"
--network-connection-name {networkConnectionName} ``` Update Dev Box Definition ```azurecli
-az devcenter admin pool update `
resource-group "{resourceGroupName}" `project-name {projectName} `name "{poolName}" `devbox-definition-name {devBoxDefinitionName} `
+az devcenter admin pool update
+--resource-group "{resourceGroupName}"
+--project-name {projectName}
+--name "{poolName}"
+--devbox-definition-name {devBoxDefinitionName}
``` #### Delete Pool ```azurecli
-az devcenter admin pool delete `
resource-group "{resourceGroupName}" `project-name "{projectName}" `name "{poolName}" `
+az devcenter admin pool delete
+--resource-group "{resourceGroupName}"
+--project-name "{projectName}"
+--name "{poolName}"
``` ### Dev Boxes
az devcenter admin pool delete `
#### List available Projects ```azurecli
-az devcenter dev project list `
+az devcenter dev project list
--devcenter {devCenterName} ``` #### List Pools in a Project ```azurecli
-az devcenter dev pool list `
devcenter {devCenterName} `project-name {ProjectName} `
+az devcenter dev pool list
+--devcenter {devCenterName}
+--project-name {ProjectName}
``` #### Create a dev box ```azurecli
-az devcenter dev dev-box create `
devcenter {devCenterName} `project-name {projectName} `pool-name {poolName} `--n {devBoxName} `
+az devcenter dev dev-box create
+--devcenter {devCenterName}
+--project-name {projectName}
+--pool-name {poolName}
+-n {devBoxName}
``` #### Get web connection URL for a dev box ```azurecli
-az devcenter dev dev-box show-remote-connection `
devcenter {devCenterName} `project-name {projectName} `
+az devcenter dev dev-box show-remote-connection
+--devcenter {devCenterName}
+--project-name {projectName}
--user-id "me"--n {devBoxName} `
+-n {devBoxName}
``` #### List your Dev Boxes ```azurecli
-az devcenter dev dev-box list --devcenter {devCenterName} `
+az devcenter dev dev-box list --devcenter {devCenterName}
``` #### View details of a Dev Box ```azurecli
-az devcenter dev dev-box show `
devcenter {devCenterName} `project-name {projectName} `
+az devcenter dev dev-box show
+--devcenter {devCenterName}
+--project-name {projectName}
-n {devBoxName} ``` #### Stop a Dev Box ```azurecli
-az devcenter dev dev-box stop `
devcenter {devCenterName} `project-name {projectName} `user-id "me" `--n {devBoxName} `
+az devcenter dev dev-box stop
+--devcenter {devCenterName}
+--project-name {projectName}
+--user-id "me"
+-n {devBoxName}
``` #### Start a Dev Box ```azurecli
-az devcenter dev dev-box start `
devcenter {devCenterName} `project-name {projectName} `user-id "me" `--n {devBoxName} `
+az devcenter dev dev-box start
+--devcenter {devCenterName}
+--project-name {projectName}
+--user-id "me"
+-n {devBoxName}
``` ## Next steps
education-hub Program Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/program-support.md
# Support [!INCLUDE [help using Azure Dev Tools for Teaching](../../../includes/edu-dev-tools-program-support.md)]+
+If you need help with GitHub sign-in or setup, go to [GitHub Support](https://aka.ms/githubsupporteduhub).
energy-data-services Troubleshoot Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/troubleshoot-manifest-ingestion.md
The workflow run has failed and the data records weren't ingested.
data_partition_id = ctx_payload['data-partition-id'] KeyError: 'data-partition-id'
- requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://it1672283875.oep.ppe.azure-int.net/api/workflow/v1/workflow/Osdu_ingest/workflowRun/e9a815f2-84f5-4513-9825-4d37ab291264
+ requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://contoso.energy.azure.com/api/workflow/v1/workflow/Osdu_ingest/workflowRun/e9a815f2-84f5-4513-9825-4d37ab291264
``` ## Cause 2: Schema validation failures
Records weren't ingested due to schema validation failures.
**Sample trace output** ```md Error traces to look out for
- [2023-02-05, 14:55:37 IST] {connectionpool.py:452} DEBUG - https://it1672283875.oep.ppe.azure-int.net:443 "GET /api/schema-service/v1/schema/osdu:wks:work-product-component--WellLog:2.2.0 HTTP/1.1" 404 None
+ [2023-02-05, 14:55:37 IST] {connectionpool.py:452} DEBUG - https://contoso.energy.azure.com:443 "GET /api/schema-service/v1/schema/osdu:wks:work-product-component--WellLog:2.2.0 HTTP/1.1" 404 None
[2023-02-05, 14:55:37 IST] {authorization.py:137} ERROR - {"error":{"code":404,"message":"Schema is not present","errors":[{"domain":"global","reason":"notFound","message":"Schema is not present"}]}} [2023-02-05, 14:55:37 IST] {validate_schema.py:170} ERROR - Error on getting schema of kind 'osdu:wks:work-product-component--WellLog:2.2.0'
- [2023-02-05, 14:55:37 IST] {validate_schema.py:171} ERROR - 404 Client Error: Not Found for url: https://it1672283875.oep.ppe.azure-int.net/api/schema-service/v1/schema/osdu:wks:work-product-component--WellLog:2.2.0
+ [2023-02-05, 14:55:37 IST] {validate_schema.py:171} ERROR - 404 Client Error: Not Found for url: https://contoso.energy.azure.com/api/schema-service/v1/schema/osdu:wks:work-product-component--WellLog:2.2.0
[2023-02-05, 14:55:37 IST] {validate_schema.py:314} WARNING - osdu:wks:work-product-component--WellLog:2.2.0 is not present in Schema service. [2023-02-05, 15:01:23 IST] {validate_schema.py:322} ERROR - Schema validation error. Data field. [2023-02-05, 15:01:23 IST] {validate_schema.py:323} ERROR - Manifest kind: osdu:wks:work-product-component--WellLog:1.1.0
Since there are no such error logs specifically for referential integrity tasks,
For instance, the output shows record queried using the Search service for referential integrity ```md
- [2023-02-05, 19:14:40 IST] {search_record_ids.py:75} DEBUG - Search query "it1672283875-dp1:work-product-component--WellLog:5ab388ae0e140838c297f0e6559" OR "it1672283875-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559" OR "it1672283875-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a"
+ [2023-02-05, 19:14:40 IST] {search_record_ids.py:75} DEBUG - Search query "contoso-dp1:work-product-component--WellLog:5ab388ae0e140838c297f0e6559" OR "contoso-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559" OR "contoso-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a"
``` The records that were retrieved and were in the system are shown in the output. The related manifest object that referenced a record would be dropped and no longer be ingested if we noticed that some of the records weren't present. ```md
- [2023-02-05, 19:14:40 IST] {search_record_ids.py:141} DEBUG - response ids: ['it1672283875-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a:1675590506723615', 'it1672283875-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a ']
+ [2023-02-05, 19:14:40 IST] {search_record_ids.py:141} DEBUG - response ids: ['contoso-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a:1675590506723615', 'contoso-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a ']
``` In the coming release, we plan to enhance the logs by appropriately logging skipped records with reasons
Records weren't ingested due to invalid legal tags or ACLs present in the manife
```md "PUT /api/storage/v2/records HTTP/1.1" 400 None
- [2023-02-05, 16:57:05 IST] {authorization.py:137} ERROR - {"code":400,"reason":"Invalid legal tags","message":"Invalid legal tags: it1672283875-dp1-R3FullManifest-Legal-Tag-Test779759112"}
+ [2023-02-05, 16:57:05 IST] {authorization.py:137} ERROR - {"code":400,"reason":"Invalid legal tags","message":"Invalid legal tags: contoso-dp1-R3FullManifest-Legal-Tag-Test779759112"}
``` and the output indicates records that were retrieved. Manifest entity records corresponding to missing search records will get dropped and not ingested. ```md "PUT /api/storage/v2/records HTTP/1.1" 400 None
- [2023-02-05, 16:58:46 IST] {authorization.py:137} ERROR - {"code":400,"reason":"Validation error.","message":"createOrUpdateRecords.records[0].acl: Invalid group name 'data1.default.viewers@it1672283875-dp1.dataservices.energy'"}
+ [2023-02-05, 16:58:46 IST] {authorization.py:137} ERROR - {"code":400,"reason":"Validation error.","message":"createOrUpdateRecords.records[0].acl: Invalid group name 'data1.default.viewers@contoso-dp1.dataservices.energy'"}
[2023-02-05, 16:58:46 IST] {single_manifest_processor.py:83} WARNING - Can't process entity SRN: surrogate-key:0ef20853-f26a-456f-b874-3f2f5f35b6fb ```
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
If a reader disconnects from a partition, when it reconnects it begins reading a
### Log compaction
-Azure Event Hubs supports compacting event log to retain the latest events of a given event key. With compacted event hubs/Kafka topic, you can use key-baesd retention rather than using the coarser-grained time-based retention.
+Azure Event Hubs supports compacting the event log to retain the latest events for a given event key. With a compacted event hub or Kafka topic, you can use key-based retention rather than the coarser-grained time-based retention.
For more information on log compaction, see [Log compaction](log-compaction.md).
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-ip-filtering.md
You specify IP firewall rules at the Event Hubs namespace level. So, the rules a
## Use Azure portal+
+When creating a namespace, you can allow either public-only access (from all networks) or private-only access (only via private endpoints) to the namespace. Once the namespace is created, you can allow access from specific IP addresses or from specific virtual networks (using network service endpoints).
+
+### Configure public access when creating a namespace
+To enable public access, select **Public access** on the **Networking** page of the namespace creation wizard.
++
+After you create the namespace, select **Networking** on the left menu of the **Event Hubs Namespace** page. You see that the **All Networks** option is selected. You can select the **Selected Networks** option and allow access from specific IP addresses or specific virtual networks. The next section provides details on configuring the IP firewall to specify the IP addresses from which access is allowed.
+
+### Configure IP firewall for an existing namespace
This section shows you how to use the Azure portal to create IP firewall rules for an Event Hubs namespace. 1. Navigate to your **Event Hubs namespace** in the [Azure portal](https://portal.azure.com).
To deploy the template, follow the instructions for [Azure Resource Manager][lnk
> [!IMPORTANT] > If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+## Use Azure CLI
+Use the [`az eventhubs namespace network-rule`](/cli/azure/eventhubs/namespace/network-rule) add, list, update, and remove commands to manage IP firewall rules for an Event Hubs namespace.
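+
+For example, here's a minimal sketch of allowing traffic from a single IP address. The resource names are placeholders, and you should verify the parameter names against the command reference:
+
+```azurecli
+az eventhubs namespace network-rule add --resource-group contosorg --namespace-name contosons --ip-address 10.1.1.1 --action Allow
+```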
+
+## Use Azure PowerShell
+Use the following Azure PowerShell commands to add, list, remove, update, and delete IP firewall rules.
+
+- [`Add-AzEventHubIPRule`](/powershell/module/az.eventhub/add-azeventhubiprule) to add an IP firewall rule.
+- [`New-AzEventHubIPRuleConfig`](/powershell/module/az.eventhub/new-azeventhubipruleconfig) and [`Set-AzEventHubNetworkRuleSet`](/powershell/module/az.eventhub/set-azeventhubnetworkruleset) together to add an IP firewall rule
+- [`Remove-AzEventHubIPRule`](/powershell/module/az.eventhub/remove-azeventhubiprule) to remove an IP firewall rule.
++ ## Default action and public network access ### REST API
event-hubs Event Hubs Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-service-endpoints.md
Binding an Event Hubs namespace to a virtual network is a two-step process. You
The virtual network rule is an association of the Event Hubs namespace with a virtual network subnet. While the rule exists, all workloads bound to the subnet are granted access to the Event Hubs namespace. Event Hubs itself never establishes outbound connections, doesn't need to gain access, and is therefore never granted access to your subnet by enabling this rule. ## Use Azure portal
+When creating a namespace, you can either allow public only (from all networks) or private only (only via private endpoints) access to the namespace. Once the namespace is created, you can allow access from specific IP addresses or from specific virtual networks (using network service endpoints).
+
+### Configure public access when creating a namespace
+To enable public access, select **Public access** on the **Networking** page of the namespace creation wizard.
++
+After you create the namespace, select **Networking** on the left menu of the **Event Hubs Namespace** page. You see that the **All Networks** option is selected. You can select the **Selected Networks** option and allow access from specific IP addresses or specific virtual networks. The next section provides details on specifying the networks from which access is allowed.
+
+### Configure selected networks for an existing namespace
This section shows you how to use Azure portal to add a virtual network service endpoint. To limit access, you need to integrate the virtual network service endpoint for this Event Hubs namespace. 1. Navigate to your **Event Hubs namespace** in the [Azure portal](https://portal.azure.com).
This section shows you how to use Azure portal to add a virtual network service
1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Choose **Selected networks** option to allow access only from specific virtual networks. Here are more details about options available in the **Public network access** page:
- - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+ - **Disabled**. This option disables any public access to the namespace. The namespace is accessible only through [private endpoints](private-link-service.md).
- **Selected networks**. This option enables public access to the namespace using an access key from selected networks. > [!IMPORTANT]
This section shows you how to use Azure portal to add a virtual network service
> [!IMPORTANT] > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
-3. Select the virtual network from the list of virtual networks, and then pick the **subnet**. You have to enable the service endpoint before adding the virtual network to the list. If the service endpoint isn't enabled, the portal will prompt you to enable it.
+3. Select the virtual network from the list of virtual networks, and then pick the **subnet**. You have to enable the service endpoint before adding the virtual network to the list. If the service endpoint isn't enabled, the portal prompts you to enable it.
:::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/select-subnet.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/select-subnet.png" alt-text="Image showing the selection of a subnet."::: 4. You should see the following successful message after the service endpoint for the subnet is enabled for **Microsoft.EventHub**. Select **Add** at the bottom of the page to add the network.
To deploy the template, follow the instructions for [Azure Resource Manager][lnk
> [!IMPORTANT] > If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+## Use Azure CLI
+Use the [`az eventhubs namespace network-rule`](/cli/azure/eventhubs/namespace/network-rule) `add`, `list`, `update`, and `remove` commands to manage virtual network rules for an Event Hubs namespace.
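For example, the following command sketches granting access from a subnet that already has the **Microsoft.EventHub** service endpoint enabled. The resource group, namespace, virtual network, and subnet names are placeholders, and the `--vnet-name` and `--subnet` parameter names are assumptions based on the `network-rule` command group linked above; confirm the exact syntax with `az eventhubs namespace network-rule add --help`.

```bash
# Grant access from a subnet that has the Microsoft.EventHub service endpoint enabled
az eventhubs namespace network-rule add \
    --resource-group contoso-rg \
    --namespace-name contoso-ehns \
    --vnet-name contoso-vnet \
    --subnet default
```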
+
+## Use Azure PowerShell
+Use the following Azure PowerShell commands to manage network rules for an Event Hubs namespace.
+
+- [`Add-AzEventHubVirtualNetworkRule`](/powershell/module/az.eventhub/add-azeventhubvirtualnetworkrule) to add a virtual network rule.
+- [`New-AzEventHubVirtualNetworkRuleConfig`](/powershell/module/az.eventhub/new-azeventhubvirtualnetworkruleconfig) and [`Set-AzEventHubNetworkRuleSet`](/powershell/module/az.eventhub/set-azeventhubnetworkruleset) together to add a virtual network rule.
+- [`Remove-AzEventHubVirtualNetworkRule`](/powershell/module/az.eventhub/remove-azeventhubvirtualnetworkrule) to remove a virtual network rule.
++ ## Default action and public network access ### REST API
event-hubs Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/private-link-service.md
Your private endpoint and virtual network must be in the same region. When you s
Your private endpoint uses a private IP address in your virtual network.
-### Steps
+### Configure private access when creating a namespace
+When creating a namespace, you can either allow public only (from all networks) or private only (only via private endpoints) access to the namespace.
+
+If you select the **Private access** option on the **Networking** page of the namespace creation wizard, you can add a private endpoint on the page by selecting **+ Private endpoint** button. See the next section for the detailed steps for adding a private endpoint.
+++
+### Configure private access for an existing namespace
If you already have an Event Hubs namespace, you can create a private link connection by following these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | Supported | AIS, National Telecom UIH | | **Berlin** | [NTT GDC](https://www.e-shelter.de/en/location/berlin-1-data-center) | 1 | Germany North | Supported | Colt, Equinix, NTT Global DataCenters EMEA| | **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | Supported | CenturyLink Cloud Connect, Equinix |
-| **Busan** | [LG CNS](https://www.lgcns.com/En/Service/DataCenter) | 2 | Korea South | n/a | LG CNS |
+| **Busan** | [LG CNS](https://www.lgcns.com/business/cloud/datacenter/) | 2 | Korea South | n/a | LG CNS |
| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | Supported | Ascenty | | **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | Supported | CDC | | **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| Supported | CDC, Equinix |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you're remote and don't have fiber connectivity, or you want to explore other
| **[C3ntro Telecom](https://www.c3ntro.com/)** | Equinix, Megaport | Dallas | | **[Chief](https://www.chief.com.tw/)** | Equinix | Hong Kong SAR | | **[Cinia](https://www.cinia.fi/palvelutiedotteet)** | Equinix, Megaport | Frankfurt, Hamburg |
-| **[CloudXpress](https://www2.telenet.be/content/www-telenet-be/fr/business/sme-le/aanbod/internet/cloudxpress)** | Equinix | Amsterdam |
+| **[CloudXpress](https://www2.telenet.be/business/nl/sme-le/aanbod/verbinden/bedrijfsnetwerk/cloudxpress.html)** | Equinix | Amsterdam |
| **[CMC Telecom](https://cmctelecom.vn/san-pham/value-added-service-and-it/cmc-telecom-cloud-express-en/)** | Equinix | Singapore | | **[CoreAzure](https://www.coreazure.com/)**| Equinix | London | | **[Cox Business](https://www.cox.com/business/networking/cloud-connectivity.html)**| Equinix | Dallas, Silicon Valley, Washington DC |
firewall Firewall Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-diagnostics.md
You can access some of these logs through the portal. Logs can be sent to [Azure
Before starting, you should read [Azure Firewall logs and metrics](logs-and-metrics.md) for an overview of the diagnostics logs and metrics available for Azure Firewall.
+Additionally, for an improved method to work with firewall logs, see [Azure Structured Firewall Logs (preview)](firewall-structured-logs.md).
+ ## Enable diagnostic logging through the Azure portal It can take a few minutes for the data to appear in your logs after you complete this procedure to turn on diagnostic logging. If you don't see anything at first, check again in a few more minutes.
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
Apache Spark versions supported in Azure HDIinsight
|Apache Spark version on HDInsight|Release date|Release stage|End of life announcement date|[End of standard support]()|[End of basic support]()| |--|--|--|--|--|--|
-|2.4|July 8, 2023 |End of Life Announced (EOLA)| Feb10,2023| Aug 10,2023|Feb 10,2024|
+|2.4|July 8, 2019|End of Life Announced (EOLA)|Feb 10, 2023|Aug 10, 2023|Feb 10, 2024|
|3.1|March 11,2022|GA |-|-|-| |3.3|March 22,2023|Public Preview|-|-|-|
hdinsight Apache Esp Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-esp-kafka-ssl-encryption-authentication.md
description: Set up TLS encryption for communication between Kafka clients and K
Previously updated : 02/14/2023 Last updated : 02/17/2023 # Set up TLS encryption and authentication for ESP Apache Kafka cluster in Azure HDInsight
These steps are detailed in the following code snippets.
keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt ```
-1. Create the file `client-ssl-auth.properties` on client machine (hn1) . It should have the following lines:
+1. Create the file `client-ssl-auth.properties` on client machine (hn1). It should have the following lines:
```config security.protocol=SASL_SSL
Run these steps on the client machine.
### Kafka 2.1 or above
-1. Create a topic if it doesn't exist already.
+> [!Note]
+> The following commands work if you use either the Kafka user or a custom user that has permissions to perform CRUD operations.
- ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE>:2181 --create --topic topic1 --partitions 2 --replication-factor 2
- ```
-1. Start console producer and provide the path to `client-ssl-auth.properties` as a configuration file for the producer.
+Using the command-line tool:
- ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9093 --topic topic1 --producer.config ~/ssl/client-ssl-auth.properties
- ```
+1. Create a topic if it doesn't exist already.
+
+ ```bash
+ sudo su kafka -c "/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE>:2181 --create --topic topic1 --partitions 2 --replication-factor 2"
+ ```
+ To use a keytab, create a JAAS file with the following content. Be sure to point the keyTab property to your keytab file and reference the principal used inside the keytab. The following sample JAAS file is created and placed on the VM at **/home/hdiuser/kafka_client_jaas_keytab.conf**:
+
+ ```
+ KafkaClient {
+ com.sun.security.auth.module.Krb5LoginModule required
+ useKeyTab=true
+ storeKey=true
+ keyTab="/home/hdiuser/espkafkauser.keytab"
+ principal="espkafkauser@TEST.COM";
+ };
+ ```
+
+1. Start console producer and provide the path to `client-ssl-auth.properties` as a configuration file for the producer.
+ ```bash
+ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/hdiuser/kafka_client_jaas_keytab.conf"
+
+ /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9093 --topic topic1 --producer.config ~/ssl/client-ssl-auth.properties
+ ```
+
1. Open another ssh connection to client machine and start console consumer and provide the path to `client-ssl-auth.properties` as a configuration file for the consumer.
- ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning
- ```
+ ```bash
+ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/hdiuser/kafka_client_jaas_keytab.conf"
+
+ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning
+ ```
+
+If you want to use a Java client to perform CRUD operations, use the following GitHub repository:
+
+https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/tree/main/DomainJoined-Producer-Consumer-With-TLS
## Next steps
hdinsight Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/rest-proxy.md
description: Learn how to do Apache Kafka operations using a Kafka REST proxy on
Previously updated : 04/01/2022 Last updated : 02/17/2023 # Interact with Apache Kafka clusters in Azure HDInsight using a REST proxy
-Kafka REST Proxy enables you to interact with your Kafka cluster via a REST API over HTTPS. This action means that your Kafka clients can be outside of your virtual network. Clients can make simple, secure HTTPS calls to the Kafka cluster, instead of relying on Kafka libraries. This article will show you how to create a REST proxy enabled Kafka cluster. Also provides a sample code that shows how to make calls to REST proxy.
+Kafka REST Proxy enables you to interact with your Kafka cluster via a REST API over HTTPS. This action means that your Kafka clients can be outside of your virtual network. Clients can make simple, secure HTTPS calls to the Kafka cluster, instead of relying on Kafka libraries. This article shows you how to create a REST proxy-enabled Kafka cluster. It also provides sample code that shows how to make calls to the REST proxy.
## REST API reference
Creating an HDInsight Kafka cluster with REST proxy creates a new public endpoin
### Security
-Access to the Kafka REST proxy is managed with Azure Active Directory security groups. When creating the Kafka cluster, provide the Azure AD security group with REST endpoint access. Kafka clients that need access to the REST proxy should be registered to this group by the group owner. The group owner can register via the Portal or via PowerShell.
+Access to the Kafka REST proxy is managed with Azure Active Directory security groups. When creating the Kafka cluster, provide the Azure AD security group with REST endpoint access. Kafka clients that need access to the REST proxy should be registered to this group by the group owner. The group owner can register via the Azure portal or via PowerShell.
-For REST proxy endpoint requests, client applications should get an OAuth token. The token is used to verify security group membership. Find a [Client application sample](#client-application-sample) below that shows how to get an OAuth token. The client application passes the OAuth token in the HTTPS request to the REST proxy.
+For REST proxy endpoint requests, client applications should get an OAuth token. The token is used to verify security group membership. The [Client application sample](#client-application-sample) below shows how to get an OAuth token. The client application passes the OAuth token in the HTTPS request to the REST proxy.
> [!NOTE] > See [Manage app and resource access using Azure Active Directory groups](../../active-directory/fundamentals/active-directory-manage-groups.md), to learn more about AAD security groups. For more information on how OAuth tokens work, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md). ## Kafka REST proxy with Network Security Groups
-If you bring your own VNet and control network traffic with network security groups, allow **inbound** traffic on port **9400** in addition to port 443. This will ensure that Kafka REST proxy server is reachable.
+If you bring your own VNet and control network traffic with network security groups, allow **inbound** traffic on port **9400** in addition to port 443. This ensures that the Kafka REST proxy server is reachable.
## Prerequisites
-1. Register an application with Azure AD. The client applications that you write to interact with the Kafka REST proxy will use this application's ID and secret to authenticate to Azure.
+1. Register an application with Azure AD. The client applications that you write to interact with the Kafka REST proxy use this application's ID and secret to authenticate to Azure.
-1. Create an Azure AD security group. Add the application that you've registered with Azure AD to the security group as a **member** of the group. This security group will be used to control which applications are allowed to interact with the REST proxy. For more information on creating Azure AD groups, see [Create a basic group and add members using Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+1. Create an Azure AD security group. Add the application that you've registered with Azure AD to the security group as a **member** of the group. This security group is used to control which applications are allowed to interact with the REST proxy. For more information on creating Azure AD groups, see [Create a basic group and add members using Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
Validate the group is of type **Security**. :::image type="content" source="./media/rest-proxy/rest-proxy-group.png" alt-text="Security Group" border="true":::
If you bring your own VNet and control network traffic with network security gro
## Create a Kafka cluster with REST proxy enabled
-The steps below use the Azure portal. For an example using Azure CLI, see [Create Apache Kafka REST proxy cluster using Azure CLI](tutorial-cli-rest-proxy.md).
+The following steps use the Azure portal. For an example using Azure CLI, see [Create Apache Kafka REST proxy cluster using Azure CLI](tutorial-cli-rest-proxy.md).
1. During the Kafka cluster creation workflow, in the **Security + networking** tab, check the **Enable Kafka REST proxy** option.
The steps below use the Azure portal. For an example using Azure CLI, see [Creat
## Client application sample
-You can use the Python code below to interact with the REST proxy on your Kafka cluster. To use the code sample, follow these steps:
+You can use the following Python code to interact with the REST proxy on your Kafka cluster. To use the code sample, follow these steps:
1. Save the sample code on a machine with Python installed. 1. Install required Python dependencies by executing `pip3 install msal`.
This code does the following action:
1. Fetches an OAuth token from Azure AD. 1. Shows how to make a request to Kafka REST proxy.
-For more information about getting OAuth tokens in Python, see [Python AuthenticationContext class](/python/api/adal/adal.authentication_context.authenticationcontext). You might see a delay while `topics` that aren't created or deleted through the Kafka REST proxy are reflected there. This delay is because of cache refresh. The **value** field of the Producer API has been enhanced. Now, it accepts JSON objects and any serialized form.
+For more information about getting OAuth tokens in Python, see [Python AuthenticationContext class](/python/api/adal/adal.authentication_context.authenticationcontext). You might see a delay before `topics` that aren't created or deleted through the Kafka REST proxy are reflected there. This delay is because of cache refresh. The **value** field of the Producer API has been enhanced. Now, it accepts JSON objects and any serialized form.
```python #Required Python packages
get_topic_api = 'metadata/topics'
topic_api_format = 'topics/{topic_name}' producer_api_format = 'producer/topics/{topic_name}' consumer_api_format = 'consumer/topics/{topic_name}/partitions/{partition_id}/offsets/{offset}?count={count}' # by default count = 1
-partitions_api_format = 'topics/{topic_name}/partitions'
-partition_api_format = 'topics/{topic_name}/partitions/{partition_id}'
+partitions_api_format = 'metadata/topics/{topic_name}/partitions'
+partition_api_format = 'metadata/topics/{topic_name}/partitions/{partition_id}'
# Request header headers = {
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
Profiles are also specified by various Implementation Guides (IGs). Some common
### Storing profiles
-To store profiles in Azure API for FHIR, you can `PUT` the `StructureDefinition` with the profile content in the body of the request. An update or a conditional update are both good methods to store profiles on the FHIR service. Use the conditional update if you are unsure which to use.
+To store profiles in Azure API for FHIR, you can `PUT` the `StructureDefinition` with the profile content in the body of the request. An update or a conditional update are both good methods to store profiles on the FHIR service. Use the conditional update if you're unsure which to use.
Standard `PUT`: `PUT http://<your Azure API for FHIR base URL>/StructureDefinition/profile-id`
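As a rough sketch, a conditional update keys the request on the profile's canonical URL instead of a resource ID. The base URL, access token, and file name below are placeholders:

```bash
# Conditional update: create or update the profile identified by its canonical URL
curl -X PUT "https://<your Azure API for FHIR base URL>/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient" \
  -H "Authorization: Bearer <access token>" \
  -H "Content-Type: application/fhir+json" \
  -d @us-core-patient.StructureDefinition.json
```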
You'll be returned with a `CapabilityStatement` that includes the following info
"http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient" ], ```
+### Bindings in Profiles
+A terminology service is a set of functions that can perform operations on medical "terminologies," such as validating codes, translating codes, and expanding value sets. The Azure API for FHIR service doesn't support a terminology service. Information about supported operations ($), resource types, and interactions can be found in the service's CapabilityStatement. The ValueSet, StructureDefinition, and CodeSystem resource types are supported with basic CRUD operations and search (as defined in the CapabilityStatement), and they're also used by the system in $validate.
+
+ValueSets can contain a complex set of rules and external references. Today, the service only considers the pre-expanded inline codes. Customers need to upload supported ValueSets to the FHIR server before using the $validate operation. The ValueSet resources must be uploaded to the FHIR server by using PUT or conditional update, as mentioned in the Storing profiles section above.
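As a rough sketch, you might upload a ValueSet with a conditional update and then validate a resource whose `meta.profile` references the stored profile. The base URL, access token, canonical URL, and file names are placeholders; confirm the supported `$validate` interactions in the service's CapabilityStatement.

```bash
# Upload (or update) the ValueSet by its canonical URL before calling $validate
curl -X PUT "https://<your Azure API for FHIR base URL>/ValueSet?url=http://example.org/fhir/ValueSet/example-codes" \
  -H "Authorization: Bearer <access token>" \
  -H "Content-Type: application/fhir+json" \
  -d @example-codes.ValueSet.json

# Validate a Patient resource against the profiles listed in its meta.profile
curl -X POST "https://<your Azure API for FHIR base URL>/Patient/\$validate" \
  -H "Authorization: Bearer <access token>" \
  -H "Content-Type: application/fhir+json" \
  -d @patient-example.json
```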
++ ## Next steps In this article, you've learned about FHIR profiles. Next, you'll learn how you can use $validate to ensure that resources conform to these profiles.
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
[!INCLUDE [iot-edge-version-1.4](includes/iot-edge-version-1.4.md)]
-IoT Edge devices send HTTPS requests to communicate with IoT Hub. If your device is connected to a network that uses a proxy server, you need to configure the IoT Edge runtime to communicate through the server. Proxy servers can also affect individual IoT Edge modules if they make HTTP or HTTPS requests that aren't routed through the IoT Edge hub.
+IoT Edge devices send HTTPS requests to communicate with IoT Hub. If you connected your device to a network that uses a proxy server, you need to configure the IoT Edge runtime to communicate through the server. Proxy servers can also affect individual IoT Edge modules if they make HTTP or HTTPS requests that you haven't routed through the IoT Edge hub.
This article walks through the following four steps to configure and then manage an IoT Edge device behind a proxy server:
This article walks through the following four steps to configure and then manage
The IoT Edge installation scripts pull packages and files from the internet, so your device needs to communicate through the proxy server to make those requests. For Windows devices, the installation script also provides an offline installation option.
- This step is a one-time process to configure the IoT Edge device when you first set it up. The same connections are also required when you update the IoT Edge runtime.
+ This step is a one-time process to configure the IoT Edge device when you first set it up. You also need these same connections when you update the IoT Edge runtime.
2. [**Configure IoT Edge and the container runtime on your device**](#configure-iot-edge-and-moby)
This article walks through the following four steps to configure and then manage
3. [**Configure the IoT Edge agent properties in the config file on your device**](#configure-the-iot-edge-agent)
- The IoT Edge daemon starts the edgeAgent module initially. Then, the edgeAgent module retrieves the deployment manifest from IoT Hub and starts all the other modules. For the IoT Edge agent to make the initial connection to IoT Hub, configure the edgeAgent module environment variables manually on the device itself. After the initial connection, you can configure the edgeAgent module remotely.
+ The IoT Edge daemon starts the edgeAgent module initially. Then, the edgeAgent module retrieves the deployment manifest from IoT Hub and starts all the other modules. Configure the edgeAgent module environment variables manually on the device itself, so that the IoT Edge agent can make the initial connection to IoT Hub. After the initial connection, you can configure the edgeAgent module remotely.
This step is a one-time process to configure the IoT Edge device when you first set it up. 4. [**For all future module deployments, set environment variables for any module communicating through the proxy**](#configure-deployment-manifests)
- Once your IoT Edge device is set up and connected to IoT Hub through the proxy server, you need to maintain the connection in all future module deployments.
+ Once you set up and connect an IoT Edge device to IoT Hub through the proxy server, you need to maintain the connection in all future module deployments.
This step is an ongoing process done remotely so that every new module or deployment update maintains the device's ability to communicate through the proxy server.
Whether your IoT Edge device runs on Windows or Linux, you need to access the in
### Linux devices
-If you're installing the IoT Edge runtime on a Linux device, configure the package manager to go through your proxy server to access the installation package. For example, [Set up apt-get to use a http-proxy](https://help.ubuntu.com/community/AptGet/Howto/#Setting_up_apt-get_to_use_a_http-proxy). Once your package manager is configured, follow the instructions in [Install Azure IoT Edge runtime](how-to-provision-single-device-linux-symmetric.md) as usual.
+If you're installing the IoT Edge runtime on a Linux device, configure the package manager to go through your proxy server to access the installation package. For example, [Set up apt-get to use a http-proxy](https://help.ubuntu.com/community/AptGet/Howto/#Setting_up_apt-get_to_use_a_http-proxy). Once you configure your package manager, follow the instructions in [Install Azure IoT Edge runtime](how-to-provision-single-device-linux-symmetric.md) as usual.
### Windows devices using IoT Edge for Linux on Windows
-If you're installing the IoT Edge runtime using IoT Edge for Linux on Windows, IoT Edge is installed by default on your Linux virtual machine. No additional installation or update steps are required.
+If you're installing the IoT Edge runtime using IoT Edge for Linux on Windows, IoT Edge is installed by default on your Linux virtual machine. No other installation or update steps are required.
### Windows devices using Windows containers
The following steps demonstrate an example of a windows installation using the `
. {Invoke-WebRequest -proxy <proxy URL> -useb aka.ms/iotedge-win} | Invoke-Expression; Initialize-IoTEdge ```
-If you have complicated credentials for the proxy server that can't be included in the URL, use the `-ProxyCredential` parameter within `-InvokeWebRequestParameters`. For example,
+If you have complicated credentials for the proxy server that you can't include in the URL, use the `-ProxyCredential` parameter within `-InvokeWebRequestParameters`. For example,
```powershell $proxyCredential = (Get-Credential).GetNetworkCredential()
For more information about proxy parameters, see [Invoke-WebRequest](/powershell
IoT Edge relies on two daemons running on the IoT Edge device. The Moby daemon makes web requests to pull container images from container registries. The IoT Edge daemon makes web requests to communicate with IoT Hub.
-Both the Moby and the IoT Edge daemons need to be configured to use the proxy server for ongoing device functionality. This step takes place on the IoT Edge device during initial device setup.
+You must configure both the Moby and the IoT Edge daemons to use the proxy server for ongoing device functionality. This step takes place on the IoT Edge device during initial device setup.
### Moby daemon
Choose the article that applies to your IoT Edge device operating system:
### IoT Edge daemon
-The IoT Edge daemon is configured in a similar manner to the Moby daemon. Use the following steps to set an environment variable for the service, based on your operating system.
+You configure the IoT Edge daemon in a similar manner to the Moby daemon. Use the following steps to set an environment variable for the service, based on your operating system.
The IoT Edge daemon always uses HTTPS to send requests to IoT Hub. #### Linux - Open an editor in the terminal to configure the IoT Edge daemon. ```bash
Restart the IoT Edge system services for the changes to both daemons to take eff
sudo iotedge system restart ```
-Verify that your environment variables were created, and the new configuration was loaded.
+Verify that your environment variables are set and that the new configuration was loaded.
```bash systemctl show --property=Environment aziot-edged
systemctl show --property=Environment aziot-identityd
#### Windows using IoT Edge for Linux on Windows
-Log in to your IoT Edge for Linux on Windows virtual machine:
+Sign in to your IoT Edge for Linux on Windows virtual machine:
```powershell Connect-EflowVm ```
-Follow the same steps as the Linux section above to configure the IoT Edge daemon.
+Follow the same steps as the Linux section of this article to configure the IoT Edge daemon.
#### Windows using Windows containers
Restart-Service iotedge
## Configure the IoT Edge agent
-The IoT Edge agent is the first module to start on any IoT Edge device. It's started for the first time based on the information in the IoT Edge config file. The IoT Edge agent then connects to IoT Hub to retrieve deployment manifests, which declare what other modules should be deployed on the device.
+The IoT Edge agent is the first module to start on any IoT Edge device. This module starts for the first time based on information in the IoT Edge config file. The IoT Edge agent then connects to IoT Hub to retrieve deployment manifests. The manifest declares which other modules the device should deploy.
This step takes place once on the IoT Edge device during initial device setup.
-1. Open the config file on your IoT Edge device: `/etc/aziot/config.toml`. The configuration file is protected, so you need administrative privileges to access it. On Linux systems, use the `sudo` command before opening the file in your preferred text editor.
+1. Open the config file on your IoT Edge device: `/etc/aziot/config.toml`. You need administrative privileges to access the configuration file. On Linux systems, use the `sudo` command before opening the file in your preferred text editor.
-2. In the config file, find the `[agent]` section, which contains all the configuration information for the edgeAgent module to use on startup. Check and make sure that the `[agent]`section is uncommented or add it if it is not included in the `config.toml`. The IoT Edge agent definition includes an `[agent.env]` subsection where you can add environment variables.
+2. In the config file, find the `[agent]` section, which contains all the configuration information for the edgeAgent module to use on startup. Make sure the `[agent]` section is uncommented, or add it if it's not included in the `config.toml`. The IoT Edge agent definition includes an `[agent.env]` subsection where you can add environment variables.
+3. Add the **https_proxy** parameter to the environment variables section, and set your proxy URL as its value. A sketch of the resulting `[agent]` section appears after this procedure.
This step takes place once on the IoT Edge device during initial device setup.
sudo iotedge config apply ```
-6. Verify that your proxy settings are propagated using `docker inspect edgeAgent` in the `Env` section. If not, the container must be recreated.
+6. Verify that your proxy settings are propagated using `docker inspect edgeAgent` in the `Env` section. If not, you must recreate the container.
```bash sudo docker rm -f edgeAgent ```
-7. The IoT Edge runtime should recreate `edgeAgent` within a minute. Once `edgeAgent` container is running again, `docker inspect edgeAgent` and verify the proxy settings matches the configuration file.
+7. The IoT Edge runtime should recreate `edgeAgent` within a minute. Once the `edgeAgent` container is running again, use the `docker inspect edgeAgent` command to verify that the proxy settings match the configuration file.
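As a rough sketch, after step 3 the `[agent]` section of `config.toml` might look like the following. The proxy URL is a placeholder, and the image tag should match the IoT Edge version you're running.

```toml
[agent]
name = "edgeAgent"
type = "docker"

[agent.config]
image = "mcr.microsoft.com/azureiotedge-agent:1.4"

[agent.env]
# Placeholder proxy URL; replace it with your proxy server's address and port
"https_proxy" = "http://proxy.example.com:3128"
```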
## Configure deployment manifests
-Once your IoT Edge device is configured to work with your proxy server, you need to continue to declare the HTTPS_PROXY environment variable in future deployment manifests. You can edit deployment manifests either using the Azure portal wizard or by editing a deployment manifest JSON file.
+Once you configure your IoT Edge device to work with your proxy server, declare the HTTPS_PROXY environment variable in future deployment manifests. You can edit deployment manifests either using the Azure portal wizard or by editing a deployment manifest JSON file.
Always configure the two runtime modules, edgeAgent and edgeHub, to communicate through the proxy server so they can maintain a connection with IoT Hub. If you remove the proxy information from the edgeAgent module, the only way to reestablish connection is by editing the config file on the device, as described in the previous section.
Add the **https_proxy** environment variable to both the IoT Edge agent and IoT
![Set https_proxy environment variable](./media/how-to-configure-proxy-support/edgehub-environmentvar.png)
-All other modules that you add to a deployment manifest follow the same pattern.
+All other modules that you add to a deployment manifest follow the same pattern. Select **Apply** to save your changes.
### JSON deployment manifest files
-If you create deployments for IoT Edge devices using the templates in Visual Studio Code or by manually creating JSON files, you can add the environment variables directly to each module definition.
+If you create deployments for IoT Edge devices using the templates in Visual Studio Code or by manually creating JSON files, you can add the environment variables directly to each module definition. If you didn't add them in the Azure portal, add them here to your JSON manifest file. Replace `<proxy URL>` with your own value.
Use the following JSON format:
With the environment variables included, your module definition should look like
"edgeHub": { "type": "docker", "settings": {
- "image": "mcr.microsoft.com/azureiotedge-hub:1.1",
+ "image": "mcr.microsoft.com/azureiotedge-hub:1.4",
"createOptions": "{}" }, "env": {
If you included the **UpstreamProtocol** environment variable in the confige.yam
## Working with traffic-inspecting proxies
-Some proxies like [Zscaler](https://www.zscaler.com) can inspect TLS-encrypted traffic. During TLS traffic inspection, the certificate returned by the proxy isn't the certificate from the target server, but instead is the certificate signed by the proxy's own root certificate. By default, this proxy's certificate isn't trusted by IoT Edge modules (including *edgeAgent* and *edgeHub*), and the TLS handshake fails.
+Some proxies like [Zscaler](https://www.zscaler.com) can inspect TLS-encrypted traffic. During TLS traffic inspection, the certificate returned by the proxy isn't the certificate from the target server, but instead is the certificate signed by the proxy's own root certificate. By default, IoT Edge modules (including *edgeAgent* and *edgeHub*) don't trust this proxy's certificate and the TLS handshake fails.
-To resolve this, the proxy's root certificate needs to be trusted by both the operating system and IoT Edge modules.
+To resolve the failed handshake, configure both the operating system and IoT Edge modules to trust the proxy's root certificate with the following steps.
1. Configure proxy certificate in the trusted root certificate store of your host operating system. For more information about how to install a root certificate, see [Install root CA to OS certificate store](how-to-manage-device-certificates.md#install-root-ca-to-os-certificate-store).
To configure traffic inspection proxy support for containers not managed by IoT
## Fully qualified domain names (FQDNs) of destinations that IoT Edge communicates with
-If your proxy has a firewall that requires you to allowlist all FQDNs for internet connectivity, review the list from [Allow connections from IoT Edge devices](production-checklist.md#allow-connections-from-iot-edge-devices) to determine which FQDNs to add.
+If your proxy's firewall requires you to add all FQDNs to your allowlist for internet connectivity, review the list from [Allow connections from IoT Edge devices](production-checklist.md#allow-connections-from-iot-edge-devices) to determine which FQDNs to add.
## Next steps
iot-edge How To Provision Devices At Scale Linux Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-tpm.md
A virtual switch enables your VM to connect to a physical network.
Create a new VM from a bootable image file.
-1. Download a disk image file to use for your VM and save it locally. For example, [Ubuntu Server 20.04](http://releases.ubuntu.com/20.04/). For information about supported operating systems for IoT Edge devices, see [Azure IoT Edge supported systems](./support.md).
+1. Download a disk image file to use for your VM and save it locally. For example, [Ubuntu Server 22.04](http://releases.ubuntu.com/22.04/). For information about supported operating systems for IoT Edge devices, see [Azure IoT Edge supported systems](./support.md).
1. In Hyper-V Manager, select **Action** > **New** > **Virtual Machine** on the **Actions** menu.
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
The default module code that comes with the solution is located at the following
# [C\#](#tab/csharp)
-modules/*&lt;your module name&gt;*/**Program.cs**
+modules/*&lt;your module name&gt;*/**ModuleBackgroundService.cs**
# [Azure Functions](#tab/azfunctions)
In the Visual Studio Code integrated terminal, change the directory to the ***&l
dotnet build ```
-Open the file `Program.cs` and add a breakpoint.
+Open the file `ModuleBackgroundService.cs` and add a breakpoint.
Navigate to the Visual Studio Code Debug view by selecting the debug icon from the menu on the left or by typing `Ctrl+Shift+D`. Select the debug configuration ***&lt;your module name&gt;* Local Debug (.NET Core)** from the dropdown.
On your development machine, you can start an IoT Edge simulator instead of inst
### Build and run container for debugging and debug in attach mode
-1. Open your module file (`Program.cs`, `app.js`, `App.java`, or `<your module name>.cs`) and add a breakpoint.
+1. Open your module file (`ModuleBackgroundService.cs`, `app.js`, `App.java`, or `<your module name>.cs`) and add a breakpoint.
1. In the Visual Studio Code Explorer view, right-click the `deployment.debug.template.json` file for your solution and then select **Build and Run IoT Edge solution in Simulator**. You can watch all the module container logs in the same window. You can also navigate to the Docker view to watch container status.
Open the module file for your development language and add a breakpoint:
# [C\#](#tab/csharp)
-Add your breakpoint to the file `Program.cs`.
+Add your breakpoint to the file `ModuleBackgroundService.cs`.
# [Azure Functions](#tab/azfunctions)
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Modules built as Linux containers can be deployed to either Linux or Windows dev
| - | -- | - | -- | | Debian 11 (Bullseye) | | ![Debian + ARM32v7](./media/support/green-check.png) | | | Red Hat Enterprise Linux 8 | ![Red Hat Enterprise Linux 8 + AMD64](./media/support/green-check.png) | | |
+| Ubuntu Server 22.04 | ![Ubuntu Server 22.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 22.04 + ARM64](./media/support/green-check.png) |
| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) | | Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) | | Windows 10/11 Pro | ![Windows 10/11 Pro + AMD64](./media/support/green-check.png) | | ![Win 10 Pro + ARM64](./media/support/green-check.png) |
The systems listed in the following table are considered compatible with Azure I
| [RHEL 7](https://access.redhat.com/documentation/red_hat_enterprise_linux/7) | ![RHEL 7 + AMD64](./media/support/green-check.png) | ![RHEL 7 + ARM32v7](./media/support/green-check.png) | ![RHEL 7 + ARM64](./media/support/green-check.png) | | [Ubuntu 18.04 <sup>2</sup>](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | | ![Ubuntu 18.04 + ARM32v7](./media/support/green-check.png) | | | [Ubuntu 20.04 <sup>2</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | |
+| [Ubuntu 22.04 <sup>2</sup>](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | | ![Ubuntu 22.04 + ARM32v7](./media/support/green-check.png) | |
| [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | | | [Yocto](https://www.yoctoproject.org/) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | | Raspberry Pi OS Buster | | ![Raspberry Pi OS Buster + ARM32v7](./media/support/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/support/green-check.png) |
lab-services Class Type Adobe Creative Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-adobe-creative-cloud.md
Title: Set up a lab with Adobe Creative Cloud using Azure Lab Services | Microsoft Docs
-description: Learn how to set up a lab for digital arts and media classes that use Adobe Creative Cloud.
-- Previously updated : 04/21/2021
+ Title: Set up a lab with Adobe Creative Cloud
+
+description: Learn how to set up a lab in Azure Lab Services for digital arts and media classes that use Adobe Creative Cloud.
+ +++ Last updated : 02/17/2023
-# Set up a lab for Adobe Creative Cloud
+# Set up a lab for Adobe Creative Cloud in Azure Lab Services
+In this article, you learn how to set up a class that uses Adobe Creative Cloud. [Adobe Creative Cloud](https://www.adobe.com/creativecloud.html) is a collection of desktop applications and web services used for photography, design, video, web, user experience (UX), and more. Universities and K-12 schools use Creative Cloud in digital arts and media classes. Some of Creative Cloud's media processes might require more computational and visualization (GPU) power than a typical tablet, laptop, or workstation supports. With Azure Lab Services, you have the flexibility to choose from various virtual machine (VM) sizes, including GPU sizes.
-[Adobe Creative Cloud](https://www.adobe.com/creativecloud.html) is a collection of desktop applications and web services used for photography, design, video, web, user experience (UX), and more. Universities and K-12 schools use Creative Cloud in digital arts and media classes. Some of Creative CloudΓÇÖs media processes may require more computational and visualization (GPU) power than a typical tablet, laptop, or workstation support. With Azure Lab Services, you have flexibility to choose from various virtual machine (VM) sizes, including GPU sizes.
+## Creative Cloud licensing in a lab VM
-In this article, weΓÇÖll show how to set up a class that uses Creative Cloud.
+To use Creative Cloud on a lab VM, you must use [Named User Licensing](https://helpx.adobe.com/enterprise/kb/technical-support-boundaries-virtualized-server-based.html#main_Licensing_considerations), which is the only type of licensing that supports deployment on a virtual machine.
-## Licensing
+Each lab VM has internet access so that lab users can activate Creative Cloud apps by signing into the software. When a user signs in, their authentication token is cached in the user profile so that they don't have to sign in again on their VM.
-To use Creative Cloud on a lab VM, you must use [Named User Licensing](https://helpx.adobe.com/enterprise/kb/technical-support-boundaries-virtualized-server-based.html#main_Licensing_considerations), which is the only type of licensing that supports deployment on a virtual machine. Each lab VM has internet access so that your students can activate Creative Cloud apps by signing into the software. Once a student signs in, their authentication token is cached in the user profile so that they donΓÇÖt have to sign in again on their VM. Read [AdobeΓÇÖs article on licensing](https://helpx.adobe.com/enterprise/using/licensing.html) for more details.
+Read [Adobe's article on licensing](https://helpx.adobe.com/enterprise/using/licensing.html) for more details.
## Lab configuration
-To set up this lab, you need an Azure subscription and lab account to get started. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
### Lab plan settings
-Once you get have Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./tutorial-setup-lab-plan.md). You can also use an existing lab plan.
-Enable the settings described in the table below for the lab plan. For more information about how to enable marketplace images, see the article on [how to specify Marketplace images available to lab creators](./specify-marketplace-images.md).
+This lab uses a Windows 10 Azure Marketplace image as the base VM image. You first need to enable this image in your lab plan, which lets lab creators select it as the base image for their lab.
-| Lab plan setting | Instructions |
-| - | |
-| Marketplace image | Enable the Windows 10 image, if not done already.|
+Follow these steps to [make Azure Marketplace images available to lab creators](specify-marketplace-images.md). Select one of the **Windows 10** Azure Marketplace images.
### Lab settings
-For instructions on how to create a lab, see [Tutorial: Set up a lab](tutorial-setup-lab.md). Use the following settings when creating the lab.
+1. Create a lab for your lab plan:
+
+ [!INCLUDE [create lab](./includes/lab-services-class-type-lab.md)] Specify the following lab settings:
+
+ | Lab settings | Value/instructions |
+ | | |
+ |Virtual Machine Size| **Small GPU (Visualization)**. This VM is best suited for remote visualization, streaming, gaming, encoding using frameworks such as OpenGL and DirectX.|
+ |Virtual Machine Image| Windows 10 |
-| Lab settings | Value/instructions |
-| | |
-|Virtual Machine Size| **Small GPU (Visualization)**. This VM is best suited for remote visualization, streaming, gaming, encoding using frameworks such as OpenGL and DirectX.|
-|Virtual Machine Image| Windows 10 |
+ The size of VM that you need for your lab depends on the types of projects that users create. Most [Creative Cloud apps](https://helpx.adobe.com/creative-cloud/system-requirements.html) support GPU-based acceleration and require a GPU for features to work properly. To select the appropriate VM size, we recommend that you test the projects that users create and confirm adequate performance. Learn more about [which VM size is recommended](./administrator-guide.md#vm-sizing) for using Creative Cloud.
-The size of VM that you need to use for your lab depends on the types of projects that your students will create. Most [Creative Cloud apps](https://helpx.adobe.com/creative-cloud/system-requirements.html) support GPU-based acceleration and require a GPU for features to work properly. To ensure that you select the appropriate VM size, we recommend that you test the projects that your students will create to ensure adequate performance. The below table shows the recommended [VM size](./administrator-guide.md#vm-sizing) to use with Creative Cloud.
+1. When you create a lab with the **Small GPU (Visualization)** size, follow these steps to [set up a lab with GPUs](./how-to-setup-lab-gpu.md).
-> [!WARNING]
-> The **Small GPU (Visualization)** virtual machine size is configured to enable a high-performing graphics experience and meets [AdobeΓÇÖs system requirements for each application](https://helpx.adobe.com/creative-cloud/system-requirements.html). Make sure to choose Small GPU (Visualization) not Small GPU (Compute). For more information about this virtual machine size, see the article on [how to set up a lab with GPUs](./how-to-setup-lab-gpu.md).
+ > [!WARNING]
+ > The **Small GPU (Visualization)** virtual machine size is configured to enable a high-performing graphics experience and meets [AdobeΓÇÖs system requirements for each application](https://helpx.adobe.com/creative-cloud/system-requirements.html). Make sure to choose Small GPU (Visualization) not Small GPU (Compute).
#### GPU drivers
-When you create the lab, we recommend that you install the GPU drivers by selecting the **Install GPU drivers** option in the lab creation wizard. You should also validate that the GPU drivers are correctly installed. For more information, read the following sections:
+When you create the lab, we recommend that you install the GPU drivers by selecting the **Install GPU drivers** option in the lab creation wizard. You should also validate that the GPU drivers are installed correctly. For more information, read the following sections:
+ - [Ensure that the appropriate GPU drivers are installed](../lab-services/how-to-setup-lab-gpu.md#ensure-that-the-appropriate-gpu-drivers-are-installed) - [Validate the installed drivers](../lab-services/how-to-setup-lab-gpu.md#validate-the-installed-drivers)
When you create the lab, we recommend that you install the GPU drivers by select
### Creative Cloud deployment package
-Installing Creative Cloud requires the use of a deployment package. Typically, the deployment package is created by your IT department using AdobeΓÇÖs Admin Console. When IT creates the deployment package, they also have the option to enable self-service. There are a few ways to enable self-service for the deployment package:
+Installing Creative Cloud requires the use of a deployment package. Typically, your IT department creates the deployment package using Adobe's Admin Console. When IT creates the deployment package, they can also enable self-service. There are a few ways to enable self-service for the deployment package:
- Create a self-service package. - Create a managed package with self-service elevated privileges turned on.
-With self-service enabled, you donΓÇÖt install the entire Creative Cloud collection of apps. Instead, students can install apps themselves using the Creative Cloud desktop app. Here are some key benefits with this approach:
+With self-service enabled, you don't install the entire Creative Cloud collection of apps. Instead, users can install apps themselves using the Creative Cloud desktop app. Here are some key benefits of this approach:
-- The entire Creative Cloud install is about 25 GB. If students install only the apps they need on-demand, this helps optimize disk space. Lab VMs have a disk size of 128 GB.-- You can choose to install a subset of the apps on the template VM before publishing. This way the student VMs will have some apps installed by default and students can add more apps on their own as needed.-- You can avoid republishing the template VM because students can install more apps on their VM at any point during the lifetime of the lab. Otherwise, either IT or the teacher would need to install more apps on the template VM and republish. Republishing causes the studentsΓÇÖ VMs to be reset and any work that isnΓÇÖt saved externally is lost.
+- The entire Creative Cloud install is about 25 GB. If users install only the apps they need on-demand, this helps optimize disk space. Lab VMs have a maximum disk size of 128 GB.
+- You can choose to install a subset of the apps on the template VM before publishing. This way the lab VMs have some apps installed by default and users can add more apps on their own as needed.
+- You can avoid republishing the template VM because users can install more apps on their VM at any point during the lifetime of the lab. Otherwise, either IT or the lab creator needs to install more apps on the template VM and republish. Republishing causes the users' VMs to be reset and any work that isn't saved externally is lost.
-If you use a managed deployment package with self-service disabled, students wonΓÇÖt have the ability to install their own apps. In this case, IT must specify the Creative Cloud apps that will be installed.
+If you use a managed deployment package with self-service disabled, users don't have the ability to install their own apps. In this case, IT must specify the Creative Cloud apps that are installed.
Read [AdobeΓÇÖs steps to create a package](https://helpx.adobe.com/enterprise/admin-guide.html/enterprise/using/create-nul-packages.ug.html) for more information. ### Install Creative Cloud
-After the template machine is created, follow the steps below to set up your labΓÇÖs template virtual machine (VM) with Creative Cloud.
+After the lab template machine is created, follow the steps below to set up your lab's template virtual machine (VM) with Creative Cloud.
1. Start the template VM and connect using RDP.+ 1. To install Creative Cloud, download the deployment package given to you by IT or directly from [AdobeΓÇÖs Admin Console](https://adminconsole.adobe.com/).
-1. Run the deployment package file. Depending on whether self-service is enabled or disabled, this will install Creative Cloud desktop app and\or the specified Creative Cloud apps.
+
+1. Run the deployment package file. Depending on whether self-service is enabled or disabled, this installs the Creative Cloud desktop app and/or the specified Creative Cloud apps.
Read [AdobeΓÇÖs deployment steps](https://helpx.adobe.com/enterprise/admin-guide.html/enterprise/using/deploy-packages.ug.html) for more information.
-1. Once the template VM is set up, [publish the template VMΓÇÖs image](how-to-create-manage-template.md) that is used to create all of the studentsΓÇÖ VMs in the lab.
+
+1. Once the template VM is set up, [publish the template VM](how-to-create-manage-template.md). All lab VMs use this template as their base image.
### Storage
-As mentioned earlier, Azure Lab VMs have a disk size of 128 GB. If your students need extra storage for saving large media assets or they need to access shared media assets, you should consider using external file storage. For more information, read the following articles:
+Lab virtual machines have a maximum disk size of 128 GB. If users need extra storage for saving large media assets or they need to access shared media assets, you should consider using external file storage. For more information, read the following articles:
-- [Using external file storage in Lab Services](how-to-attach-external-storage.md)
+- [Using external file storage in Azure Lab Services](how-to-attach-external-storage.md)
- [Install and configure OneDrive](./how-to-prepare-windows-template.md#install-and-configure-onedrive)

### Save template VM image

Consider saving your template VM for future use. To save the template VM, see [Save an image to a compute gallery](how-to-use-shared-image-gallery.md#save-an-image-to-a-compute-gallery).

-- When self-service is *enabled*, the template VM's image will have Creative Cloud desktop installed. Teachers can then reuse this image to create labs and to choose which Creative Cloud apps to install. This helps reduce IT overhead since teachers can independently set up labs and have full control over installing the Creative Cloud apps required for their classes.
-- When self-service is *disabled*, the template VM's image will already have the specified Creative Cloud apps installed. Teachers can reuse this image to create labs; however, they won't be able to install additional Creative Cloud apps.
+- When self-service is *enabled*, the template VM's image has Creative Cloud desktop installed. Lab creators can then reuse this image to create labs and to choose which Creative Cloud apps to install. This helps reduce IT overhead since lab creators can independently set up labs and have full control over installing the Creative Cloud apps required for their classes.
+- When self-service is *disabled*, the template VM's image already has the specified Creative Cloud apps installed. Lab creators can reuse this image to create labs; however, they can't install additional Creative Cloud apps.
### Troubleshooting
-Adobe Creative Cloud may show an error saying *Your graphics processor is incompatible* when the GPU drivers or the GPU is not configured correctly.
+Adobe Creative Cloud might show the error *Your graphics processor is incompatible* when the GPU or the GPU drivers aren't configured correctly.
:::image type="content" source="./media/class-type-adobe-creative-cloud/gpu-driver-error.png" alt-text="Screenshot of Adobe Creative Cloud showing an error message that the graphics processor is incompatible.":::

To fix this issue:

-- Ensure that you selected the Small GPU *(Visualization)* size when you created your lab. You can see the VM size used by the lab on the lab's [Template page](../lab-services/how-to-create-manage-template.md).
+- Ensure that you selected the Small GPU *(Visualization)* VM size when you created your lab. You can see the VM size used by the lab on the lab's [Template page](../lab-services/how-to-create-manage-template.md).
- Try [manually installing the Small GPU Visualization drivers](../lab-services/how-to-setup-lab-gpu.md#install-the-small-gpu-visualization-drivers).

## Cost
-In this section, weΓÇÖll look at a possible cost estimate for this class. WeΓÇÖll use a class of 25 students with 20 hours of scheduled class time. Also, each student gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Small GPU (Visualization)**, which is 160 lab units.
+This section provides a cost estimate for running this class for 25 users. There are 20 hours of scheduled class time. Also, each user gets a 10-hour quota for homework or assignments outside scheduled class time. This estimate uses the **Small GPU (Visualization)** virtual machine size, which is 160 lab units.
25 students \* (20 scheduled hours + 10 quota hours) \* 160 lab units \* 0.01 USD per hour = 1200.00 USD

>[!IMPORTANT]
-> Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
+> This cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
## Next steps
lab-services Class Type Jupyter Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-jupyter-notebook.md
Title: Set up a lab to teach data science with Python and Jupyter Notebooks | Microsoft Docs
-description: Learn how to set up a lab to teach data science using Python and Jupyter Notebooks.
- Previously updated : 01/04/2022
+ Title: Set up a data science lab with Python and Jupyter Notebooks
+
+description: Learn how to set up a lab VM in Azure Lab Services to teach data science using Python and Jupyter Notebooks.
+ +++ Last updated : 02/17/2023 # Set up a lab to teach data science with Python and Jupyter Notebooks
+This article outlines how to set up a [template virtual machine (VM)](./classroom-labs-concepts.md#template-virtual-machine) in Azure Lab Services with the tools for teaching students to use Jupyter Notebooks. You also learn how lab users can connect to notebooks on their virtual machines.
-Jupyter Notebooks is an open-source project that lets you easily combine rich text and executable Python source code on a single canvas called a notebook. Running a notebook results in a linear record of inputs and outputs. Those outputs can include text, tables of information, scatter plots, and more.
-
-This article outlines how to set up a template virtual machine (VM) in Azure Lab Services with the tools needed to teach students to use [Jupyter Notebooks](http://jupyter-notebook.readthedocs.io/). We'll also show how students can connect to their notebooks on their virtual machines (VMs).
+[Jupyter Notebooks](https://jupyter-notebook.readthedocs.io/) is an open-source project that enables you to easily combine rich text and executable Python source code on a single canvas, known as a notebook. Running a notebook results in a linear record of inputs and outputs. Those outputs can include text, tables of information, scatter plots, and more.
## Lab configuration
This article outlines how to set up a template virtual machine (VM) in Azure Lab
[!INCLUDE [must have lab plan](./includes/lab-services-class-type-lab-plan.md)]
-Enable settings described in the table below for the lab plan. For more information on enabling marketplace images, see [specify Marketplace images available to lab creators](specify-marketplace-images.md).
+This lab uses one of the Data Science Virtual Machine Azure Marketplace images as the base VM image. You first need to enable these images in your lab plan. Lab creators can then select the image as the base image for their lab.
+
+1. Follow these steps to [make Azure Marketplace images available to lab creators](specify-marketplace-images.md). Select one of the following Azure Marketplace images, depending on your OS requirements:
-| Lab plan setting | Instructions |
-| - | |
-| Marketplace image | Inside your lab account, enable either **Data Science Virtual Machine ΓÇô Windows Server 2019** or **Data Science Virtual Machine ΓÇô Ubuntu 18.04** depending on your OS needs. |
+ - **Data Science Virtual Machine ΓÇô Windows Server 2019**
+ - **Data Science Virtual Machine ΓÇô Ubuntu 18.04**
+
+1. Alternately, create a custom VM image:
-This article uses the Data Science virtual machine images available on the Azure Marketplace because they are already configured with Jupyter Notebook. These images, however, also include many other development and modeling tools for data science. If you don't want those extra tools and want a lightweight setup with only Jupyter notebooks, create a custom VM image. For an example, [Installing JupyterHub on Azure](http://tljh.jupyter.org/en/latest/install/azure.html). Once the custom image is created, you can upload it to a compute gallery to use the image inside Azure Lab Services. Learn more about [using compute gallery in Azure Lab Services](how-to-attach-detach-shared-image-gallery.md).
+ The Data Science VM images in the Azure Marketplace are already configured with Jupyter Notebooks. These images, however, also include many other development and modeling tools for data science. If you don't want those extra tools and want a lightweight setup with only Jupyter Notebooks, create a custom VM image. For an example, see [Installing JupyterHub on Azure](http://tljh.jupyter.org/en/latest/install/azure.html).
+
+ After you create the custom image, upload the image to a compute gallery to use it with Azure Lab Services. Learn more about [using compute gallery in Azure Lab Services](how-to-attach-detach-shared-image-gallery.md).
### Lab settings
+1. Create a lab for your lab plan:
+
+ [!INCLUDE [create lab](./includes/lab-services-class-type-lab.md)] Specify the following lab settings:
-| Lab settings | Value |
-| | |
-| Virtual machine size | Select **Small** or **Medium** for a basic setup accessing Jupyter Notebooks. Select **Small GPU (Compute)** for compute-intensive and network-intensive applications used in Artificial Intelligence and Deep Learning classes. |
-| Virtual machine image | Choose **[Data Science Virtual Machine ΓÇô Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019)** or **[Data Science Virtual Machine ΓÇô Ubuntu](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux)** depending on your OS needs. |
-| Template virtual machine settings | Select **Use virtual machine without customization.**.
+ | Lab settings | Value |
+ | | |
+ | Virtual machine size | Select **Small** or **Medium** for a basic setup accessing Jupyter Notebooks. Select **Small GPU (Compute)** for compute-intensive and network-intensive applications used in Artificial Intelligence and Deep Learning classes. |
+ | Virtual machine image | Choose **[Data Science Virtual Machine ΓÇô Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019)** or **[Data Science Virtual Machine ΓÇô Ubuntu](https://azuremarketplace.microsoft.com/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux)** depending on your OS needs. |
+ | Template virtual machine settings | Select **Use virtual machine without customization**. |
-When you create a lab with the **Small GPU (Compute)** size, you can [install GPU drivers](./how-to-setup-lab-gpu.md#ensure-that-the-appropriate-gpu-drivers-are-installed). This option installs recent NVIDIA drivers and Compute Unified Device Architecture (CUDA) toolkit, which is required to enable high-performance computing with the GPU. For more information, see the article [Set up a lab with GPU virtual machines](./how-to-setup-lab-gpu.md).
+1. When you create a lab with the **Small GPU (Compute)** size, follow these steps to [install GPU drivers](./how-to-setup-lab-gpu.md#ensure-that-the-appropriate-gpu-drivers-are-installed).
+
+ This process installs recent NVIDIA drivers and the Compute Unified Device Architecture (CUDA) toolkit, which you need to enable high-performance computing with the GPU. For more information, see [Set up a lab with GPU virtual machines](./how-to-setup-lab-gpu.md).
## Template machine configuration
When you create a lab with the **Small GPU (Compute)** size, you can [install GP
The Data Science VM images come with many of the data science frameworks and tools required for this type of class. For example, the images include:

-- [Jupyter Notebooks](http://jupyter-notebook.readthedocs.io/): A web application that allows data scientists to take raw data, run computations, and see the results all in the same environment. It will run locally in the template VM.
+- [Jupyter Notebooks](http://jupyter-notebook.readthedocs.io/): A web application that allows data scientists to take raw data, run computations, and see the results all in the same environment. It runs locally in the template VM.
- [Visual Studio Code](https://code.visualstudio.com/): An integrated development environment (IDE) that provides a rich interactive experience when writing and testing a notebook. For more information, see [Working with Jupyter Notebooks in Visual Studio Code](https://code.visualstudio.com/docs/python/jupyter-support).
-The **Data Science Virtual Machine ΓÇô Ubuntu** image is already provisioned with X2GO server and to enable students to use a graphical desktop experience. No further steps are required when setting up the template VM.
+The **Data Science Virtual Machine ΓÇô Ubuntu** image is already provisioned with X2GO server to enable lab users to use a graphical desktop experience.
### Enabling tools to use GPUs
-If you're using the **Small GPU (Compute)** size, we recommend that you verify that the Data Science frameworks and libraries are properly set up to use GPUs. You may need to install a different version of the NVIDIA drivers and CUDA toolkit. To properly configure the GPUs, you should consult the framework's or library's documentation.
+If you're using the **Small GPU (Compute)** size, we recommend that you verify that the Data Science frameworks and libraries are properly set up to use GPUs. You might need to install a different version of the NVIDIA drivers and CUDA toolkit. To configure the GPUs, you should consult the framework's or library's documentation.
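As a quick preliminary check, you can ask TensorFlow to list the GPUs it can see. This is a minimal sketch that assumes the image ships TensorFlow 2.1 or later; older TensorFlow 1.x images don't include this API.

```python
import tensorflow as tf

# Lists the GPUs visible to TensorFlow; an empty list means no GPU is usable.
print(tf.config.list_physical_devices('GPU'))
```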
-For example, to validate that the GPU is configured for TensorFlow, connect to the template VM and run the following Python-TensorFlow code in Jupyter Notebooks:
+For example, to validate that TensorFlow uses the GPU, connect to the template VM and run the following Python-TensorFlow code in Jupyter Notebooks:
```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# Print all devices that TensorFlow can use; GPUs appear as "/device:GPU:N" entries.
print(device_lib.list_local_devices())
```
-If the output from the above code looks like the following, the GPU isn't configured for TensorFlow:
+If the output from the above code looks like the following, TensorFlow isn't using the GPU:
```python
[name: "/device:CPU:0"
physical_device_desc: "device: 0, name: NVIDIA Tesla K80, pci bus id: 0001:00:00
```

### Provide notebooks for the class
### Provide notebooks for the class
-The next task is to provide students with notebooks that you want them to use. Notebooks can be saved locally on the template VM so each student has their own copy. If you want to use sample notebooks from Azure Machine Learning, see [how to configure an environment with Jupyter Notebooks](../machine-learning/how-to-configure-environment.md#jupyter-notebooks).
+The next task is to provide lab users with notebooks that you want them to use. You can save notebooks locally on the template VM so each lab user has their own copy.
+
+If you want to use sample notebooks from Azure Machine Learning, see [how to configure an environment with Jupyter Notebooks](/azure/machine-learning/how-to-configure-environment#jupyter-notebooks).
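For instance, a starter notebook that you distribute might contain a short cell like the following. This is a minimal sketch; it assumes NumPy and Matplotlib are available, as they are on the Data Science VM images.

```python
import numpy as np
import matplotlib.pyplot as plt

# Generate sample data and render a scatter plot inline in the notebook.
rng = np.random.default_rng(seed=0)
x = rng.normal(size=100)
y = 2 * x + rng.normal(scale=0.5, size=100)

plt.scatter(x, y)
plt.title("Sample scatter plot")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```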
### Publish the template machine
-When you [publish the template](how-to-create-manage-template.md#publish-the-template-vm), each student registered in the lab will get a copy of the template VM with all the local tools and notebooks youΓÇÖve set up on it.
+To make the lab VMs available to lab users, you have to [publish the template](how-to-create-manage-template.md#publish-the-template-vm). Each lab VM has all the local tools and notebooks that you configured previously.
-## How students connect to Jupyter Notebooks?
+## Connect to Jupyter Notebooks
-Once you publish the template, each student will have access to a VM that comes with everything youΓÇÖve already configured for the class, including the Jupyter Notebooks. The following sections show different ways for students to connect to Jupyter Notebooks.
+The following sections show different ways for lab users to connect to Jupyter Notebooks on the lab VM.
-### For Windows VMs
+### Use Jupyter Notebooks on the lab VM
-If youΓÇÖve provided students with Windows VMs, they need to connect to their lab VMs to use Jupyter Notebooks. To connect to a Windows VM, a student can use a remote desktop connection (RDP). For more information, see [Connect to a Windows lab VM](connect-virtual-machine.md#connect-to-a-windows-lab-vm).
+Lab users can connect from their local machine to the lab VM and then use Jupyter Notebooks inside the lab VM.
-### For Linux VMs
+If you use a Windows-based lab VM, lab users can connect to their lab VMs through remote desktop (RDP). For more information, see how to [connect to a Windows lab VM](connect-virtual-machine.md#connect-to-a-windows-lab-vm).
-If youΓÇÖve provided students with Linux VMs, students can Access Jupyter Notebooks locally after connecting to the VM. For instructions to SSH or connect using X2Go, see [Connect to a Linux lab VM](connect-virtual-machine.md#connect-to-a-linux-lab-vm).
+If you use a Linux-based lab VM, lab users can connect to their lab VMs through SSH or by using X2Go. For more information, see how to [connect to a Linux lab VM](connect-virtual-machine.md#connect-to-a-linux-lab-vm).
-#### SSH tunnel to Jupyter server on the VM
+### SSH tunnel to Jupyter server on the VM
-Some students may want to connect directly from their local computer directly to the Jupyter server inside their lab VMs. The SSH protocol enables port forwarding between the local computer and a remote server (in our case, the studentΓÇÖs lab VM), so that an application running on a certain port on the server is **tunneled** to the mapping port on the local computer. Students should follow these steps to SSH tunnel to the Jupyter server on their lab VMs:
+For Linux-based labs, you can also connect directly from your local computer to the Jupyter server inside the lab VM. The SSH protocol enables port forwarding between the local computer and a remote server (in our case, the user's lab VM). An application that is running on a certain port on the server is **tunneled** to the corresponding port on the local computer.
-1. In the Lab Services web portal ([https://labs.azure.com](https://labs.azure.com)), make sure that the Linux VM that you want to connect to is [started](how-to-use-lab.md#start-or-stop-the-vm).
-2. Once the VM is running, [get the SSH connection command](connect-virtual-machine.md#connect-to-a-linux-lab-vm-using-ssh) by selecting **Connect**, which will show a window that provides the SSH command string, which will look like the following string:
+Follow these steps to configure an SSH tunnel between a user's local machine and the Jupyter server on the lab VM:
+
+1. Go to the [Azure Lab Services website](https://labs.azure.com).
+
+1. Verify that the Linux-based [lab VM is running](how-to-use-lab.md#start-or-stop-the-vm).
+
+1. Select the **Connect** icon > **Connect via SSH** to get the SSH connection command.
+
+ The SSH connection command looks like the following:
   ```shell
   ssh -p 12345 student@ml-lab-00000000-0000-0000-0000-000000000000.eastus2.cloudapp.azure.com
   ```
-3. On your local computer, launch a terminal or command prompt, and copy the SSH connection string to it. Then, add `-L 8888:localhost:8888` to the command string, which creates the **tunnel** between the ports. The final string should look like:
+ Learn more about [how to connect to a Linux VM](connect-virtual-machine.md#connect-to-a-linux-lab-vm-using-ssh).
+
+1. On your local computer, launch a terminal or command prompt, and copy the SSH connection string to it. Then, add `-L 8888:localhost:8888` to the command string, which creates the **tunnel** between the ports.
+
+ The final command should look as follows:
   ```shell
   ssh -L 8888:localhost:8888 -p 12345 student@ml-lab-00000000-0000-0000-0000-000000000000.eastus.cloudapp.azure.com
   ```
-4. Press **ENTER** to run the command.
-5. When prompted, provide the password to connect to the lab VM.
-6. Once youΓÇÖre connected to the VM, start the Jupyter server using this command:
+1. Press **ENTER** to run the command.
+1. When prompted, provide the lab VM password to connect to the lab VM.
+1. When you're connected to the VM, start the Jupyter server using this command:
   ```bash
   jupyter notebook
   ```
-7. Running the command will provide you with a URL in the terminal. The URL should look like:
+ The command outputs a URL for the Jupyter server in the terminal. The URL should look like:
- ```bash
+ ```output
   http://localhost:8888/?token=8c09ecfc93e6a8cbedf9c66dffdae19670a64acc1d37
   ```
-8. Paste this URL into a browser on your local computer to connect and work on your Jupyter Notebook.
+1. Paste this URL into a browser on your local computer to connect and work on your Jupyter Notebook.
> [!NOTE]
> Visual Studio Code also enables a great [Jupyter Notebook editing experience](https://code.visualstudio.com/docs/python/jupyter-support). You can follow the instructions on [how to connect to a remote Jupyter server](https://code.visualstudio.com/docs/python/jupyter-support#_connect-to-a-remote-jupyter-server) and use the same URL from the previous step to connect from VS Code instead of from the browser.

## Cost estimate
-Let's cover a possible cost estimate for this class. We'll use a class of 25 students. There are 20 hours of scheduled class time. Also, each student gets 10 hours quota for homework or assignments outside scheduled class time. The VM size we chose was small GPU (compute), which is 139 lab units. If you want to use the Small (20 lab units) or Medium size (42 lab units), you can replace the lab unit part in the equation below with the correct number.
+This section provides a cost estimate for running this class for 25 users. There are 20 hours of scheduled class time. Also, each user gets a 10-hour quota for homework or assignments outside scheduled class time. This estimate uses the **Small GPU (Compute)** VM size, which is 139 lab units. If you want to use the Small (20 lab units) or Medium (42 lab units) size, you can replace the lab unit part in the equation below with the correct number.
-Here is an example of a possible cost estimate for this class:
+Here's an example of a possible cost estimate for this class:
25 students \* (20 scheduled hours + 10 quota hours) \* 139 lab units \* 0.01 USD per hour = 1042.5 USD

>[!IMPORTANT]
->Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
+>This cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
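If you want to experiment with other sizes, the following Python sketch simply reproduces the arithmetic above. The figures are the example values from this section, not live pricing.

```python
# Reproduce the example cost estimate; change lab_units for a different VM size.
students = 25
scheduled_hours = 20
quota_hours = 10
lab_units = 139              # Small GPU (Compute); Small is 20, Medium is 42
usd_per_lab_unit_hour = 0.01

estimate = students * (scheduled_hours + quota_hours) * lab_units * usd_per_lab_unit_hour
print(f"Estimated cost: {estimate:.2f} USD")  # 1042.50 USD for this example
```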
## Conclusion
-In this article, we walked through the steps to create a lab for a Jupyter Notebooks class. You can use a similar setup for other machine learning classes.
+In this article, you learned how to create a lab for a Jupyter Notebooks class and how users can connect to their notebooks on the lab VM. You can use a similar setup for other machine learning classes.
## Next steps
lab-services Connect Virtual Machine Mac Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-mac-remote-desktop.md
Title: Connect to Azure Lab Services VMs from Mac
-description: Learn how to connect from a Mac to a virtual machine in Azure Lab Services.
+
+description: Learn how to connect using remote desktop (RDP) from a Mac to a virtual machine in Azure Lab Services.
++++ Previously updated : 02/04/2022 Last updated : 02/16/2023 # Connect to a VM using Remote Desktop Protocol on a Mac
-This section shows how a student can connect to a lab VM from a Mac by using RDP.
+In this article, you learn how to connect to a lab VM in Azure Lab Services from a Mac by using Remote Desktop Protocol (RDP).
## Install Microsoft Remote Desktop on a Mac
+To connect to the lab VM via RDP, you can use the Microsoft Remote Desktop app.
+
+To install the Microsoft Remote Desktop app:
1. Open the App Store on your Mac, and search for **Microsoft Remote Desktop**.
+
    :::image type="content" source="./media/connect-virtual-machine-mac-remote-desktop\install-remote-desktop.png" alt-text="Screenshot of Microsoft Remote Desktop app in the App Store.":::
-1. Install the latest version of Microsoft Remote Desktop.
+
+1. Select **Install** to install the latest version of Microsoft Remote Desktop.
## Access the VM from your Mac using RDP
+Next, you connect to the lab VM by using the remote desktop application. You can retrieve the connection information for the lab VM from the Azure Lab Services website.
+
+1. Navigate to the Azure Lab Services website (https://labs.azure.com), and sign in with your credentials.
+ 1. On the tile for your VM, ensure the [VM is running](how-to-use-lab.md#start-or-stop-the-vm) and select the **Connect** icon. :::image type="content" source="./media/connect-virtual-machine-mac-remote-desktop/connect-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services. The connect icon button on the VM tile is highlighted.":::
-1. If youΓÇÖre connecting *to a Linux VM*, you'll see two options to connect to the VM: SSH and RDP. Select the **Connect via RDP** option. If you're connecting *to a Windows VM*, you don't need to choose an connection option. The RDP file will automatically start downloading.
+
+1. If you're connecting to a Linux VM, you see two options to connect to the VM: SSH and RDP. Select the **Connect via RDP** option. If you're connecting to a Windows VM, you don't need to choose a connection option. The RDP file automatically starts downloading.
:::image type="content" source="./media/connect-virtual-machine-mac-remote-desktop/student-vm-connect-options.png" alt-text="Screenshot that shows V M tile for student. The R D P and S S H connection options are highlighted.":::
-1. Open the **RDP** file that's downloaded on your computer with **Microsoft Remote Desktop** app previously installed. It should start connecting to the VM.
+
+1. Open the downloaded **RDP** file with the **Microsoft Remote Desktop** app that you installed earlier. The app starts connecting to the VM.
:::image type="content" source="./media/how-to-use-classroom-lab/connect-linux-vm.png" alt-text="Screenshot of Microsoft Remote Desktop app connecting to a remote VM.":::
-1. Select **Continue** if you receive the following warning.
+
+1. When prompted, enter your username and password.
+
+1. If you receive a certificate warning, you can select **Continue**.
:::image type="content" source="./media/how-to-use-classroom-lab/certificate-error.png" alt-text="Screenshot of certificate error for Microsoft Remote Desktop app.":::
-1. You should see the VM desktop. The following example is for a CentOS Linux VM.
- :::image type="content" source="./media/how-to-use-classroom-lab/vm-ui.png" alt-text="Screenshot of desktop for CentOs Linux VM.":::
+1. After the connection is established, you see the desktop of your lab VM.
+
+ The following example is for a CentOS Linux VM:
+
+ :::image type="content" source="./media/how-to-use-classroom-lab/vm-ui.png" alt-text="Screenshot of the desktop for a CentOS Linux VM.":::
## Next steps
lab-services Connect Virtual Machine Windows Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-windows-rdp.md
Title: Connect to a VM using Remote Desktop Protocol on Windows in Azure Lab Services | Microsoft Docs
-description: Learn how to connect from Windows to a Linux VM using Remote Desktop Protocol
+ Title: Connect to Azure Lab Services VMs from Windows
+
+description: Learn how to connect using remote desktop (RDP) from Windows to a virtual machine in Azure Lab Services.
++++ Previously updated : 02/01/2022 Last updated : 02/17/2023 # Connect to a VM using Remote Desktop Protocol on Windows
-This article shows how a student can connect from Windows to a lab VM using Remote Desktop Protocol (RDP).
+In this article, you learn how to connect to a lab VM in Azure Lab Services from Windows by using Remote Desktop Protocol (RDP).
## Connect to VM from Windows using RDP
-Students can use RDP to connect to their lab VMs. If the lab VM is a Windows VM, no extra configuration is required by the educator. If the lab VM is a Linux VM, the educator must [enable RDP](how-to-enable-remote-desktop-linux.md) and install GUI packages for a Linux graphical desktop.
+You can use RDP to connect to your lab VMs in Azure Lab Services. If the lab VM is a Linux VM, the lab creator must [enable RDP for the lab](how-to-enable-remote-desktop-linux.md) and install GUI packages for a Linux graphical desktop. For Windows-based lab VMs, no additional configuration is needed.
-Typically, the [Remote Desktop client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) is already installed and configured on Windows. As a result, all you need to do is select on the RDP file to open it and start the remote session.
+Typically, the [Remote Desktop client software](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) is already present on Windows. To connect to the lab VM, you open the RDP connection file to start the remote session.
-1. On the tile for your VM, ensure the [VM is running](how-to-use-lab.md#start-or-stop-the-vm) and select the **Connect** icon.
+To connect to a lab VM in Azure Lab Services:
- :::image type="content" source="./media/connect-virtual-machine-windows-rdp/connect-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services. The connect icon button on the VM tile is highlighted.":::
-1. If youΓÇÖre connecting *to a Linux VM*, you'll see two options to connect to the VM: SSH and RDP. Select the **Connect via RDP** option. If you're connecting *to a Windows VM*, you don't need to choose a connection option. The RDP file will automatically start downloading.
+1. Navigate to the Azure Lab Services website (https://labs.azure.com), and sign in with your credentials.
- :::image type="content" source="./media/connect-virtual-machine-windows-rdp/student-vm-connect-options.png" alt-text="Screenshot that shows V M tile for student. The R D P and S S H connection options are highlighted.":::
-1. When the RDP file is downloaded onto your machine, open it to launch the RDP client.
-1. After adjusting RDP connection settings as needed, select **Connect** to start the remote session.
+1. On the tile for your VM, select the **Connect** icon.
+
+    To connect to a lab VM, the VM must be running. Learn how you can [start a VM](how-to-use-lab.md#start-or-stop-the-vm).
+
+ :::image type="content" source="./media/connect-virtual-machine-windows-rdp/connect-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the connect button on the VM tile.":::
+
+1. If youΓÇÖre connecting to a Linux VM, select the **Connect via RDP** option.
+
+ :::image type="content" source="./media/connect-virtual-machine-windows-rdp/student-vm-connect-options.png" alt-text="Screenshot that shows VM tile for student, highlighting the connect button and showing the SSH and RDP connection options.":::
+
+1. After the RDP connection file download finishes, open the RDP file to launch the RDP client.
+
+1. Optionally, adjust the RDP connection settings, and then select **Connect** to start the remote session.
## Optimize RDP client settings
-The RDP client includes various settings that can be adjusted to optimize the user's connection experience. Typically, these settings don't need to be changed. By default, the settings are already configured to choose the right experience based on your network connection. For more information on these settings, see [RDP client's **Experience** settings](/windows-server/administration/performance-tuning/role/remote-desktop/session-hosts#client-experience-settings).
+The RDP client software has various settings for optimizing your connection experience. The default settings optimize your experience based on your network connection. Typically, you don't need to change the default settings.
+
+Learn more about the [RDP client's **Experience** settings](/windows-server/administration/performance-tuning/role/remote-desktop/session-hosts#client-experience-settings).
+
+If the lab creator configured the GNOME graphical desktop on a Linux lab VM with the RDP client, we recommend the following settings to optimize performance:
-If your educator has configured the GNOME graphical desktop on a Linux VM with the RDP client, we recommend the following settings to optimize performance:
+- On the **Display** tab, set the color depth to **High Color (15 bit)**.
-- Under the **Display** tab, set the color depth to **High Color (15 bit)**.
+ :::image type="content" source="./media/connect-virtual-machine-windows-rdp/rdp-display-settings.png" alt-text="Screenshot of display tab of the Windows R D P client, highlighting the color depth setting.":::
- :::image type="content" source="./media/connect-virtual-machine-windows-rdp/rdp-display-settings.png" alt-text="Screenshot of display tab of the Windows R D P client. The color depth setting is highlighted.":::
-- Under the **Experience** tab, set the connection speed to **Modem (56 kbps)**.
+- On the **Experience** tab, set the connection speed to **Modem (56 kbps)**.
- :::image type="content" source="./media/connect-virtual-machine-windows-rdp/rdp-experience-settings.png" alt-text="Screenshot of experience tab of the Windows R D P client. The connection speed setting is highlighted.":::
+ :::image type="content" source="./media/connect-virtual-machine-windows-rdp/rdp-experience-settings.png" alt-text="Screenshot of experience tab of the Windows R D P client, highlighting the connection speed setting.":::
## Next steps
lab-services Connect Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine.md
Title: How to connect to an Azure Lab Services VM | Microsoft Docs
-description: Learn how to connect to a VM in Azure Lab Services
+ Title: How to connect to a lab VM
+
+description: Learn how to connect to a lab VM in Azure Lab Services.
+++ Previously updated : 02/1/2022 Last updated : 02/17/2023 # Connect to a lab VM
-As a student, you'll need to [start](how-to-use-lab.md#start-or-stop-the-vm) and then connect to your lab VM to complete your lab work. How you connect to your VM will depend on the operating system (OS) of the machine your using and the OS of the VM your connecting to.
+In this article, you learn how to connect to a lab virtual machine (VM) in Azure Lab Services. Depending on the operating system (OS) of the lab VM and of your local machine, you can use different mechanisms, such as secure shell (SSH) or remote desktop (RDP), to connect to your VM.
+
+To connect to a lab VM, the VM must be running. Learn more about [how to start and stop a lab VM](how-to-use-lab.md#start-or-stop-the-vm).
## Connect to a Windows lab VM
-If connecting *to a Windows VM*, follow the instructions based on the type of OS you're using.
+Follow these instructions to connect to a Windows-based lab VM. Choose the instructions that match your local machine's operating system.
| Client OS | Instructions | | | |
If connecting *to a Windows VM*, follow the instructions based on the type of OS
## Connect to a Linux lab VM
-This section shows students how to connect to a Linux VM in a lab using secure shell protocol (SSH), remote desktop protocol (RDP), or X2Go.
+To connect to a Linux-based lab VM, you can use the secure shell protocol (SSH), remote desktop protocol (RDP), or X2Go.
-SSH is configured automatically for Linux VMs. Both students and educators can SSH into Linux VMs without any extra setup. However, if students need to connect to using a GUI, the educators may need to do extra setup on the template VM.
+Azure Lab Services automatically configures SSH for Linux VMs. Both lab users and lab creators can SSH into Linux VMs without additional setup. If you want to connect to a VM with a Linux GUI, the lab creator might need to do extra setup on the template VM.
> [!WARNING]
-> If you need to use [GNOME](https://www.gnome.org/) or [MATE](https://mate-desktop.org/) you should coordinate with your educator to ensure your lab VM is properly configured. For details, see [Using GNOME or MATE graphical desktops](how-to-enable-remote-desktop-linux.md#using-gnome-or-mate-graphical-desktops).
+> If you need to use [GNOME](https://www.gnome.org/) or [MATE](https://mate-desktop.org/) you should coordinate with the lab creator to ensure that your lab VM has the correct configuration. For details, see [Using GNOME or MATE graphical desktops](how-to-enable-remote-desktop-linux.md#using-gnome-or-mate-graphical-desktops).
-### Connect to a Linux lab VM Using RDP
+### Connect to a Linux lab VM using RDP
-An educator must first [enable remote desktop connection for Linux VMs](how-to-enable-remote-desktop-linux.md#rdp-setup).
+Before you can connect to a Linux VM using RDP, the lab creator first needs to [enable remote desktop connection for Linux VMs](how-to-enable-remote-desktop-linux.md#rdp-setup).
-To connect *to a Linux VM using RDP*, follow the instructions based on the type of OS you're using.
+To connect to a Linux VM using RDP, follow the instructions based on the type of OS you're using.
| Client OS | Instructions | | | |
To connect *to a Linux VM using RDP*, follow the instructions based on the type
Linux VMs can have X2Go enabled and a graphical desktop installed. For more information, see [X2Go Setup](how-to-enable-remote-desktop-linux.md#setting-up-x2go) and [Using GNOME or MATE graphical desktops](how-to-enable-remote-desktop-linux.md#using-gnome-or-mate-graphical-desktops).
-For instructions to connect *to a Linux VM using X2Go*, see [Connect to a VM using X2Go](connect-virtual-machine-linux-x2go.md).
+For instructions to connect to a Linux VM using X2Go, see [Connect to a VM using X2Go](connect-virtual-machine-linux-x2go.md).
### Connect to a Linux lab VM using SSH
-By default Linux VMs have SSH installed. To connect *to a Linux VM using SSH*, do the following actions:
+By default, Linux VMs have SSH installed. To connect to a Linux VM using SSH:
+
+1. If you're using a Windows machine to connect to the lab VM, ensure you have SSH client software on your machine.
+
+    The latest versions of Windows 10 and Windows 11 include a built-in SSH client. Learn how you can [access the built-in SSH client](/windows/terminal/tutorials/ssh).
+
+ Alternately, you can download an SSH client, such as [PuTTY](https://www.putty.org/) or enable [OpenSSH in Windows](/windows-server/administration/openssh/openssh_install_firstuse).
-1. If using a Windows machine to connect to a Linux VM, first install an ssh client like [PuTTY](https://www.putty.org/) or enable [OpenSSH in Windows](/windows-server/administration/openssh/openssh_install_firstuse).
-1. [Start the VM](how-to-use-lab.md#start-or-stop-the-vm), if not done already.
-1. Once the VM is running, select **Connect**, which will show a dialog box that provides the SSH command string. The connection command will look like the following sample:
+1. If the lab VM is not running, go to the [Azure Lab Services website](https://labs.azure.com), and then [start the lab VM](how-to-use-lab.md#start-or-stop-the-vm).
+
+1. Once the VM is running, select the **Connect** icon > **Connect via SSH** to get the SSH command string.
+
+ The connection command looks like the following sample:
   ```bash
   ssh -p 12345 student@ml-lab-00000000-0000-0000-0000-000000000000.eastus2.cloudapp.azure.com
   ```
1. Copy the command.
+
1. Go to your command prompt or terminal, paste in the command, and then press **ENTER**.
+
1. Enter the password to sign in to the lab VM.

## Next steps
lab-services How To Use Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-lab.md
Title: How to access a lab in Azure Lab Services | Microsoft Docs
+ Title: How to access and manage a lab VM
+ description: Learn how to register to a lab. Also learn how to view, start, stop, and connect to all the lab VMs assigned to you. ++++ Last updated 02/01/2022
-# How to access a lab in Azure Lab Services
+# Access a lab in Azure Lab Services
-Learn how to register for a lab. Also learn how to view, start, stop, and connect to all the lab VMs assigned to you.
+Before you can access a lab in Azure Lab Services, you first need to register for the lab. In this article, you learn how to register for a lab, connect to a lab virtual machine (VM), start and stop the lab VM, and monitor your quota hours.
+
+## Prerequisites
+
+- To register for a lab, you need a lab registration link.
+- To view, start, stop, and connect to a lab VM, you need to register for the lab and have an assigned lab VM.
## Register to the lab
-1. Navigate to the **registration URL** that you received from the educator. You don't need to use the registration URL after you complete the registration. Instead, use the URL: [https://labs.azure.com](https://labs.azure.com).
+To get access to a lab and connect to the lab VM from the Azure Lab Services website, you first need to register for the lab by using a lab registration link. The lab creator can [provide the registration link for the lab](./how-to-configure-student-usage.md#send-invitations-to-users).
+
+To register for a lab by using the registration link:
+
+1. Open the lab registration URL in a browser.
+
+ After you complete the lab registration, you no longer need the registration link. Instead, you can navigate to the Azure Lab Services website (https://labs.azure.com) to access your labs.
:::image type="content" source="./media/how-to-use-lab/register-lab.png" alt-text="Screenshot of registration link for lab.":::
-1. Sign in to the service using your school account to complete the registration.
+1. Sign in to the service using your organizational or school account to complete the registration.
> [!NOTE]
- > A Microsoft account is required for using Azure Lab Services unless using Canvas. If you are trying to use your non-Microsoft account such as Yahoo or Google accounts to sign in to the portal, follow instructions to create a Microsoft account that will be linked to your non-Microsoft account. Then, follow the steps to complete the registration process.
-1. Once registered, confirm that you see the virtual machine for the lab you have access to.
+ > You need a Microsoft account to use Azure Lab Services, unless you're using Canvas. If you try to sign in to the portal with a non-Microsoft account, such as a Yahoo or Google account, follow the instructions to create a Microsoft account that's linked to your non-Microsoft account. Then, follow the steps to complete the lab registration process.
+
+1. After the registration finishes, confirm that you see the lab virtual machine in **My virtual machines**.
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/accessible-vms.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services.":::
-1. Wait until the virtual machine is ready. On the VM tile, notice the following fields:
- 1. At the top of the tile, you see the **name of the lab**.
- 1. To its right, you see the icon representing the **operating system (OS)** of the VM. In this example, it's Windows OS.
- 1. You see icons/buttons at the bottom of the tile to start/stop the VM, and connect to the VM.
- 1. To the right of the buttons, you see the status of the VM. Confirm that you see the status of the VM is **Stopped**.
- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-in-stopped-state.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services. The status toggle and Stopped label are highlighted.":::
+
+## View your lab virtual machines
+
+You can view all the lab virtual machines that are assigned to you in the Azure Lab Services website. Alternately, if your organization uses Azure Lab Services with Microsoft Teams or Canvas, learn how you can [access your lab VMs in Microsoft Teams](./how-to-access-vm-for-students-within-teams.md) or [access your lab VMs in Canvas](./how-to-access-vm-for-students-within-canvas.md).
+
+1. Go to the [Azure Lab Services website](https://labs.azure.com).
+
+1. The page has a tile for each lab VM that you have access to. The VM tile shows the VM details and provides controls for managing the lab VM:
+
+ - In the top-left, notice the name of the lab. The lab creator specifies the lab name when creating the lab.
+ - In the top-right, you can see an icon that represents the operating system (OS) of the VM.
+ - In the center, you can see a progress bar that shows your [quota hours consumption](#view-quota-hours).
+ - In the bottom-left, you can see the status of the lab VM and a control to [start or stop the VM](#start-or-stop-the-vm).
+ - In the bottom-right, you have the control to [connect to the lab VM](./connect-virtual-machine.md) with remote desktop (RDP) or secure shell (SSH).
+ - Also in the bottom-right, you can [reset or troubleshoot the lab VM](./how-to-reset-and-redeploy-vm.md), if you experience problems with the VM.
+
+ :::image type="content" source="./media/how-to-use-lab/lab-services-virtual-machine-tile.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the VM tile sections.":::
## Start or stop the VM
-1. **Start** the VM by selecting the first button as shown in the following image. This process takes some time.
- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/start-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services. The status toggle and Starting label on the VM tile are highlighted.":::
-1. Confirm that the status of the VM is set to **Running**.
- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-running.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services. The Running label on the VM tile is highlighted.":::
+As a lab user, you can start or stop a lab VM from the Azure Lab Services website. Alternately, you can stop a lab VM by using the operating system shutdown command from within the lab VM. To avoid incurring additional costs, the preferred method to stop a lab VM is to use the [Azure Lab Services website](https://labs.azure.com).
+
+> [!TIP]
+> With the [April 2022 Updates](lab-services-whats-new.md), Azure Lab Services detects when a lab user shuts down their VM by using the OS shutdown command. After a long delay to ensure that the VM isn't being restarted, the lab VM is marked as stopped and billing stops.
- Notice that the status toggle is in the on position. Select the status toggle again to **stop** the VM.
+To start or stop a lab VM in the Azure Lab Services website:
-Using the [Azure Lab Services portal](https://labs.azure.com/virtualmachines) is the preferred method for a student to stop their lab VM. However, with the [April 2022 Updates](lab-services-whats-new.md), Azure Lab Services will detect when a student shuts down their VM using the OS shutdown command. After a long delay to ensure the VM wasn't being restarted, the lab VM will be marked as stopped and billing will discontinue.
+1. Go to the [Azure Lab Services website](https://labs.azure.com).
+
+1. Use the toggle control in the bottom-left of the VM tile to start or stop the lab VM.
+
+    Depending on the current status of the lab VM, the toggle control starts or stops the VM. While the VM is starting or stopping, the control is inactive.
+
+ Starting or stopping the lab VM might take some time to complete.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/start-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the status toggle and status label on the VM tile.":::
+
+1. After the operation finishes, confirm that the lab VM status is correct.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-running.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services, highlighting the status label on the VM tile.":::
## Connect to the VM
-For OS-specific instructions to connect to your lab VM, see [Connect to a lab VM](connect-virtual-machine.md).
+Depending on the lab VM operating system configuration, you can use remote desktop (RDP) or secure shell (SSH) to connect to your lab VM. Learn more about how to [connect to a lab VM](connect-virtual-machine.md).
+
+## View quota hours
+
+On the lab VM tile in the [Azure Lab Services website](https://labs.azure.com), you can view your consumption of [quota hours](how-to-configure-student-usage.md#set-quotas-for-users) in the progress bar. Quota hours are the extra time allotted to you outside of the [scheduled time](./classroom-labs-concepts.md#schedules) for the lab. For example, you can use quota hours outside of classroom time to complete homework.
+
+The color of the progress bar and the text under the progress bar changes depending on the scenario:
+
+- A class is in progress, according to the lab schedules: the progress bar is grayed out to indicate that quota hours aren't being used.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-class-in-progress.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when a schedule started the VM.":::
+
+- The lab has no quota (zero hours): the text **Available during classes only** shows in place of the progress bar.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/available-during-class.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when there's no quota.":::
+
+- You ran out of quota: the color of the progress bar is **red**.
-## Progress bar
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-red-color.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when there's quota usage.":::
-The progress bar on the tile shows the number of hours used against the number of [quota hours](how-to-configure-student-usage.md#set-quotas-for-users) assigned to you. This time is the extra time allotted to you in outside of the scheduled time for the lab. The color of the progress bar and the text under the progress bar varies. Let's cover the scenarios you might see.
+- No class is in progress, according to the lab schedules: the color of the progress bar is **blue** to indicate that it's outside the scheduled time for the lab, and some of the quota time was used.
-- If a class is in progress (within the schedule of the class), progress bar is grayed out to represent quota hours aren't being used.
- <br/>:::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-class-in-progress.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when VM has been started by a schedule.":::
-- If a quota isn't assigned (zero hours), the text **Available during classes only** is shown in place of the progress bar.
- <br/>:::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/available-during-class.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when no quota has been assigned.":::
-- If you ran **out of quota**, the color of the progress bar is **red**.
- <br/>:::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-red-color.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when quota has been used.":::
-- The color of the progress bar is **blue** when it's outside the scheduled time for the lab and some of the quota time has been used.
- <br/>:::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-blue-color.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when quota has been partially used.":::
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/progress-bar-blue-color.png" alt-text="Screenshot of lab VM tile in Azure Lab Services when quota has been partially used.":::
## Next steps
lab-services Tutorial Connect Lab Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-connect-lab-virtual-machine.md
Title: Access a lab in Azure Lab Services | Microsoft Docs
-description: In this tutorial, students access virtual machines in a lab that's set up by an educator.
+ Title: 'Tutorial: Access a lab in Azure Lab Services'
+
+description: In this tutorial, learn how you can register for a lab in Azure Lab Services and connect to the lab virtual machine.
++++ Previously updated : 01/04/2022 Last updated : 02/17/2023
-# Tutorial: Access a lab in Azure Lab Services
+# Tutorial: Access a lab in Azure Lab Services from the Lab Services website
-In this tutorial, you, as a student, connect to a virtual machine (VM) in a lab by completing the following actions in the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com):
+In this tutorial, learn how you can register for a lab as a lab user, and then start and connect to a lab virtual machine (VM) by using the Azure Lab Services website.
+
+If you're using Microsoft Teams or Canvas with Azure Lab Services, learn how you can [access your lab from Microsoft Teams](./how-to-access-vm-for-students-within-teams.md) or how you can [access your lab from Canvas](./how-to-access-vm-for-students-within-canvas.md).
> [!div class="checklist"]
> * Register to the lab
> * Start the VM
> * Connect to the VM
-The tutorial applies to the Lab Services web portal ([https://labs.azure.com](https://labs.azure.com)) only. If using Teams, see [Access a VM (student view) in Azure Lab from Teams](how-to-access-vm-for-students-within-teams.md). If using Canvas, see [Access a VM (student view) in Azure Lab Services from Canvas](how-to-access-vm-for-students-within-canvas.md).
- ## Register to the lab
-1. Navigate to the **registration URL** that you received from the educator. You only have to use the registration URL once to complete the registration. Registration must be completed for each lab.
- > [!IMPORTANT]
- > Registration must be completed for each lab.
+Before you can use the lab from the Azure Lab Services website, you first need to register for the lab by using a registration link.
+
+To register for a lab by using the registration link:
+
+1. Navigate to the registration URL that you received from the lab creator.
- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/register-lab.png" alt-text="Screenshot of browser with example registration link for Azure Lab Services. Registration link is highlighted.":::
-1. Sign in using your school account to complete the registration.
+ You have to register for each lab that you want to access. After you complete registration for a lab, you no longer need the registration link for that lab.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/register-lab.png" alt-text="Screenshot of browser with example registration link for Azure Lab Services, highlighting the registration link.":::
+
+1. Sign in to the service using your organizational or school account to complete the registration.
> [!NOTE]
- > A Microsoft account is required for using Azure Lab Services. If you are trying to use your non-Microsoft account such as Yahoo or Google accounts to sign in to the portal, follow instructions to create a Microsoft account that will be linked to your non-Microsoft account email. Then, follow the steps to complete the registration process.
-1. Once registered, confirm that you see the virtual machine for the lab you have access to. Now that you have registered, you can go directly to the Azure Lab Services portal at [https://labs.azure.com](https://labs.azure.com) in the future.
+ > You need a Microsoft account to use Azure Lab Services, unless you're using Canvas. If you try to sign in to the portal with a non-Microsoft account, such as a Yahoo or Google account, follow the instructions to create a Microsoft account that's linked to your non-Microsoft account. Then, follow the steps to complete the lab registration process.
+
+1. After the registration finishes, confirm that you see the lab virtual machine in **My virtual machines**.
+
+ After you complete the registration, you can directly access your lab VMs by using the Azure Lab Services website (https://labs.azure.com).
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/accessible-vms.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services portal.":::
-1. Wait until the virtual machine is ready. On the VM tile, notice the following fields:
- 1. At the top of the tile, you see the **name of the lab**.
- 1. To its right, you see the icon representing the **operating system (OS)** of the VM. In this example, it's Windows.
- 1. The progress bar on the tile shows the number of hours used against the number of [quota hours](how-to-configure-student-usage.md#set-quotas-for-users) assigned to you. Quota time is time you have in addition to the scheduled time for the lab.
- 1. You see icons and buttons at the bottom of the tile to start, stop, and connect to the VM.
- 1. To the right of the buttons, you see the status of the VM. Confirm that you see the status of the VM is **Stopped**.
- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-in-stopped-state.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services portal. VM state toggle with stopped label is highlighted.":::
+
+1. On the **My virtual machines** page, you can see a tile for your lab VM. Confirm that the VM is in the **Stopped** state.
+
+   The VM tile shows the lab VM details, such as the lab name, operating system, and status. The tile also lets you perform actions on the lab VM, such as starting and stopping it.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-in-stopped-state.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services website, highlighting the stopped state.":::
## Start the VM
-1. **Start** the VM by selecting the toggle button as shown in the following image. This process takes some time.
- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/start-vm.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services portal. VM state toggle with starting label is highlighted.":::
-1. Confirm that the status of the VM is set to **Running**.
- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-running.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services portal. VM state toggle with running label is highlighted.":::
+Before you can connect to a lab VM, the VM must be running.
+
+To start the lab VM from the Azure Lab Services website:
+
+1. Go to the [Azure Lab Services website](https://labs.azure.com).
+
+1. Start the VM by selecting the status toggle control.
+
+ Starting the lab VM might take some time.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/start-vm.png" alt-text="Screenshot of My virtual machines page in the Azure Lab Services website, highlighting the VM state toggle.":::
+
+1. Confirm that the status of the VM is now **Running**.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/vm-running.png" alt-text="Screenshot of My virtual machines page in the Azure Lab Services website, highlighting the VM is running.":::
## Connect to the VM
-1. Select the button in the lower right of the tile as shown in the following image to connect to the lab's VM.
- :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/connect-vm.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services portal. Connect VM button is highlighted.":::
-1. Do one of the following steps:
- 1. For **Windows** virtual machines, open the **RDP** file once it has finished downloading. Use the **username** and **password** you get from your educator to sign in to the machine. For more information, see [Connect to a Windows lab VM](connect-virtual-machine.md#connect-to-a-windows-lab-vm).
- 2. For **Linux** virtual machines, you can use **SSH** or **RDP** (if it's enabled) to connect to them. For more information, see [Connect to a Linux lab VM](connect-virtual-machine.md#connect-to-a-linux-lab-vm).
+You can now connect to the lab VM. You can retrieve the connection information from the Azure Lab Services website.
+
+1. Go to the [Azure Lab Services website](https://labs.azure.com).
+
+1. Select the connect button in the lower right of the VM tile to retrieve the connection information.
+
+ :::image type="content" source="./media/tutorial-connect-vm-in-classroom-lab/connect-vm.png" alt-text="Screenshot of My virtual machines page in Azure Lab Services website, highlighting the Connect button.":::
+
+1. Connect to the lab VM in either of two ways:
+
+ - For Windows virtual machines, open the RDP connection file once it has finished downloading. Use the credentials that the lab creator provided to sign in to the virtual machine. For more information, see [Connect to a Windows lab VM](connect-virtual-machine.md#connect-to-a-windows-lab-vm).
+
+ - For Linux virtual machines, you can use either SSH or RDP (if RDP is enabled for the lab) to connect to the VM. For more information, see [Connect to a Linux lab VM](connect-virtual-machine.md#connect-to-a-linux-lab-vm).
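For SSH connections to a Linux lab VM, the Azure Lab Services website provides the connection details. As an illustrative sketch only, the command you run in a terminal typically has the following shape; the port, user name, and address are placeholders, and the real values come from the connection information shown for your lab VM:

```console
# Placeholders only - copy the actual SSH command or values from the lab VM's connection information.
ssh -p <port> <username>@<lab-vm-address>
```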
## Next steps
-In this tutorial, you accessed a lab using the registration link you got from your educator. When done with the VM, stop the VM from the Azure Lab Services portal.
+In this tutorial, you accessed a lab by using the registration link you got from the lab creator. When you're done with the VM, stop it from the Azure Lab Services website.
>[!div class="nextstepaction"] >[Stop the VM](how-to-use-lab.md#start-or-stop-the-vm)
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
Previously updated : 12/2/2021 Last updated : 02/28/2023 -+ # Azure Load Balancer Floating IP configuration
Load balancer provides several capabilities for both UDP and TCP applications.
## Floating IP
-Some application scenarios prefer or require the same port to be used by multiple application instances on a single VM in the backend pool. Common examples of port reuse include:
+Some application scenarios prefer or require the use of the same port by multiple application instances on a single VM in the backend pool. Common examples of port reuse include:
- clustering for high availability
- network virtual appliances
- exposing multiple TLS endpoints without re-encryption

If you want to reuse the backend port across multiple rules, you must enable Floating IP in the rule definition.
-When Floating IP is enabled, Azure changes the IP address mapping to the Frontend IP address of the Load Balancer frontend instead of backend instance's IP. Without Floating IP, Azure exposes the VM instances' IP. Enabling Floating IP changes the IP address mapping to the Frontend IP of the load Balancer to allow for more flexibility. Learn more [here](load-balancer-multivip-overview.md).
+When you enable Floating IP, Azure changes the IP address mapping to the frontend IP address of the load balancer instead of the backend instance's IP address. Without Floating IP, Azure exposes the VM instance's IP address. This mapping to the frontend IP allows for more flexibility, such as reusing the same backend port across multiple rules. Learn more [here](load-balancer-multivip-overview.md).
-In the diagrams below, you see how IP address mapping works before and after enabling Floating IP:
+In the diagrams, you see how IP address mapping works before and after enabling Floating IP:
-Floating IP can be configured on a Load Balancer rule via the Azure portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS in order to use Floating IP.
+You configure Floating IP on a Load Balancer rule via the Azure portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS in order to use Floating IP.
## Floating IP Guest OS configuration
-In order to function, the Guest OS for the virtual machine needs to be configured to receive all traffic bound for the frontend IP and port of the load balancer. To accomplish this requires:
-* a loopback network interface to be added
+For Floating IP to function, you configure the Guest OS for the virtual machine to receive all traffic bound for the frontend IP and port of the load balancer. Configuring the VM requires:
+* adding a loopback network interface
* configuring the loopback with the frontend IP address of the load balancer
-* ensure the system can send/receive packets on interfaces that don't have the IP address assigned to that interface (on Windows, this requires setting interfaces to use the "weak host" model; on Linux this model is normally used by default)
-The host firewall also needs to be open to receiving traffic on the frontend IP port.
+* ensuring the system can send/receive packets on interfaces that don't have the IP address assigned to that interface. Windows systems require setting interfaces to use the "weak host" model. Linux systems normally use this model by default.
+* configuring the host firewall to allow traffic on the frontend IP port.
> [!NOTE] > The examples below all use IPv4; to use IPv6, substitute "ipv6" for "ipv4". Also note that Floating IP for IPv6 does not work for Internal Load Balancers.
netsh interface ipv4 show interface
For the VM NIC (Azure managed), type this command.

```console
-netsh interface ipv4 set interface ΓÇ£interfacenameΓÇ¥ weakhostreceive=enabled
+netsh interface ipv4 set interface "interfacename" weakhostreceive=enabled
```
-(replace **interfacename** with the name of this interface)
+(replace **"interfacename"** with the name of this interface)
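As the earlier note mentions, the IPv6 commands differ only in the address family keyword. For example, the IPv6 form of the previous command would look like the following sketch, which reuses the same placeholder interface name:

```console
netsh interface ipv6 set interface "interfacename" weakhostreceive=enabled
```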
-For each loopback interface you added, repeat the commands below.
+For each loopback interface you added, repeat these commands:
```console
-netsh interface ipv4 add addr "loopbackinterface" floatingip floatingipnetmask
-netsh interface ipv4 set interface "loopbackinterface" weakhostreceive=enabled weakhostsend=enabled
+netsh interface ipv4 add addr "loopbackinterface" floatingip floatingipnetmask
+netsh interface ipv4 set interface "loopbackinterface" weakhostreceive=enabled weakhostsend=enabled
```
-(replace **loopbackinterface** with the name of this loopback interface and **floatingip** and **floatingipnetmask** with the appropriate values, e.g. that correspond to the load balancer frontend IP)
+(replace **loopbackinterface** with the name of this loopback interface and **floatingip** and **floatingipnetmask** with the appropriate values that correspond to the load balancer frontend IP)
-Finally, if firewall is being used on the guest host, ensure a rule set up so the traffic can reach the VM on the appropriate ports.
+Finally, if the guest host uses a firewall, ensure that a rule is set up so that traffic can reach the VM on the appropriate ports.
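For example, with the built-in Windows firewall, you might allow the load-balanced port with a rule like the following. This is only a sketch that assumes TCP port 80, matching the example configuration that follows; adjust the rule name, protocol, and port to match your load balancing rule:

```console
netsh advfirewall firewall add rule name="Allow LB frontend port 80" dir=in action=allow protocol=TCP localport=80
```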
-A full example configuration is below (assuming a load balancer frontend IP configuration of 1.2.3.4 and a load balancing rule for port 80):
+This example configuration assumes a load balancer frontend IP configuration of 1.2.3.4 and a load balancing rule for port 80:
```console
netsh int ipv4 set int "Ethernet" weakhostreceive=enabled
For each loopback interface, repeat these commands, which assign the floating IP
```console
sudo ip addr add floatingip/floatingipnetmask dev lo:0
```
-(replace **floatingip** and **floatingipnetmask** with the appropriate values, e.g. that correspond to the load balancer frontend IP)
+(replace **floatingip** and **floatingipnetmask** with the appropriate values that correspond to the load balancer frontend IP)
-Finally, if firewall is being used on the guest host, ensure a rule set up so the traffic can reach the VM on the appropriate ports.
+Finally, if the guest host uses a firewall, ensure that a rule is set up so that traffic can reach the VM on the appropriate ports.
-A full example configuration is below (assuming a load balancer frontend IP configuration of 1.2.3.4 and a load balancing rule for port 80). This example also assumes the use of [UFW (Uncomplicated Firewall)](https://www.wikipedia.org/wiki/Uncomplicated_Firewall) in Ubuntu.
+This example configuration assumes a load balancer frontend IP configuration of 1.2.3.4 and a load balancing rule for port 80. This example also assumes the use of [UFW (Uncomplicated Firewall)](https://www.wikipedia.org/wiki/Uncomplicated_Firewall) in Ubuntu.
```console
sudo ip addr add 1.2.3.4/24 dev lo:0
sudo ufw allow 80/tcp
## <a name = "limitations"></a>Limitations

-- Floating IP isn't currently supported on secondary IP configurations for Load Balancing scenarios. This doesn't apply to Public load balancers with dual-stack configurations or to architectures that utilize a NAT Gateway for outbound connectivity.
+- You can't use Floating IP on secondary IP configurations for Load Balancing scenarios. This limitation doesn't apply to Public load balancers with dual-stack configurations or to architectures that utilize a NAT Gateway for outbound connectivity.
## Next steps
logic-apps Biztalk Server To Azure Integration Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md
In on-premises architectures, SSIS was a popular option for managing the loading
- In [Azure Logic Apps](./logic-apps-overview.md), the following options are available:
- - For Consumption logic app workflows, you can install the Logic Apps Management Solution (Preview) in the Azure portal and set up Azure Monitor logs to collect diagnostic data. After you set up your logic app to send that data to an Azure Log Analytics workspace, telemetry flows to where the Logic Apps Management Solution can provide health visualizations. For more information, see [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](./monitor-logic-apps-log-analytics.md). With diagnostics enabled, you can also use Azure Monitor to send alerts based on different signal types such as when a trigger or a run fails. For more information, see [Monitor run status, review trigger history, and set up alerts for Azure Logic Apps](./monitor-logic-apps.md?tabs=consumption#set-up-monitoring-alerts).
+ - For Consumption logic app workflows, you can install the Logic Apps Management Solution (Preview) in the Azure portal and set up Azure Monitor logs to collect diagnostic data. After you set up your logic app to send that data to an Azure Log Analytics workspace, telemetry flows to where the Logic Apps Management Solution can provide health visualizations. For more information, see [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](./monitor-workflows-collect-diagnostic-data.md). With diagnostics enabled, you can also use Azure Monitor to send alerts based on different signal types such as when a trigger or a run fails. For more information, see [Monitor run status, review trigger history, and set up alerts for Azure Logic Apps](./monitor-logic-apps.md?tabs=consumption#set-up-monitoring-alerts).
- For Standard logic app workflows, you can enable Application Insights at logic app resource creation to send diagnostic logging and traces from your logic app's workflows. In Application Insights, you can view an [application map](../azure-monitor/app/app-map.md) to better understand the performance and health characteristics of your interfaces. Application Insights also includes [availability capabilities](../azure-monitor/app/availability-overview.md) for you to configure synthetic tests that proactively call endpoints and then evaluate the response for specific HTTP status codes or payload. Based upon your configured criteria, you can send notifications to stakeholders or call a webhook for additional orchestration capabilities.
logic-apps Business Continuity Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/business-continuity-disaster-recovery-guidance.md
Currently, this capability is preview and available for new Consumption logic ap
You can set up logging for your logic app runs and send the resulting diagnostic data to services such as Azure Storage, Azure Event Hubs, and Azure Log Analytics for further handling and processing.
-* If you want to use this data with Azure Log Analytics, you can make the data available for both the primary and secondary locations by setting up your logic app's **Diagnostic settings** and sending the data to multiple Log Analytics workspaces. For more information, see [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../logic-apps/monitor-logic-apps-log-analytics.md).
+* If you want to use this data with Azure Log Analytics, you can make the data available for both the primary and secondary locations by setting up your logic app's **Diagnostic settings** and sending the data to multiple Log Analytics workspaces. For more information, see [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../logic-apps/monitor-workflows-collect-diagnostic-data.md).
* If you want to send the data to Azure Storage or Azure Event Hubs, you can make the data available for both the primary and secondary locations by setting up geo-redundancy. For more information, see these articles:<p>
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
For a stateful workflow, after each workflow run, you can view the run history,
| **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. | | **Cancelled** | The run was triggered and started but received a cancel request. | | **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
- | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
+ | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-workflows-collect-diagnostic-data.md), you can get information about any throttle events that happen. |
| **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. | | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <p><p>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. | | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
To test your logic app, follow these steps to start a debugging session, and fin
| **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. | | **Cancelled** | The run was triggered and started but received a cancellation request. | | **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
- | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
+ | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-workflows-collect-diagnostic-data.md), you can get information about any throttle events that happen. |
| **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. | | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <p><p>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. | | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
logic-apps Healthy Unhealthy Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/healthy-unhealthy-resource.md
When you monitor your Azure Logic Apps resources in [Microsoft Azure Security Ce
## Enable diagnostic logging
-Before you can view the resource health status for your logic apps, you must first [set up diagnostic logging](monitor-logic-apps-log-analytics.md). If you already have a Log Analytics workspace, you can enable logging either when you create your logic app or on existing logic apps.
+Before you can view the resource health status for your logic apps, you must first [set up diagnostic logging](monitor-workflows-collect-diagnostic-data.md). If you already have a Log Analytics workspace, you can enable logging either when you create your logic app or on existing logic apps.
> [!TIP] > The default recommendation is to enable diagnostic logs for Azure Logic Apps. However, you control this setting for your logic apps. When you enable diagnostic logs for your logic apps, you can use the information to help analyze security incidents.
logic-apps Logic Apps Create Logic Apps From Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-logic-apps-from-templates.md
This how-to guide shows how to use these templates as provided or edit them to f
| **Resource Group** | <*your-Azure-resource-group-name*> | Create or select an [Azure resource group](../azure-resource-manager/management/overview.md) for this logic app resource and its associated resources. | | **Logic App name** | <*your-logic-app-name*> | Provide a unique logic app resource name. | | **Region** | <*your-Azure-datacenter-region*> | Select the datacenter region for deploying your logic app, for example, **West US**. |
- | **Enable log analytics** | **No** (default) or **Yes** | To set up [diagnostic logging](../logic-apps/monitor-logic-apps-log-analytics.md) for your logic app resource by using [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), select **Yes**. This selection requires that you already have a Log Analytics workspace. |
+ | **Enable log analytics** | **No** (default) or **Yes** | To set up [diagnostic logging](monitor-workflows-collect-diagnostic-data.md) for your logic app resource by using [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), select **Yes**. This selection requires that you already have a Log Analytics workspace. |
| **Plan type** | **Consumption** or **Standard** | Select **Consumption** to create a Consumption logic app workflow from a template. | | **Zone redundancy** | **Disabled** (default) or **Enabled** | If this option is available, select **Enabled** if you want to protect your logic app resource from a regional failure. But first [check that zone redundancy is available in your Azure region](./set-up-zone-redundancy-availability-zones.md?tabs=consumption#considerations). |
logic-apps Logic Apps Examples And Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-examples-and-scenarios.md
You can fully develop and deploy logic apps with Visual Studio, Azure DevOps, or
### Monitor * [Monitor run status, review trigger history, and set up alerts for Azure Logic Apps](../logic-apps/monitor-logic-apps.md)
-* [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../logic-apps/monitor-logic-apps-log-analytics.md)
+* [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../logic-apps/monitor-workflows-collect-diagnostic-data.md)
* [Set up Azure Monitor logs and collect diagnostics data for B2B messages in Azure Logic Apps](../logic-apps/monitor-b2b-messages-log-analytics.md) * [View and create queries for monitoring and tracking in Azure Monitor logs for Azure Logic Apps](../logic-apps/create-monitoring-tracking-queries.md)
logic-apps Logic Apps Exception Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exception-handling.md
Title: Handle errors and exceptions in workflows
description: How to handle errors and exceptions that happen in automated tasks and workflows created by using Azure Logic Apps. ms.suite: integration--
The previous patterns are useful ways to handle errors and exceptions that happe
For example, [Azure Monitor](../azure-monitor/overview.md) provides a streamlined way to send all workflow events, including all run and action statuses, to a destination. You can [set up alerts for specific metrics and thresholds in Azure Monitor](monitor-logic-apps.md#set-up-monitoring-alerts). You can also send workflow events to a [Log Analytics workspace](../azure-monitor/logs/data-platform-logs.md) or [Azure storage account](../storage/blobs/storage-blobs-overview.md). Or, you can stream all events through [Azure Event Hubs](../event-hubs/event-hubs-about.md) into [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/). In Stream Analytics, you can write live queries based on any anomalies, averages, or failures from the diagnostic logs. You can use Stream Analytics to send information to other data sources, such as queues, topics, SQL, Azure Cosmos DB, or Power BI.
-For more information, review [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](monitor-logic-apps-log-analytics.md).
+For more information, review [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](monitor-workflows-collect-diagnostic-data.md).
## Next steps
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
Many triggers and actions have settings to secure inputs, outputs, or both from
Before using these settings to help you secure this data, review these considerations:
-* When you obscure the inputs or outputs on a trigger or action, Azure Logic Apps doesn't send the secured data to Azure Log Analytics. Also, you can't add [tracked properties](../logic-apps/monitor-logic-apps-log-analytics.md#extend-data) to that trigger or action for monitoring.
+* When you obscure the inputs or outputs on a trigger or action, Azure Logic Apps doesn't send the secured data to Azure Log Analytics. Also, you can't add [tracked properties](monitor-workflows-collect-diagnostic-data.md#other-destinations) to that trigger or action for monitoring.
* The [Azure Logic Apps API for handling workflow history](/rest/api/logic/) doesn't return secured outputs.
For more information about isolation, review the following documentation:
* [Azure security baseline for Azure Logic Apps](../logic-apps/security-baseline.md) * [Automate deployment for Azure Logic Apps](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md)
-* [Monitor logic apps](../logic-apps/monitor-logic-apps-log-analytics.md)
+* [Monitor logic apps](monitor-workflows-collect-diagnostic-data.md)
logic-apps Monitor B2b Messages Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-b2b-messages-log-analytics.md
After you set up B2B communication between trading partners in your integration
* Correlations between messages and acknowledgments * Detailed error descriptions for failures
-Azure Monitor lets you create [log queries](../azure-monitor/logs/log-query-overview.md) to help you find and review this information. You can also [use this diagnostics data with other Azure services](../logic-apps/monitor-logic-apps-log-analytics.md#extend-data), such as Azure Storage and Azure Event Hubs.
+Azure Monitor lets you create [log queries](../azure-monitor/logs/log-query-overview.md) to help you find and review this information. You can also [use this diagnostics data with other Azure services](monitor-workflows-collect-diagnostic-data.md#other-destinations), such as Azure Storage and Azure Event Hubs.
To set up logging for your integration account, [install the Logic Apps B2B solution](#install-b2b-solution) in the Azure portal. This solution provides aggregated information for B2B message events. Then, to enable logging and creating queries for this information, set up [Azure Monitor logs](#set-up-resource-logs).
logic-apps Monitor Logic Apps Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps-log-analytics.md
- Title: Monitor logic apps with Azure Monitor logs
-description: Troubleshoot logic apps using Azure Monitor logs and collecting diagnostics data for Azure Logic Apps.
--- Previously updated : 03/14/2022--
-# Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps
--
-> [!NOTE]
-> This article applies only to Consumption logic apps. For information about monitoring Standard logic apps, review
-> [Enable or open Application Insights after deployment for Standard logic apps](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
-
-To get richer debugging information about your logic apps during runtime, you can set up and use [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md) to record and store information about runtime data and events, such as trigger events, run events, and action events in a [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). [Azure Monitor](../azure-monitor/overview.md) helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. By using Azure Monitor logs, you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you collect and review this information. You can also [use this diagnostics data with other Azure services](#extend-data), such as Azure Storage and Azure Event Hubs.
-
-To set up logging for your logic app, you can [enable Log Analytics when you create your logic app](#logging-for-new-logic-apps), or you can [install the Logic Apps Management solution](#install-management-solution) in your Log Analytics workspace for existing logic apps. This solution provides aggregated information for your logic app runs and includes specific details such as status, execution time, resubmission status, and correlation IDs. Then, to enable logging and creating queries for this information, [set up Azure Monitor logs](#set-up-resource-logs).
-
-This article shows how to enable Log Analytics on new logic apps and existing logic apps, how to install and set up the Logic Apps Management solution, and how to set up and create queries for Azure Monitor logs.
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* Azure subscription Owner or Contributor permissions so you can install the Logic Apps Management solution from the Azure Marketplace. For more information, review [Permission to purchase - Azure Marketplace purchasing](/marketplace/azure-purchasing-invoicing#permission-to-purchase) and [Azure roles - Classic subscription administrator roles, Azure roles, and Azure AD roles](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).
-
-* A [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). If you don't have a workspace, learn [how to create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
-
-<a name="logging-for-new-logic-apps"></a>
-
-## Enable Log Analytics for new logic apps
-
-You can turn on Log Analytics when you create your logic app.
-
-1. In the [Azure portal](https://portal.azure.com), on the **Create Logic App** pane where you provide the information to create your Consumption plan-based logic app, follow these steps:
-
- 1. Under **Enable log analytics**, select **Yes**.
-
- 1. From the **Log Analytics workspace** list, select the workspace where you want to send the data from your logic app runs.
-
- ![Provide logic app information](./media/monitor-logic-apps-log-analytics/create-logic-app-details.png)
-
-1. Finish creating your logic app. When you're done, your logic app is associated with your Log Analytics workspace. This step also automatically installs the Logic Apps Management solution in your workspace.
-
-1. After you run your logic app, to view your logic app runs, [continue with these steps](#view-logic-app-runs).
-
-<a name="install-management-solution"></a>
-
-## Install Logic Apps Management solution
-
-If you turned on Log Analytics when you created your logic app, skip this step. You already have the Logic Apps Management solution installed in your Log Analytics workspace.
-
-1. In the [Azure portal](https://portal.azure.com)'s search box, enter **log analytics workspaces**. Select **Log Analytics workspaces**.
-
- ![Select "Log Analytics workspaces"](./media/monitor-logic-apps-log-analytics/find-select-log-analytics-workspaces.png)
-
-1. Under **Log Analytics workspaces**, select your workspace.
-
- ![Select your Log Analytics workspace](./media/monitor-logic-apps-log-analytics/select-log-analytics-workspace.png)
-
-1. On the **Overview** pane, under **Get started with Log Analytics** > **Configure monitoring solutions**, select **View solutions**.
-
- ![On overview pane, select "View solutions"](./media/monitor-logic-apps-log-analytics/log-analytics-workspace.png)
-
-1. Under **Overview**, select **Add**.
-
- ![On overview pane, add new solution](./media/monitor-logic-apps-log-analytics/add-logic-apps-management-solution.png)
-
-1. After the **Marketplace** opens, in the search box, enter **logic apps management**. Select **Logic Apps Management**.
-
- ![From Marketplace, select "Logic Apps Management"](./media/monitor-logic-apps-log-analytics/select-logic-apps-management.png)
-
-1. On the **Logic Apps Management** tile, from the **Create** list, select **Logic Apps Management**.
-
- ![Select "Create" to add "Logic Apps Management" solution](./media/monitor-logic-apps-log-analytics/create-logic-apps-management-solution.png)
-
-1. On the **Create Logic Apps Management (Preview) Solution** pane, select the Log Analytics workspace where you want to install the solution. Select **Review + create**, review your information, and select **Create**.
-
- ![Select "Create" for "Logic Apps Management"](./media/monitor-logic-apps-log-analytics/confirm-log-analytics-workspace.png)
-
- After Azure deploys the solution to the Azure resource group that contains your Log Analytics workspace, the solution appears on your workspace summary pane under **Overview**.
-
- ![Screenshot showing workspace summary pane with Logic Apps Management solution.](./media/monitor-logic-apps-log-analytics/workspace-summary-pane-logic-apps-management.png)
-
-<a name="set-up-resource-logs"></a>
-
-## Set up Azure Monitor logs
-
-When you store information about runtime events and data in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and review this information.
-
-> [!NOTE]
-> After you enable diagnostics settings, diagnostics data might not flow for up to 30 minutes to the logs at the specified destination,
-> such as Log Analytics, event hub, or storage account. This delay means that diagnostics data from this time period might not exist for you
-> to review. Completed events and [tracked properties](#extend-data) might not appear in your Log Analytics workspace for 10-15 minutes.
-
-1. In the [Azure portal](https://portal.azure.com), find and select your logic app.
-
-1. On your logic app menu, under **Monitoring**, select **Diagnostic settings** > **Add diagnostic setting**.
-
- ![Under "Monitoring", select "Diagnostic settings" > "Add diagnostic setting"](./media/monitor-logic-apps-log-analytics/logic-app-diagnostics.png)
-
-1. To create the setting, follow these steps:
-
- 1. For **Diagnostic setting name**, provide a name for the setting.
-
- 1. Under **Destination details**, select **Send to Log Analytics workspace**.
-
- 1. For **Subscription**, select the Azure subscription that's associated with your Log Analytics workspace.
-
- 1. For **Log Analytics workspace**, select your workspace.
-
- 1. Under **Logs** > **Categories**, select **WorkflowRuntime**, which specifies the event category that you want to record.
-
- 1. Under **Metrics**, select **AllMetrics**.
-
- 1. When you're done, select **Save**.
-
- When you're done, your version looks similar to the following example:
-
- ![Select Log Analytics workspace and data for logging](./media/monitor-logic-apps-log-analytics/send-diagnostics-data-log-analytics-workspace.png)
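If you prefer scripting to the portal, you can create an equivalent diagnostic setting with the Azure CLI. The following command is a sketch that assumes placeholder resource IDs and the same **WorkflowRuntime** log category and **AllMetrics** metric category described in the preceding steps:

```console
az monitor diagnostic-settings create \
  --name "logic-app-diagnostics" \
  --resource "<logic-app-resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category":"WorkflowRuntime","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```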
-
-<a name="view-logic-app-runs"></a>
-
-## View logic app runs status
-
-After your logic app runs, you can view the data about those runs in your Log Analytics workspace.
-
-1. In the [Azure portal](https://portal.azure.com), find and open your Log Analytics workspace.
-
-1. On your workspace menu, under **General**, select **Workspace summary** > **Logic Apps Management**.
-
- > [!NOTE]
- > If the Logic Apps Management tile doesn't immediately show results after a run,
- > try selecting **Refresh** or wait for a short time before trying again.
-
- ![Logic app run status and count](./media/monitor-logic-apps-log-analytics/logic-app-runs-summary.png)
-
- Here, your logic app runs are grouped by name or by execution status. This page also shows details about failures in actions or triggers for the logic app runs.
-
- ![Status summary for your logic app runs](./media/monitor-logic-apps-log-analytics/logic-app-runs-summary-details.png)
-
-1. To view all the runs for a specific logic app or status, select the row for that logic app or status.
-
- Here is an example that shows all the runs for a specific logic app:
-
- ![View logic app runs and status](./media/monitor-logic-apps-log-analytics/logic-app-run-details.png)
-
- For actions where you [set up tracked properties](#extend-data), you can also view those properties by selecting **View** in the **Tracked Properties** column. To search the tracked properties, use the column filter.
-
- ![View tracked properties for a logic app](./media/monitor-logic-apps-log-analytics/logic-app-tracked-properties.png)
-
-1. To filter your results, you can perform both client-side and server-side filtering.
-
- * **Client-side filter**: For each column, select the filters that you want, for example:
-
- ![Example column filters](./media/monitor-logic-apps-log-analytics/filters.png)
-
- * **Server-side filter**: To select a specific time window or to limit the number of runs that appear, use the scope control at the top of the page. By default, only 1,000 records appear at a time.
-
- ![Change the time window](./media/monitor-logic-apps-log-analytics/change-interval.png)
-
-1. To view all the actions and their details for a specific run, select the row for a logic app run.
-
- Here is an example that shows all the actions and triggers for a specific logic app run:
-
- ![View actions for a logic app run](./media/monitor-logic-apps-log-analytics/logic-app-action-details.png)
-
-<!-
- * **Resubmit**: You can resubmit one or more logic apps runs that failed, succeeded, or are still running. Select the check boxes for the runs that you want to resubmit, and then select **Resubmit**.
-
- ![Resubmit logic app runs](./media/monitor-logic-apps-log-analytics/logic-app-resubmit.png)
->
-
-<a name="extend-data"></a>
-
-## Send diagnostic data to Azure Storage and Azure Event Hubs
-
-Along with Azure Monitor logs, you can extend how you use your logic app's diagnostic data with other Azure services, for example:
-
-* [Archive Azure resource logs to storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage)
-* [Stream Azure platform logs to Azure Event Hubs](../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs)
-
-You can then get real-time monitoring by using telemetry and analytics from other services, like [Azure Stream Analytics](../stream-analytics/stream-analytics-introduction.md) and [Power BI](../azure-monitor/logs/log-powerbi.md). For example:
-
-* [Stream data from Event Hubs to Stream Analytics](../stream-analytics/stream-analytics-define-inputs.md)
-* [Analyze streaming data with Stream Analytics and create a real-time analytics dashboard in Power BI](../stream-analytics/stream-analytics-power-bi-dashboard.md)
-
-Based on the locations where you want to send diagnostic data, make sure that you first [create an Azure storage account](../storage/common/storage-account-create.md) or [create an Azure event hub](../event-hubs/event-hubs-create.md).
-You can then select the destinations where you want to send that data. Retention periods apply only when you use a storage account.
-
-![Send data to Azure storage account or event hub](./media/monitor-logic-apps-log-analytics/diagnostics-storage-event-hub-log-analytics.png)
-
-<a name="diagnostic-event-properties"></a>
-
-## Azure Monitor diagnostics events
-
-Each diagnostic event has details about your logic app and that event, for example, the status, start time, end time, and so on. To programmatically set up monitoring, tracking, and logging, you can use this information with the [REST API for Azure Logic Apps](/rest/api/logic) and the [REST API for Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftlogicworkflows). You can also use the `clientTrackingId` and `trackedProperties` properties, which appear in diagnostics events:
-
-* `clientTrackingId`: If not provided, Azure automatically generates this ID and correlates events across a logic app run, including any nested workflows that are called from the logic app. You can manually specify this ID in a trigger by passing a `x-ms-client-tracking-id` header with your custom ID value in the trigger request. You can use a request trigger, HTTP trigger, or webhook trigger.
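  For example, for a Request trigger you might pass the header when calling the trigger's callback URL. In the following sketch, the callback URL, tracking ID value, and request body are placeholders; only the `x-ms-client-tracking-id` header name comes from the documentation:

  ```console
  curl -X POST "<request-trigger-callback-url>" \
    -H "Content-Type: application/json" \
    -H "x-ms-client-tracking-id: order-12345" \
    -d '{"orderId": 12345}'
  ```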
-
-* `trackedProperties`: To track inputs or outputs in diagnostics data, you can add a `trackedProperties` section to an action either by using the Logic App Designer or directly in your logic app's JSON definition. Tracked properties can track only a single action's inputs and outputs, but you can use the `correlation` properties of events to correlate across actions in a run. To track one or more properties, add the `trackedProperties` section and the properties that you want to the action definition.
-
- Here's an example that shows how the **Initialize variable** action definition includes tracked properties from the action's input where the input is an array, not a record.
-
- ``` json
- {
- "Initialize_variable": {
- "type": "InitializeVariable",
- "inputs": {
- "variables": [
- {
- "name": "ConnectorName",
- "type": "String",
- "value": "SFTP-SSH"
- }
- ]
- },
- "runAfter": {},
- "trackedProperties": {
- "myTrackedPropertyName": "@action().inputs.variables[0].value"
- }
- }
- }
- ```
-
- This example shows multiple tracked properties:
-
- ``` json
- "HTTP": {
- "type": "Http",
- "inputs": {
- "body": "@triggerBody()",
- "headers": {
- "Content-Type": "application/json"
- },
- "method": "POST",
-      "uri": "http://store.fabrikam.com"
- },
- "runAfter": {},
- "trackedProperties": {
- "myActionHTTPStatusCode": "@action()['outputs']['statusCode']",
- "myActionHTTPValue": "@action()['outputs']['body']['<content>']",
- "transactionId": "@action()['inputs']['body']['<content>']"
- }
- }
- ```
-
-This example shows how the `ActionCompleted` event includes the `clientTrackingId` and `trackedProperties` attributes:
-
-```json
-{
- "time": "2016-07-09T17:09:54.4773148Z",
- "workflowId": "/subscriptions/XXXXXXXXXXXXXXX/resourceGroups/MyResourceGroup/providers/Microsoft.Logic/workflows/MyLogicApp",
- "resourceId": "/subscriptions/<subscription-ID>/resourceGroups/MyResourceGroup/providers/Microsoft.Logic/workflows/MyLogicApp/runs/<run-ID>/actions/Http",
- "category": "WorkflowRuntime",
- "level": "Information",
- "operationName": "Microsoft.Logic/workflows/workflowActionCompleted",
- "properties": {
- "$schema": "2016-06-01",
- "startTime": "2016-07-09T17:09:53.4336305Z",
- "endTime": "2016-07-09T17:09:53.5430281Z",
- "status": "Succeeded",
- "code": "OK",
- "resource": {
- "subscriptionId": "<subscription-ID>",
- "resourceGroupName": "MyResourceGroup",
- "workflowId": "<logic-app-workflow-ID>",
- "workflowName": "MyLogicApp",
- "runId": "08587361146922712057",
- "location": "westus",
- "actionName": "Http"
- },
- "correlation": {
- "actionTrackingId": "e1931543-906d-4d1d-baed-dee72ddf1047",
- "clientTrackingId": "<my-custom-tracking-ID>"
- },
- "trackedProperties": {
- "myTrackedPropertyName": "<value>"
- }
- }
-}
-```
-
-## Next steps
-
-* [Create monitoring and tracking queries](../logic-apps/create-monitoring-tracking-queries.md)
-* [Monitor B2B messages with Azure Monitor logs](../logic-apps/monitor-b2b-messages-log-analytics.md)
logic-apps Monitor Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps.md
Last updated 08/01/2022
After you create and run a [Consumption logic app workflow](quickstart-create-first-logic-app-workflow.md), you can check that workflow's run status, [trigger history](#review-trigger-history), [runs history](#review-runs-history), and performance. To get notifications about failures or other possible problems, set up [alerts](#add-azure-alerts). For example, you can create an alert that detects "when more than five runs fail in an hour."
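As a sketch of how you might script such an alert, the following Azure CLI command creates a metric alert that fires when more than five runs fail within an hour. The resource IDs are placeholders, and **RunsFailed** is the assumed metric name for failed workflow runs:

```console
az monitor metrics alert create \
  --name "logic-app-failed-runs" \
  --resource-group "<resource-group>" \
  --scopes "<logic-app-resource-id>" \
  --condition "total RunsFailed > 5" \
  --window-size 1h \
  --evaluation-frequency 15m \
  --action "<action-group-resource-id>"
```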
-For real-time event monitoring and richer debugging, set up diagnostics logging for your logic app by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](monitor-logic-apps-log-analytics.md).
+For real-time event monitoring and richer debugging, set up diagnostics logging for your logic app by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](monitor-workflows-collect-diagnostic-data.md).
> [!NOTE] > If your logic apps run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md)
Each time the trigger successfully fires, Azure Logic Apps creates a workflow in
| **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. | | **Cancelled** | The run was triggered and started, but received a cancellation request. | | **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
- | **Running** | The run was triggered and is in progress. However, this status can also appear for a run that's throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
+ | **Running** | The run was triggered and is in progress. However, this status can also appear for a run that's throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-workflows-collect-diagnostic-data.md), you can get information about any throttle events that happen. |
| **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. | | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. | | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
Each time the trigger successfully fires, Azure Logic Apps creates a workflow in
| **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. | | **Cancelled** | The run was triggered and started, but received a cancellation request. | | **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
- | **Running** | The run was triggered and is in progress. However, this status can also appear for a run that's throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
+ | **Running** | The run was triggered and is in progress. However, this status can also appear for a run that's throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-workflows-collect-diagnostic-data.md), you can get information about any throttle events that happen. |
| **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. | | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. | | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
To get alerts based on specific metrics or exceeded thresholds for your logic ap
## Next steps
-* [Monitor logic apps with Azure Monitor](monitor-logic-apps-log-analytics.md)
+* [Monitor logic apps with Azure Monitor](monitor-workflows-collect-diagnostic-data.md)
logic-apps Monitor Workflows Collect Diagnostic Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-workflows-collect-diagnostic-data.md
+
+ Title: Collect diagnostic data for workflows
+description: Record diagnostic data for workflows in Azure Logic Apps with Azure Monitor Logs.
+
+ms.suite: integration
++++ Last updated : 02/16/2023
+# As a developer, I want to collect and send diagnostics data for my logic app workflows to specific destinations, such as a Log Analytics workspace, storage account, or event hub, for further review.
++
+# Monitor and collect diagnostic data for workflows in Azure Logic Apps
++
+To get richer data for debugging and diagnosing your workflows in Azure Logic Apps, you can log workflow runtime data and events, such as trigger events, run events, and action events. When you set up and use [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md), you can send this data to a [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace), an Azure [storage account](../storage/common/storage-account-overview.md), an Azure [event hub](../event-hubs/event-hubs-features.md#namespace), another partner destination, or all of these destinations.
+
+This how-to guide shows how to complete the following tasks, based on whether you have a Consumption or Standard logic app resource.
+
+### [Consumption](#tab/consumption)
+
+1. At Consumption logic app creation, [enable Log Analytics and specify your Log Analytics workspace](#enable-for-new-logic-apps).
+
+ -or-
+
+ For an existing Consumption logic app, [install the Logic Apps Management solution in your Log Analytics workspace](#install-management-solution). This solution provides aggregated information for your logic app runs and includes specific details such as status, execution time, resubmission status, and correlation IDs.
+
+1. [Add a diagnostic setting to enable data collection](#add-diagnostic-setting).
+
+1. [View workflow run status](#view-workflow-run-status).
+
+1. [Send diagnostic data to Azure Storage and Azure Event Hubs](#other-destinations).
+
+1. [Include custom properties in telemetry](#custom-tracking-properties).
+
+### [Standard (preview)](#tab/standard)
+
+1. [Add a diagnostic setting to enable data collection](#add-diagnostic-setting).
+
+1. [View workflow run status](#view-workflow-run-status).
+
+1. [Send diagnostic data to Azure Storage and Azure Event Hubs](#other-destinations).
+
+1. [Include custom properties in telemetry](#custom-tracking-properties).
+++
+## Prerequisites
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+ For a Consumption logic app resource, you need Azure subscription Owner or Contributor permissions so you can install the Logic Apps Management solution from the Azure Marketplace. For more information, see the following documentation:
+
+ * [Permission to purchase - Azure Marketplace purchasing](/marketplace/azure-purchasing-invoicing#permission-to-purchase)
+
+ * [Azure roles - Classic subscription administrator roles, Azure roles, and Azure AD roles](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles)
+
+* The destination resource for where you want to send diagnostic data:
+
+ * A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md)
+
+ * An [Azure storage account](../storage/common/storage-account-create.md)
+
+ * An [Azure event hub](../event-hubs/event-hubs-create.md)
+
+* Your logic app resource and workflow
+
+## Enable Log Analytics
+
+### [Consumption](#tab/consumption)
+
+For a Consumption logic app, you need to first enable Log Analytics.
+
+<a name="enable-for-new-logic-apps"></a>
+
+#### Enable Log Analytics at logic app creation
+
+1. In the [Azure portal](https://portal.azure.com), on the **Create Logic App** pane, follow these steps:
+
+ 1. Under **Plan**, make sure to select **Consumption** so that only the options for Consumption workflows appear.
+
+ 1. For **Enable log analytics**, select **Yes**.
+
+ 1. From the **Log Analytics workspace** list, select the workspace where you want to send the data from your workflow run.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/create-logic-app-details.png" alt-text="Screenshot showing the Azure portal and Consumption logic app creation page.":::
+
+ 1. Finish creating your logic app resource.
+
+ When you're done, your logic app is associated with your Log Analytics workspace. This step also automatically installs the Logic Apps Management solution in your workspace.
+
+1. After you run your workflow, [view your workflow run status](#view-workflow-run-status).
+
+<a name="install-management-solution"></a>
+
+#### Install Logic Apps Management solution
+
+If you turned on Log Analytics when you created your logic app resource, skip this section. You already have the Logic Apps Management solution installed in your Log Analytics workspace. Otherwise, continue with the following steps for an existing Consumption logic app:
+
+1. In the [Azure portal](https://portal.azure.com) search box, enter **log analytics workspaces**, and select **Log Analytics workspaces** from the results.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/find-select-log-analytics-workspaces.png" alt-text="Screenshot showing the Azure portal search box with log analytics workspaces selected.":::
+
+1. Under **Log Analytics workspaces**, select your workspace.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/select-log-analytics-workspace.png" alt-text="Screenshot showing the Azure portal, the Log Analytics workspaces list, and a specific workspace selected.":::
+
+1. On the **Overview** pane, under **Get started with Log Analytics** > **Configure monitoring solutions**, select **View solutions**.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/log-analytics-workspace.png" alt-text="Screenshot showing the Azure portal, the workspace's overview page, and View solutions selected.":::
+
+1. Under **Overview**, select **Add**, which adds a new solution to your workspace.
+
+1. After the **Marketplace** page opens, in the search box, enter **logic apps management**, and select **Logic Apps Management**.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/select-logic-apps-management.png" alt-text="Screenshot showing the Azure portal, the Marketplace page search box with 'logic apps management' entered and 'Logic Apps Management' selected.":::
+
+1. On the **Logic Apps Management** tile, from the **Create** list, select **Logic Apps Management**.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/create-logic-apps-management-solution.png" alt-text="Screenshot showing the Azure portal, the Marketplace page, the 'Logic Apps Management' tile, with the Create list open, and Logic Apps Management (Preview) selected.":::
+
+1. On the **Create Logic Apps Management (Preview) Solution** pane, select the Log Analytics workspace where you want to install the solution. Select **Review + create**, review your information, and select **Create**.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/confirm-log-analytics-workspace.png" alt-text="Screenshot showing the Azure portal, the Create Logic Apps Management (Preview) Solution page, and workspace information.":::
+
+ After Azure deploys the solution to the Azure resource group that contains your Log Analytics workspace, the solution appears on your workspace summary pane under **Overview**.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/workspace-summary-pane-logic-apps-management.png" alt-text="Screenshot showing the Azure portal, the workspace summary pane with Logic Apps Management solution.":::
+
+### [Standard (preview)](#tab/standard)
+
+For a Standard logic app, you can continue with [Add a diagnostic setting](#add-diagnostic-setting). No other prerequisite steps are necessary to enable Log Analytics, nor does the Logic Apps Management solution apply to Standard logic apps.
+++
+<a name="add-diagnostic-setting"></a>
+
+## Add a diagnostic setting
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app resource.
+
+1. On the logic app resource menu, under **Monitoring**, select **Diagnostic settings**. On the **Diagnostic settings** page, select **Add diagnostic setting**.
+
+ :::image type="content" source="media/monitor-workflows-collect-diagnostic-data/consumption/add-diagnostic-setting.png" alt-text="Screenshot showing Azure portal, Consumption logic app resource menu with 'Diagnostic settings' selected and then 'Add diagnostic setting' selected.":::
+
+1. For **Diagnostic setting name**, provide the name that you want for the setting.
+
+1. Under **Logs** > **Categories**, select **Workflow runtime diagnostic events**. Under **Metrics**, select **AllMetrics**.
+
+1. Under **Destination details**, select one or more destinations, based on where you want to send the logs.
+
+ | Destination | Directions |
+ |-||
+ | **Send to Log Analytics workspace** | Select the Azure subscription for your Log Analytics workspace and the workspace. |
+ | **Archive to a storage account** | Select the Azure subscription for your Azure storage account and the storage account. For more information, see [Send diagnostic data to Azure Storage and Azure Event Hubs](#other-destinations). |
+ | **Stream to an event hub** | Select the Azure subscription for your event hub namespace, event hub, and event hub policy name. For more information, see [Send diagnostic data to Azure Storage and Azure Event Hubs](#other-destinations) and [Azure Monitor partner integrations](../azure-monitor/partners.md). |
+ | **Send to partner solution** | Select your Azure subscription and the destination. For more information, see [Azure Native ISV Services overview](../partner-solutions/overview.md). |
+
+ The following example selects a Log Analytics workspace as the destination:
+
+ :::image type="content" source="media/monitor-workflows-collect-diagnostic-data/consumption/send-diagnostics-data-log-analytics-workspace.png" alt-text="Screenshot showing Azure portal, Log Analytics workspace, and data to collect.":::
+
+1. To finish adding your diagnostic setting, select **Save**.
+
+### [Standard (preview)](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On the logic app resource menu, under **Monitoring**, select **Diagnostic settings**. On the **Diagnostic settings** page, select **Add diagnostic setting**.
+
+ :::image type="content" source="media/monitor-workflows-collect-diagnostic-data/standard/add-diagnostic-setting.png" alt-text="Screenshot showing Azure portal, Standard logic app resource menu with 'Diagnostic settings' selected and then 'Add diagnostic setting' selected.":::
+
+1. For **Diagnostic setting name**, provide the name that you want for the setting.
+
+1. Under **Logs** > **Categories**, select **Workflow Runtime Logs**. Under **Metrics**, select **AllMetrics**.
+
+1. Under **Destination details**, select one or more destinations, based on where you want to send the logs.
+
+ | Destination | Directions |
+ |-||
+ | **Send to Log Analytics workspace** | Select the Azure subscription for your Log Analytics workspace and the workspace. |
+ | **Archive to a storage account** | Select the Azure subscription for your Azure storage account and the storage account. For more information, see [Send diagnostic data to Azure Storage and Azure Event Hubs](#other-destinations). |
+ | **Stream to an event hub** | Select the Azure subscription for your event hub namespace, event hub, and event hub policy name. For more information, see [Send diagnostic data to Azure Storage and Azure Event Hubs](#other-destinations) and [Azure Monitor partner integrations](../azure-monitor/partners.md). |
+ | **Send to partner solution** | Select your Azure subscription and the destination. For more information, see [Azure Native ISV Services overview](../partner-solutions/overview.md). |
+
+ The following example selects a Log Analytics workspace as the destination:
+
+ :::image type="content" source="media/monitor-workflows-collect-diagnostic-data/standard/send-to-log-analytics-workspace.png" alt-text="Screenshot showing Azure portal, Standard logic app resource menu with log analytics options selected.":::
+
+1. Optionally, to include telemetry for events such as **Host.Startup**, **Host.Bindings**, and **Host.LanguageWorkerConfig**, select **Function Application Logs**. For more information, see [Monitor Azure Functions with Azure Monitor Logs](../azure-functions/functions-monitor-log-analytics.md).
+
+1. To finish adding your diagnostic setting, select **Save**.
+
+Azure Logic Apps now sends telemetry about your workflow runs to your Log Analytics workspace.
+
+> [!NOTE]
+>
+> After you enable diagnostic settings, diagnostic data might take up to 30 minutes to flow to the logs
+> at the specified destination, such as Log Analytics, a storage account, or an event hub. This delay means that
+> diagnostic data from this time period might not be available for you to review. Completed events and
+> [tracked properties](#custom-tracking-properties) might not appear in your Log Analytics workspace for 10-15 minutes.
+++
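+
+If you prefer to script this configuration, you can usually create the same diagnostic setting with the Azure CLI. The following sketch assumes placeholder resource names and a Consumption logic app; log category names vary by resource type, so confirm them first with `az monitor diagnostic-settings categories list`.
+
+```azurecli
+# List the log and metric categories that your logic app resource supports.
+az monitor diagnostic-settings categories list \
+    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<logic-app-name>"
+
+# Create a diagnostic setting that sends workflow runtime logs and all metrics to a Log Analytics workspace.
+az monitor diagnostic-settings create \
+    --name "send-to-log-analytics" \
+    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<logic-app-name>" \
+    --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
+    --logs '[{"category": "WorkflowRuntime", "enabled": true}]' \
+    --metrics '[{"category": "AllMetrics", "enabled": true}]'
+```
+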
+<a name="view-workflow-run-status"></a>
+
+## View workflow run status
+
+### [Consumption](#tab/consumption)
+
+After your workflow runs, you can view the data about those runs in your Log Analytics workspace.
+
+1. In the [Azure portal](https://portal.azure.com), open your Log Analytics workspace.
+
+1. On your workspace menu, under **Classic**, select **Workspace summary**. On the **Overview** page, select **Logic Apps Management**.
+
+ > [!NOTE]
+ >
+ > If the Logic Apps Management tile doesn't immediately show results after a run,
+ > try selecting **Refresh** or wait for a short time before trying again.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/logic-app-runs-summary.png" alt-text="Screenshot showing Azure portal, Log Analytics workspace with Consumption logic app workflow run status and count.":::
+
+ The summary page shows workflows grouped by name or by execution status. The page also shows details about failures in the actions or triggers for the workflow runs.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/logic-app-runs-summary-details.png" alt-text="Screenshot showing status summary for Consumption logic app workflow runs.":::
+
+1. To view all the runs for a specific workflow or status, select the row for that workflow or status.
+
+ This example shows all the runs for a specific workflow:
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/logic-app-run-details.png" alt-text="Screenshot showing runs and status for a specific Consumption logic app workflow.":::
+
+ For actions where you added [tracked properties](#custom-tracking-properties), you can search for the tracked properties using the column filter. To view the properties, in the **Tracked Properties** column, select **View**.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/logic-app-tracked-properties.png" alt-text="Screenshot showing tracked properties for a specific Consumption logic app workflow.":::
+
+1. To filter your results, you can perform both client-side and server-side filtering.
+
+ * **Client-side filter**: For each column, select the filters that you want, for example:
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/filters.png" alt-text="Screenshot showing example client-side filter using column filters.":::
+
+ * **Server-side filter**: To select a specific time window or to limit the number of runs that appear, use the scope control at the top of the page. By default, only 1,000 records appear at a time.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/change-interval.png" alt-text="Screenshot showing example server-side filter that changes the time window.":::
+
+1. To view all the actions and their details for a specific run, select the row for a logic app workflow run.
+
+ The following example shows all the actions and triggers for a specific logic app workflow run:
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/logic-app-action-details.png" alt-text="Screenshot showing all operations and details for a specific logic app workflow run.":::
+
+### [Standard (preview)](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your Log Analytics workspace.
+
+1. On the workspace navigation menu, select **Logs**.
+
+1. On the new query tab, in the left column, under **Tables**, expand **LogManagement**, and select **LogicAppWorkflowRuntime**.
+
+ In the right pane, under **Results**, the table shows records related to the following events:
+
+ * WorkflowRunStarted
+ * WorkflowRunCompleted
+ * WorkflowTriggerStarted
+ * WorkflowTriggerEnded
+ * WorkflowActionStarted
+ * WorkflowActionCompleted
+ * WorkflowBatchMessageSend
+ * WorkflowBatchMessageRelease
+
+ For completed events, the **EndTime** column publishes the timestamp for when those finished. This value helps you determine the duration between the start event and the completed event.
+
+ :::image type="content" source="media/monitor-workflows-collect-diagnostic-data/standard/log-analytics-workspace-results.png" alt-text="Screenshot showing Azure portal, Log Analytics workspace, and captured telemetry for Standard logic app workflow run.":::
+
+## Sample queries
+
+In your Log Analytics workspace's query pane, you can enter your own queries to find specific data, for example:
+
+* Select all events for a specific workflow run ID:
+
+ ```
+ LogicAppWorkflowRuntime
+ | where RunId == "08585258189921908774209033046CU00"
+ ```
+
+* List all exceptions:
+
+ ```
+ LogicAppWorkflowRuntime
+ | where Error != ""
+ | sort by StartTime desc
+ ```
+
+* Identify actions that have experienced retries:
+
+ ```
+ LogicAppWorkflowRuntime
+ | where RetryHistory != ""
+ | sort by StartTime desc
+ ```
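+
+You can also run these queries from outside the portal. As a sketch, the Azure CLI can run a Kusto query against the workspace. This might require the `log-analytics` CLI extension, and `--workspace` expects the workspace ID (the GUID shown on the workspace **Overview** page):
+
+```azurecli
+# Run a sample query against the Log Analytics workspace for the last day.
+az monitor log-analytics query \
+    --workspace "<workspace-guid>" \
+    --analytics-query "LogicAppWorkflowRuntime | where Error != '' | sort by StartTime desc" \
+    --timespan "P1D"
+```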
+++
+<a name="other-destinations"></a>
+
+## Send diagnostic data to Azure Storage and Azure Event Hubs
+
+Along with Azure Monitor Logs, you can send the collected data to other destinations, for example:
+
+* [Archive Azure resource logs to storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage)
+* [Stream Azure platform logs to Azure Event Hubs](../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs)
+
+You can then get real-time monitoring by using telemetry and analytics from other services, such as [Azure Stream Analytics](../stream-analytics/stream-analytics-introduction.md) and [Power BI](../azure-monitor/logs/log-powerbi.md), for example:
+
+* [Stream data from Event Hubs to Stream Analytics](../stream-analytics/stream-analytics-define-inputs.md)
+* [Analyze streaming data with Stream Analytics and create a real-time analytics dashboard in Power BI](../stream-analytics/stream-analytics-power-bi-dashboard.md)
+
+> [!NOTE]
+>
+> Retention periods apply only when you use a storage account.
++
+<a name="custom-tracking-properties"></a>
+
+## Include custom properties in telemetry
+
+In your workflow, triggers and actions let you add the following custom properties so that their values appear along with the telemetry emitted to your Log Analytics workspace.
+
+### Custom tracking ID
+
+Most triggers have a **Custom Tracking Id** property where you can specify a tracking ID using an expression. You can use this expression to get data from the received message payload or to generate unique values, for example:
+
+If you don't specify this custom tracking ID, Azure automatically generates this ID and correlates events across a workflow run, including any nested workflows that are called from the parent workflow. You can manually specify this ID in a trigger by passing an `x-ms-client-tracking-id` header with your custom ID value in the trigger request. You can use a Request trigger, HTTP trigger, or webhook-based trigger.
+
+### [Consumption](#tab/consumption)
++
+### [Standard (preview)](#tab/standard)
++++
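+
+As a minimal sketch, a client that calls a Request-based or HTTP-based trigger might pass the `x-ms-client-tracking-id` header as follows. The callback URL, tracking ID value, and payload here are placeholders:
+
+```bash
+# Placeholder callback URL - copy the real URL from your workflow's trigger.
+CALLBACK_URL="https://<your-logic-app-trigger-callback-url>"
+
+# Pass a custom tracking ID so the run correlates with your own ID value.
+curl -X POST "$CALLBACK_URL" \
+    -H "Content-Type: application/json" \
+    -H "x-ms-client-tracking-id: order-12345" \
+    -d '{"orderId": "12345"}'
+```
+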
+### Tracked properties
+
+Actions have a **Tracked Properties** section where you can specify a custom property name and value by entering an expression or hardcoded value to track specific inputs or outputs, for example:
+
+### [Consumption](#tab/consumption)
++
+### [Standard (preview)](#tab/standard)
++++
+Tracked properties can track only a single action's inputs and outputs, but you can use the `correlation` properties of events to correlate across actions in a workflow run.
+
+The following examples show where custom properties appear in your Log Analytics workspace:
+
+### [Consumption](#tab/consumption)
+
+1. On your Log Analytics workspace menu, under **Classic**, select **Workspace summary**. On the **Overview** page, select **Logic Apps Management**.
+
+1. Select the row for the workflow that you want to review.
+
+1. On the **Runs** page, in the **Logic App Runs** table, find the **Tracking ID** column and the **Tracked Properties** column.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/logic-app-run-details.png" alt-text="Screenshot showing runs and status for a specific Consumption workflow.":::
+
+1. To search the tracked properties, use the column filter. To view the properties, select **View**.
+
+ :::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/example-tracked-properties.png" alt-text="Screenshot showing example tracked properties for a specific Consumption workflow.":::
+
+### [Standard (preview)](#tab/standard)
+
+The custom tracking ID appears in the **ClientTrackingId** column and tracked properties appear in the **TrackedProperties** column, for example:
++++
+## Next steps
+
+* [Create monitoring and tracking queries](create-monitoring-tracking-queries.md)
+* [Monitor B2B messages with Azure Monitor Logs](monitor-b2b-messages-log-analytics.md)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
> * [v1](./v1/how-to-deploy-mlflow-models.md)
> * [v2 (current version)](how-to-deploy-mlflow-models-online-endpoints.md)
-In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) for real-time inference. When you deploy your MLflow model to an online endpoint, you don't need to indicate a scoring script or an environment. This characteristic is usually referred as __no-code deployment__.
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) for real-time inference. When you deploy your MLflow model to an online endpoint, you don't need to indicate a scoring script or an environment. This characteristic is referred to as __no-code deployment__.
For no-code-deployment, Azure Machine Learning
-* Dynamically installs Python packages provided in the `conda.yaml` file, this means the dependencies are installed during container runtime.
+* Dynamically installs Python packages provided in the `conda.yaml` file. Hence, dependencies are installed during container runtime.
+* Provides an MLflow base image/curated environment that contains the following items:
+  * [`azureml-inference-server-http`](how-to-inference-server-http.md)
+  * [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
For no-code-deployment, Azure Machine Learning
## About this example
-This example shows how you can deploy an MLflow model to an online endpoint to perform predictions. This example uses an MLflow model based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline (regression).
+This example shows how you can deploy an MLflow model to an online endpoint to perform predictions. This example uses an MLflow model based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from n = 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after baseline (regression).
-The model has been trained using an `scikit-learn` regressor and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+The model was trained using a `scikit-learn` regressor, and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/online` if you are using the Azure CLI or `sdk/endpoints/online` if you are using our SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to the `cli/endpoints/online` if you are using the Azure CLI or `sdk/endpoints/online` if you are using our SDK for Python.
```azurecli
git clone https://github.com/Azure/azureml-examples --depth 1
```
Before following the steps in this article, make sure you have the following pre
- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role allowing Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).-- You must have a MLflow model registered in your workspace. Particularly, this example will register a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).
+- You must have an MLflow model registered in your workspace. Specifically, this example registers a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).
-Additionally, you will need to:
+Additionally, you need to:
# [Azure CLI](#tab/cli)
Additionally, you will need to:
# [Studio](#tab/studio)
-There are no additional prerequisites when working in Azure Machine Learning studio.
+There are no other prerequisites when working in Azure Machine Learning studio.
az configure --defaults workspace=<workspace> group=<resource-group> location=<l
# [Python (Azure ML SDK)](#tab/sdk)
-The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we connect to the workspace in which you perform deployment tasks.
1. Import the required libraries:
Use the following steps to deploy an MLflow model with a custom scoring script.
raise Exception("Request must contain a top level key named 'input_data'")
serving_input = json.dumps(json_data["input_data"])
- data = infer_and_parse_json_input(raw_data, input_schema)
+ data = infer_and_parse_json_input(serving_input, input_schema)
result = model.predict(data)
result = StringIO()
machine-learning How To Network Isolation Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-isolation-planning.md
+
+ Title: Plan for network isolation
+
+description: Demystify Azure Machine Learning network isolation with recommendations and automation templates
++++++ Last updated : 02/14/2023++++
+# Plan for network isolation
+
+In this article, you learn how to plan network isolation for Azure Machine Learning and review our recommendations. This article is for IT administrators who want to design network architecture.
+
+## Key considerations
+
+### Azure Machine Learning has both IaaS and PaaS resources
+
+Azure Machine Learning's network isolation involves both Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) components. PaaS services, such as the Azure Machine Learning workspace, storage, key vault, container registry, and monitor, can be isolated using Private Link. IaaS computing services, such as compute instances/clusters for AI model training, and Azure Kubernetes Service (AKS) or managed online endpoints for AI model scoring, can be injected into your virtual network and communicate with PaaS services using Private Link. The following diagram is an example of this architecture.
++
+In this diagram, the compute instances, compute clusters, and AKS clusters are located within your virtual network. They can access the Azure Machine Learning workspace or storage using a private endpoint. Instead of a private endpoint, you can use a service endpoint for Azure Storage and Azure Key Vault. The other services don't support service endpoints.
+
+### Required inbound and outbound configurations
+
+Azure Machine Learning has [several required inbound and outbound configurations](how-to-access-azureml-behind-firewall.md) with your virtual network. If you have a standalone virtual network, the configuration is straightforward by using network security groups. However, you might have a hub-spoke or mesh network architecture, a firewall, a network virtual appliance, a proxy, and user-defined routing. In any of these cases, make sure to allow the required inbound and outbound traffic through your network security components.
++
+In this diagram, you have a hub-and-spoke network architecture. The spoke VNet has resources for Azure Machine Learning. The hub VNet has a firewall that controls outbound internet traffic from your virtual networks. In this case, your firewall must allow outbound traffic to the required resources, and your compute resources in the spoke VNet must be able to reach your firewall.
+
+> [!TIP]
+> In the diagram, the compute instance and compute cluster are configured for no public IP. If you instead use a compute instance or cluster __with public IP__, you need to allow inbound from the Azure Machine Learning service tag using a Network Security Group (NSG) and user defined routing to skip your firewall. This inbound traffic would be from a Microsoft service (Azure Machine Learning). However, we recommend using the no public IP option to remove this inbound requirement.
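+
+If you do use compute resources with a public IP, the following is a minimal sketch of such an NSG rule with the Azure CLI. The resource names are placeholders, and port 44224 is the commonly documented inbound port for this scenario; check the [inbound and outbound configuration article](how-to-access-azureml-behind-firewall.md) for the ports that apply to your setup.
+
+```azurecli
+# Allow inbound traffic from the Azure Machine Learning service tag to the training subnet's NSG.
+az network nsg rule create \
+    --resource-group <resource-group> \
+    --nsg-name <training-subnet-nsg> \
+    --name AllowAzureMachineLearningInbound \
+    --priority 1000 \
+    --direction Inbound \
+    --access Allow \
+    --protocol Tcp \
+    --source-address-prefixes AzureMachineLearning \
+    --destination-port-ranges 44224
+```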
+
+### DNS resolution of private link resources and application on compute instance
+
+If you have your own DNS server hosted in Azure or on-premises, you need to create a conditional forwarder in your DNS server. The conditional forwarder sends DNS requests to the Azure DNS for all private link enabled PaaS services. For more information, see the [DNS configuration scenarios](/azure/private-link/private-endpoint-dns#dns-configuration-scenarios) and [Azure Machine Learning specific DNS configuration](how-to-custom-dns.md) articles.
+
+### Data exfiltration protection
+
+There are two types of outbound traffic: read-only and read/write. Read-only outbound can't be exploited by malicious actors, but read/write outbound can. Azure Storage and Azure Front Door (the `frontdoor.frontend` service tag) are the read/write outbound destinations in our case.
+
+You can mitigate this data exfiltration risk using [our data exfiltration prevention solution](how-to-prevent-data-loss-exfiltration.md). We use a service endpoint policy with an Azure Machine Learning alias to allow outbound traffic only to Azure Machine Learning managed storage accounts. You don't need to open outbound access to Storage on your firewall.
++
+In this diagram, the compute instance and cluster need to access Azure Machine Learning managed storage accounts to get setup scripts. Instead of opening outbound access to all of Storage, you can use a service endpoint policy with the Azure Machine Learning alias to allow storage access only to Azure Machine Learning storage accounts.
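+
+As a minimal sketch of this approach with the Azure CLI, assuming `/services/Azure/MachineLearning` as the Azure Machine Learning alias and placeholder policy names (you still need to associate the policy with your training subnet):
+
+```azurecli
+# Create a service endpoint policy.
+az network service-endpoint policy create \
+    --resource-group <resource-group> \
+    --name aml-data-exfiltration-policy \
+    --location <region>
+
+# Add a definition that limits Microsoft.Storage outbound to Azure Machine Learning managed storage accounts.
+az network service-endpoint policy-definition create \
+    --resource-group <resource-group> \
+    --policy-name aml-data-exfiltration-policy \
+    --name aml-alias \
+    --service Microsoft.Storage \
+    --service-resources /services/Azure/MachineLearning
+```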
+
+The following tables list the required outbound [Azure Service Tags](/azure/virtual-network/service-tags-overview) and fully qualified domain names (FQDN) with data exfiltration protection setting:
+
+| Outbound service tag | Protocol | Port |
+| - | - | - |
+| `AzureActiveDirectory` | TCP | 80, 443 |
+| `AzureResourceManager` | TCP | 443 |
+| `AzureMachineLearning` | UDP | 5831 |
+| `BatchNodeManagement` | TCP | 443 |
+
+| Outbound FQDN | Protocol | Port |
+| - | - | - |
+| `mcr.microsoft.com` | TCP | 443 |
+| `*.data.mcr.microsoft.com` | TCP | 443 |
+| `ml.azure.com` | TCP | 443 |
+| `automlresources-prod.azureedge.net` | TCP | 443 |
+
+### Managed online endpoint
+
+Azure Machine Learning managed online endpoints use an Azure Machine Learning managed VNet instead of your VNet. If you want to disallow public access to your endpoint, set the `public_network_access` flag to disabled. When this flag is disabled, your endpoint can be accessed only through the private endpoint of your workspace, and it can't be reached from public networks. If you want to use a private storage account for your deployment, set the `egress_public_network_access` flag to disabled. This setting automatically creates private endpoints to access your private resources.
+
+> [!TIP]
+> The workspace default storage account is the only private storage account supported by managed online endpoint.
++
+For more information, see the [Network isolation of managed online endpoints](how-to-secure-online-endpoint.md) article.
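+
+As a minimal sketch of the `public_network_access` and `egress_public_network_access` flags with the Azure CLI, assuming that you author the endpoint and deployment YAML files separately (you can also set these flags directly in the YAML, which is the more common approach):
+
+```azurecli
+# Create an endpoint that can't be reached from public networks.
+az ml online-endpoint create --file endpoint.yml \
+    --set public_network_access=disabled
+
+# Create a deployment whose outbound traffic to dependent resources uses private endpoints.
+az ml online-deployment create --file blue-deployment.yml --all-traffic \
+    --set egress_public_network_access=disabled
+```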
+
+### Private IP address shortage in your main network
+
+Azure Machine Learning requires private IP addresses: one IP per compute instance, compute cluster node, and private endpoint. You also need many IPs if you use AKS. Your hub-spoke network connected with your on-premises network might not have a large enough private IP address space. In this scenario, you can use isolated, non-peered VNets for your Azure Machine Learning resources.
++
+In this diagram, your main VNet requires the IPs for private endpoints. You can have hub-spoke VNets for multiple Azure Machine Learning workspaces with large address spaces. A downside of this architecture is that it doubles the number of private endpoints.
+
+### Network policy enforcement
+You can use [built-in policies](how-to-integrate-azure-policy.md) if you want to control network isolation parameters while allowing self-service creation of workspaces and compute resources.
+
+### Other considerations
+
+#### Image build compute setting for ACR behind VNet
+
+If you put your Azure Container Registry (ACR) behind a private endpoint, your ACR can't build your Docker images. You need to use a compute instance or compute cluster to build images. For more information, see the [how to set the image build compute](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr) article.
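+
+As a minimal sketch, assuming an existing CPU compute cluster named `cpu-cluster`, you can point the workspace at that cluster for image builds with the Azure CLI:
+
+```azurecli
+# Use a compute cluster to build Docker images when ACR is behind a private endpoint.
+az ml workspace update \
+    --resource-group <resource-group> \
+    --name <workspace-name> \
+    --image-build-compute cpu-cluster
+```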
+
+#### Enablement of studio UI with private link enabled workspace
+
+If you plan on using Azure Machine Learning studio, extra configuration steps are needed. These steps help prevent data exfiltration scenarios. For more information, see the [how to use Azure Machine Learning studio in an Azure virtual network](how-to-enable-studio-virtual-network.md) article.
+
+<!-- ### Registry -->
+
+## Recommended architecture
+
+The following diagram is our recommended architecture to make all resources private but allow outbound internet access from your VNet. This diagram describes the following architecture:
+* Put all resources in the same region.
+* A hub VNet, which contains your firewall.
+* A spoke VNet, which contains the following resources:
+ * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP.
+ * A scoring subnet contains an AKS cluster.
+ * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.)
+* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage.
+
+This architecture balances your network security and your ML engineers' productivity.
++
+You can automate the creation of this environment by using [a template](tutorial-create-secure-workspace-template.md), which doesn't include a managed online endpoint or AKS. A managed online endpoint is the solution if you don't have an existing AKS cluster for your AI model scoring. For more information, see the [how to secure online endpoint](how-to-secure-online-endpoint.md) documentation. AKS with the Azure Machine Learning extension is the solution if you have an existing AKS cluster for your AI model scoring. For more information, see the [how to attach Kubernetes](how-to-attach-kubernetes-anywhere.md) documentation.
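+
+For the existing AKS cluster case, a minimal sketch of installing the Azure Machine Learning extension with the Azure CLI follows. This requires the `k8s-extension` CLI extension, and the configuration values shown are illustrative; see the [how to attach Kubernetes](how-to-attach-kubernetes-anywhere.md) documentation for the supported settings.
+
+```azurecli
+# Install the Azure Machine Learning extension on an existing AKS cluster for model scoring.
+az k8s-extension create \
+    --name azureml-extension \
+    --extension-type Microsoft.AzureML.Kubernetes \
+    --cluster-type managedClusters \
+    --cluster-name <aks-cluster-name> \
+    --resource-group <resource-group> \
+    --scope cluster \
+    --config enableInference=True inferenceRouterServiceType=LoadBalancer
+```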
+
+### Removing firewall requirement
+
+If you want to remove the firewall requirement, you can use network security groups and [Azure virtual network NAT](/azure/virtual-network/nat-gateway/nat-overview) to allow internet outbound from your private computing resources.
++
+### Using public workspace
+
+You can use a public workspace if you're OK with Azure AD authentication and authorization with conditional access. Keep in mind that a public workspace has some features that show data from your private storage account, so we recommend using a private workspace.
+
+## Recommended architecture with data exfiltration prevention
+
+This diagram shows the recommended architecture to make all resources private and control outbound destinations to prevent data exfiltration. We recommend this architecture when using Azure Machine Learning with your sensitive data in production. This diagram describes the following architecture:
+* Put all resources in the same region.
+* A hub VNet, which contains your firewall.
+ * In addition to service tags, the firewall uses FQDNs to prevent data exfiltration.
+* A spoke VNet, which contains the following resources:
+ * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP. Additionally, a service endpoint and service endpoint policy are in place to prevent data exfiltration.
+ * A scoring subnet contains an AKS cluster.
+ * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.)
+* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage.
++
+The following tables list the required outbound [Azure Service Tags](/azure/virtual-network/service-tags-overview) and fully qualified domain names (FQDN) with data exfiltration protection setting:
+
+| Outbound service tag | Protocol | Port |
+| - | -- | - |
+| `AzureActiveDirectory` | TCP | 80, 443 |
+| `AzureResourceManager` | TCP | 443 |
+| `AzureMachineLearning` | UDP | 5831 |
+| `BatchNodeManagement` | TCP | 443 |
+
+| Outbound FQDN | Protocol | Port |
+| - | - | - |
+| `mcr.microsoft.com` | TCP | 443 |
+| `*.data.mcr.microsoft.com` | TCP | 443 |
+| `ml.azure.com` | TCP | 443 |
+| `automlresources-prod.azureedge.net` | TCP | 443 |
+
+### Using public workspace
+
+You can use the public workspace if you're OK with Azure AD authentication and authorization with conditional access. Keep in mind that a public workspace has some features that show data from your private storage account, so we recommend using a private workspace.
+
+## Next steps
+
+* [Virtual network overview](how-to-network-security-overview.md)
+* [Secure the workspace resources](how-to-secure-workspace-vnet.md)
+* [Secure the training environment](how-to-secure-training-vnet.md)
+* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* [Enable studio functionality](how-to-enable-studio-virtual-network.md)
+* [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md)
+* [Use custom DNS](how-to-custom-dns.md)
machine-learning How To R Deploy R Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-deploy-r-model.md
+
+ Title: Deploy a registered R model to an online (real time) endpoint
+
+description: 'Learn how to deploy your R model to an online (real-time) managed endpoint'
+ Last updated : 01/12/2023++++
+ms.devlang: r
++
+# How to deploy a registered R model to an online (real time) endpoint
++
+In this article, you'll learn how to deploy an R model to a managed endpoint (Web API) so that your application can score new data against the model in near real-time.
+
+## Prerequisites
+
+- An [Azure Machine Learning workspace](quickstart-create-resources.md).
+- Azure [CLI and ml extension installed](how-to-configure-cli.md). Or use a [compute instance in your workspace](quickstart-create-resources.md), which has the CLI pre-installed.
+- At least one custom environment associated with your workspace. Create [an R environment](how-to-r-modify-script-for-production.md#create-an-environment), or any other custom environment if you don't have one.
+- An understanding of the [R `plumber` package](https://www.rplumber.io/https://docsupdatetracker.net/index.html)
+- A model that you've trained and [packaged with `crate`](how-to-r-modify-script-for-production.md#crate-your-models-with-the-carrier-package), and [registered into your workspace](how-to-r-train-model.md#register-model)
+
+## Create a folder with this structure
+
+Create this folder structure for your project:
+
+```
+📂 r-deploy-azureml
+ ├─📂 docker-context
+ │  ├─ Dockerfile
+ │  ├─ start_plumber.R
+ ├─📂 src
+ │  ├─ plumber.R
+ ├─ deployment.yml
+ ├─ endpoint.yml
+```
+
+The contents of each of these files are shown and explained in this article.
++
+### Dockerfile
+
+This is the file that defines the container environment. You'll also define the installation of any additional R packages here.
+
+A sample **Dockerfile** will look like this:
+
+```dockerfile
+# REQUIRED: Begin with the latest R container with plumber
+FROM rstudio/plumber:latest
+
+# REQUIRED: Install carrier package to be able to use the crated model (whether from a training job
+# or uploaded)
+RUN R -e "install.packages('carrier', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+
+# OPTIONAL: Install any additional R packages you may need for your model crate to run
+RUN R -e "install.packages('<PACKAGE-NAME>', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+RUN R -e "install.packages('<PACKAGE-NAME>', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+
+# REQUIRED
+ENTRYPOINT []
+
+COPY ./start_plumber.R /tmp/start_plumber.R
+
+CMD ["Rscript", "/tmp/start_plumber.R"]
+```
+
+Modify the file to add the packages you need for your scoring script.
+
+### plumber.R
+
+> [!IMPORTANT]
+> This section shows how to structure the **plumber.R** script. For detailed information about the `plumber` package, see [`plumber` documentation](https://www.rplumber.io/https://docsupdatetracker.net/index.html) .
+
+The file **plumber.R** is the R script where you'll define the function for scoring. This script also performs tasks that are necessary to make your endpoint work. The script:
+
+- Gets the path where the model is mounted from the `AZUREML_MODEL_DIR` environment variable in the container.
+- Loads a model object created with the `crate` function from the `carrier` package, which was saved as **crate.bin** when it was packaged.
+- _Unserializes_ the model object
+- Defines the scoring function
+
+> [!TIP]
+> Make sure that whatever your scoring function produces can be converted back to JSON. Some R objects are not easily converted.
+
+```r
+# plumber.R
+# This script will be deployed to a managed endpoint to do the model scoring
+
+# REQUIRED
+# When you deploy a model as an online endpoint, AzureML mounts your model
+# to your endpoint. Model mounting enables you to deploy new versions of the model without
+# having to create a new Docker image.
+
+model_dir <- Sys.getenv("AZUREML_MODEL_DIR")
+
+# REQUIRED
+# This reads the serialized model with its respective predict/score method that you
+# registered. The loaded load_model object is a raw binary object.
+load_model <- readRDS(paste0(model_dir, "/models/crate.bin"))
+
+# REQUIRED
+# You have to unserialize the load_model object to turn it back into a callable scoring function
+scoring_function <- unserialize(load_model)
+
+# REQUIRED
+# << Readiness route vs. liveness route >>
+# An HTTP server defines paths for both liveness and readiness. A liveness route is used to
+# check whether the server is running. A readiness route is used to check whether the
+# server's ready to do work. In machine learning inference, a server could respond 200 OK
+# to a liveness request before loading a model. The server could respond 200 OK to a
+# readiness request only after the model has been loaded into memory.
+
+#* Liveness check
+#* @get /live
+function() {
+ "alive"
+}
+
+#* Readiness check
+#* @get /ready
+function() {
+ "ready"
+}
+
+# << The scoring function >>
+# This is the function that is deployed as a web API that will score the model
+# Make sure that whatever you are producing as a score can be converted
+# to JSON to be sent back as the API response
+# in the example here, forecast_horizon (the number of time units to forecast) is the input to scoring_function.
+# the output is a tibble
+# we are converting some of the output types so they work in JSON
++
+#* @param forecast_horizon
+#* @post /score
+function(forecast_horizon) {
+ scoring_function(as.numeric(forecast_horizon)) |>
+ tibble::as_tibble() |>
+ dplyr::transmute(period = as.character(yr_wk),
+ dist = as.character(logmove),
+ forecast = .mean) |>
+ jsonlite::toJSON()
+}
+
+```
+
+### start_plumber.R
+
+The file **start_plumber.R** is the R script that gets run when the container starts, and it calls your **plumber.R** script. Use the following script as-is.
+
+```r
+entry_script_path <- paste0(Sys.getenv('AML_APP_ROOT'),'/', Sys.getenv('AZUREML_ENTRY_SCRIPT'))
+
+pr <- plumber::plumb(entry_script_path)
+
+args <- list(host = '0.0.0.0', port = 8000);
+
+if (packageVersion('plumber') >= '1.0.0') {
+ pr$setDocs(TRUE)
+} else {
+ args$swagger <- TRUE
+}
+
+do.call(pr$run, args)
+```
+
+## Build container
+
+These steps assume you have an Azure Container Registry associated with your workspace, which is created when you create your first custom environment. To see if you have a custom environment:
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+1. Select your workspace if necessary.
+1. On the left navigation, select **Environments**.
+1. On the top, select **Custom environments**.
+1. If you see custom environments, nothing more is needed.
+1. If you don't see any custom environments, create [an R environment](how-to-r-modify-script-for-production.md#create-an-environment), or any other custom environment. (You *won't* use this environment for deployment, but you *will* use the container registry that is also created for you.)
+
+Once you have verified that you have at least one custom environment, use the following steps to build a container.
+
+1. Open a terminal window and sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-compute-instance), use:
+
+ ```azurecli
+ az login --identity
+ ```
+
+ If you're not on the compute instance, omit `--identity` and follow the prompt to open a browser window to authenticate.
+
+1. Make sure you have the most recent versions of the CLI and the `ml` extension:
+
+ ```azurecli
+ az upgrade
+ ```
+
+1. If you have multiple Azure subscriptions, set the active subscription to the one you're using for your workspace. (You can skip this step if you only have access to a single subscription.) Replace `<SUBSCRIPTION-NAME>` with your subscription name. Also remove the brackets `<>`.
+
+ ```azurecli
+ az account set --subscription "<SUBSCRIPTION-NAME>"
+ ```
+
+1. Set the default workspace. If you're doing this from a compute instance, you can use the following command as is. If you're on any other computer, substitute your resource group and workspace name instead. (You can find these values in [Azure Machine Learning studio](how-to-r-train-model.md#submit-the-job).)
+
+ ```azurecli
+ az configure --defaults group=$CI_RESOURCE_GROUP workspace=$CI_WORKSPACE
+ ```
+
+1. Make sure you are in your project directory.
+
+ ```bash
+ cd r-deploy-azureml
+ ```
+
+1. To build the image in the cloud, execute the following bash commands in your terminal. Replace `<IMAGE-NAME>` with the name you want to give the image.
+
+ If your workspace is in a virtual network, see [Enable Azure Container Registry (ACR)](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr) for additional steps to add `--image-build-compute` to the `az acr build` command in the last line of this code.
+
+ ```azurecli
+ WORKSPACE=$(az config get --query "defaults[?name == 'workspace'].value" -o tsv)
+ ACR_NAME=$(az ml workspace show -n $WORKSPACE --query container_registry -o tsv | cut -d'/' -f9-)
+ IMAGE_TAG=${ACR_NAME}.azurecr.io/<IMAGE-NAME>
+
+ az acr build ./docker-context -t $IMAGE_TAG -r $ACR_NAME
+ ```
+
+> [!IMPORTANT]
+> It will take a few minutes for the image to be built. Wait until the build process is complete before proceeding to the next section. Don't close this terminal; you'll use it next to create the deployment.
+
+The `az acr` command will automatically upload your docker-context folder - that contains the artifacts to build the image - to the cloud where the image will be built and hosted in an Azure Container Registry.
++
+## Deploy model
+
+In this section of the article, you'll define and create an [endpoint and deployment](concept-endpoints.md) to deploy the model and image built in the previous steps to a managed online endpoint.
+
+An *endpoint* is an HTTPS endpoint that clients - such as an application - can call to receive the scoring output of a trained model. It provides:
+
+> [!div class="checklist"]
+> - Authentication using "key & token" based auth
+> - SSL termination
+> - A stable scoring URI (endpoint-name.region.inference.ml.Azure.com)
+
+A *deployment* is a set of resources required for hosting the model that does the actual scoring. A **single** *endpoint* can contain **multiple** *deployments*. The load balancing capabilities of Azure Machine Learning managed endpoints allow you to give any percentage of traffic to each deployment. You can use traffic allocation for safe-rollout blue/green deployments by balancing requests between different instances.
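+
+For example, once an endpoint has more than one deployment, you could rebalance traffic with a command like the following sketch, where `blue` and `green` are placeholder deployment names:
+
+```azurecli
+# Send 90% of requests to the "blue" deployment and 10% to the "green" deployment.
+az ml online-endpoint update \
+    --name <ENDPOINT-NAME> \
+    --traffic "blue=90 green=10"
+```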
+
+### Create managed online endpoint
+
+1. In your project directory, add the **endpoint.yml** file with the following code. Replace `<ENDPOINT-NAME>` with the name you want to give your managed endpoint.
+
+ ```yml
+ $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
+ name: <ENDPOINT-NAME>
+ auth_mode: aml_token
+ ```
+
+1. Using the same terminal where you built the image, execute the following CLI command to create an endpoint:
+
+ ```azurecli
+ az ml online-endpoint create -f endpoint.yml
+ ```
+
+1. Leave the terminal open to continue using it in the next section.
+
+### Create deployment
+
+1. To create your deployment, add the following code to the **deployment.yml** file.
+
+ * Replace `<ENDPOINT-NAME>` with the endpoint name you defined in the **endpoint.yml** file
+ * Replace `<DEPLOYMENT-NAME>` with the name you want to give the deployment
+ * Replace `<MODEL-URI>` with the registered model's URI in the form of `azureml:modelname@latest`
+ * Replace `<IMAGE-TAG>` with the value from:
+
+ ```bash
+ echo $IMAGE_TAG
+ ```
+
+ ```yml
+ $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
+ name: <DEPLOYMENT-NAME>
+ endpoint_name: <ENDPOINT-NAME>
+ code_configuration:
+ code: ./src
+ scoring_script: plumber.R
+ model: <MODEL-URI>
+ environment:
+ image: <IMAGE-TAG>
+ inference_config:
+ liveness_route:
+ port: 8000
+ path: /live
+ readiness_route:
+ port: 8000
+ path: /ready
+ scoring_route:
+ port: 8000
+ path: /score
+ instance_type: Standard_DS2_v2
+ instance_count: 1
+ ```
+
+1. Next, in your terminal execute the following CLI command to create the deployment (notice that you're setting 100% of the traffic to this model):
+
+ ```azurecli
+ az ml online-deployment create -f deployment.yml --all-traffic --skip-script-validation
+ ```
+
+> [!NOTE]
+> It may take several minutes for the service to be deployed. Wait until deployment is finished before proceeding to the next section.
+
+## Test
+
+Once your deployment has been successfully created, you can test the endpoint using studio or the CLI:
+
+# [Studio](#tab/azure-studio)
+
+Navigate to the [Azure Machine Learning studio](https://ml.azure.com) and select **Endpoints** from the left-hand menu. Next, select the endpoint that you created earlier.
+
+Enter the following JSON into the **Input data to test real-time endpoint** textbox:
+
+```json
+{
+ "forecast_horizon" : [2]
+}
+```
+
+Select **Test**. You should see the following output:
++
+# [Azure CLI](#tab/cli)
+
+### Create a sample request
+
+In your project parent folder, create a file called **sample_request.json** and populate it with:
++
+```json
+{
+ "forecast_horizon" : [2]
+}
+```
+
+### Invoke the endpoint
+
+Invoke the request. This example uses the name r-endpoint-forecast:
+
+```azurecli
+az ml online-endpoint invoke --name r-endpoint-forecast --request-file sample_request.json
+```
+++
+## Clean-up resources
+
+Now that you've successfully scored with your endpoint, you can delete it so you don't incur ongoing cost:
+
+```azurecli
+az ml online-endpoint delete --name r-endpoint-forecast
+```
+
+## Next steps
+
+For more information about using R with Azure Machine Learning, see [Overview of R capabilities in Azure Machine Learning](how-to-r-overview-r-capabilities.md)
machine-learning How To R Interactive Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-interactive-development.md
+
+ Title: Use R interactively on Azure Machine Learning
+
+description: 'Learn how to work with R interactively on Azure Machine Learning'
+ Last updated : 01/12/2023++++
+ms.devlang: r
++
+# Interactive R development
++
+This article will show you how to use R on a compute instance in Azure Machine Learning studio, running an R kernel in a Jupyter notebook.
+
+Many R users also use RStudio, a popular IDE. You can install RStudio or Posit Workbench in a custom container on a compute instance. However, there are limitations with the container in reading and writing to your Azure Machine Learning workspace.
+
+> [!IMPORTANT]
+> The code shown in this article works on an Azure Machine Learning compute instance. The compute instance has the environment and configuration file necessary for the code to run successfully.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today
+- An [Azure Machine Learning workspace and a compute instance](quickstart-create-resources.md)
+- A basic understanding of using Jupyter notebooks in Azure Machine Learning studio. For more information, see [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md)
+
+## Run R in a notebook in studio
+
+You'll use a notebook in your Azure Machine Learning workspace, on a compute instance.
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com)
+1. Open your workspace if it isn't already open
+1. On the left navigation, select **Notebooks**
+1. Create a new notebook, named **RunR.ipynb**
+
+ > [!TIP]
+ > If you're not sure how to create and work with notebooks in studio, review [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md).
+
+1. Select the notebook.
+1. On the notebook toolbar, make sure your compute instance is running. If not, start it now.
+1. On the notebook toolbar, switch the kernel to **R**.
+
+ :::image type="content" source="media/how-to-r-interactive-development/r-kernel.png" alt-text="Screenshot: Switch the notebook kernel to use R." lightbox="media/how-to-r-interactive-development/r-kernel.png":::
+
+Your notebook is now ready for you to run R commands.
+
+## Access data
+
+You can upload files to your workspace file storage and access them in R. But for files stored in Azure [_data assets_ or data from _datastores_](concept-data.md), you first need to install a few packages.
+
+This section describes how to use Python and the `reticulate` package to load your data assets and datastores into R from an interactive session. You'll read tabular data as Pandas DataFrames using the [`azureml-fsspec`](/python/api/azureml-fsspec/?view=azure-ml-py&preserve-view=true) Python package and the `reticulate` R package.
+
+To install these packages:
+
+1. Create a new file on the compute instance, called **setup.sh**.
+1. Copy this code into the file:
+
+ :::code language="bash" source="~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/01-setup-compute-instance-for-interactive-r/setup-ci-for-interactive-data-reads.sh":::
++
+1. Select **Save and run script in terminal** to run the script.
+
+The install script performs the following steps:
+
+* `pip` installs `azureml-fsspec` in the default conda environment for the compute instance
+* Installs the R `reticulate` package if necessary (version must be 1.26 or greater)
++
+### Read tabular data from registered data assets or datastores
+
+When your data is stored in a data asset [created in Azure Machine Learning](how-to-create-data-assets.md?tabs=cli#create-a-file-asset), use these steps to read that tabular file into an R `data.frame`:
+> [!NOTE]
+> Reading a file with `reticulate` only works with tabular data.
+
+1. Ensure you have the correct version of `reticulate`. If the version is less than 1.26, try to use a newer compute instance.
+
+ ```r
+ packageVersion("reticulate")
+ ```
+
+1. Load `reticulate` and set the conda environment where `azureml-fsspec` was installed
+
+ [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=reticulate)]
++
+1. Find the URI path to the data file.
+
+ 1. First, get a handle to your workspace
+
+ [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=configure-ml_client)]
+
+ 1. Use this code to retrieve the asset. Make sure to replace `<DATA_NAME>` and `<VERSION_NUMBER>` with the name and number of your data asset.
+
+ > [!TIP]
+ > In studio, select **Data** in the left navigation to find your data asset's name and version number.
+
+ [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=get-uri)]
+
+ 1. Run the code to retrieve the URI.
+
+ [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=py_run_string)]
+
+1. Use Pandas read functions to read the file(s) into the R environment
+
+ [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=read-uri)]
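+
+Taken together, the steps above follow a pattern similar to this condensed sketch. It's illustrative only: the conda environment name and the `azureml://` URI are assumptions that you replace with the values from your own compute instance and data asset.
+
+```r
+library(reticulate)
+
+# Assumption: azureml-fsspec was installed into the compute instance's default
+# conda environment by the setup script; adjust the name if yours differs.
+use_condaenv("azureml_py310_sdkv2")
+
+pd <- import("pandas")
+
+# Hypothetical URI - copy the real one for your data asset from studio or from
+# the ml_client lookup shown in the steps above.
+uri <- "azureml://subscriptions/<SUB-ID>/resourcegroups/<RG>/workspaces/<WORKSPACE>/datastores/<DATASTORE>/paths/<FOLDER>/myfile.csv"
+
+# azureml-fsspec lets pandas resolve azureml:// URIs; convert the result to an R data.frame.
+df <- py_to_r(pd$read_csv(uri))
+head(df)
+```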
+
+## Install R packages
+
+There are many R packages pre-installed on the compute instance.
+
+When you want to install other packages, you'll need to explicitly state the location and dependencies.
+
+> [!TIP]
+> When you create or use a different compute instance, you'll need to reinstall any packages you've installed.
++
+For example, to install the `tsibble` package:
+
+```r
+install.packages("tsibble",
+ dependencies = TRUE,
+ lib = "/home/azureuser")
+```
+
+> [!NOTE]
+> Since you're installing packages within an R session running in a Jupyter notebook, `dependencies = TRUE` is required. Otherwise, dependent packages won't be installed automatically. The `lib` location is also required so that packages are installed in the correct location on the compute instance.
+
+## Load R libraries
+
+Add `/home/azureuser` to the R library path.
+
+```r
+.libPaths("/home/azureuser")
+```
+
+> [!TIP]
+> You need to update `.libPaths` in each interactive R script to access user-installed libraries. Add this code to the top of each interactive R script or notebook.
+
+Once `.libPaths` is updated, load libraries as usual:
+
+```r
+library('tsibble')
+```
+
+## Use R in the notebook
+
+Aside from the considerations above, use R as you would in any other environment, such as your local workstation. In your notebook or script, you can read and write to the path where the notebook or script is stored.
+
+> [!NOTE]
+> - From an interactive R session, you can only write to the workspace file system.
+> - From an interactive R session, you can't interact with MLflow (for example, log a model or query the registry).
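+
+For example, reading and writing relative paths from a notebook cell works as it would locally. The file name here is hypothetical:
+
+```r
+# Write results next to the notebook in the workspace file share ...
+results <- data.frame(id = 1:3, score = c(0.2, 0.5, 0.9))
+write.csv(results, "results.csv", row.names = FALSE)
+
+# ... and read them back with a relative path.
+check <- read.csv("results.csv")
+head(check)
+```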
+
+## Next steps
+
+* [Adapt your R script to run in production](how-to-r-modify-script-for-production.md)
machine-learning How To R Modify Script For Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-modify-script-for-production.md
+
+ Title: Adapt your R script to run in production
+
+description: 'Learn how to modify your existing R scripts to run in production on Azure Machine Learning'
+ Last updated : 01/11/2023++++
+ms.devlang: r
++
+# Adapt your R script to run in production
+
+This article explains how to take an existing R script and make the appropriate changes to run it as a job in Azure Machine Learning.
+
+You'll have to make most, if not all, of the changes described in detail in this article.
+
+## Remove user interaction
+
+Your R script must be designed to run unattended and will be executed via the `Rscript` command within the container. Make sure you remove any interactive inputs or outputs from the script.
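+
+As a hypothetical before-and-after sketch, replace interactive prompts and viewers with values that arrive as script arguments (parsed in the next section) and with files written to disk:
+
+```r
+# Interactive calls like these must be removed:
+#   brand <- readline(prompt = "Enter a brand code: ")
+#   View(results)
+
+# Unattended equivalent: the value normally comes from a parsed --brand argument
+# (hard-coded here only to keep the sketch self-contained).
+brand <- 1
+results <- data.frame(brand = brand, score = 0.9)
+write.csv(results, "results.csv", row.names = FALSE)
+```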
+
+## Add parsing
+
+If your script requires any sort of input parameter (most scripts do), pass the inputs into the script via the `Rscript` call.
+
+```bash
+Rscript <name-of-r-script>.R
+--data_file ${{inputs.<name-of-yaml-input-1>}}
+--brand ${{inputs.<name-of-yaml-input-2>}}
+```
+
+In your R script, parse the inputs and make the proper type conversions. We recommend that you use the `optparse` package.
+
+The following snippet shows how to:
+* initiate the parser
+* add all your inputs as options
+* parse the inputs with the appropriate data types
+
+You can also add defaults, which are handy for testing. We recommend that you add an `--output` parameter with a default value of `./outputs` so that any output of the script will be stored.
+
+```r
+library(optparse)
+
+parser <- OptionParser()
+
+parser <- add_option(
+ parser,
+ "--output",
+ type = "character",
+ action = "store",
+ default = "./outputs"
+)
+
+parser <- add_option(
+ parser,
+ "--data_file",
+ type = "character",
+ action = "store",
+ default = "data/myfile.csv"
+)
+
+parser <- add_option(
+ parser,
+ "--brand",
+ type = "double",
+ action = "store",
+ default = 1
+)
+args <- parse_args(parser)
+```
+
+`args` is a named list. You can use any of these parameters later in your script.
+
+## Source the `azureml_utils.R` helper script
+
+You must source a helper script called `azureml_utils.R` in the same working directory as the R script that will be run. The helper script is required for the running R script to be able to communicate with the MLflow server. The helper script provides a method to continuously retrieve the authentication token, since the token changes quickly in a running job. The helper script also allows you to use the logging functions provided in the [R MLflow API](https://MLflow.org/docs/latest/R-api.html) to log models, parameters, tags, and general artifacts.
+
+1. Create your file, `azureml_utils.R`, with this code:
+
+ ::: code language="r" source="~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/03-train-model-job/src/azureml_utils.R" :::
+
+1. Start your R script with the following line:
+
+```r
+source("azureml_utils.R")
+```
+
+## Read data files as local files
+
+When you run an R script as a job, Azure Machine Learning takes the data you specify in the job submission and mounts it on the running container. Therefore, you'll be able to read the data file(s) as if they were local files on the running container.
+
+* Make sure your source data is registered as a data asset
+* Pass the data asset by name in the job submission parameters
+* Read the files as you normally would read a local file
+
+Define the input parameter as shown in the [parameters section](#add-parsing). Use the `data_file` parameter to specify a whole path, so that you can use `read_csv(args$data_file)` to read the data asset.
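+
+For example, assuming the `readr` package is available in the job environment, reading the mounted data asset looks like reading any local CSV file:
+
+```r
+library(readr)
+
+# args comes from the optparse parsing shown earlier; the path in args$data_file
+# points at the data asset mounted on the running container.
+df <- read_csv(args$data_file)
+```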
++
+## Save job artifacts (images, data, etc.)
+
+> [!IMPORTANT]
+> This section does not apply to models. See the following two sections for model specific saving and logging instructions.
+
+You can store arbitrary script outputs like data files, images, serialized R objects, and so on that are generated by the R script in Azure Machine Learning. Create a `./outputs` directory to store any generated artifacts (images, models, data, and so on). Any files saved to `./outputs` will be automatically included in the run and uploaded to the experiment at the end of the run. Since you added a default value for the `--output` parameter in the [input parameters](#add-parsing) section, include the following code snippet in your R script to create the `outputs` directory.
+
+```r
+if (!dir.exists(args$output)) {
+ dir.create(args$output)
+}
+```
+
+After you create the directory, save your artifacts to that directory. For example:
+
+```r
+# create and save a plot
+library(ggplot2)
+
+myplot <- ggplot(...)
+
+ggsave(myplot,
+ filename = "./outputs/myplot.png")
++
+# save an rds serialized object
+saveRDS(myobject, file = "./outputs/myobject.rds")
+
+```
+
+## `crate` your models with the `carrier` package
+
+The [R MLflow API documentation](https://MLflow.org/docs/latest/models.html#r-function-crate) specifies that your R models need to be of the `crate` _model flavor_.
+
+* If your R script trains a model and you produce a model object, you'll need to `crate` it to be able to deploy it at a later time with Azure Machine Learning.
+* When using the `crate` function, use explicit namespaces when calling any package function you need.
+
+Let's say you have a time series model object called `my_ts_model` created with the `fable` package. In order to make this model callable when it's deployed, create a `crate` where you'll pass in the model object and a forecasting horizon in number of periods:
+
+```r
+library(carrier)
+crated_model <- crate(function(x)
+{
+ fabletools::forecast(!!my_ts_model, h = x)
+})
+```
+
+The `crated_model` object is the one you'll log.
+
+## Log models, parameters, tags, or other artifacts with the R MLflow API
+
+In addition to [saving any generated artifacts](#save-job-artifacts-images-data-etc), you can also log models, tags, and parameters for each run. Use the R MLflow API to do so.
+
+When you log a model, you log the _crated model_ you created as described in the [previous section](#crate-your-models-with-the-carrier-package).
+
+> [!NOTE]
+> When you log a model, the model is also saved and added to the run artifacts. There is no need to explicitly save a model unless you did not log it.
+
+To log a model, and/or parameter:
+
+1. Start the run with `mlflow_start_run()`
+1. Log artifacts with `mlflow_log_model`, `mlflow_log_param`, or `mlflow_log_batch`
+1. Do **not** end the run with `mlflow_end_run()`. Skip this call, as it currently causes an error.
+
+For example, to log the `crated_model` object as created in the [previous section](#crate-your-models-with-the-carrier-package), you would include the following code in your R script:
+
+> [!TIP]
+> Use `models` as the value for `artifact_path` when logging a model. This is a best practice, even though you can name it something else.
+
+```r
+mlflow_start_run()
+
+mlflow_log_model(
+ model = crated_model, # the crate model object
+ artifact_path = "models" # a path to save the model object to
+ )
+
+mlflow_log_param(<key-name>, <value>)
+
+# mlflow_end_run() - causes an error, do not include mlflow_end_run()
+```
+
+## Script structure and example
+
+Use these code snippets as a guide to structure your R script, following all the changes outlined in this article.
+
+```r
+# BEGIN R SCRIPT
+
+# source the azureml_utils.R script which is needed to use the MLflow back end
+# with R
+source("azureml_utils.R")
+
+# load your packages here. Make sure that they are installed in the container.
+library(...)
+
+# parse the command line arguments.
+library(optparse)
+
+parser <- OptionParser()
+
+parser <- add_option(
+ parser,
+ "--output",
+ type = "character",
+ action = "store",
+ default = "./outputs"
+)
+
+parser <- add_option(
+ parser,
+ "--data_file",
+ type = "character",
+ action = "store",
+ default = "data/myfile.csv"
+)
+
+parser <- add_option(
+ parser,
+ "--brand",
+ type = "double",
+ action = "store",
+ default = 1
+)
+args <- parse_args(parser)
+
+# your own R code goes here
+# - model building/training
+# - visualizations
+# - etc.
+
+# create the ./outputs directory
+if (!dir.exists(args$output)) {
+ dir.create(args$output)
+}
+
+# log models and parameters to MLflow
+mlflow_start_run()
+
+mlflow_log_model(
+ model = crated_model, # the crate model object
+ artifact_path = "models" # a path to save the model object to
+ )
+
+mlflow_log_param(<key-name>, <value>)
+
+# mlflow_end_run() - causes an error, do not include mlflow_end_run()
+## END OF R SCRIPT
+```
++
+## Create an environment
+
+To run your R script, you'll use the `ml` extension for Azure CLI, also referred to as CLI v2. The `ml` command uses a YAML job definitions file. For more information about submitting jobs with `az ml`, see [Train models with Azure Machine Learning CLI](how-to-train-model.md?tabs=azurecli#4-submit-the-training-job).
+
+The YAML job file specifies an [environment](concept-environments.md). You'll need to create this environment in your workspace before you can run the job.
+
+You can create the environment in [Azure Machine Learning studio](how-to-manage-environments-in-studio.md#create-an-environment) or with [the Azure CLI](how-to-manage-environments-v2.md#create-an-environment-from-a-docker-image).
+
+Whatever method you use, you'll use a Dockerfile. All Docker context files for R environments must have the following specification in order to work on Azure Machine Learning:
+
+```dockerfile
+FROM rocker/tidyverse:latest
+
+# Install python
+RUN apt-get update -qq && \
+ apt-get install -y python3-pip tcl tk libz-dev libpng-dev
+
+RUN ln -f /usr/bin/python3 /usr/bin/python
+RUN ln -f /usr/bin/pip3 /usr/bin/pip
+RUN pip install -U pip
+
+# Install azureml-MLflow
+RUN pip install azureml-MLflow
+RUN pip install MLflow
+
+# Create link for python
+RUN ln -f /usr/bin/python3 /usr/bin/python
+
+# Install R packages required for logging with MLflow (these are necessary)
+RUN R -e "install.packages('MLflow', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+RUN R -e "install.packages('carrier', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+RUN R -e "install.packages('optparse', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+RUN R -e "install.packages('tcltk2', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+```
+
+The base image is `rocker/tidyverse:latest`, which has many R packages and their dependencies already installed.
+
+> [!IMPORTANT]
+> You must install, in advance, any R packages your script needs to run. Add more lines to the Docker context file as needed.
+
+```dockerfile
+RUN R -e "install.packages('<package-to-install>', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+```
+
+## Additional suggestions
+
+Some additional suggestions you may want to consider:
+
+- Use R's `tryCatch` function for exception and error handling
+- Add explicit logging for troubleshooting and debugging
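+
+The following minimal sketch combines both suggestions. It assumes your training logic lives in a function called `train_model()` (a hypothetical name) and that `args` comes from the parsing shown earlier:
+
+```r
+result <- tryCatch(
+  {
+    message("Starting model training ...")            # appears in the job's user logs
+    train_model(args$data_file)
+  },
+  error = function(e) {
+    message("Training failed: ", conditionMessage(e))
+    stop(e)                                           # re-raise so the job is marked as failed
+  }
+)
+```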
+
+## Next steps
+
+* [How to train R models in Azure Machine Learning](how-to-r-train-model.md)
machine-learning How To R Overview R Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-overview-r-capabilities.md
+
+ Title: Bring R workloads into Azure Machine Learning
+
+description: 'Learn how to bring your R workloads into Azure Machine Learning'
+ Last updated : 01/12/2023++++
+ms.devlang: r
++
+# Bring your R workloads
++
+There's no Azure Machine Learning SDK for R. Instead, you'll use either the CLI or a Python control script to run your R scripts.
+
+This article outlines the key scenarios for R that are supported in Azure Machine Learning, along with known limitations.
+
+## Typical R workflow
+
+A typical workflow for using R with Azure Machine Learning:
+
+- [Develop R scripts interactively](how-to-r-interactive-development.md) using Jupyter Notebooks on a compute instance. (While you can also add Posit or RStudio to a compute instance, you can't currently access data assets in the workspace from these applications on the compute instance. So for now, interactive work is best done in a Jupyter notebook.)
+
+ - Read tabular data from a registered data asset or datastore
+ - Install additional R libraries
+ - Save artifacts to the workspace file storage
+
+- [Adapt your script](how-to-r-modify-script-for-production.md) to run as a production job in Azure Machine Learning
+
+ - Remove any code that may require user interaction
+ - Add command line input parameters to the script as necessary
+ - Include and source the `azureml_utils.R` script in the same working directory of the R script to be executed
+ - Use `crate` to package the model
+ - Include the R/MLflow functions in the script to **log** artifacts, models, parameters, and/or tags to the job on MLflow
+
+- [Submit remote asynchronous R jobs](how-to-r-train-model.md) (you submit jobs via the CLI or Python SDK, not R)
+
+ - Build an environment
+ - Log job artifacts, parameters, tags and models
+
+- [Register your model](how-to-r-train-model.md#register-model) using Azure Machine Learning studio
+- [Deploy registered R models](how-to-r-deploy-r-model.md) to managed online endpoints
+ - Use the deployed endpoints for real-time inferencing/scoring
+
+## Known limitations
+
+
+| Limitation | Do this instead |
+|||
+| There's no R _control-plane_ SDK. | Use the Azure CLI or Python control script to submit jobs. |
+| RStudio or Posit Workbench running as a custom application within a container on the compute instance can't access workspace assets or MLflow. | Use Jupyter Notebooks with the R kernel on the compute instance. |
+| Interactive querying of workspace MLflow registry from R isn't supported. | |
+| Nested MLflow runs in R are not supported. | |
+| Parallel job step isn't supported. | Run a script in parallel `n` times using different input parameters. But you'll have to meta-program to generate `n` YAML or CLI calls to do it. |
+| Programmatic model registering/recording from a running job with R isn't supported. | |
+| Zero code deployment (that is, automatic deployment) of an R MLflow model is currently not supported. | Create a custom container with `plumber` for deployment. |
+| Scoring an R model with batch endpoints isn't supported. | |
+| AzureML online deployment yml can only use image URIs directly from the registry for the environment specification; not pre-built environments from the same Dockerfile. | Follow the steps in [How to deploy a registered R model to an online (real time) endpoint](how-to-r-deploy-r-model.md) for the correct way to deploy. |
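+
+As an illustration of the parallel job workaround in the table above, the following hedged R sketch generates and submits `n` CLI calls, one per input value. It assumes your `job.yml` defines a `brand` input that can be overridden with `--set`, and that the Azure CLI with the `ml` extension is installed and signed in:
+
+```r
+brands <- c(1, 2, 3)
+for (b in brands) {
+  cmd <- sprintf(
+    "az ml job create -f job.yml --set inputs.brand=%s --workspace-name $CI_WORKSPACE --resource-group $CI_RESOURCE_GROUP",
+    b
+  )
+  message("Submitting: ", cmd)
+  system(cmd)   # each call submits an independent job
+}
+```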
+++
+## Next steps
+
+Learn more about R in Azure Machine Learning:
+
+* [Interactive R development](how-to-r-interactive-development.md)
+* [Adapt your R script to run in production](how-to-r-modify-script-for-production.md)
+* [How to train R models in Azure Machine Learning](how-to-r-train-model.md)
+* [How to deploy an R model to an online (real time) endpoint](how-to-r-deploy-r-model.md)
machine-learning How To R Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-train-model.md
+
+ Title: Train R models
+
+description: 'Learn how to train your machine learning model with R for use in Azure Machine Learning.'
+ Last updated : 01/12/2023++++
+ms.devlang: r
++
+# Run an R job to train a model
++
+This article explains how to take the R script that you [adapted to run in production](how-to-r-modify-script-for-production.md) and set it up to run as an R job using the AzureML CLI V2.
+
+> [!NOTE]
+> Although the title of this article refers to _training_ a model, you can actually run any kind of R script as long as it meets the requirements listed in the adapting article.
+
+## Prerequisites
+
+- An [Azure Machine Learning workspace](quickstart-create-resources.md).
+- [A registered data asset](how-to-create-data-assets.md) that your training job will use.
+- Azure [CLI and ml extension installed](how-to-configure-cli.md). Or use a [compute instance in your workspace](quickstart-create-resources.md), which has the CLI pre-installed.
+- [A compute cluster](how-to-create-attach-compute-cluster.md) or [compute instance](quickstart-create-resources.md#create-compute-instance) to run your training job.
+- [An R environment](how-to-r-modify-script-for-production.md#create-an-environment) for the compute cluster to use to run the job.
+
+## Create a folder with this structure
+
+Create this folder structure for your project:
+
+```
+📁 r-job-azureml
+├─ src
+│  ├─ azureml_utils.R
+│  ├─ r-source.R
+├─ job.yml
+```
+
+> [!IMPORTANT]
+> All source code goes in the `src` directory.
+
+* The **r-source.R** file is the R script that you adapted to run in production
+* The **azureml_utils.R** file is necessary. The source code is shown [here](how-to-r-modify-script-for-production.md#source-the-azureml_utilsr-helper-script)
+++
+## Prepare the job YAML
+
+AzureML CLI v2 has [different YAML schemas](reference-yaml-overview.md) for different operations. You'll use the [job YAML schema](reference-yaml-job-command.md) to submit a job. This is the **job.yml** file that is a part of this project.
+
+You'll need to gather specific pieces of information to put into the YAML:
+
+- The name of the registered data asset you'll use as the data input (with version): `azureml:<REGISTERED-DATA-ASSET>:<VERSION>`
+- The name of the environment you created (with version): `azureml:<R-ENVIRONMENT-NAME>:<VERSION>`
+- The name of the compute cluster: `azureml:<COMPUTE-CLUSTER-NAME>`
++
+> [!TIP]
+> For AzureML artifacts that require versions (data assets, environments), you can use the shortcut URI `azureml:<AZUREML-ASSET>@latest` to get the latest version of that artifact if you don't need to set a specific version.
++
+### Sample YAML schema to submit a job
+
+Edit your **job.yml** file to contain the following. Make sure to replace values shown `<IN-BRACKETS-AND-CAPS>` and remove the brackets.
+
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+# the Rscript command goes in the command key below. Here you also specify
+# which parameters are passed into the R script and can reference the input
+# keys and values further below
+# Modify any value shown below <IN-BRACKETS-AND-CAPS> (remove the brackets)
+command: >
+  Rscript <NAME-OF-R-SCRIPT>.R
+  --data_file ${{inputs.datafile}}
+  --other_input_parameter ${{inputs.other}}
+code: src # this is the code directory
+inputs:
+ datafile: # this is a registered data asset
+ type: uri_file
+ path: azureml:<REGISTERED-DATA-ASSET>@latest
+ other: 1 # this is a sample parameter, which is the number 1 (as text)
+environment: azureml:<R-ENVIRONMENT-NAME>@latest
+compute: azureml:<COMPUTE-CLUSTER-OR-INSTANCE-NAME>
+experiment_name: <NAME-OF-EXPERIMENT>
+description: <DESCRIPTION>
+```
+
+## Submit the job
+
+For the commands in this section, you may need to know:
+
+- The AzureML workspace name
+- The resource group name where the workspace is
+- The subscription where the workspace is
+
+Find these values from [Azure Machine Learning studio](https://ml.azure.com):
+
+1. Sign in and open your workspace.
+1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
+1. You can copy the values from the section that appears.
++
+To submit the job, run the following commands in a terminal window:
+
+1. Change directories into the `r-job-azureml` folder.
+
+ ```bash
+ cd r-job-azureml
+ ```
+
+1. Sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-compute-instance), use:
+
+ ```azurecli
+ az login --identity
+ ```
+
+ If you're not on the compute instance, omit `--identity` and follow the prompt to open a browser window to authenticate.
+
+1. Make sure you have the most recent versions of the CLI and the `ml` extension:
+
+ ```azurecli
+ az upgrade
+ ```
+
+1. If you have multiple Azure subscriptions, set the active subscription to the one you're using for your workspace. (You can skip this step if you only have access to a single subscription.) Replace `<SUBSCRIPTION-NAME>` with your subscription name. Also remove the brackets `<>`.
+
+ ```azurecli
+ az account set --subscription "<SUBSCRIPTION-NAME>"
+ ```
+
+1. Now use the CLI to submit the job. If you're doing this on a compute instance in your workspace, you can use environment variables for the workspace name and resource group as shown in the following code. If you're not on a compute instance, replace these values with your workspace name and resource group.
+
+ ```azurecli
+ az ml job create -f job.yml --workspace-name $CI_WORKSPACE --resource-group $CI_RESOURCE_GROUP
+ ```
+
+Once you've submitted the job, you can check the status and results in studio:
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+1. Select your workspace if it isn't already loaded.
+1. On the left navigation, select **Jobs**.
+1. Select the **Experiment name** that you used to train your model.
+1. Select the **Display name** of the job to view details and artifacts of the job, including metrics, images, child jobs, outputs, logs, and code used in the job.
++
+## Register model
+
+Finally, once the training job is complete, register your model if you want to deploy it. Start in the studio from the page showing your job details.
+
+1. On the toolbar at the top, select **+ Register model**.
+1. Select **MLflow** for the **Model type**.
+1. Select the folder which contains the model.
+1. Select **Next**.
+1. Supply the name you wish to use for your model. Add **Description**, **Version**, and **Tags** if you wish.
+1. Select **Next**.
+1. Review the information.
+1. Select **Register**.
+
+You'll see a confirmation that the model is registered.
+
+## Next steps
+
+Now that you have a registered model, learn [How to deploy an R model to an online (real time) endpoint](how-to-r-deploy-r-model.md).
managed-grafana How To Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-set-up-private-access.md
+
+ Title: How to set up private access (preview) in Azure Managed Grafana
+description: How to disable public access to your Azure Managed Grafana instance and configure private endpoints.
++++ Last updated : 02/16/2023+++
+# Set up private access (preview) in Azure Managed Grafana
+
+In this guide, you'll learn how to disable public access to your Azure Managed Grafana instance and set up private endpoints. Setting up private endpoints in Azure Managed Grafana increases security by limiting incoming traffic to a specific network only.
+
+> [!IMPORTANT]
+> Private access is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An existing Managed Grafana workspace. [Create one if you haven't already](quickstart-managed-grafana-portal.md).
+
+## Disable public access to a workspace
+
+Public access is enabled by default when you create an Azure Managed Grafana workspace. Disabling public access prevents all traffic from accessing the resource unless you go through a private endpoint.
+
+> [!NOTE]
+> When private access (preview) is enabled, pinning charts using the [*Pin to Grafana*](/azure/azure-monitor/visualize/grafana-plugin#pin-charts-from-the-azure-portal-to-azure-managed-grafana) feature will no longer work, as the Azure portal can't access Azure Managed Grafana instances using a private IP address.
+
+To disable access to an Azure Managed Grafana instance from public networks, follow these steps:
+
+### [Portal](#tab/azure-portal)
+
+1. Navigate to your Azure Managed Grafana workspace in the Azure portal.
+1. In the left-hand menu, under **Settings**, select **Networking (Preview)**.
+1. Under **Public Access**, select **Disabled** to disable public access to the Azure Managed Grafana instance and only allow access through private endpoints. If you already had public access disabled and instead wanted to enable public access to your Azure Managed Grafana instance, you would select **Enabled**.
+1. Select **Save**.
+
+ :::image type="content" source="media/private-endpoints/disable-public-access.png" alt-text="Screenshot of the Azure portal disabling public access.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+In the CLI, run the [az grafana update](/cli/azure/grafana#az-grafana-update) command and replace the placeholders `<grafana-workspace>` and `<resource-group>` with your own information:
+
+```azurecli-interactive
+az grafana update --name <grafana-workspace> --resource-group <resource-group> --public-network-access disabled
+```
+++
+## Create a private endpoint
+
+Once you have disabled public access, set up a [private endpoint](../private-link/private-endpoint-overview.md) with Azure Private Link. Private endpoints allow access to your Azure Managed Grafana instance using a private IP address from a virtual network.
+
+### [Portal](#tab/azure-portal)
+
+1. In **Networking (Preview)**, select the **Private Access** tab and then **Add** to start setting up a new private endpoint.
+
+ :::image type="content" source="media/private-endpoints/add-private-endpoint.png" alt-text="Screenshot of the Azure portal selecting Add button.":::
+
+1. Fill out the **Basics** tab with the following information:
+
+ | Parameter | Description | Example |
+ ||--|-|
+ | Subscription | Select an Azure subscription. Your private endpoint must be in the same subscription as your virtual network. You'll select a virtual network later in this how-to guide. | *MyAzureSubscription* |
+ | Resource group | Select a resource group or create a new one. | *MyResourceGroup* |
+ | Name | Enter a name for the new private endpoint for your Azure Managed Grafana instance. | *MyPrivateEndpoint* |
+ | Network Interface Name | This field is completed automatically. Optionally edit the name of the network interface. | *MyPrivateEndpoint-nic* |
+ | Region | Select a region. Your private endpoint must be in the same region as your virtual network. | *(US) West Central US* |
+
+ :::image type="content" source="media/private-endpoints/create-endpoint-basics.png" alt-text="Screenshot of the Azure portal filling out Basics tab.":::
+
+1. Select **Next : Resource >**. Private Link offers options to create private endpoints for different types of Azure resources. The current Azure Managed Grafana instance is automatically filled in the **Resource** field.
+
+ 1. The resource type **Microsoft.Dashboard/grafana** and the target sub-resource **grafana** indicate that you're creating an endpoint for an Azure Managed Grafana workspace.
+
+ 1. The name of your instance is listed under **Resource**.
+
+ :::image type="content" source="media/private-endpoints/create-endpoint-resource.png" alt-text="Screenshot of the Azure portal filling out Resource tab.":::
+
+1. Select **Next : Virtual Network >**.
+
+ 1. Select an existing **Virtual network** to deploy the private endpoint to. If you don't have a virtual network, [create a virtual network](../private-link/create-private-endpoint-portal.md#create-a-virtual-network-and-bastion-host).
+
+ 1. Select a **Subnet** from the list.
+
+ 1. **Network policy for private endpoints** is disabled by default. Optionally, select **edit** to add a network security group or a route table policy. This change would affect all private endpoints associated to the selected subnet.
+
+ 1. Under **Private IP configuration**, select the option to allocate IP addresses dynamically. For more information, refer to [Private IP addresses](../virtual-network/ip-services/private-ip-addresses.md#allocation-method).
+
+ 1. Optionally, you can select or create an **Application security group**. Application security groups allow you to group virtual machines and define network security policies based on those groups.
+
+ :::image type="content" source="media/private-endpoints/create-endpoint-vnet.png" alt-text="Screenshot of the Azure portal filling out virtual network tab.":::
+
+1. Select **Next : DNS >** to configure a DNS record. If you don't want to make changes to the default settings, you can move forward to the next tab.
+
+ 1. For **Integrate with private DNS zone**, select **Yes** to integrate your private endpoint with a private DNS zone. You may also use your own DNS servers or create DNS records using the host files on your virtual machines.
+
+ 1. A subscription and resource group for your private DNS zone are preselected. You can change them optionally.
+
+ To learn more about DNS configuration, go to [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) and [DNS configuration for Private Endpoints](../private-link/private-endpoint-overview.md#dns-configuration).
+
+ :::image type="content" source="media/private-endpoints/create-endpoint-dns.png" alt-text="Screenshot of the Azure portal filling out DNS tab.":::
+
+1. Select **Next : Tags >** and optionally create tags. Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
+
+1. Select **Next : Review + create >** to review information about your Azure Managed Grafana instance, private endpoint, virtual network and DNS. You can also select **Download a template for automation** to reuse JSON data from this form later.
+
+1. Select **Create**.
+
+Once deployment is complete, you'll get a notification that your endpoint has been created. If it's auto-approved, you can start accessing your instance privately. Otherwise, you will have to wait for approval.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. To set up your private endpoint (preview), you need a virtual network. If you don't have one yet, create a virtual network with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). Replace the placeholder texts `<vnet>`, `<resource-group>`, `<subnet>`, and `<vnet-location>` with the name of your new virtual network, its resource group, a subnet name, and the virtual network location.
+
+ ```azurecli-interactive
+ az network vnet create --name <vnet> --resource-group <resource-group> --subnet-name <subnet> --location <vnet-location>
+ ```
+
+ > [!div class="mx-tdBreakAll"]
+ > | Placeholder | Description | Example |
+ > ||-|-|
+ > | `<vnet>` | Enter a name for your new virtual network. A virtual network enables Azure resources to communicate privately with each other, and with the internet. | `MyVNet` |
+ > | `<resource-group>` | Enter the name of an existing resource group for your virtual network. | `MyResourceGroup` |
+ > | `<subnet>` | Enter a name for your new subnet. A subnet is a network inside a network. This is where the private IP address is assigned. | `MySubnet` |
+ > | `<vnet-location>`| Enter an Azure region. Your virtual network must be in the same region as your private endpoint. | `centralus` |
+
+1. Run the command [az grafana show](/cli/azure/grafana#az-grafana-show) to retrieve the properties of the Azure Managed Grafana workspace for which you want to set up private access. Replace the placeholder `<grafana-workspace>` with the name of your workspace.
+
+ ```azurecli-interactive
+ az grafana show --name <grafana-workspace>
+ ```
+
+ This command generates an output with information about your Azure Managed Grafana workspace. Note down the `id` value. For instance: `/subscriptions/123/resourceGroups/MyResourceGroup/providers/Microsoft.Dashboard/grafana/my-azure-managed-grafana`.
+
+1. Run the command [az network private-endpoint create](/cli/azure/network/private-endpoint#az-network-private-endpoint-create) to create a private endpoint for your Azure Managed Grafana instance. Replace the placeholder texts `<resource-group>`, `<private-endpoint>`, `<vnet>`, `<private-connection-resource-id>`, `<connection-name>`, and `<location>` with your own information.
+
+ ```azurecli-interactive
+ az network private-endpoint create --resource-group <resource-group> --name <private-endpoint> --vnet-name <vnet> --subnet Default --private-connection-resource-id <private-connection-resource-id> --connection-name <connection-name> --location <location> --group-id grafana
+ ```
+
+ > [!div class="mx-tdBreakAll"]
+ > | Placeholder | Description | Example |
+ > ||-|-|
+ > | `<resource-group>` | Enter the name of an existing resource group for your private endpoint. | `MyResourceGroup` |
+ > | `<private-endpoint>` | Enter a name for your new private endpoint. | `MyPrivateEndpoint` |
+ > | `<vnet>` | Enter the name of an existing vnet. | `Myvnet` |
+ > | `<private-connection-resource-id>` | Enter your Azure Managed Grafana workspace's private connection resource ID. This is the ID you saved from the output of the previous step. | `/subscriptions/123/resourceGroups/MyResourceGroup/providers/Microsoft.Dashboard/grafana/my-azure-managed-grafana`|
+ > | `<connection-name>` | Enter a connection name. |`MyConnection` |
+ > | `<location>` | Enter an Azure region. Your private endpoint must be in the same region as your virtual network. |`centralus` |
+++
+## Manage private link connection
+
+### [Portal](#tab/azure-portal)
+
+Go to **Networking (Preview)** > **Private Access** in your Azure Managed Grafana workspace to access the private endpoints linked to your instance.
+
+1. Check the connection state of your private link connection. When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory and you have [sufficient permissions](../private-link/rbac-permissions.md), the connection request will be auto-approved. Otherwise, you must wait for the owner of that resource to approve your connection request. For more information about the connection approval models, go to [Manage Azure Private Endpoints](../private-link/manage-private-endpoint.md#private-endpoint-connections).
+
+1. To manually approve, reject or remove a connection, select the checkbox next to the endpoint you want to edit and select an action item from the top menu.
+
+1. Select the name of the private endpoint to open the private endpoint resource and access more information or to edit the private endpoint.
+
+ :::image type="content" source="media/private-endpoints/create-endpoint-approval.png" alt-text="Screenshot of the Azure portal, manage private endpoint.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+#### Review private endpoint connection details
+
+Run the [az network private-endpoint-connection list](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-list) command to review all private endpoint connections linked to your Azure Managed Grafana workspace and check their connection state. Replace the placeholders `<resource-group>` and `<grafana-workspace>` with the name of the resource group and Azure Managed Grafana workspace.
+
+```azurecli-interactive
+az network private-endpoint-connection list --resource-group <resource-group> --name <grafana-workspace> --type Microsoft.Dashboard/grafana
+```
+
+Optionally, to get the details of a specific private endpoint, use the [az network private-endpoint-connection show](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-show) command. Replace the placeholder texts `<resource-group>` and `<grafana-workspace>` with the name of the resource group and the name of the Azure Managed Grafana workspace.
+
+```azurecli-interactive
+az network private-endpoint-connection show --resource-group <resource-group> --name <grafana-workspace> --type Microsoft.Dashboard/grafana
+```
+
+#### Get connection approval
+
+When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory and you have [sufficient permissions](../private-link/rbac-permissions.md), the connection request will be auto-approved. Otherwise, you must wait for the owner of that resource to approve your connection request.
+
+To approve a private endpoint connection, use the [az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve) command. Replace the placeholder texts `<resource-group>`, `<private-endpoint>`, and `<grafana-workspace>` with the name of the resource group, the name of the private endpoint and the name of the Azure Managed Grafana resource.
+
+```azurecli-interactive
+az network private-endpoint-connection approve --resource-group <resource-group> --name <private-endpoint> --type Microsoft.Dashboard/grafana --resource-name <grafana-workspace>
+```
+
+For more information about the connection approval models, go to [Manage Azure Private Endpoints](../private-link/manage-private-endpoint.md#private-endpoint-connections).
+
+#### Delete a private endpoint connection
+
+To delete a private endpoint connection, use the [az network private-endpoint-connection delete](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-delete) command. Replace the placeholder texts `<resource-group>` and `<private-endpoint>` with the name of the resource group and the name of the private endpoint.
+
+```azurecli-interactive
+az network private-endpoint-connection delete --resource-group <resource-group> --name <private-endpoint>
+```
+
+For more CLI commands, go to [az network private-endpoint-connection](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection)
+++
+If you have issues with a private endpoint, check the following guide: [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
managed-grafana How To Transition Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-transition-domain.md
Previously updated : 09/27/2022 Last updated : 11/27/2022+ # Transition to using the grafana.azure.com domain
Verify that you are set to use the grafana.azure.com domain:
1. In the Azure portal, go to your Azure Managed Grafana resource. 1. At the top of the **Overview** page, in **Essentials**, look for the endpoint of your Grafana workspace. Verify that the URL ends in grafana.azure.com and that clicking the link takes you to your Grafana endpoint.
- :::image type="content" source="media/grafana-endpoint/grafana-domain-view-endpoint.png" alt-text="Screenshot of the Azure platform showing the Grafana endpoint URL.":::
+ :::image type="content" source="media/domain-transition/grafana-domain-view-endpoint.png" alt-text="Screenshot of the Azure platform showing the Grafana endpoint URL.":::
1. If you have any bookmarks or links in your own documentation to your Grafana workspace, make sure that they point to the URL ending in grafana.azure.com listed in the Azure portal. ## Next steps
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
Title: Azure Managed Grafana limitations
description: Learn about current limitations in Azure Managed Grafana. Previously updated : 02/14/2023 Last updated : 02/16/2023
Azure Managed Grafana delivers the native Grafana functionality in the highest p
## Current limitations
-Managed Grafana has the following known limitations:
+Azure Managed Grafana has the following known limitations:
* All users must have accounts in an Azure Active Directory. Microsoft (also known as MSA) and 3rd-party accounts aren't supported. As a workaround, use the default tenant of your Azure subscription with your Grafana instance and add other users as guests.
Managed Grafana has the following known limitations:
* Azure Managed Grafana currently doesn't support the Grafana Role Based Access Control (RBAC) feature and the [RBAC API](https://grafana.com/docs/grafana/latest/developers/http_api/access_control/) is therefore disabled.
-* Private endpoints are currently not available in Grafana.
- * Reporting is currently not supported. ## Next steps
marketplace Commercial Marketplace Lead Management Instructions Marketo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md
Previously updated : 06/08/2022 Last updated : 01/26/2023 # Use Marketo to manage commercial marketplace leads
This article describes how to set up your Marketo CRM system to process sales le
:::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-3.png" alt-text="Screenshot showing Marketo Design Studio New Form creation.":::
-1. Ensure that the fields mappings are setup correctly. Here are the list of fields that the connector needs to be setup on the form.
+1. Ensure that the field mappings are set up correctly. Here's the list of fields that the connector needs on the form.
> [!NOTE] > The field with name "Lead Source" is expected to be configured in the form. It can be mapped to the **SourceSystemName** system field in Marketo or a custom field.
This article describes how to set up your Marketo CRM system to process sales le
1. On the **MarketplaceLeadBackend** tab, select **Embed Code**.
- :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-6.png" alt-text="Screenshot showing the Marketo Embed Code form.":::
+ :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-6.png" alt-text="Screenshot showing the Marketo Embed Code screen.":::
1. Marketo Embed Code displays code similar to the following example.
This article describes how to set up your Marketo CRM system to process sales le
- Munchkin ID = **123-PQR-789** - Form ID = **1179**
- The following is another way to figure out these values:
+ The following method is another way to figure out these values:
- - Get your subscription's Munchkin ID by going to your **Admin** > **Munchkin** menu in the **Munchkin Account ID** field, or from the first part of your Marketo REST API host subdomain: `https://{Munchkin ID}.mktorest.com`.
+ - Get your subscription's Munchkin ID by going to your **Admin** > **Munchkin** menu in the **Munchkin Account ID** field, or from the first part of your Marketo REST API host subdomain (`https://{Munchkin ID}.mktorest.com`).
- Form ID is the ID of the Embed Code form you created in step 7 to route leads from the marketplace.
-## Obtain a API access from your Marketo Admin
+## Obtain API access from your Marketo Admin
1. See this [Marketo article on getting API access](https://aka.ms/marketo-api), specifically a **ClientID** and **Client Secret** needed for the new Marketo configuration. Follow the step-by-step guide to create an API-only user and a Launchpoint connection for the Partner Center lead management service.
-1. Ensure that the **Custom service created** indicates Partner Center as shown below.
+1. Ensure that the **Custom service created** indicates Partner Center.
:::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-new-service.png" alt-text="Screenshot showing Marketo API new service form":::
When you're ready to configure the lead management information for your offer in
1. Under the **Customer leads** section, select **Connect**.
- :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/customer-leads.png" alt-text="Screenshot showing the Partner Center customer leads page.":::
+ :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/customer-leads.png" alt-text="Screenshot showing the Partner Center Customer leads page.":::
1. On the **Connection details** pop-up window, select **Marketo** for the **Lead destination**.
- :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/choose-lead-destination.png" alt-text="Screenshot showing the Partner Center customer lead destination.":::
+ :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/choose-lead-destination.png" alt-text="Screenshot showing the lead destination page.":::
1. Provide the **Munchkin ID**, **Form ID**, **Client ID** and **Client Secret** fields.
When you're ready to configure the lead management information for your offer in
1. Select **OK**. To make sure you've successfully connected to a lead destination, select **Validate**. If successful, you'll have a test lead in the lead destination.-
+
:::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-connection-details.png" alt-text="Screenshot showing the Partner Center connection details.":::
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
Previously updated : 12/15/2022 Last updated : 02/16/2023 # Control egress traffic for your Azure Red Hat OpenShift (ARO) cluster
-This article provides the necessary details that allow you to secure outbound traffic from your Azure Red Hat OpenShift cluster (ARO). With the release of the [Egress Lockdown Feature](./concepts-egress-lockdown.md), all of the required connections for a private cluster will be proxied through the service. There are additional destinations that you may want to allow to use features such as Operator Hub, or Red Hat telemetry. An [example](#private-aro-cluster-setup) will be provided at the end on how to configure these requirements with Azure Firewall. Keep in mind, you can apply this information to Azure Firewall or to any outbound restriction method or appliance.
+This article provides the necessary details that allow you to secure outbound traffic from your Azure Red Hat OpenShift cluster (ARO). With the release of the [Egress Lockdown Feature](./concepts-egress-lockdown.md), all of the required connections for a private cluster are proxied through the service. There are additional destinations that you may want to allow to use features such as Operator Hub or Red Hat telemetry. An [example](#private-aro-cluster-setup) is provided at the end showing how to configure these requirements with Azure Firewall. Keep in mind, you can apply this information to Azure Firewall or to any outbound restriction method or appliance.
## Before you begin
This article assumes that you're creating a new cluster. If you need a basic ARO
This list is based on the list of FQDNs found in the OpenShift docs here: https://docs.openshift.com/container-platform/latest/installing/install_config/configuring-firewall.html
-The following FQDNs are proxied through the service, and will not need additional firewall rules. They are here for informational purposes.
+The following FQDNs are proxied through the service, and won't need additional firewall rules. They're here for informational purposes.
| Destination FQDN | Port | Use | | -- | -- | - |
-| **`arosvc.azurecr.io`** | **HTTPS:443** | Global Internal Private registry for ARO Operators. Required if you do not allow the service-endpoints Microsoft.ContainerRegistry on your subnets. |
-| **`arosvc.$REGION.data.azurecr.io`** | **HTTPS:443** | Regional Internal Private registry for ARO Operators. Required if you do not allow the service-endpoints Microsoft.ContainerRegistry on your subnets. |
-| **`management.azure.com`** | **HTTPS:443** | This is used by the cluster to access Azure APIs. |
-| **`login.microsoftonline.com`** | **HTTPS:443** | This is used by the cluster for authentication to Azure. |
-| **`*.monitor.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
-| **`*.monitoring.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
-| **`*.blob.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
-| **`*.servicebus.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
-| **`*.table.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
+| **`arosvc.azurecr.io`** | **HTTPS:443** | Global Internal Private registry for ARO Operators. Required if you don't allow the service-endpoints Microsoft.ContainerRegistry on your subnets. |
+| **`arosvc.$REGION.data.azurecr.io`** | **HTTPS:443** | Regional Internal Private registry for ARO Operators. Required if you don't allow the service-endpoints Microsoft.ContainerRegistry on your subnets. |
+| **`management.azure.com`** | **HTTPS:443** | Used by the cluster to access Azure APIs. |
+| **`login.microsoftonline.com`** | **HTTPS:443** | Used by the cluster for authentication to Azure. |
+| **`*.monitor.core.windows.net`** | **HTTPS:443** | Used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
+| **`*.monitoring.core.windows.net`** | **HTTPS:443** | Used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
+| **`*.blob.core.windows.net`** | **HTTPS:443** | Used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
+| **`*.servicebus.windows.net`** | **HTTPS:443** | Used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
+| **`*.table.core.windows.net`** | **HTTPS:443** | Used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
> [!NOTE] > For many customers exposing *.blob, *.table and other large address spaces creates a potential data exfiltration concern. You may want to consider using the [OpenShift Egress Firewall](https://docs.openshift.com/container-platform/latest/networking/openshift_sdn/configuring-egress-firewall.html) to protect applications deployed in the cluster from reaching these destinations and use Azure Private Link for specific application needs.
The following FQDNs are proxied through the service, and will not need additiona
### ADDITIONAL CONTAINER IMAGES - **`registry.redhat.io`**: Used to provide images for things such as Operator Hub. -- **`*.quay.io`**: May be used to download images from the Red Hat managed Quay registry. Also a possible fall-back target for ARO required system images. If your firewall cannot use wildcards, you can find the [full list of subdomains in the Red Hat documentation.](https://docs.openshift.com/container-platform/latest/installing/install_config/configuring-firewall.html)
+- **`*.quay.io`**: May be used to download images from the Red Hat managed Quay registry. Also a possible fall-back target for ARO required system images. If your firewall can't use wildcards, you can find the [full list of subdomains in the Red Hat documentation.](https://docs.openshift.com/container-platform/latest/installing/install_config/configuring-firewall.html)
### TELEMETRY
-All this section can be opted out, but before we know how, please check what it is: https://docs.openshift.com/container-platform/4.6/support/remote_health_monitoring/about-remote-health-monitoring.html
+You can opt out of telemetry, but make sure you understand this feature before doing so: https://docs.openshift.com/container-platform/4.6/support/remote_health_monitoring/about-remote-health-monitoring.html
- **`cert-api.access.redhat.com`**: Used for Red Hat telemetry. - **`api.access.redhat.com`**: Used for Red Hat telemetry. - **`infogw.api.openshift.com`**: Used for Red Hat telemetry.
In OpenShift Container Platform, customers can opt out of reporting health and u
- **`*.apps.<cluster_name>.<base_domain>`** (OR EQUIVALENT ARO URL): When allowlisting domains, this is used in your corporate network to reach applications deployed in OpenShift, or to access the OpenShift console. - **`api.openshift.com`**: Used by the cluster for release graph parsing. https://access.redhat.com/labs/ocpupgradegraph/ can be used as an alternative. - **`registry.access.redhat.com`**: Registry access is required in your VDI or laptop environment to download dev images when using the ODO CLI tool. (This CLI tool is an alternative CLI tool for developers who aren't familiar with kubernetes). https://docs.openshift.com/container-platform/4.6/cli_reference/developer_cli_odo/understanding-odo.html
+- **`access.redhat.com`**: Used in conjunction with `registry.access.redhat.com` when pulling images. Without access to this endpoint, image pulls can fail with an error.
## ARO integrations
Keep the saved `pull-secret.txt` file somewhere safe - it will be used in each c
When running the `az aro create` command, you can reference your pull secret using the `--pull-secret @pull-secret.txt` parameter. Execute `az aro create` from the directory where you stored your `pull-secret.txt` file. Otherwise, replace `@pull-secret.txt` with `@<path-to-my-pull-secret-file>`.
-If you are copying your pull secret or referencing it in other scripts, your pull secret should be formatted as a valid JSON string.
+If you're copying your pull secret or referencing it in other scripts, format your pull secret as a valid JSON string.
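For example, one quick way to confirm the file still parses as valid JSON before reusing it in scripts is the `jq` utility (shown here as a hypothetical check; any JSON validator works):

```azurecli
# Sketch: verify the pull secret is valid JSON before referencing it elsewhere.
jq -e . pull-secret.txt > /dev/null && echo "pull-secret.txt is valid JSON"
```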
```azurecli az aro create \
az network route-table route create -g $RESOURCEGROUP --name aro-udr --route-tab
``` ### Add Application Rules for Azure Firewall
-Example rule for telemetry to work. Additional possibilities can be found on this [list](https://docs.openshift.com/container-platform/4.3/installing/install_config/configuring-firewall.html#configuring-firewall_configuring-firewall):
+The following example rule allows telemetry to work. Additional possibilities are listed [here](https://docs.openshift.com/container-platform/4.3/installing/install_config/configuring-firewall.html#configuring-firewall_configuring-firewall):
```azurecli az network firewall application-rule create -g $RESOURCEGROUP -f aro-private \ --collection-name 'ARO' \
partner-solutions New Relic Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-create.md
Use the Azure portal to find the Azure Native New Relic Service application:
| Property | Description | |--|--| | **Subscription** | Select the Azure subscription that you want to use for creating the New Relic resource. You must have owner access.|
- | **Resource group** |Specify whether you want to create a new resource group or use an existing one. A [resource group](../../azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution.|
+ | **Resource group** |Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution.|
| **Resource name** |Specify a name for the New Relic resource. This name will be the friendly name of the New Relic account.| | **Region** |Select the region where the New Relic resource on Azure and the New Relic account will be created.|
Your next step is to configure metrics and logs on the **Logs** tab. When you're
1. To send subscription-level logs to New Relic, select **Subscription activity logs**. If you leave this option cleared, no subscription-level logs will be sent to New Relic.
- These logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). These logs also include updates on service-health events.
+ These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). These logs also include updates on service-health events.
Use the activity log to determine what, who, and when for any write operations (`PUT`, `POST`, `DELETE`). There's a single activity log for each Azure subscription.
-1. To send Azure resource logs to New Relic, select **Azure resource logs** for all supported resource types. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md).
+1. To send Azure resource logs to New Relic, select **Azure resource logs** for all supported resource types. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories).
- These logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+ These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
:::image type="content" source="media/new-relic-create/new-relic-metrics.png" alt-text="Screenshot of the tab for logs in a New Relic resource, with resource logs selected.":::
Your next step is to configure metrics and logs on the **Logs** tab. When you're
- All Azure resources with tags defined in exclude rules don't send logs to New Relic. - If there's a conflict between inclusion and exclusion rules, the exclusion rule applies.
- Azure charges for logs sent to New Relic. For more information, see the [pricing of platform logs](/azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
+ Azure charges for logs sent to New Relic. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
> [!NOTE] > You can collect metrics for virtual machines and app services by installing the New Relic agent after you create the New Relic resource.
partner-solutions New Relic Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-link-to-existing.md
description: Learn how to link to an existing New Relic account.
Previously updated : 01/16/2023 Last updated : 02/16/2023
When you use Azure Native New Relic Service Preview in the Azure portal for link
|Property | Description | ||| | **Subscription** | Select the Azure subscription that you want to use for creating the New Relic resource. This subscription will be linked to the New Relic account for monitoring purposes.|
- | **Resource group** | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution.|
+ | **Resource group** | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution.|
| **Resource name** | Specify a name for the New Relic resource.| | **Region** | Select the Azure region where the New Relic resource should be created.| | **New Relic account** | The Azure portal displays a list of existing accounts that can be linked. Select the desired account from the available options.|
When you use Azure Native New Relic Service Preview in the Azure portal for link
Your next step is to configure metrics and logs on the **Metrics + Logs** tab. When you're linking an existing New Relic account, you can set up automatic log forwarding for two types of logs: -- **Send subscription activity logs**: These logs provide insight into the operations on your resources at the [control plane](/azure-resource-manager/management/control-plane-and-data-plane). The logs also include updates on service-health events.
+- **Send subscription activity logs**: These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). The logs also include updates on service-health events.
Use the activity log to determine what, who, and when for any write operations (`PUT`, `POST`, `DELETE`). There's a single activity log for each Azure subscription. -- **Azure resource logs**: These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+- **Azure resource logs**: These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
:::image type="content" source="media/new-relic-link-to-existing/new-relic-metrics.png" alt-text="Screenshot that shows the tab for metrics and logs, with actions to complete.":::
-1. To send Azure resource logs to New Relic, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor resource log categories](/azure-monitor/essentials/resource-logs-categories).
+1. To send Azure resource logs to New Relic, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor resource log categories](/azure/azure-monitor/essentials/resource-logs-categories).
1. When the checkbox for Azure resource logs is selected, logs are forwarded for all resources by default. To filter the set of Azure resources that are sending logs to New Relic, use inclusion and exclusion rules and set the Azure resource tags:
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
During preview, if in-place major version upgrade pre-check operations fail then
- Servers configured with logical replication slots aren't supported. -- MVU is currently not supported for PgBouncer enabled servers.
+- MVU is currently not supported for servers with PgBouncer, Azure AD authentication, or CMK enabled. Support is planned for general availability (GA).
+ ## How to perform an in-place major version upgrade
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Azure Database for PostgreSQL - Flexible Server currently supports the following
## PostgreSQL version 14
-The current minor release is **14.5**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/14.5/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **14.6**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/14.6/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
## PostgreSQL version 13
-The current minor release is **13.8**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/13.8/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **13.9**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/13.9/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
## PostgreSQL version 12
-The current minor release is **12.12**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/12.12/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **12.13**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/12.13/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 11
-The current minor release is **11.17**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/11.17/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **11.18**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/11.18/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 10 and older
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: February 2023 * Public preview of [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* Support for [extension](concepts-extensions.md) semver with new servers<sup>$</sup>
-* Public Preview of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* Support for [Geo-redundant backup feature](./concepts-backup-restore.md#geo-redundant-backup-and-restore) when using [Disk Encryption with Customer Managed Key (CMK)](./concepts-data-encryption.md#how-data-encryption-with-a-customer-managed-key-work) feature.
+* Support for [extension](concepts-extensions.md) semver with new servers<sup>$</sup>
+* Public Preview of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* Support for [Geo-redundant backup feature](./concepts-backup-restore.md#geo-redundant-backup-and-restore) when using [Disk Encryption with Customer Managed Key (CMK)](./concepts-data-encryption.md#how-data-encryption-with-a-customer-managed-key-work) feature.
+* Support for [minor versions](./concepts-supported-versions.md) 14.6, 13.9, 12.13, 11.18. <sup>$</sup>
+ ## Release: January 2023 * General availability of [Azure Active Directory Support](./concepts-azure-ad-authentication.md) for Azure Database for PostgreSQL - Flexible Server in all Azure Public Regions * General availability of [Customer Managed Key feature](./concepts-data-encryption.md) with Azure Database for PostgreSQL - Flexible Server in all Azure Public Regions
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Title: "Migrate from Single Server to Flexible Server by using the Azure CLI"-
-description: Learn about migrating your Single Server databases to Azure Database for PostgreSQL Flexible Server by using the Azure CLI.
+ Title: "Tutorial: Migrate Azure Database for PostgreSQL - Single Server to Flexible Server using the Azure CLI"
+
+description: "Learn about migrating your Single Server databases to Azure Database for PostgreSQL Flexible Server by using the Azure CLI."
- Previously updated : 05/09/2022+ Last updated : 02/02/2023+
-# Migrate from Single Server to Flexible Server by using the Azure CLI
+# Tutorial: Migrate Azure Database for PostgreSQL - Single Server to Flexible Server by using the Azure CLI
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This article shows you how to use the migration tool in the Azure CLI to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
+You can migrate an instance of Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Flexible Server by using the Azure Command Line Interface (CLI). In this tutorial, we migrate a sample database from an Azure Database for PostgreSQL single server to a flexible server by using the Azure CLI.
>[!NOTE] > The migration tool is in public preview.
+In this tutorial, you learn about:
+
+> [!div class="checklist"]
+>
+> * Prerequisites
+> * Getting started
+> * Migration CLI commands
+> * Monitor the migration
+> * Cancel the migration
+> * Migration best practices
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* Use an existing instance of Azure Database for PostgreSQL - Single Server (the source server)
+* All extensions used on the Single Server (source) must be [allow-listed on the Flexible Server (target)](./concepts-single-to-flexible.md#allow-list-required-extensions)
+
+> [!IMPORTANT]
+> To provide the best migration experience, migration using a burstable instance of Flexible Server as the target is not supported. Use a General Purpose or Memory Optimized instance (4 vCores or higher) as the target Flexible Server for the migration. Once the migration is complete, you can scale back down to a burstable instance if necessary (see the sketch after this list).
+
+* Create the target flexible server. For guided steps, refer to the quickstart [Create an Azure Database for PostgreSQL flexible server using the Portal](../flexible-server/quickstart-create-server-portal.md) or [Create an Azure Database for PostgreSQL flexible server using the CLI](../flexible-server/quickstart-create-server-cli.md)
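As noted in the important callout above, after the migration completes you can scale the target back down to a burstable tier. A minimal sketch, assuming the hypothetical resource group and server names used later in this tutorial:

```azurecli-interactive
# Sketch: scale the target Flexible Server down to a burstable SKU after migration.
az postgres flexible-server update \
  --resource-group my-learning-rg \
  --name myflexibleserver \
  --tier Burstable \
  --sku-name Standard_B2s
```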
+ ## Getting started 1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings. 2. Install the latest Azure CLI for your operating system from the [Azure CLI installation page](/cli/azure/install-azure-cli).
- If the Azure CLI is already installed, check the version by using the `az version` command. The version should be 2.28.0 or later to use the migration CLI commands. If not, [update your Azure CLI version](/cli/azure/update-azure-cli).
+ If the Azure CLI is already installed, check the version by using the `az version` command. The version should be **2.45.0** or later to use the migration CLI commands. If not, [update your Azure CLI version](/cli/azure/update-azure-cli).
3. Run the `az login` command:
This article shows you how to use the migration tool in the Azure CLI to migrate
az login ```
- A browser window opens with the Azure sign-in page. Provide your Azure credentials to do a successful authentication. For other ways to sign with the Azure CLI, see [this article](/cli/azure/authenticate-azure-cli).
-
-4. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#migration-prerequisites). It is very important to complete the prerequisite steps before you initiate a migration using this tool.
+ A browser window opens with the Azure sign-in page. Provide your Azure credentials to complete the authentication. For other ways to sign in with the Azure CLI, see [this article](/cli/azure/authenticate-azure-cli).
## Migration CLI commands The migration tool comes with easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with `az postgres flexible-server migration`.-
+Allow-list all required extensions as shown in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#allow-list-required-extensions). It is important to allow-list the extensions before you initiate a migration using this tool.
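For illustration, allow-listing is done by setting the `azure.extensions` server parameter on the target Flexible Server. The extension names below are hypothetical placeholders for whatever your source databases actually use:

```azurecli-interactive
# Sketch: allow-list the extensions used by the source databases on the target server.
az postgres flexible-server parameter set \
  --resource-group my-learning-rg \
  --server-name myflexibleserver \
  --name azure.extensions \
  --value POSTGIS,PG_TRGM
```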
For help with understanding the options associated with a command and with framing the right syntax, you can use the `help` parameter: ```azurecli-interactive az postgres flexible-server migration --help ```
-That command gives you the following output:
+The above command gives you the following output:
The output lists the supported migration commands, along with their actions. Let's look at these commands in detail.
-### Create a migration
+### Create a migration using the Azure CLI
The `create` command helps in creating a migration from a source server to a target server:
The `create` command helps in creating a migration from a source server to a tar
az postgres flexible-server migration create -- help ```
-That command gives the following result:
+The above command gives you the following result:
-It calls out the expected arguments and has an example syntax for creating a successful migration from the source server to the target server. Here's the CLI command to create a migration:
+It lists the expected arguments and has an example syntax for successfully creating a migration from the source server to the target server. Here's the CLI command to create a new migration:
```azurecli az postgres flexible-server migration create [--subscription] [--resource-group] [--name] [--migration-name]
- [--properties]
+ [--properties]
``` | Parameter | Description |
For example:
az postgres flexible-server migration create --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON" ```
-The `migration-name` argument used in the `create` command will be used in other CLI commands, such as `update`, `delete`, and `show.` In all those commands, it will uniquely identify the migration attempt in the corresponding actions.
+The `migration-name` argument used in the `create` command will be used in other CLI commands, such as `update`, `delete`, and `show`. In all those commands, it uniquely identifies the migration attempt in the corresponding actions.
-The migration tool offers online and offline modes of migration. To know more about the migration modes and their differences, see [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md).
-
-Create a migration between source and target servers by using the migration mode of your choice. The `create` command needs a JSON file to be passed as part of its `properties` argument.
+Finally, the `create` command needs a JSON file to be passed as part of its `properties` argument.
The structure of the JSON is:
The structure of the JSON is:
"properties": { "SourceDBServerResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<src_ rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
-"SourceDBServerFullyQualifiedDomainName":ΓÇ»"fqdn of the source server as per the custom DNS server",
-"TargetDBServerFullyQualifiedDomainName":ΓÇ»"fqdn of the target server as per the custom DNS server",
- "SecretParameters": { "AdminCredentials": { "SourceServerPassword": "<password>", "TargetServerPassword": "<password>"
- },
-"AADApp":
- {
- "ClientId": "<client id>",
- "TenantId": "<tenant id>",
- "AadSecret": "<secret>"
- }
+ }
},
-"MigrationResourceGroup":
- {
- "ResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<temp_rg_name>",
- "SubnetResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<Vnet_name>/subnets/<subnet_name>"
- },
- "DBsToMigrate": [ "<db1>","<db2>" ],
-"SetupLogicalReplicationOnSourceDBIfNeeded":ΓÇ»"true",
-
-"OverwriteDBsInTarget":ΓÇ»"true"
+"OverwriteDBsInTarget":"true"
} } ```
->[!NOTE]
-> Gentle reminder to complete the [prerequisites](./concepts-single-to-flexible.md#migration-prerequisites) before you execute **Create** in case it is not yet done. It is very important to complete the prerequisite steps in before you initiate a migration using this tool.
-Here are the `create` parameters:
+The `create` parameters that go into the JSON file are as follows:
| Parameter | Type | Description |
| - | - | - |
-| `SourceDBServerResourceId` | Required | This is the resource ID of the Single Server source and is mandatory. |
-| `SourceDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution for a virtual network. Provide the FQDN of the Single Server source according to the custom DNS server for this property. |
-| `TargetDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution inside a virtual network. Provide the FQDN of the Flexible Server target according to the custom DNS server. <br> `SourceDBServerFullyQualifiedDomainName` and `TargetDBServerFullyQualifiedDomainName` should be included as a part of the JSON only in the rare scenario of a custom DNS server being used for name resolution instead of Azure-provided DNS. Otherwise, don't include these parameters as a part of the JSON file. |
+| `SourceDBServerResourceId` | Required | This parameter is the resource ID of the Single Server source and is mandatory. |
| `SecretParameters` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target, along with the Azure Active Directory app credentials. These passwords help to authenticate against the source and target servers. They also help in checking proper authorization access to the resources.
-| `MigrationResourceGroup` | Optional | This section consists of two properties: <br><br> `ResourceID` (optional): The migration infrastructure and other network infrastructure components are created to migrate data and schemas from the source to the target. By default, all the components that this tool creates are provisioned under the resource group of the target server. If you want to deploy them under a different resource group, you can assign the resource ID of that resource group to this property. <br><br> `SubnetResourceID` (optional): If your source has public access turned off, or if your target server is deployed inside a virtual network, specify a subnet under which migration infrastructure needs to be created so that it can connect to both source and target servers. |
| `DBsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. |
+| `OverwriteDBsinTarget` | Required | When set to true (default), if the target server happens to have an existing database with the same name as the one you're trying to migrate, the migration tool automatically overwrites the database. |
| `SetupLogicalReplicationOnSourceDBIfNeeded` | Optional | You can enable logical replication on the source server automatically by setting this property to `true`. This change in the server settings requires a server restart with a downtime of two to three minutes. |
-| `OverwriteDBsinTarget` | Optional | If the target server happens to have an existing database with the same name as the one you're trying to migrate, the migration will pause until you acknowledge that overwrites in the target databases are allowed. You can avoid this pause by setting the value of this property to `true`, which gives the migration tool permission to automatically overwrite databases. |
+| `SourceDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution for a virtual network. Provide the FQDN of the Single Server source according to the custom DNS server for this property. |
+| `TargetDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution inside a virtual network. Provide the FQDN of the Flexible Server target according to the custom DNS server. <br> `SourceDBServerFullyQualifiedDomainName` and `TargetDBServerFullyQualifiedDomainName` are included as a part of the JSON only in the rare scenario that a custom DNS server is used for name resolution instead of Azure-provided DNS. Otherwise, don't include these parameters as a part of the JSON file. |
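Putting the structure and parameters above together, here's a minimal sketch of a properties file that uses only the required parameters, followed by a `create` call that references it. All resource names, passwords, and database names are hypothetical placeholders:

```azurecli
# Sketch: write a minimal properties file and create the migration with it.
cat > migrationBody.json <<'EOF'
{
  "properties": {
    "SourceDBServerResourceId": "/subscriptions/<subscriptionid>/resourceGroups/<src_rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
    "SecretParameters": {
      "AdminCredentials": {
        "SourceServerPassword": "<password>",
        "TargetServerPassword": "<password>"
      }
    },
    "DBsToMigrate": [ "<db1>", "<db2>" ],
    "OverwriteDBsInTarget": "true"
  }
}
EOF

az postgres flexible-server migration create \
  --subscription 11111111-1111-1111-1111-111111111111 \
  --resource-group my-learning-rg \
  --name myflexibleserver \
  --migration-name migration1 \
  --properties "migrationBody.json"
```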
-### Choose a migration mode
+Note these important points for the command response:
-The default migration mode for migrations created through CLI commands is *online*. Filling out the preceding properties in your JSON file would create an online migration from your Single Server source to the Flexible Server target.
+- As soon as the `create` command is triggered, the migration moves to the `InProgress` state and the `PerformingPreRequisiteSteps` substate. The migration workflow takes a couple of minutes to deploy the migration infrastructure and set up connections between the source and target.
+- After the `PerformingPreRequisiteSteps` substate is completed, the migration moves to the `Migrating Data` substate, where the cloning/copying of the databases takes place.
+- Each database migrated has its own section with all migration details, such as table count, incremental inserts, deletions, and pending bytes.
+- The time that the `Migrating Data` substate takes to finish depends on the size of databases that are migrated.
+- The migration moves to the `Succeeded` state as soon as the `Migrating Data` substate finishes successfully. If there's a problem at the `Migrating Data` substate, the migration moves into a `Failed` state.
-If you want to migrate in offline mode, you need to add another property (`"TriggerCutover":"true"`) to your JSON file before you initiate the `create` command.
+> [!NOTE]
+> If you haven't already done so, [allow-list the extensions](./concepts-single-to-flexible.md#allow-list-required-extensions) before you execute **Create**. It is important to allow-list the extensions before you initiate a migration using this tool.
-### List migrations
+### List the migration(s)
-The `list` command shows the migration attempts that were made to a Flexible Server target. Here's the CLI command to list migrations:
+The `list` command lists all the migration attempts made to a Flexible Server target:
```azurecli az postgres flexible-server migration list [--subscription]
az postgres flexible-server migration list [--subscription]
[--filter] ```
-The `filter` parameter can take these values:
+The `filter` parameter has two options:
-- `Active`: Lists the current active migration attempts for the target server. It does not include the migrations that have reached a failed, canceled, or succeeded state.-- `All`: Lists all the migration attempts to the target server. This includes both the active and past migrations, regardless of the state.
+- `Active`: Lists the current active migration attempts (in progress) into the target server. It does not include the migrations that have reached a failed, canceled, or succeeded state.
+- `All`: Lists all the migration attempts into the target server. This includes both the active and past migrations, regardless of the state.
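For example, a sketch that lists every migration attempt, reusing the hypothetical names from the earlier `create` example:

```azurecli-interactive
az postgres flexible-server migration list --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --filter All
```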
-For more information about this command, use the `help` parameter:
+For more information about this command, use the `help` parameter:
```azurecli-interactive az postgres flexible-server migration list -- help ```
-### Show details
+## Monitor the migration
-Use the following `list` command to get the details of a specific migration. These details include information on the current state and substate of the migration.
+The `show` command helps you monitor ongoing migrations and gives the current state and substate of the migration.
```azurecli
-az postgres flexible-server migration list [--subscription]
+az postgres flexible-server migration show [--subscription]
[--resource-group] [--name] [--migration-name] ```
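As a concrete sketch, reusing the hypothetical names from the earlier examples:

```azurecli-interactive
az postgres flexible-server migration show --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
```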
-The `migration_name` parameter is the name assigned to the migration during the `create` command. Here's a snapshot of the sample response from the CLI command for showing details:
--
-Note these important points for the command response:
+The `migration_name` parameter is the name you have assigned to the migration during the `create` command. Here's a snapshot of the sample response from the CLI command for showing details:
-- As soon as the `create` command is triggered, the migration moves to the `InProgress` state and the `PerformingPreRequisiteSteps` substate. It takes up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules with source and target servers, and perform a few maintenance tasks. -- After the `PerformingPreRequisiteSteps` substate is completed, the migration moves to the substate of `Migrating Data`, where the dump and restore of the databases take place.-- Each database being migrated has its own section with all migration details, such as table count, incremental inserts, deletions, and pending bytes.-- The time that the `Migrating Data` substate takes to finish depends on the size of databases that are being migrated.-- For offline mode, the migration moves to the `Succeeded` state as soon as the `Migrating Data` substate finishes successfully. If there's a problem at the `Migrating Data` substate, the migration moves into a `Failed` state.-- For online mode, the migration moves to the state of `WaitingForUserAction` and a substate of `WaitingForCutoverTrigger` after the `Migrating Data` state finishes successfully. The next section covers the details of the `WaitingForUserAction` state. For more information about this command, use the `help` parameter:
For more information about this command, use the `help` parameter:
az postgres flexible-server migration show -- help ```
-### Update a migration
-
-As soon as the infrastructure setup is complete, the migration activity will pause. Messages in the response for the CLI command will show details if some prerequisites are missing or if the migration is at a state to perform a cutover. At this point, the migration goes into a state called `WaitingForUserAction`.
-
-You use the `update` command to set values for parameters, which helps the migration move to the next stage in the process. Let's look at each of the substates.
-
-#### WaitingForLogicalReplicationSetupRequestOnSourceDB
-
-If the logical replication is not set at the source server or if it was not included as a part of the JSON file, the migration will wait for logical replication to be enabled at the source. You can enable the logical replication setting manually by changing the replication flag to `Logical` on the portal. This change requires a server restart.
-
-You can also enable the logical replication setting by using the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--initiate-data-migration]
-```
-
-To set logical replication on your source server, pass the value `true` to the `initiate-data-migration` property. For example:
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --initiate-data-migration true"
-```
-
-If you enable it manually, *you still need to issue the preceding `update` command* for the migration to move out of the `WaitingForUserAction` state. The server doesn't need to restart again because that already happened via the portal action.
-
-#### WaitingForTargetDBOverwriteConfirmation
-
-`WaitingForTargetDBOverwriteConfirmation` is the state where migration is waiting for confirmation on target overwrite, because data is already present in the target server for the database that's being migrated. You can enable it by using the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--overwrite-dbs]
-```
-
-To give the migration permissions to overwrite any existing data in the target server, you need to pass the value `true` to the `overwrite-dbs` property. For example:
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --overwrite-dbs true"
-```
-
-#### WaitingForCutoverTrigger
-
-Migration gets to the `WaitingForCutoverTrigger` state when the dump and restore of the databases have finished and the ongoing writes at your Single Server source are being replicated to the Flexible Server target. You should wait for the replication to finish so that the target is in sync with the source.
-
-You can monitor the replication lag by using the response from the `show` command. A metric called **Pending Bytes** is associated with each database that's being migrated. This metric gives you an indication of the difference between the source and target databases in bytes. This number should be nearing zero over time. After the number reaches zero for all the databases, stop any further writes to your Single Server source. Then, validate the data and schema on your Flexible Server target to make sure they match exactly with the source server.
-
-After you complete the preceding steps, you can trigger a cutover by using the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--cutover]
-```
-
-For example:
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --cutover"
-```
-
-After you use the preceding command, use the command for showing details to monitor if the cutover has finished successfully. Upon successful cutover, migration will move to a `Succeeded` state. Update your application to point to the new Flexible Server target.
+The following tables describe the migration states and substates.
-For more information about this command, use the `help` parameter:
+| Migration state | Description |
+| - | - |
+| `InProgress` | The migration infrastructure is set up, or the actual data migration is in progress. |
+| `Canceled` | The migration is canceled or deleted. |
+| `Failed` | The migration has failed. |
+| `Succeeded` | The migration has succeeded and is complete. |
-```azurecli-interactive
- az postgres flexible-server migration update -- help
- ```
+| Migration substate | Description |
+| - | - |
+| `PerformingPreRequisiteSteps` | Infrastructure is set up and is prepped for data migration. |
+| `MigratingData` | Data migration is in progress. |
+| `CompletingMigration` | Migration cutover is in progress. |
+| `Completed` | Cutover was successful, and migration is complete. |
-### Delete or cancel a migration
+## Cancel the migration
-You can delete or cancel any ongoing migration attempts by using the `delete` command. This command stops all migration activities in that task, but it doesn't drop or roll back any changes on your target server. Here's the CLI command to delete a migration:
+You can cancel any ongoing migration attempts by using the `cancel` command. This command stops the particular migration attempt, but it doesn't drop or roll back any changes on your target server. Here's the CLI command to cancel a migration:
```azurecli
-az postgres flexible-server migration delete [--subscription]
+az postgres flexible-server migration update cancel [--subscription]
[--resource-group] [--name] [--migration-name]
az postgres flexible-server migration delete [--subscription]
For example: ```azurecli-interactive
-az postgres flexible-server migration delete --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1"
+az postgres flexible-server migration update cancel --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1"
``` For more information about this command, use the `help` parameter: ```azurecli-interactive
- az postgres flexible-server migration delete -- help
+ az postgres flexible-server migration update cancel -- help
```
-## Monitoring migration
-
-The `create` command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually moving into the `completed` state. The `show` command helps you monitor ongoing migrations, because it gives the current state and substate of the migration.
-
-The following tables describe the migration states and substates.
-
-| Migration state | Description |
-| - | - |
-| `InProgress` | The migration infrastructure is being set up, or the actual data migration is in progress. |
-| `Canceled` | The migration has been canceled or deleted. |
-| `Failed` | The migration has failed. |
-| `Succeeded` | The migration has succeeded and is complete. |
-| `WaitingForUserAction` | Migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
-
-| Migration substate | Description |
-| - | - |
-| `PerformingPreRequisiteSteps` | Infrastructure is being set up and is being prepped for data migration. |
-| `MigratingData` | Data is being migrated. |
-| `CompletingMigration` | Migration cutover is in progress. |
-| `WaitingForLogicalReplicationSetupRequestOnSourceDB` | Waiting for logical replication enablement. You can enable this substate manually or by using the `update` CLI command covered in the next section. |
-| `WaitingForCutoverTrigger` | Migration is ready for cutover. You can start the cutover when ready. |
-| `WaitingForTargetDBOverwriteConfirmation` | Waiting for confirmation on target overwrite. Data is present in the target server. <br> You can enable this substate via the `update` CLI command. |
-| `Completed` | Cutover was successful, and migration is complete. |
-
-## Custom DNS for name resolution
-
-To find out if custom DNS is used for name resolution, go to the virtual network where you deployed your source or target server, and then select **DNS server**. The virtual network should indicate if it's using a custom DNS server or the default Azure-provided DNS server.
+The command gives you the following output:
-## Next steps
+## Migration best practices
-- For a successful end-to-end migration, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md).
+- For a successful end-to-end migration, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#best-practices).
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-server-cli.md
If you have not already created a server, refer to this [quickstart](quickstart-
## Scale compute and storage
-You can scale up your pricing tier, compute, and storage easily using the following command. You can see all the server operation you can perform [az postgres server overview](/cli/azure/mysql/server)
+You can scale up your pricing tier, compute, and storage easily using the following command. You can see all the server operations you can perform in the [az postgres server overview](/cli/azure/postgres/server).
```azurecli-interactive az postgres server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4 --storage-size 6144
private-multi-access-edge-compute-mec Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/overview.md
For more information, see [Azure Private 5G Core](../private-5g-core/private-5g-
### Azure Stack hardware and services **Azure Stack Edge**: Azure Stack Edge offers a portfolio of devices that bring compute, storage, and intelligence to the edge right where data is created. The devices are 1U rack-mountable appliances that come with 1-2 NVIDIA T4 GPUs. Azure IoT Edge allows you to deploy and manage containers from IoT Hub and integrate with Azure IoT solutions on the Azure Stack Edge. The Azure Stack Edge Pro SKU is certified to run Network Functions at the edge. For more information, see [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/).
-**Azure Stack HCI**: Azure Stack HCI is a new hyper-converged infrastructure (HCI) operating system delivered as an Azure service that provides the latest security, performance, and feature updates. Deploy and run Windows and Linux virtual machines (VMs) in your datacenter, or, at the edge using your existing tools and processes. Extend your datacenter to the cloud with Azure Backup, Azure Monitor, and Microsoft Defender for Cloud. For more information, see [Azure Stack HCI](https://azure.microsoft.com/products/azure-stack/hci/).
- ### Application services **Azure IoT Edge Runtime**: Azure IoT Edge Runtime enables cloud workloads to be managed and deployed across edge compute appliances using the same tools and security posture as cloud native workloads. For more information, see [Azure IoT Edge Runtime](/windows/ai/windows-ml-container/iot-edge-runtime).
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [Elastic SAN Owner](#elastic-san-owner) | Allows for full access to all resources under Azure Elastic SAN including changing network security policies to unblock data path access | 80dcbedb-47ef-405d-95bd-188a1b4ac406 | > | [Elastic SAN Reader](#elastic-san-reader) | Allows for control path read access to Azure Elastic SAN | af6a70f8-3c9f-4105-acf1-d719e9fca4ca | > | [Elastic SAN Volume Group Owner](#elastic-san-volume-group-owner) | Allows for full access to a volume group in Azure Elastic SAN including changing network security policies to unblock data path access | a8281131-f312-4f34-8d98-ae12be9f0d23 |
-> | [Reader and Data Access](#reader-and-data-access) | Lets you view everything but will not let you delete or create a storage account or contained resource. It will also allow read/write access to all data contained in a storage account via access to storage account keys. | c12c1c16-33a1-487b-954d-41c89c60f349 |
+> | [Reader and Data Access](#reader-and-data-access) | Lets you view everything but will not let you delete or create a storage account. It will also allow read/write/delete access to all data contained in a storage account via access to storage account keys. | c12c1c16-33a1-487b-954d-41c89c60f349 |
> | [Storage Account Backup Contributor](#storage-account-backup-contributor) | Lets you perform backup and restore operations using Azure Backup on the storage account. | e5e2a7ff-d759-4cd2-bb51-3152d37e2eb1 | > | [Storage Account Contributor](#storage-account-contributor) | Permits management of storage accounts. Provides access to the account key, which can be used to access data via Shared Key authorization. | 17d1049b-9a84-46fb-8f53-869881c3d3ab | > | [Storage Account Key Operator Service Role](#storage-account-key-operator-service-role) | Permits listing and regenerating storage account access keys. | 81a9662b-bebf-436f-a333-f67b29880f12 |
Allows for full access to a volume group in Azure Elastic SAN including changing
### Reader and Data Access
-Lets you view everything but will not let you delete or create a storage account or contained resource. It will also allow read/write access to all data contained in a storage account via access to storage account keys.
+Lets you view everything but will not let you delete or create a storage account. It will also allow read/write/delete access to all data contained in a storage account via access to storage account keys.
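As a usage illustration only (not part of the role definition), here's a sketch of assigning this built-in role at a storage account scope; the assignee and scope values are hypothetical:

```azurecli
# Sketch: assign "Reader and Data Access" to a user at a storage account scope.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader and Data Access" \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
```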
> [!div class="mx-tableFixed"] > | Actions | Description |
Lets you view everything but will not let you delete or create a storage account
"assignableScopes": [ "/" ],
- "description": "Lets you view everything but will not let you delete or create a storage account or contained resource. It will also allow read/write access to all data contained in a storage account via access to storage account keys.",
+ "description": "Lets you view everything but will not let you delete or create a storage account. It will also allow read/write/delete access to all data contained in a storage account via access to storage account keys.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/c12c1c16-33a1-487b-954d-41c89c60f349", "name": "c12c1c16-33a1-487b-954d-41c89c60f349", "permissions": [
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
You can still use Route Server to direct traffic between subnets in different vi
Azure Route Server supports ***NO_ADVERTISE*** BGP Community. If an NVA advertises routes with this community string to the route server, the route server won't advertise these routes to other peers including the ExpressRoute gateway. This feature can help reduce the number of routes to be sent from Azure Route Server to ExpressRoute.
-### Can Azure Route Server provide transit between ExpressRoute and a Point-to-Site (P2S) VPN gateway connection if the Branch-to-Branch setting is enabled?
-
-No, Azure Route Server provides transit only between ExpressRoute and a Site-to-Site (S2S) VPN gateway connections if the Branch-to-Branch setting is enabled.
- ## <a name = "limitations"></a>Route Server Limits Azure Route Server has the following limits (per deployment).
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- February 17, 2023: Add support and Sentinel sections, and a few other minor updates in [RISE with SAP integration](rise-integration.md)
- February 02, 2023: Add new HA provider susChkSrv for [SAP HANA Scale-out HA on SUSE](sap-hana-high-availability-scale-out-hsr-suse.md) and change from SAPHanaSR to SAPHanaSrMultiTarget provider, enabling HANA multi-target replication - January 27, 2023: Mark Azure Active Directory Domain Services as supported AD solution in [SAP workload on Azure virtual machine supported scenarios](planning-supported-configurations.md) after successful testing - December 28, 2022: Update documents [Azure Storage types for SAP workload](./planning-guide-storage.md) and [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) to provide more details on ANF deployment processes to achieve proximity and low latency. Introduction of zonal deployment process of NFS shares on ANF
sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration.md
# Integrating Azure with SAP RISE managed workloads
-For customers with SAP solutions such as RISE with SAP Enterprise Cloud Services (ECS) and SAP S/4HANA Cloud, private edition (PCE) which are deployed on Azure, integrating the SAP managed environment with their own Azure ecosystem and third party applications is of particular importance. The following article explains the concepts utilized and best practices to follow for a secure and performant solution.
+For customers with SAP solutions such as RISE with SAP Enterprise Cloud Services (ECS) and SAP S/4HANA Cloud, private edition (PCE) deployed in Azure, integrating the SAP managed environment with their own Azure ecosystem and third party applications is of particular importance. The following article explains the concepts and best practices to follow for a secure and performant solution.
-RISE with SAP S/4HANA Cloud, private edition and SAP Enterprise Cloud Services are SAP managed services of your SAP landscape, in an Azure subscription owned by SAP. The virtual network (vnet) utilized by these managed systems should fit well in your overall network concept and your available IP address space. Requirements for private IP range for RISE PCE or ECS environments are coming from SAP reference deployments. Customers specify the chosen RFC1918 CIDR IP address range to SAP. To facilitate connectivity between SAP and customers owned Azure subscriptions/vnets, a direct vnet peering can be set up. Another option is the use of a VPN vnet-to-vnet connection.
+## Azure support aspects
+
+RISE with SAP S/4HANA Cloud, private edition and SAP Enterprise Cloud Services are SAP managed services of your SAP landscape, in an Azure subscription owned by SAP. This means all Azure resources of your SAP environment are visible and managed only by SAP. In turn, the customer's own Azure environment contains applications that interact with the SAP systems. Elements such as virtual networks, network security groups, firewalls, routing, and Azure services such as Azure Data Factory run inside the customer subscription and access the SAP managed applications. When engaging with Azure support on Azure topics, only resources owned in your own customer subscriptions are in scope. Contact SAP for issues with any resources operated in SAP's Azure subscriptions for your RISE workload.
+
+As part of your RISE project, document the interface points between on-premises, your own Azure environment, and the SAP workload managed by SAP. This documentation needs to include any network information such as address space, firewall(s) and routing, network file shares, Azure services, DNS, and others. Document the owner of each interface partner and where each resource runs, so that you can access this information quickly in a support situation and determine the best way to obtain support. Contact SAP's support organization for services running in SAP's Azure subscriptions.
> [!IMPORTANT] > For all details about RISE with SAP Enterprise Cloud Services and SAP S/4HANA Cloud private edition, contact your SAP representative. ## Virtual network peering with SAP RISE/ECS
-A vnet peering is the most performant way to connect securely and privately two standalone vnets, utilizing the Microsoft private backbone network. The peered networks appear as one for connectivity purposes, allowing applications to talk to each other. Applications running in different vnets, subscriptions, Azure tenants or regions are enabled to communicate directly. Like network traffic on a single vnet, vnet peering traffic remains on MicrosoftΓÇÖs private network and doesn't traverse the internet.
+A vnet peering is the most performant way to securely and privately connect two standalone vnets, utilizing the Microsoft private backbone network. The peered networks appear as one for connectivity purposes, allowing applications to talk to each other. Applications running in different vnets, subscriptions, Azure tenants or regions can communicate directly. Like network traffic on a single vnet, vnet peering traffic remains on Microsoft's private network and doesn't traverse the internet.
For SAP RISE/ECS deployments, virtual network peering is the preferred way to establish connectivity with the customer's existing Azure environment. Both the SAP vnet and customer vnet(s) are protected with network security groups (NSG), enabling communication on SAP and database ports through the vnet peering. Communication between the peered vnets is secured through these NSGs, limiting communication to the customer's SAP environment. For details and a list of open ports, contact your SAP representative.
-SAP managed workload is preferably deployed in the same [Azure region](https://azure.microsoft.com/global-infrastructure/geographies/) as customerΓÇÖs central infrastructure and applications accessing it. Virtual network peering can be set up within the same region as your SAP managed environment, but also through [global virtual network peering](../../virtual-network/virtual-network-peering-overview.md) between any two Azure regions. With SAP RISE/ECS available in many Azure regions, the region ideally should be matched with workload running in customer vnets due to latency and vnet peering cost considerations. However, some of the scenarios (for example, central S/4HANA deployment for a multi-national, globally presented company) also require to peer networks globally.
+SAP managed workload should run in the same [Azure region](https://azure.microsoft.com/global-infrastructure/geographies/) as the customer's central infrastructure and applications accessing it. Virtual network peering can be set up within the same region as your SAP managed environment, but also through [global virtual network peering](../../virtual-network/virtual-network-peering-overview.md) between any two Azure regions. With SAP RISE/ECS available in many Azure regions, the region should match the region of the workload running in customer vnets due to latency and vnet peering cost considerations. However, some scenarios (for example, a central S/4HANA deployment for a multi-national company with a global presence) also require peering networks globally.
:::image type="complex" source="./media/sap-rise-integration/sap-rise-peering.png" alt-text="Customer peering with SAP RISE/ECS"::: This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects SAP RISE vnet to customer's hub vnet. :::image-end:::
-Since SAP RISE/ECS runs in SAPΓÇÖs Azure tenant and subscriptions, the virtual network peering needs to be set up between [different tenants](../../virtual-network/create-peering-different-subscriptions.md). This can be accomplished by setting up the peering with the SAP provided networkΓÇÖs Azure resource ID and have SAP approve the peering. Add a user from the opposite AAD tenant as a guest user, accept the guest user invitation and follow process documented at [Create a VNet peering - different subscriptions](../../virtual-network/create-peering-different-subscriptions.md). Contact your SAP representative for the exact steps required. Engage the respective team(s) within your organization that deal with network, user administration and architecture to enable this process to be completed swiftly.
+Since SAP RISE/ECS runs in SAP's Azure tenant and subscriptions, set up the virtual network peering between [different tenants](../../virtual-network/create-peering-different-subscriptions.md). You accomplish this by setting up the peering with the Azure resource ID of the SAP-provided network and having SAP approve the peering. Add a user from the opposite Azure AD tenant as a guest user, accept the guest user invitation, and follow the process documented in [Create a vnet peering - different subscriptions](../../virtual-network/create-peering-different-subscriptions.md). Contact your SAP representative for the exact steps required. Engage the respective team(s) within your organization that deal with network, user administration, and architecture to enable this process to be completed swiftly.
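A minimal Azure CLI sketch of the customer-side step follows, assuming placeholder resource names and that you're signed in to the correct tenant with the required guest access; SAP provides the resource ID of their network and creates and approves the corresponding peering toward your vnet from their side.

```azurecli
# Sketch only: resource names and the remote vnet resource ID are placeholders.
# SAP supplies the resource ID of the SAP-managed vnet and approves the peering on the SAP side.
az network vnet peering create \
  --name peer-hub-to-sap-rise \
  --resource-group rg-customer-hub \
  --vnet-name vnet-customer-hub \
  --remote-vnet "/subscriptions/<SAP-subscription-id>/resourceGroups/<SAP-resource-group>/providers/Microsoft.Network/virtualNetworks/<SAP-vnet-name>" \
  --allow-vnet-access \
  --allow-forwarded-traffic
```

The same command applies for global virtual network peering when the SAP vnet is in a different Azure region; only the remote vnet resource ID changes.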
### Connectivity during migration to ECS/RISE
-Migration of your SAP landscape to ECS/RISE is done in several phases over several months or longer. Some of your SAP environments will be migrated and used productively, while other SAP systems are prepared for migration. In most customer projects the biggest and most critical systems are migrated in the middle or at end of the project. This means that you need to consider having ample bandwidth for data migration or database replication, and not impact the network path of your users to the already productive ECS/RISE environments. Already migrated SAP systems also might need to communicate with the SAP landscape still on-premises or at existing service provider.
+Migration of your SAP landscape to ECS/RISE is done in several phases over several months or longer. Some of your SAP environments are already migrated and in productive use, while other SAP systems are being prepared for migration. In most customer projects, the biggest and most critical systems are migrated in the middle or at the end of the project. Consider having ample bandwidth for data migration or database replication, without impacting the network path of your users to the already productive ECS/RISE environments. Already migrated SAP systems also might need to communicate with the SAP landscape still running on-premises or at an existing service provider.
-During your migration planning to ECS/RISE, plan how in each phase SAP systems are reachable for your base and how data transfer to ECS/RISE vnet is routed. This is important if you have consider multiple locations and parties involved, such as existing service provider and data centers with own connection to your corporate network. Make sure no temporary solutions with VPN connections are created without considering how in later phases SAP data gets migrated for the business critical and largest systems.
+During your migration planning to ECS/RISE, plan how in each phase the SAP systems are reachable for your user base and how data transfer to the ECS/RISE vnet is routed. Often multiple locations and parties are involved, such as an existing service provider and data centers with their own connections to your corporate network. Make sure no temporary solutions with VPN connections are created without considering how, in later phases, SAP data gets migrated for the business-critical and largest systems.
-## VPN Vnet-to-Vnet