Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Application Proxy Configure Cookie Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md | Azure Active Directory (Azure AD) has access and session cookies for accessing o | Cookie setting | Default | Description | Recommendations | | -- | -- | -- | -- | | Use HTTP-Only Cookie | **No** | **Yes** allows Application Proxy to include the HTTPOnly flag in HTTP response headers. This flag provides additional security benefits, for example, it prevents client-side scripting (CSS) from copying or modifying the cookies.<br></br><br></br>Before we supported the HTTP-Only setting, Application Proxy encrypted and transmitted cookies over a secured TLS channel to protect against modification. | Use **Yes** because of the additional security benefits.<br></br><br></br>Use **No** for clients or user agents that do require access to the session cookie. For example, use **No** for an RDP or MTSC client that connects to a Remote Desktop Gateway server through Application Proxy.|-| Use Secure Cookie | **No** | **Yes** allows Application Proxy to include the Secure flag in HTTP response headers. Secure Cookies enhances security by transmitting cookies over a TLS secured channel such as HTTPS. This prevents cookies from being observed by unauthorized parties due to the transmission of the cookie in clear text. | Use **Yes** because of the additional security benefits.| +| Use Secure Cookie | **Yes** | **Yes** allows Application Proxy to include the Secure flag in HTTP response headers. Secure Cookies enhances security by transmitting cookies over a TLS secured channel such as HTTPS. This prevents cookies from being observed by unauthorized parties due to the transmission of the cookie in clear text. | Use **Yes** because of the additional security benefits.| | Use Persistent Cookie | **No** | **Yes** allows Application Proxy to set its access cookies to not expire when the web browser is closed. The persistence lasts until the access token expires, or until the user manually deletes the persistent cookies. | Use **No** because of the security risk associated with keeping users authenticated.<br></br><br></br>We suggest only using **Yes** for older applications that can't share cookies between processes. It's better to update your application to handle sharing cookies between processes instead of using persistent cookies. For example, you might need persistent cookies to allow a user to open Office documents in explorer view from a SharePoint site. Without persistent cookies, this operation might fail if the access cookies aren't shared between the browser, the explorer process, and the Office process. | ## SameSite Cookies |
active-directory | All Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/all-reports.md | Title: View a list and description of all system reports available in Permissions Management reports description: View a list and description of all system reports available in Permissions Management. --++ Last updated 02/23/2022-+ # View a list and description of system reports |
active-directory | Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md | Title: Frequently asked questions (FAQs) about Permissions Management description: Frequently asked questions (FAQs) about Permissions Management. --++ Last updated 04/20/2022-+ # Frequently asked questions (FAQs) |
active-directory | How To Add Remove Role Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md | Title: Add and remove roles and tasks for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management description: How to attach and detach permissions for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities |
active-directory | How To Attach Detach Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md | Title: Attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in Permissions Management description: How to attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Attach and detach policies for Amazon Web Services (AWS) identities |
active-directory | How To Audit Trail Results | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md | Title: Generate an on-demand report from a query in the Audit dashboard in Permissions Management description: How to generate an on-demand report from a query in the **Audit** dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Generate an on-demand report from a query |
active-directory | How To Clone Role Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md | Title: Clone a role/policy in the Remediation dashboard in Permissions Management description: How to clone a role/policy in the Just Enough Permissions (JEP) Controller. --++ Last updated 02/23/2022-+ # Clone a role/policy in the Remediation dashboard |
active-directory | How To Create Alert Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md | Title: Create and view activity alerts and alert triggers in Permissions Management description: How to create and view activity alerts and alert triggers in Permissions Management. --++ Last updated 02/23/2022-+ # Create and view activity alerts and alert triggers |
active-directory | How To Create Approve Privilege Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md | Title: Create or approve a request for permissions in the Remediation dashboard in Permissions Management description: How to create or approve a request for permissions in the Remediation dashboard. --++ Last updated 02/23/2022-+ # Create or approve a request for permissions |
active-directory | How To Create Custom Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md | Title: Create a custom query in Permissions Management description: How to create a custom query in the Audit dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Create a custom query |
active-directory | How To Create Group Based Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md | Title: Select group-based permissions settings in Permissions Management with the User management dashboard description: How to select group-based permissions settings in Permissions Management with the User management dashboard. --++ Last updated 02/23/2022-+ # Select group-based permissions settings |
active-directory | How To Create Role Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-role-policy.md | Title: Create a role/policy in the Remediation dashboard in Permissions Management description: How to create a role/policy in the Remediation dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Create a role/policy in the Remediation dashboard |
active-directory | How To Create Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-rule.md | Title: Create a rule in the Autopilot dashboard in Permissions Management description: How to create a rule in the Autopilot dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Create a rule in the Autopilot dashboard |
active-directory | How To Delete Role Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md | Title: Delete a role/policy in the Remediation dashboard in Permissions Management description: How to delete a role/policy in the Just Enough Permissions (JEP) Controller. --++ Last updated 02/23/2022-+ # Delete a role/policy in the Remediation dashboard |
active-directory | How To Modify Role Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md | Title: Modify a role/policy in the Remediation dashboard in Permissions Management description: How to modify a role/policy in the Remediation dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Modify a role/policy in the Remediation dashboard |
active-directory | How To Notifications Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-notifications-rule.md | Title: View notification settings for a rule in the Autopilot dashboard in Permissions Management description: How to view notification settings for a rule in the Autopilot dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # View notification settings for a rule in the Autopilot dashboard |
active-directory | How To Recommendations Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-recommendations-rule.md | Title: Generate, view, and apply rule recommendations in the Autopilot dashboard in Permissions Management description: How to generate, view, and apply rule recommendations in the Autopilot dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Generate, view, and apply rule recommendations in the Autopilot dashboard |
active-directory | How To Revoke Task Readonly Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md | Title: Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management description: How to revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities |
active-directory | How To View Role Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-view-role-policy.md | Title: View information about roles/ policies in the Remediation dashboard in Permissions Management description: How to view and filter information about roles/ policies in the Remediation dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # View information about roles/ policies in the Remediation dashboard |
active-directory | Integration Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/integration-api.md | Title: Set and view configuration settings in Permissions Management description: How to view the Permissions Management API integration settings and create service accounts and roles. --++ Last updated 02/23/2022-+ # Set and view configuration settings |
active-directory | Multi Cloud Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/multi-cloud-glossary.md | Title: Permissions Management glossary description: Permissions Management glossary --++ Last updated 02/23/2022-+ # The Permissions Management glossary |
active-directory | Onboard Add Account After Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md | Title: Add an account /subscription/ project to Permissions Management after onboarding is complete description: How to add an account/ subscription/ project to Permissions Management after onboarding is complete. --++ Last updated 02/23/2022-+ # Add an account/ subscription/ project after onboarding is complete |
active-directory | Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md | Title: Onboard an Amazon Web Services (AWS) account on Permissions Management description: How to onboard an Amazon Web Services (AWS) account on Permissions Management. --++ Last updated 04/20/2022-+ # Onboard an Amazon Web Services (AWS) account |
active-directory | Onboard Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md | Title: Onboard a Microsoft Azure subscription in Permissions Management description: How to onboard a Microsoft Azure subscription on Permissions Management. --++ Last updated 04/20/2022-+ # Onboard a Microsoft Azure subscription |
active-directory | Onboard Enable Controller After Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md | Title: Enable or disable the controller in Permissions Management after onboarding is complete description: How to enable or disable the controller in Permissions Management after onboarding is complete. --++ Last updated 02/23/2022-+ # Enable or disable the controller after onboarding is complete |
active-directory | Onboard Enable Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md | Title: Enable Permissions Management in your organization description: How to enable Permissions Management in your organization. --++ Last updated 04/20/2022-+ # Enable Permissions Management in your organization |
active-directory | Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md | Title: Onboard a Google Cloud Platform (GCP) project in Permissions Management description: How to onboard a Google Cloud Platform (GCP) project on Permissions Management. --++ Last updated 04/20/2022-+ # Onboard a Google Cloud Platform (GCP) project |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md | Title: What's Permissions Management? description: An introduction to Permissions Management. --++ Last updated 04/20/2022-+ # What's Permissions Management? |
active-directory | Product Account Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-explorer.md | Title: View roles and identities that can access account information from an external account description: How to view information about identities that can access accounts from an external account in Permissions Management. -+ -+ Last updated 02/23/2022-+ # View roles and identities that can access account information from an external account |
active-directory | Product Account Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-settings.md | Title: View personal and organization information in Permissions Management description: How to view personal and organization information in the Account settings dashboard in Permissions Management. -+ -+ Last updated 02/23/2022-+ # View personal and organization information |
active-directory | Product Audit Trail | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-audit-trail.md | Title: Filter and query user activity in Permissions Management description: How to filter and query user activity in Permissions Management. --++ Last updated 02/23/2022-+ # Filter and query user activity |
active-directory | Product Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-dashboard.md | Title: View data about the activity in your authorization system in Permissions Management description: How to view data about the activity in your authorization system in the Permissions Management Dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # View data about the activity in your authorization system |
active-directory | Product Data Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-inventory.md | Title: Display an inventory of created resources and licenses for your authorization system description: How to display an inventory of created resources and licenses for your authorization system in Permissions Management. --++ Last updated 02/23/2022-+ # Display an inventory of created resources and licenses for your authorization system |
active-directory | Product Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-sources.md | Title: View and configure settings for data collection from your authorization system in Permissions Management description: How to view and configure settings for collecting data from your authorization system in Permissions Management. --++ Last updated 02/23/2022-+ # View and configure settings for data collection |
active-directory | Product Define Permission Levels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-define-permission-levels.md | Title: Define and manage users, roles, and access levels in Permissions Management description: How to define and manage users, roles, and access levels in Permissions Management User management dashboard. --++ Last updated 02/23/2022-+ # Define and manage users, roles, and access levels |
active-directory | Product Permission Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permission-analytics.md | Title: Create and view permission analytics triggers in Permissions Management description: How to create and view permission analytics triggers in the Permission analytics tab in Permissions Management. --++ Last updated 02/23/2022-+ # Create and view permission analytics triggers |
active-directory | Product Permissions Analytics Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md | Title: Generate and download the Permissions analytics report in Permissions Management description: How to generate and download the Permissions analytics report in Permissions Management. --++ Last updated 02/23/2022-+ # Generate and download the Permissions analytics report |
active-directory | Product Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-reports.md | Title: View system reports in the Reports dashboard in Permissions Management description: How to view system reports in the Reports dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # View system reports in the Reports dashboard |
active-directory | Product Rule Based Anomalies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md | Title: Create and view rule-based anomalies and anomaly triggers in Permissions Management description: How to create and view rule-based anomalies and anomaly triggers in Permissions Management. --++ Last updated 02/23/2022-+ # Create and view rule-based anomaly alerts and anomaly triggers |
active-directory | Product Statistical Anomalies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-statistical-anomalies.md | Title: Create and view statistical anomalies and anomaly triggers in Permissions Management description: How to create and view statistical anomalies and anomaly triggers in the Statistical Anomaly tab in Permissions Management. --++ Last updated 02/23/2022-+ # Create and view statistical anomalies and anomaly triggers |
active-directory | Report Create Custom Report | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-create-custom-report.md | Title: Create, view, and share a custom report in Permissions Management description: How to create, view, and share a custom report in the Permissions Management. --++ Last updated 02/23/2022-+ # Create, view, and share a custom report |
active-directory | Report View System Report | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-view-system-report.md | Title: Generate and view a system report in Permissions Management description: How to generate and view a system report in the Permissions Management. --++ Last updated 02/23/2022-+ # Generate and view a system report |
active-directory | Training Videos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/training-videos.md | Title: Permissions Management training videos description: Permissions Management training videos. --++ Last updated 04/20/2022-+ # Entra Permissions Management training videos |
active-directory | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/troubleshoot.md | Title: Troubleshoot issues with Permissions Management description: Troubleshoot issues with Permissions Management --++ Last updated 02/23/2022-+ # Troubleshoot issues with Permissions Management |
active-directory | Ui Audit Trail | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-audit-trail.md | Title: Use queries to see how users access information in an authorization system in Permissions Management description: How to use queries to see how users access information in an authorization system in Permissions Management. --++ Last updated 02/23/2022-+ # Use queries to see how users access information |
active-directory | Ui Autopilot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-autopilot.md | Title: View rules in the Autopilot dashboard in Permissions Management description: How to view rules in the Autopilot dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # View rules in the Autopilot dashboard |
active-directory | Ui Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md | Title: View key statistics and data about your authorization system in Permissions Management description: How to view statistics and data about your authorization system in the Permissions Management. --++ Last updated 02/23/2022-+ |
active-directory | Ui Remediation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-remediation.md | Title: View existing roles/policies and requests for permission in the Remediation dashboard in Permissions Management description: How to view existing roles/policies and requests for permission in the Remediation dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # View roles/policies and requests for permission in the Remediation dashboard |
active-directory | Ui Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-tasks.md | Title: View information about active and completed tasks in Permissions Management description: How to view information about active and completed tasks in the Activities pane in Permissions Management. --++ Last updated 02/23/2022-+ # View information about active and completed tasks |
active-directory | Ui Triggers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-triggers.md | Title: View information about activity triggers in Permissions Management description: How to view information about activity triggers in the Activity triggers dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # View information about activity triggers |
active-directory | Ui User Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-user-management.md | Title: Manage users and groups with the User management dashboard in Permissions Management description: How to manage users and groups in the User management dashboard in Permissions Management. --++ Last updated 02/23/2022-+ # Manage users and groups with the User management dashboard |
active-directory | Usage Analytics Access Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-access-keys.md | Title: View analytic information about access keys in Permissions Management description: How to view analytic information about access keys in Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about access keys |
active-directory | Usage Analytics Active Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-resources.md | Title: View analytic information about active resources in Permissions Management description: How to view usage analytics about active resources in Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about active resources |
active-directory | Usage Analytics Active Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-tasks.md | Title: View analytic information about active tasks in Permissions Management description: How to view analytic information about active tasks in Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about active tasks |
active-directory | Usage Analytics Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-groups.md | Title: View analytic information about groups in Permissions Management description: How to view analytic information about groups in Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about groups |
active-directory | Usage Analytics Home | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-home.md | Title: View analytic information with the Analytics dashboard in Permissions Management description: How to use the Analytics dashboard in Permissions Management to view details about users, groups, active resources, active tasks, access keys, and serverless functions. --++ Last updated 02/23/2022-+ # View analytic information with the Analytics dashboard |
active-directory | Usage Analytics Serverless Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-serverless-functions.md | Title: View analytic information about serverless functions in Permissions Management description: How to view analytic information about serverless functions in Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about serverless functions |
active-directory | Usage Analytics Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-users.md | Title: View analytic information about users in Permissions Management description: How to view analytic information about users in Permissions Management. --++ Last updated 02/23/2022-+ # View analytic information about users |
active-directory | Block Legacy Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md | Before you can block legacy authentication in your directory, you need to first #### Indicators from Azure AD 1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**.-1. Add the Client App column if it isn't shown by clicking on **Columns** > **Client App**. -1. **Add filters** > **Client App** > select all of the legacy authentication protocols. Select outside the filtering dialog box to apply your selections and close the dialog box. +1. Add the **Client App** column if it isn't shown by clicking on **Columns** > **Client App**. +1. Select **Add filters** > **Client App** > choose all of the legacy authentication protocols and select **Apply**. 1. If you've activated the [new sign-in activity reports preview](../reports-monitoring/concept-all-sign-ins.md), repeat the above steps also on the **User sign-ins (non-interactive)** tab. Filtering will only show you sign-in attempts that were made by legacy authentication protocols. Clicking on each individual sign-in attempt will show you more details. The **Client App** field under the **Basic Info** tab will indicate which legacy authentication protocol was used. |
active-directory | Manage Stale Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md | You have two options to retrieve the value of the activity timestamp: To efficiently clean up stale devices in your environment, you should define a related policy. This policy helps you to ensure that you capture all considerations that are related to stale devices. The following sections provide you with examples for common policy considerations. +> [!CAUTION] +> If your organization uses BitLocker drive encryption, you should ensure that BitLocker recovery keys are either backed up or no longer needed before deleting devices. Failure to do this may cause loss of data. + ### Cleanup account To update a device in Azure AD, you need an account that has one of the following roles assigned: It isn't advisable to immediately delete a device that appears to be stale becau ### MDM-controlled devices -If your device is under control of Intune or any other MDM solution, retire the device in the management system before disabling or deleting it. For more information see the article [Remove devices by using wipe, retire, or manually unenrolling the device](/mem/intune/remote-actions/devices-wipe). +If your device is under control of Intune or any other MDM solution, retire the device in the management system before disabling or deleting it. For more information, see the article [Remove devices by using wipe, retire, or manually unenrolling the device](/mem/intune/remote-actions/devices-wipe). ### System-managed devices |
active-directory | Directory Delete Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md | You can't delete an organization in Azure AD until it passes several checks. Thes * There can be no multifactor authentication providers linked to the organization. * There can be no subscriptions for any Microsoft Online Services such as Microsoft Azure, Microsoft 365, or Azure AD Premium associated with the organization. For example, if a default Azure AD tenant was created for you in Azure, you can't delete this organization if your Azure subscription still relies on it for authentication. You also can't delete a tenant if another user has associated an Azure subscription with it. +> [!NOTE] Microsoft is aware that customers with certain tenant configurations may be unable to successfully delete their Azure AD organization. We are working to address this problem. In the meantime, if needed, you can contact Microsoft support for details about the issue. + ## Delete the organization 1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with an account that is the Global Administrator for your organization. |
active-directory | User Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md | It's possible to convert UserType from Member to Guest and vice-versa by editing Guest users have [default restricted directory permissions](../fundamentals/users-default-permissions.md). They can manage their own profile, change their own password, and retrieve some information about other users, groups, and apps. However, they can't read all directory information. +B2B guest users are not supported in Microsoft Teams shared channels. For access to shared channels, see [B2B direct connect](b2b-direct-connect-overview.md). + There may be cases where you want to give your guest users higher privileges. You can add a guest user to any role and even remove the default guest user restrictions in the directory to give a user the same privileges as members. It's possible to turn off the default limitations so that a guest user in the company directory has the same permissions as a member user. For more information, check out the [Restrict guest access permissions in Azure Active Directory](../enterprise-users/users-restrict-guest-permissions.md) article.  If a guest user accepts your invitation and they subsequently change their email * [What is Azure AD B2B collaboration?](what-is-b2b.md) * [B2B collaboration user tokens](user-token.md)-* [B2B collaboration user claims mapping](claims-mapping.md) +* [B2B collaboration user claims mapping](claims-mapping.md) |
active-directory | Automate Provisioning To Applications Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-introduction.md | + + Title: Automate identity provisioning to applications introduction +description: Learn to design solutions to automatically provision identities in hybrid environments to provide application access. +++++++ Last updated : 09/23/2022+++ - it-pro + - seodec18 + - kr2b-contr-experiment ++++# Introduction ++The article helps architects, Microsoft partners, and IT professionals with information addressing identity [provisioning](https://www.gartner.com/en/information-technology/glossary/user-provisioning) needs in their organizations, or the organizations they're working with. The content focuses on automating user provisioning for access to applications across all systems in your organization. ++Employees in an organization rely on many applications to perform their work. These applications often require IT admins or application owners to provision accounts before an employee can start accessing them. Organizations also need to manage the lifecycle of these accounts and keep them up to date with the latest information and remove accounts when users don't require them anymore. ++The Azure AD provisioning service automates your identity lifecycle and keeps identities in sync across trusted source systems (like HR systems) and applications that users need access to. It enables you to bring users into Azure AD and provision them into the various applications that they require. The provisioning capabilities are foundational building blocks that enable rich governance and lifecycle workflows. For [hybrid](../hybrid/whatis-hybrid-identity.md) scenarios, the Azure AD agent model connects to on-premises or IaaS systems, and includes components such as the Azure AD provisioning agent, Microsoft Identity Manager (MIM), and Azure AD Connect. ++Thousands of organizations are running Azure AD cloud-hosted services, with its hybrid components delivered on-premises, for provisioning scenarios. Microsoft invests in cloud-hosted and on-premises functionality, including MIM and Azure AD Connect sync, to help organizations provision users in their connected systems and applications. This article focuses on how organizations can use Azure AD to address their provisioning needs and make clear which technology is most right for each scenario. ++ ++ Use the following table to find content specific to your scenario. For example, if you want employee and contractor identities management from an HR system to Active Directory Domain Services (AD DS) or Azure Active Directory (Azure AD), follow the link to *Connect identities with your system of record*. 
++| What | From | To | Read | +| - | - | - | - | +| Employees and contractors| HR systems| AD and Azure AD| [Connect identities with your system of record](automate-provisioning-to-applications-solutions.md) | +| Existing AD users and groups| AD| Azure AD| [Synchronize identities between Azure AD and Active Directory](automate-provisioning-to-applications-solutions.md) | +| Users, groups| Azure AD| SaaS and on-prem apps| [Automate provisioning to non-Microsoft applications](../governance/entitlement-management-organization.md) | +| Access rights| Azure AD Identity Governance| SaaS and on-prem apps| [Entitlement management](../governance/entitlement-management-overview.md) | +| Existing users and groups| AD, SaaS and on-prem apps| Identity governance (so I can review them)| [Azure AD Access reviews](../governance/access-reviews-overview.md) | +| Non-employee users (with approval)| Other cloud directories| SaaS and on-prem apps| [Connected organizations](../governance/entitlement-management-organization.md) | +| Users, groups| Azure AD| Managed AD domain| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | ++## Example topologies ++Organizations vary greatly in the applications and infrastructure that they rely on to run their business. Some organizations have all their infrastructure in the cloud, relying solely on SaaS applications, while others have invested deeply in on-premises infrastructure over several years. The three topologies below depict how Microsoft can meet the needs of a cloud-only customer, a hybrid customer with basic provisioning requirements, and a hybrid customer with advanced provisioning requirements. ++### Cloud only ++In this example, the organization has a cloud HR system such as Workday or SuccessFactors, uses Microsoft 365 for collaboration, and SaaS apps such as ServiceNow and Zoom. ++ ++1. The Azure AD provisioning service imports users from the cloud HR system and creates an account in Azure AD, based on business rules that the organization defines. ++1. The user sets up suitable authentication methods, such as the authenticator app, Fast Identity Online 2 (FIDO2)/Windows Hello for Business (WHfB) keys via [Temporary Access Pass](../authentication/howto-authentication-temporary-access-pass.md) and then signs into Teams. This Temporary Access Pass was automatically generated for the user through Azure AD Life Cycle Workflows. ++1. The Azure AD provisioning service creates accounts in the various applications that the user needs, such as ServiceNow and Zoom. The user is able to request the devices they need and start chatting with their teams. ++### Hybrid-basic ++In this example, the organization has a mix of cloud and on-premises infrastructure. In addition to the systems mentioned above, the organization relies on SaaS applications and on-premises applications that are both AD integrated and non-AD integrated. ++ ++1. The Azure AD provisioning service imports the user from Workday and creates an account in AD DS, enabling the user to access AD-integrated applications. ++2. Azure AD Connect Cloud Sync provisions the user into Azure AD, which enables the user to access SharePoint Online and their OneDrive files. ++3. The Azure AD provisioning service detects a new account was created in Azure AD. It then creates accounts in the SaaS and on-premises applications the user needs access to. 
++### Hybrid-advanced ++In this example, the organization has users spread across multiple on-premises HR systems and cloud HR. They have large groups and device synchronization requirements. ++ ++1. MIM imports user information from each HR system. MIM determines which users are needed for those employees in different directories. MIM provisions those identities in AD DS. ++2. Azure AD Connect Sync then synchronizes those users and groups to Azure AD and provides users access to their resources. ++## Next steps ++* [Solutions to automate user provisioning to applications](automate-provisioning-to-applications-solutions.md) |
active-directory | Automate Provisioning To Applications Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-solutions.md | + + Title: Solutions to automate identity provisioning to applications +description: Learn to design solutions to automatically provision identities based on various scenarios. +++++++ Last updated : 09/23/2022+++ - it-pro + - seodec18 + - kr2b-contr-experiment ++++# Solutions ++This article presents solutions that enable you to: ++* Connect identities with your system of record +* Synchronize identities between Active Directory Domain Services (AD DS) and Azure Active Directory (Azure AD) +* Automate provisioning of users into non-Microsoft applications ++## Connect identities with your system of record ++In most designs, the human resources (HR) system is the source-of-authority for newly created digital identities. The HR system is often the starting point for many provisioning processes. For example, if a new user joins a company, they have a record in the HR system. That user likely needs an account to access Microsoft 365 services such as Teams and SharePoint, or non-Microsoft applications. ++### Synchronizing identities with cloud HR ++The Azure AD provisioning service enables organizations to [bring identities from popular HR systems](../app-provisioning/what-is-hr-driven-provisioning.md) (examples: [Workday](../saas-apps/workday-inbound-tutorial.md) and [SuccessFactors](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)), into Azure AD directly, or into AD DS. This provisioning capability enables new hires to access the resources they need from the first day of work. ++### On-premises HR + joining multiple data sources ++To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](https://learn.microsoft.com/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms. ++MIM offers [rule extension](/previous-versions/windows/desktop/forefront-2010/ms698810(v=vs.100)?redirectedfrom=MSDN) and [workflow capabilities](https://microsoft.github.io/MIMWAL/) features for advanced scenarios requiring data transformation and consolidation from multiple sources. These connectors, rule extensions, and workflow capabilities enable organizations to aggregate user data in the MIM metaverse to form a single identity for each user. The identity can be [provisioned into downstream systems](/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms) such as AD DS. ++ ++## Synchronize identities between Active Directory Domain Services (AD DS) and Azure AD ++As customers move applications to the cloud and integrate with Azure AD, users often need accounts in both Azure AD and AD to access the applications for their work. Here are five common scenarios in which objects need to be synchronized between AD and Azure AD. ++The scenarios are divided by the direction of synchronization needed, and are listed one through five. Use the table following the scenarios to determine what technical solution provides the synchronization. ++Use the numbered sections in the next two sections to cross-reference the following table. ++**Synchronize identities from AD into Azure AD** ++1. 
For users in AD that need access to Office 365 or other applications that are connected to Azure AD, Azure AD Connect cloud sync is the first solution to explore. It provides a lightweight solution to create users in Azure AD, manage password resets, and synchronize groups. Configuration and management are primarily done in the cloud, minimizing your on-premises footprint. It provides high-availability and automatic failover, ensuring password resets and synchronization continue, even if there's an issue with on-premises servers. ++1. For complex, large-scale AD to Azure AD sync needs such as synchronizing groups with over 50,000 members and device sync, customers can use Azure AD Connect sync to meet their needs. ++**Synchronize identities from Azure AD into AD** ++As customers transition identity management to the cloud, more users and groups are created directly in Azure AD. However, they still need a presence on-premises in AD DS to access various resources. ++3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to on-premises Windows-Integrated Authentication or Kerberos-based applications. ++1. When a group is created in Azure AD, it can be automatically synchronized to AD DS using [Azure AD Connect sync](../hybrid/how-to-connect-group-writeback-v2.md). ++1. When users need access to cloud apps that still rely on legacy access protocols (for example, LDAP and Kerberos/NTLM), [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) synchronizes identities between Azure AD and a managed AD domain. ++|No.| What | From | To | Technology | +| - | - | - | - | - | +| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](https://learn.microsoft.com/azure/active-directory/cloud-sync/what-is-cloud-sync) | +| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](https://learn.microsoft.com/azure/active-directory/hybrid/whatis-azure-ad-connect) | +| 3 |Groups| Azure AD| AD DS| [Azure AD Connect Sync](../hybrid/how-to-connect-group-writeback-v2.md) | +| 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) | +| 5 |Users, groups| Azure AD| Managed AD| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | ++The table depicts common scenarios and the recommended technology. ++## Automate provisioning users into non-Microsoft applications ++After identities are in Azure AD through HR-provisioning or Azure AD Connect Cloud Sync / Azure AD Connect Sync, the employee can use the identity to access Teams, SharePoint, and Microsoft 365 applications. However, employees still need access to many non-Microsoft applications to perform their work. ++ ++### Automate provisioning to apps and clouds that support the SCIM standard ++Azure AD supports the System for Cross-Domain Identity Management ([SCIM 2.0](https://aka.ms/scimoverview)) standard and integrates with hundreds of popular SaaS applications such as [Dropbox](../saas-apps/dropboxforbusiness-provisioning-tutorial.md) and [Atlassian](../saas-apps/atlassian-cloud-provisioning-tutorial.md) or other clouds such as [Amazon Web Services (AWS)](../saas-apps/aws-single-sign-on-provisioning-tutorial.md), [Google Cloud](../saas-apps/g-suite-provisioning-tutorial.md). 
Application developers can use the System for Cross-Domain Identity Management (SCIM) user management API to automate provisioning users and groups between Azure AD and your application. ++ ++In addition to the pre-integrated gallery applications, Azure AD supports provisioning to SCIM enabled line of business applications, whether hosted [on-premises](../app-provisioning/on-premises-scim-provisioning.md) or in the cloud. The Azure AD provisioning service creates users and groups in these applications, and manages updates (such as when a user is promoted or leaves the company). ++[Learn more about provisioning to SCIM enabled applications](../app-provisioning/use-scim-to-provision-users-and-groups.md) ++### Automate provisioning to SQL and LDAP based applications ++ Many applications don't support the SCIM standard, and customers have historically used connectors developed for MIM to connect to them. The Azure AD provisioning service supports reusing connectors developed for MIM and provisioning users into applications that rely on an LDAP user store or a SQL database. ++[Learn more about on-premises application provisioning](../app-provisioning/user-provisioning.md) ++### Use integrations developed by partners ++Many applications may not yet support SCIM or rely on SQL / LDAP databases. Microsoft partners have developed SCIM gateways that allow you to synchronize users between Azure AD and various systems such as mainframes, HR systems, and legacy databases. In the image below, the SCIM Gateways are built and managed by partners. ++ ++[Learn more about partner driven integrations](../app-provisioning/partner-driven-integrations.md) ++### Manage local app passwords ++Many applications have a local authentication store and a UI that only checks the user's supplied credentials against that store. As a result, these applications can't support Multi Factor Authentication (MFA) through Azure AD and pose a security risk. Microsoft recommends enabling single sign-on and MFA for all your applications. Based on our studies, your account is more than 99.9% less likely to be compromised if you [use MFA](https://aka.ms/securitysteps). However, in cases where the application can't externalize authentication, customers can use MIM to sync password changes to these applications. ++ ++[Learn more about the MIM password change notification service](/microsoft-identity-manager/infrastructure/mim2016-password-management) ++### Define and provision access for a user based on organizational data ++MIM enables you to import organizational data such as job codes and locations. That information can then be used to automatically set up access rights for that user. ++ ++### Automate common business workflows ++After users are provisioned into Azure AD, use Lifecycle Workflows (LCW) to automate appropriate actions at key moments in a user's lifecycle such as joiner, mover, and leaver. These custom workflows can be triggered by Azure AD LCW automatically, or on demand to enable or disable accounts, generate Temporary Access Passes, update Teams and/or group membership, send automated emails, and trigger a Logic App. This can help organizations ensure: ++* **Joiner**: When a user joins the organization, they're ready to go on day one. They have the correct access to the information and applications they need. They have the required hardware necessary to do their job. 
++* **Leaver**: When users leave the company for various reasons (termination, separation, leave of absence or retirement), their access is revoked in a timely manner. ++[Learn more about Azure AD Lifecycle Workflows](https://learn.microsoft.com/azure/active-directory/governance/what-are-lifecycle-workflows) ++> [!Note] +> For scenarios not covered by LCW, customers can leverage the extensibility of [Logic Applications](../..//logic-apps/logic-apps-overview.md). ++### Reconcile changes made directly in the target system ++Organizations often need a complete audit trail of which users have access to applications containing data subject to regulation. To provide an audit trail, any access provided to a user directly must be traceable through the system of record. MIM provides the [reconciliation capabilities](/microsoft-identity-manager/mim-how-provision-users-adds) to detect changes made directly in a target system and roll back the changes. In addition to detecting changes in target applications, MIM can import identities from third party applications to Azure AD. These applications often augment the set of user records that originated in the HR system. ++### Next steps ++1. Automate provisioning with any of your applications that are in the [Azure AD app gallery](../saas-apps/tutorial-list.md), support [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md), [SQL](../app-provisioning/on-premises-sql-connector-configure.md), or [LDAP](../app-provisioning/on-premises-ldap-connector-configure.md). +2. Evaluate [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md) for synchronization between AD DS and Azure AD +3. Use the [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for complex provisioning scenarios |
active-directory | Custom Security Attributes Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-overview.md | Currently, you can add custom security attributes for the following Azure AD obj - Azure AD enterprise applications (service principals) - Managed identities for Azure resources -## How do custom security attributes compare with directory schema extensions? +## How do custom security attributes compare with directory extensions? -Here are some ways that custom security attributes compare with [directory schema extensions](../develop/active-directory-schema-extensions.md): +Here are some ways that custom security attributes compare with [directory extensions](../develop/active-directory-schema-extensions.md): -- Directory schema extensions cannot be used for authorization scenarios and attributes because the access control for the extension attributes is tied to the Azure AD object. Custom security attributes can be used for authorization and attributes needing access control because the custom security attributes can be managed and protected through separate permissions.-- Directory schema extensions are tied to an application and share the lifecycle of an application. Custom security attributes are tenant wide and not tied to an application.-- Directory schema extensions support assigning a single value to an attribute. Custom security attributes support assigning multiple values to an attribute.+- Directory extensions cannot be used for authorization scenarios and attributes because the access control for the extension attributes is tied to the Azure AD object. Custom security attributes can be used for authorization and attributes needing access control because the custom security attributes can be managed and protected through separate permissions. +- Directory extensions are tied to an application and share the lifecycle of an application. Custom security attributes are tenant wide and not tied to an application. +- Directory extensions support assigning a single value to an attribute. Custom security attributes support assigning multiple values to an attribute. ## Steps to use custom security attributes Azure AD provides built-in roles to work with custom security attributes. The At > [!IMPORTANT] > By default, [Global Administrator](../roles/permissions-reference.md#global-administrator) and other administrator roles do not have permissions to read, define, or assign custom security attributes. -## Graph Explorer +## Microsoft Graph APIs + +You can manage custom security attributes programmatically using Microsoft Graph APIs. For more information, see [Overview of custom security attributes using the Microsoft Graph API](/graph/api/resources/custom-security-attributes-overview). -If you use the Microsoft Graph API, you can use [Graph Explorer](/graph/graph-explorer/graph-explorer-overview) to more easily try the Microsoft Graph APIs for custom security attributes. For more information, see [Overview of custom security attributes using the Microsoft Graph API](/graph/api/resources/custom-security-attributes-overview). +You can use an API client such as [Graph Explorer](/graph/graph-explorer/graph-explorer-overview) or Postman to more easily try the Microsoft Graph APIs for custom security attributes.  |
active-directory | Road To The Cloud Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md | Many companies that migrate from Active Directory to Azure AD start with an envi [](media/road-to-cloud-posture/road-to-the-cloud-start.png#lightbox) -Microsoft has modeled five states of transformation that commonly align with the business goals of customers. As the goals of customers mature, it's typical for them to shift from one state to the next at a pace that suits their resources and culture. This approach closely follows [Active Directory in Transition: Gartner Survey Results and Analysis](https://www.gartner.com/en/documents/4006741). +Microsoft has modeled five states of transformation that commonly align with the business goals of customers. As the goals of customers mature, it's typical for them to shift from one state to the next at a pace that suits their resources and culture. The five states have exit criteria to help you determine where your environment resides today. Some projects, such as application migration, span all five states. Other projects span a single state. |
active-directory | How To Use Vm Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md | The managed identities endpoint signals errors via the status code field of the | Status Code | Error Reason | How To Handle | | -- | | - | | 404 Not found. | IMDS endpoint is updating. | Retry with Exponential Backoff. See guidance below. |+| 410 Gone. | IMDS is going through updates. | Retry after waiting; IMDS will be available again within a maximum of 70 seconds. | | 429 Too many requests. | IMDS Throttle limit reached. | Retry with Exponential Backoff. See guidance below. | | 4xx Error in request. | One or more of the request parameters was incorrect. | Don't retry. Examine the error details for more information. 4xx errors are design-time errors.| | 5xx Transient error from service. | The managed identities for Azure resources subsystem or Azure Active Directory returned a transient error. | It's safe to retry after waiting for at least 1 second. If you retry too quickly or too often, IMDS and/or Azure AD may return a rate limit error (429).| This section documents the possible error responses. A "200 OK" status is a succ ## Retry guidance -It's recommended to retry if you receive a 404, 429, or 5xx error code (see [Error handling](#error-handling) above). +It's recommended to retry if you receive a 404, 429, or 5xx error code (see [Error handling](#error-handling) above). If you receive a 410 error, it indicates that IMDS is going through updates and will be available again in a maximum of 70 seconds. Throttling limits apply to the number of calls made to the IMDS endpoint. When the throttling threshold is exceeded, the IMDS endpoint limits any further requests while the throttle is in effect. During this period, the IMDS endpoint returns the HTTP status code 429 ("Too many requests"), and the requests fail. |
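The following Python sketch applies the retry guidance above to the IMDS managed identity token endpoint: 404, 429, and 5xx responses are retried with exponential backoff, a 410 waits out the documented 70-second update window, and other 4xx responses fail immediately. The resource URI and attempt count are illustrative choices, not prescribed values.

```python
import time
import requests

IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"
RESOURCE = "https://management.azure.com/"  # Illustrative target resource.

def get_managed_identity_token(max_attempts: int = 5) -> dict:
    """Request a token from IMDS, retrying transient errors with exponential backoff."""
    delay = 1
    for _ in range(max_attempts):
        response = requests.get(
            IMDS_TOKEN_URL,
            params={"api-version": "2018-02-01", "resource": RESOURCE},
            headers={"Metadata": "true"},
            timeout=10,
        )
        if response.status_code == 200:
            return response.json()
        if response.status_code == 410:
            # IMDS is updating; it should be available again within 70 seconds.
            time.sleep(70)
        elif response.status_code in (404, 429) or response.status_code >= 500:
            # Transient errors: retry with exponential backoff.
            time.sleep(delay)
            delay *= 2
        else:
            # Other 4xx errors are design-time errors; don't retry.
            response.raise_for_status()
    raise RuntimeError("IMDS token request failed after retries")
```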
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | This article lists the Azure AD built-in roles you can assign to allow managemen > | [Partner Tier1 Support](#partner-tier1-support) | Do not use - not intended for general use. | 4ba39ca4-527c-499a-b93d-d9b492c50246 | > | [Partner Tier2 Support](#partner-tier2-support) | Do not use - not intended for general use. | e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8 | > | [Password Administrator](#password-administrator) | Can reset passwords for non-administrators and Password Administrators. | 966707d0-3269-4727-9be2-8c3a10f19b9d |+> | [Permissions Management Administrator](#permissions-management-administrator) | Can manage all aspects of Permissions Management. | af78dc32-cf4d-46f9-ba4e-4428526346b5 | > | [Power BI Administrator](#power-bi-administrator) | Can manage all aspects of the Power BI product. | a9ea8996-122f-4c74-9520-8edcd192826c | > | [Power Platform Administrator](#power-platform-administrator) | Can create and manage all aspects of Microsoft Dynamics 365, Power Apps and Power Automate. | 11648597-926c-4cf3-9c36-bcebb0ba8dcc | > | [Printer Administrator](#printer-administrator) | Can manage all aspects of printers and printer connectors. | 644ef478-e28f-4e28-b9dc-3fdde9aa0b1f | Users with this role can't change the credentials or reset MFA for members and o > | microsoft.directory/users/password/update | Reset passwords for all users | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | +## Permissions Management Administrator ++Assign the Permissions Management Administrator role to users who need to do the following tasks: ++- Manage all aspects of Entra Permissions Management, when the service is present ++Learn more about Permissions Management roles and policies at [View information about roles/policies](../cloud-infrastructure-entitlement-management/how-to-view-role-policy.md). ++> [!div class="mx-tableFixed"] +> | Actions | Description | +> | | | +> | microsoft.permissionsManagement/allEntities/allProperties/allTasks | Manage all aspects of Entra Permissions Management | + ## Power BI Administrator Users with this role have global permissions within Microsoft Power BI, when the service is present, as well as the ability to manage support tickets and monitor service health. More information at [Understanding the Power BI Administrator role](/power-bi/service-admin-role). |
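As an illustrative sketch (not part of the original article), the new role can also be assigned programmatically through the Microsoft Graph directory `roleAssignments` API, using the role definition ID listed in the table above. The token acquisition is omitted, and the permission named in the comment (`RoleManagement.ReadWrite.Directory`) is an assumption to verify against the Graph role management documentation.

```python
import requests

# Placeholder token; it is assumed to carry RoleManagement.ReadWrite.Directory.
ACCESS_TOKEN = "<access-token>"
ROLE_ASSIGNMENTS_URL = "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments"

# Role definition ID for Permissions Management Administrator from the table above.
PERMISSIONS_MGMT_ADMIN_ROLE_ID = "af78dc32-cf4d-46f9-ba4e-4428526346b5"

def assign_permissions_management_admin(user_object_id: str) -> dict:
    """Assign the Permissions Management Administrator role to a user, tenant-wide."""
    body = {
        "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
        "roleDefinitionId": PERMISSIONS_MGMT_ADMIN_ROLE_ID,
        "principalId": user_object_id,
        "directoryScopeId": "/",  # "/" scopes the assignment to the whole tenant.
    }
    response = requests.post(
        ROLE_ASSIGNMENTS_URL,
        json=body,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```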
active-directory | Broker Groupe Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/broker-groupe-tutorial.md | + + Title: 'Tutorial: Azure AD SSO integration with Broker groupe Achat Solutions' +description: Learn how to configure single sign-on between Azure Active Directory and Broker groupe Achat Solutions. ++++++++ Last updated : 09/08/2022+++++# Tutorial: Azure AD SSO integration with Broker groupe Achat Solutions ++In this tutorial, you'll learn how to integrate Broker groupe Achat Solutions with Azure Active Directory (Azure AD). When you integrate Broker groupe Achat Solutions with Azure AD, you can: ++* Control in Azure AD who has access to Broker groupe Achat Solutions. +* Enable your users to be automatically signed-in to Broker groupe Achat Solutions with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++## Prerequisites ++To get started, you need the following items: ++* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Broker groupe Achat Solutions single sign-on (SSO) enabled subscription. +* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD. +For more information, see [Azure built-in roles](../roles/permissions-reference.md). ++## Scenario description ++In this tutorial, you configure and test Azure AD SSO in a test environment. ++* Broker groupe Achat Solutions supports **SP** initiated SSO. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Add Broker groupe Achat Solutions from the gallery ++To configure the integration of Broker groupe Achat Solutions into Azure AD, you need to add Broker groupe Achat Solutions from the gallery to your list of managed SaaS apps. ++1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. +1. On the left navigation pane, select the **Azure Active Directory** service. +1. Navigate to **Enterprise Applications** and then select **All Applications**. +1. To add new application, select **New application**. +1. In the **Add from the gallery** section, type **Broker groupe Achat Solutions** in the search box. +1. Select **Broker groupe Achat Solutions** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about Office 365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true). ++## Configure and test Azure AD SSO for Broker groupe Achat Solutions ++Configure and test Azure AD SSO with Broker groupe Achat Solutions using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Broker groupe Achat Solutions. ++To configure and test Azure AD SSO with Broker groupe Achat Solutions, perform the following steps: ++1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. + 1. 
**[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. + 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. +1. **[Configure Broker groupe Achat Solutions SSO](#configure-broker-groupe-achat-solutions-sso)** - to configure the single sign-on settings on application side. + 1. **[Create Broker groupe Achat Solutions test user](#create-broker-groupe-achat-solutions-test-user)** - to have a counterpart of B.Simon in Broker groupe Achat Solutions that is linked to the Azure AD representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Azure AD SSO ++Follow these steps to enable Azure AD SSO in the Azure portal. ++1. In the Azure portal, on the **Broker groupe Achat Solutions** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: + + a. In the **Reply URL** text box, type the URL: + `https://id.awsolutions.fr/auth/realms/awsolutions` ++ b. In the **Sign-on URL** text box, type a URL using the following pattern: + `https://app.marcoweb.fr/Marco?idp_hint=<INSTANCENAME>` + + > [!NOTE] + > This value is not real. Update this value with the actual Sign-on URL. Contact [Broker groupe Achat Solutions Client support team](mailto:devops@achatsolutions.fr) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ++  ++### Create an Azure AD test user ++In this section, you'll create a test user in the Azure portal called B.Simon. ++1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. +1. Select **New user** at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Name** field, enter `B.Simon`. + 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Click **Create**. ++### Assign the Azure AD test user ++In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Broker groupe Achat Solutions. ++1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. +1. In the applications list, select **Broker groupe Achat Solutions**. +1. In the app's overview page, find the **Manage** section and select **Users and groups**. +1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. +1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. +1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. +1. In the **Add Assignment** dialog, click the **Assign** button. 
++## Configure Broker groupe Achat Solutions SSO ++To configure single sign-on on **Broker groupe Achat Solutions** side, you need to send the **App Federation Metadata Url** to [Broker groupe Achat Solutions support team](mailto:devops@achatsolutions.fr). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Broker groupe Achat Solutions test user ++In this section, you create a user called Britta Simon at Broker groupe Achat Solutions. Work with [Broker groupe Achat Solutions support team](mailto:devops@achatsolutions.fr) to add the users in the Broker groupe Achat Solutions platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to Broker groupe Achat Solutions Sign-on URL where you can initiate the login flow. ++* Go to Broker groupe Achat Solutions Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the Broker groupe Achat Solutions tile in the My Apps, this will redirect to Broker groupe Achat Solutions Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510). ++## Next steps ++Once you configure Broker groupe Achat Solutions you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
active-directory | Confluencemicrosoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md | Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Confluence SAML SSO by Microsoft | Microsoft Docs' + Title: 'Tutorial: Azure AD SSO integration with Confluence SAML SSO by Microsoft' description: Learn how to configure single sign-on between Azure Active Directory and Confluence SAML SSO by Microsoft. -+ Previously updated : 05/07/2021- Last updated : 09/23/2022+ -# Tutorial: Azure Active Directory single sign-on (SSO) integration with Confluence SAML SSO by Microsoft +# Tutorial: Azure AD SSO integration with Confluence SAML SSO by Microsoft In this tutorial, you'll learn how to integrate Confluence SAML SSO by Microsoft with Azure Active Directory (Azure AD). When you integrate Confluence SAML SSO by Microsoft with Azure AD, you can: Use your Microsoft Azure Active Directory account with Atlassian Confluence serv To configure Azure AD integration with Confluence SAML SSO by Microsoft, you need the following items: -- An Azure AD subscription-- Confluence server application installed on a Windows 64-bit server (on-premises or on the cloud IaaS infrastructure)-- Confluence server is HTTPS enabled+- An Azure AD subscription. +- Confluence server application installed on a Windows 64-bit server (on-premises or on the cloud IaaS infrastructure). +- Confluence server is HTTPS enabled. - Note the supported versions for Confluence Plugin are mentioned in below section.-- Confluence server is reachable on internet particularly to Azure AD Login page for authentication and should able to receive the token from Azure AD-- Admin credentials are set up in Confluence-- WebSudo is disabled in Confluence-- Test user created in the Confluence server application+- Confluence server is reachable on internet particularly to Azure AD Login page for authentication and should able to receive the token from Azure AD. +- Admin credentials are set up in Confluence. +- WebSudo is disabled in Confluence. +- Test user created in the Confluence server application. > [!NOTE] > To test the steps in this tutorial, we do not recommend using a production environment of Confluence. Test the integration first in development or staging environment of the application and then use the production environment. Follow these steps to enable Azure AD SSO in the Azure portal. 1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. -  +  -1. On the **Basic SAML Configuration** section, enter the values for the following fields: +1. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Identifier** box, type a URL using the following pattern: `https://<DOMAIN:PORT>/` Follow these steps to enable Azure AD SSO in the Azure portal. `https://<DOMAIN:PORT>/plugins/servlet/saml/auth` > [!NOTE]- > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-On URL. Port is optional in case itΓÇÖs a named URL. These values are received during the configuration of Confluence plugin, which is explained later in the tutorial. + > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Port is optional in case itΓÇÖs a named URL. 
These values are received during the configuration of Confluence plugin, which is explained later in the tutorial. 1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. -  +  ### Create an Azure AD test user |
active-directory | Factset Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/factset-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. `https://auth.factset.com` b. In the **Reply URL** text box, type the URL:- `https://login.factset.com/services/saml2/` + `https://auth.factset.com/sp/ACS.saml2` c. In the **Sign-on URL** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.factset.com/services/saml2/` |
active-directory | Github Enterprise Managed User Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-tutorial.md | In this section, you'll take the information provided from AAD above and enter t 1. Click on Sign In at the top-right corner 1. Enter the credentials for the first administrator user account. The login handle should be in the format: `<your enterprise short code>_admin` 1. Navigate to `https://github.com/enterprises/` `<your enterprise name>`. This information should be provided by your Solutions Engineering contact.-1. On the navigation menu on the left, select **Settings**, then **Security**. -1. Click on the checkbox **Enable SAML authentication** -1. Enter the Sign on URL. This URL is the Login URL that you copied from AAD above. +1. On the navigation menu on the left, select **Settings**, then **Authentication security**. +1. Click on the checkbox **Require SAML authentication** +1. Enter the Sign-on URL. This URL is the Login URL that you copied from AAD above. 1. Enter the Issuer. This URL is the Azure AD Identifier that you copied from AAD above. 1. Enter the Public Certificate. Please open the base64 certificate that you downloaded above and paste the text contents of that file into this dialog. 1. Click on **Test SAML configuration**. This will open up a dialog for you to log in with your Azure AD credentials to validate that SAML SSO is configured correctly. Log in with your AAD credentials. you will receive a message **Passed: Successfully authenticated your SAML SSO identity** upon successful validation. |
active-directory | Headspace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/headspace-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. `urn:auth0:<Auth0TenantName>:<CustomerConnectionName>` b. In the **Reply URL** textbox, type a value using the following pattern:- `https://auth.<Enviornment>.headspace.com/login/callback?connection=<CustomerConnectionName>` + `https://auth.<Environment>.headspace.com/login/callback?connection=<CustomerConnectionName>` - c. In the **Sign on URL** textbox, type a value using the following pattern: - `https://<Environment>.headspace.com/sso-login` + c. In the **Sign on URL** textbox, type the URL: + `https://headspace.com/sso-login` > [!Note]- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Headspace Client support team](mailto:ecosystem-integration-squad@headspace.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Headspace Client support team](mailto:ecosystem-integration-squad@headspace.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. Headspace application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. In this section, you test your Azure AD single sign-on configuration with follow ## Next steps -Once you configure Headspace you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). +Once you configure Headspace you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Jira52microsoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jira52microsoft-tutorial.md | Title: 'Tutorial: Azure AD SSO integration with JIRA SAML SSO by Microsoft (V5.2)' description: Learn how to configure single sign-on between Azure Active Directory and JIRA SAML SSO by Microsoft (V5.2). -+ Previously updated : 09/08/2021- Last updated : 09/23/2022+ # Tutorial: Azure AD SSO integration with JIRA SAML SSO by Microsoft (V5.2) To configure and test Azure AD single sign-on with JIRA SAML SSO by Microsoft (V 1. On the **Select a Single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. -  +  -4. On the **Basic SAML Configuration** section, perform the following steps: +1. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Identifier** box, type a URL using the following pattern: `https://<domain:port>/` To configure and test Azure AD single sign-on with JIRA SAML SSO by Microsoft (V 5. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. -  +  ### Create an Azure AD test user |
active-directory | Jiramicrosoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md | Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with JIRA SAML SSO by Microsoft | Microsoft Docs' + Title: 'Tutorial: Azure AD SSO integration with JIRA SAML SSO by Microsoft' description: Learn how to configure single sign-on between Azure Active Directory and JIRA SAML SSO by Microsoft. -+ Previously updated : 12/28/2020- Last updated : 09/23/2022+ -# Tutorial: Azure Active Directory single sign-on (SSO) integration with JIRA SAML SSO by Microsoft +# Tutorial: Azure AD SSO integration with JIRA SAML SSO by Microsoft In this tutorial, you'll learn how to integrate JIRA SAML SSO by Microsoft with Azure Active Directory (Azure AD). When you integrate JIRA SAML SSO by Microsoft with Azure AD, you can: Use your Microsoft Azure Active Directory account with Atlassian JIRA server to To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items: - An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-- JIRA Core and Software 6.4 to 8.22.1 or JIRA Service Desk 3.0 to 4.22.1 should installed and configured on Windows 64-bit version-- JIRA server is HTTPS enabled+- JIRA Core and Software 6.4 to 8.22.1 or JIRA Service Desk 3.0 to 4.22.1 should installed and configured on Windows 64-bit version. +- JIRA server is HTTPS enabled. - Note the supported versions for JIRA Plugin are mentioned in below section.-- JIRA server is reachable on the Internet particularly to the Azure AD login page for authentication and should able to receive the token from Azure AD-- Admin credentials are set up in JIRA-- WebSudo is disabled in JIRA-- Test user created in the JIRA server application+- JIRA server is reachable on the Internet particularly to the Azure AD login page for authentication and should able to receive the token from Azure AD. +- Admin credentials are set up in JIRA. +- WebSudo is disabled in JIRA. +- Test user created in the JIRA server application. > [!NOTE] > To test the steps in this tutorial, we do not recommend using a production environment of JIRA. Test the integration first in development or staging environment of the application and then use the production environment. To get started, you need the following items: ## Supported versions of JIRA -* JIRA Core and Software: 6.4 to 8.22.1 -* JIRA Service Desk 3.0 to 4.22.1 -* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md) +* JIRA Core and Software: 6.4 to 8.22.1. +* JIRA Service Desk 3.0 to 4.22.1. +* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md). > [!NOTE] > Please note that our JIRA Plugin also works on Ubuntu Version 16.04 and Linux. To get started, you need the following items: In this tutorial, you configure and test Azure AD SSO in a test environment. -* JIRA SAML SSO by Microsoft supports **SP** initiated SSO +* JIRA SAML SSO by Microsoft supports **SP** initiated SSO. ## Adding JIRA SAML SSO by Microsoft from the gallery Follow these steps to enable Azure AD SSO in the Azure portal. 1. On the **Select a single sign-on method** page, select **SAML**. 1. 
On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. -  +  -1. On the **Basic SAML Configuration** section, enter the values for the following fields: +1. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Sign-on URL** text box, type a URL using the following pattern: `https://<domain:port>/plugins/servlet/saml/auth` Follow these steps to enable Azure AD SSO in the Azure portal. 1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. -  ---+  1. The Name ID attribute in Azure AD can be mapped to any desired user attribute by editing the Attributes & Claims section. |
active-directory | Keeperpasswordmanager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keeperpasswordmanager-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. * For on-premises SSO: `https://<KEEPER_FQDN>/sso-connect/saml/sso` c. For **Sign on URL**, type a URL using one of the following patterns:- * For cloud SSO: `https://keepersecurity.com/api/rest/sso/saml/sso/<CLOUD_INSTANCE_ID>` + * For cloud SSO: `https://keepersecurity.com/api/rest/sso/ext_login/<CLOUD_INSTANCE_ID>` * For on-premises SSO: `https://<KEEPER_FQDN>/sso-connect/saml/login` + d. For **Sign out URL**, type a URL using one of the following patterns: + * For cloud SSO: `https://keepersecurity.com/api/rest/sso/saml/slo/<CLOUD_INSTANCE_ID>` + * There is no configuration for on-premises SSO. + > [!NOTE] > These values aren't real. Update these values with the actual Identifier,Reply URL and Sign on URL. To get these values, contact the [Keeper Password Manager Client support team](https://keepersecurity.com/contact.html). You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. Follow these steps to enable Azure AD SSO in the Azure portal. | Last | user.surname | | Email | user.mail | -5. On **Set up Single Sign-On with SAML**, in the **SAML Signing Certificate** section, select **Download**. This downloads **Federation Metadata XML** from the options per your requirement, and saves it on your computer. +1. On **Set up Single Sign-On with SAML**, in the **SAML Signing Certificate** section, select **Download**. This downloads **Federation Metadata XML** from the options per your requirement, and saves it on your computer.  -6. On **Set up Keeper Password Manager**, copy the appropriate URLs, per your requirement. +1. On **Set up Keeper Password Manager**, copy the appropriate URLs, per your requirement.  |
active-directory | Mindflash Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mindflash-tutorial.md | Title: 'Tutorial: Azure AD SSO integration with Mindflash' -description: Learn how to configure single sign-on between Azure Active Directory and Mindflash. + Title: 'Tutorial: Azure AD SSO integration with Trakstar Learn' +description: Learn how to configure single sign-on between Azure Active Directory and Trakstar Learn (Mindflash). -# Tutorial: Azure AD SSO integration with Mindflash +# Tutorial: Azure AD SSO integration with Trakstar Learn -In this tutorial, you'll learn how to integrate Mindflash with Azure Active Directory (Azure AD). When you integrate Mindflash with Azure AD, you can: +In this tutorial, you'll learn how to integrate Trakstar Learn (Mindflash) with Azure Active Directory (Azure AD). When you integrate Learn with Azure AD, you can: -* Control in Azure AD who has access to Mindflash. -* Enable your users to be automatically signed-in to Mindflash with their Azure AD accounts. +* Control in Azure AD who has access to Learn. +* Enable your users to be automatically signed-in to Learn with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal. ## Prerequisites In this tutorial, you'll learn how to integrate Mindflash with Azure Active Dire To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-* Mindflash single sign-on (SSO) enabled subscription. +* Trakstar Learn single sign-on (SSO) enabled subscription. * Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md). For more information, see [Azure built-in roles](../roles/permissions-reference. In this tutorial, you configure and test Azure AD single sign-on in a test environment. -* Mindflash supports **SP** initiated SSO. +* Learn supports **SP** initiated SSO. -## Add Mindflash from the gallery +## Add Learn from the gallery -To configure the integration of Mindflash into Azure AD, you need to add Mindflash from the gallery to your list of managed SaaS apps. +To configure the integration of Learn into Azure AD, you need to add Learn from the gallery to your list of managed SaaS apps. 1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.-1. In the **Add from the gallery** section, type **Mindflash** in the search box. -1. Select **Mindflash** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. +1. In the **Add from the gallery** section, type **Trakstar Learn** in the search box. Trakstar Learn was formerly Mindlfash. +1. Select **Trakstar Learn** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. 
[Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) +## Configure and test Azure AD SSO for Learn -## Configure and test Azure AD SSO for Mindflash +Configure and test Azure AD SSO with Learn using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Learn. -Configure and test Azure AD SSO with Mindflash using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Mindflash. --To configure and test Azure AD SSO with Mindflash, perform the following steps: +To configure and test Azure AD SSO with Learn, perform the following steps: 1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.-1. **[Configure Mindflash SSO](#configure-mindflash-sso)** - to configure the single sign-on settings on application side. - 1. **[Create Mindflash test user](#create-mindflash-test-user)** - to have a counterpart of B.Simon in Mindflash that is linked to the Azure AD representation of user. +1. **[Configure Trakstar Learn SSO](#configure-trakstar-learn-sso)** - to configure the single sign-on settings on application side. + 1. **[Create Trakstar Learn test user](#create-trakstar-learn-test-user)** - to have a counterpart of B.Simon in Trakstar Learn that is linked to the Azure AD representation of user. 1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal. -1. In the Azure portal, on the **Mindflash** application integration page, find the **Manage** section and select **single sign-on**. +1. In the Azure portal, on the **Trakstar Learn** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. Follow these steps to enable Azure AD SSO in the Azure portal. `https://<companyname>.mindflash.com` > [!NOTE]- > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Mindflash Client support team](https://www.mindflash.com/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Trakstar Learn Client support team](mailto:learn@trakstar.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.  -1. On the **Set up Mindflash** section, copy the appropriate URL(s) as per your requirement. +1. On the **Set up Trakstar Learn** section, copy the appropriate URL(s) as per your requirement.  In this section, you'll create a test user in the Azure portal called B.Simon. 
### Assign the Azure AD test user -In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Mindflash. +In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Learn. 1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.-1. In the applications list, select **Mindflash**. +1. In the applications list, select **Trakstar Learn**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button. -## Configure Mindflash SSO +## Configure Trakstar Learn SSO -To configure single sign-on on **Mindflash** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Mindflash support team](https://www.mindflash.com/contact/). They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on **Trakstar Learn** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Trakstar Learn support team](mailto:learn@trakstar.com). They set this setting to have the SAML SSO connection set properly on both sides. -### Create Mindflash test user +### Create Trakstar Learn test user -In order to enable Azure AD users to log into Mindflash, they must be provisioned into Mindflash. In the case of Mindflash, provisioning is a manual task. +In order to enable Azure AD users to log into Learn, they must be provisioned into Learn. In the case of Learn, provisioning is a manual task. -### To provision a user accounts, perform the following steps: +### To provision a user account, perform the following steps: -1. Log in to your **Mindflash** company site as an administrator. +1. Log in to your **Trakstar Learn** company site as an administrator. 1. Go to **Manage Users**. In order to enable Azure AD users to log into Mindflash, they must be provisione b. Click **Add**. >[!NOTE]->You can use any other Mindflash user account creation tools or APIs provided by Mindflash to provision Azure AD user accounts. +>You can use any other Learn user account creation tools or APIs provided by Learn to provision Azure AD user accounts. > ## Test SSO In this section, you test your Azure AD single sign-on configuration with following options. -* Click on **Test this application** in Azure portal. This will redirect to Mindflash Sign-on URL where you can initiate the login flow. +* Click on **Test this application** in Azure portal. This will redirect to Learn Sign on URL where you can initiate the login flow. -* Go to Mindflash Sign-on URL directly and initiate the login flow from there. +* Go to Learn Sign on URL directly and initiate the login flow from there. -* You can use Microsoft My Apps. When you click the Mindflash tile in the My Apps, this will redirect to Mindflash Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). +* You can use Microsoft My Apps. 
When you click the Trakstar Learn tile in the My Apps, this will redirect to Learn Sign on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ## Next steps -Once you configure Mindflash you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). +Once you configure Trakstar Learn you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Ms Confluence Jira Plugin Adminguide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md | Title: 'Atlassian Jira/Confluence admin guide - Azure Active Directory| Microsoft Docs' + Title: 'Atlassian Jira/Confluence admin guide - Azure Active Directory' description: Admin guide to use Atlassian Jira and Confluence with Azure Active Directory (Azure AD).. -+ Previously updated : 11/19/2018- Last updated : 09/23/2022+ # Atlassian Jira and Confluence admin guide for Azure Active Directory Note the following information before you install the plug-in: The plug-in supports the following versions of Jira and Confluence: -* Jira Core and Software: 6.0 to 8.22.1 -* Jira Service Desk: 3.0.0 to 4.22.1 -* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md) -* Confluence: 5.0 to 5.10 -* Confluence: 6.0.1 to 6.15.9 -* Confluence: 7.0.1 to 7.19.0 +* Jira Core and Software: 6.0 to 8.22.1. +* Jira Service Desk: 3.0.0 to 4.22.1. +* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md). +* Confluence: 5.0 to 5.10. +* Confluence: 6.0.1 to 6.15.9. +* Confluence: 7.0.1 to 7.19.0. ## Installation No. The plug-in supports only on-premises versions of Jira and Confluence. The plug-in supports these versions: -* Jira Core and Software: 6.0 to 8.22.1 -* Jira Service Desk: 3.0.0 to 4.22.1 -* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md) -* Confluence: 5.0 to 5.10 -* Confluence: 6.0.1 to 6.15.9 -* Confluence: 7.0.1 to 7.19.0 +* Jira Core and Software: 6.0 to 8.22.1. +* Jira Service Desk: 3.0.0 to 4.22.1. +* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md). +* Confluence: 5.0 to 5.10. +* Confluence: 6.0.1 to 6.15.9. +* Confluence: 7.0.1 to 7.19.0. ### Is the plug-in free or paid? |
active-directory | Sciforma Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sciforma-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. 4. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:- `https://<SUBDOMAIN>.sciforma.net/sciforma/saml` + `https://<SUBDOMAIN>.sciforma.net/sciforma` - b. In the **Sign on URL** text box, type a URL using the following pattern: + b. In the **Reply URL** text box, type a URL using the following pattern: + `https://<SUBDOMAIN>.sciforma.net/sciforma/saml/post` ++ c. In the **Sign on URL** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.sciforma.net/sciforma/main.html` > [!NOTE] |
active-directory | Sketch Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sketch-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. `https://www.sketch.com` > [!Note]- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Sketch support team](mailto:sso-support@sketch.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > Please use **Identifier** and **Reply URL** values from [Choose a shortname for your Workspace in Sketch](#choose-a-shortname-for-your-workspace-in-sketch) section. 1. Sketch application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. |
active-directory | Workhub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workhub-tutorial.md | To configure the integration of workhub into Azure AD, you need to add workhub f 1. In the **Add from the gallery** section, type **workhub** in the search box. 1. Select **workhub** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) + Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ## Configure and test Azure AD SSO for workhub In this section, you'll enable B.Simon to use Azure single sign-on by granting a ## Configure workhub SSO -To configure single sign-on on **workhub** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [workhub support team](mailto:team_bkp@bitkey.jp). They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on **workhub** side, you need to send the downloaded **Certificate (Base64)**, and appropriate copied URLs from Azure portal to [workhub support team](mailto:support_work@bitkey.jp). They set this setting to have the SAML SSO connection set properly on both sides. ### Create workhub test user -In this section, you create a user called Britta Simon at workhub. Work with [workhub support team](mailto:team_bkp@bitkey.jp) to add the users in the workhub platform. Users must be created and activated before you use single sign-on. +In this section, you create a user called Britta Simon at workhub. Work with [workhub support team](mailto:support_work@bitkey.jp) to add the users in the workhub platform. Users must be created and activated before you use single sign-on. ## Test SSO |
active-directory | Memo 22 09 Enterprise Wide Identity Management System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md | Memorandum 22-09 requires agencies to develop a plan to consolidate their identi ## Why Azure Active Directory? -Azure Active Directory (Azure AD) provides the capabilities necessary to implement the recommendations from memorandum 22-09. It also provides broad identity controls that support Zero Trust initiatives. If your agency uses Microsoft Office 365, you already have an Azure AD back end to which you can consolidate. +Azure Active Directory (Azure AD) provides the capabilities necessary to implement the recommendations from memorandum 22-09. It also provides broad identity controls that support Zero Trust initiatives. Today, if your agency uses Microsoft Office 365 or Azure, you already have Azure AD as an identity provider (IdP), and you can connect your applications and resources to Azure AD as your enterprise-wide identity system. ## Single sign-on requirements The memo requires that users sign in once and then directly access applications. ## Integration across agencies -[Azure AD B2B collaboration](../external-identities/what-is-b2b.md) helps you meet the requirement to facilitate integration among agencies. It does this by: +[Azure AD B2B collaboration](../external-identities/what-is-b2b.md) helps you meet the requirement to facilitate integration and collaboration among agencies. This is true whether the users reside in a different Microsoft tenant in the same cloud, in a [tenant on another Microsoft cloud](../external-identities/b2b-government-national-clouds.md), or in a [non-Azure AD tenant (SAML/WS-Fed identity provider)](../external-identities/direct-federation.md). ++Azure AD cross-tenant access settings allow agencies to manage how they collaborate with other Azure AD organizations and other Microsoft Azure clouds. They do this by: - Limiting what other Microsoft tenants your users can access.-- Enabling you to allow access to users whom you don't have to manage in your own tenant, but whom you can subject to your multifactor authentication (MFA) and other access requirements.+- Providing granular settings to control access for external users, including enforcement of multifactor authentication (MFA) and device signals. ## Connecting applications Devices integrated with Azure AD can be either [hybrid joined devices](../device * [Azure Linux virtual machines](../devices/howto-vm-sign-in-azure-ad-linux.md) +* [Azure Virtual Desktop](https://learn.microsoft.com/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join) + * [Virtual desktop infrastructure](../devices/howto-device-identity-virtual-desktop-infrastructure.md) ## Next steps |
aks | Concepts Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-identity.md | Kubernetes RBAC and AKS help you secure your cluster access and provide only the This article introduces the core concepts that help you authenticate and assign permissions in AKS. -## AKS service permissions --When creating a cluster, AKS generates or modifies resources it needs (like VMs and NICs) to create and run the cluster on behalf of the user. This identity is distinct from the cluster's identity permission, which is created during cluster creation. --### Identity creating and operating the cluster permissions --The following permissions are needed by the identity creating and operating the cluster. --> [!div class="mx-tableFixed"] -> | Permission | Reason | -> ||| -> | `Microsoft.Compute/diskEncryptionSets/read` | Required to read disk encryption set ID. | -> | `Microsoft.Compute/proximityPlacementGroups/write` | Required for updating proximity placement groups. | -> | `Microsoft.Network/applicationGateways/read` <br/> `Microsoft.Network/applicationGateways/write` <br/> `Microsoft.Network/virtualNetworks/subnets/join/action` | Required to configure application gateways and join the subnet. | -> | `Microsoft.Network/virtualNetworks/subnets/join/action` | Required to configure the Network Security Group for the subnet when using a custom VNET.| -> | `Microsoft.Network/publicIPAddresses/join/action` <br/> `Microsoft.Network/publicIPPrefixes/join/action` | Required to configure the outbound public IPs on the Standard Load Balancer. | -> | `Microsoft.OperationalInsights/workspaces/sharedkeys/read` <br/> `Microsoft.OperationalInsights/workspaces/read` <br/> `Microsoft.OperationsManagement/solutions/write` <br/> `Microsoft.OperationsManagement/solutions/read` <br/> `Microsoft.ManagedIdentity/userAssignedIdentities/assign/action` | Required to create and update Log Analytics workspaces and Azure monitoring for containers. | --### AKS cluster identity permissions --The following permissions are used by the AKS cluster identity, which is created and associated with the AKS cluster. Each permission is used for the reasons below: --> [!div class="mx-tableFixed"] -> | Permission | Reason | -> ||| -> | `Microsoft.ContainerService/managedClusters/*` <br/> | Required for creating users and operating the cluster -> | `Microsoft.Network/loadBalancers/delete` <br/> `Microsoft.Network/loadBalancers/read` <br/> `Microsoft.Network/loadBalancers/write` | Required to configure the load balancer for a LoadBalancer service. | -> | `Microsoft.Network/publicIPAddresses/delete` <br/> `Microsoft.Network/publicIPAddresses/read` <br/> `Microsoft.Network/publicIPAddresses/write` | Required to find and configure public IPs for a LoadBalancer service. | -> | `Microsoft.Network/publicIPAddresses/join/action` | Required for configuring public IPs for a LoadBalancer service. | -> | `Microsoft.Network/networkSecurityGroups/read` <br/> `Microsoft.Network/networkSecurityGroups/write` | Required to create or delete security rules for a LoadBalancer service. | -> | `Microsoft.Compute/disks/delete` <br/> `Microsoft.Compute/disks/read` <br/> `Microsoft.Compute/disks/write` <br/> `Microsoft.Compute/locations/DiskOperations/read` | Required to configure AzureDisks. 
| -> | `Microsoft.Storage/storageAccounts/delete` <br/> `Microsoft.Storage/storageAccounts/listKeys/action` <br/> `Microsoft.Storage/storageAccounts/read` <br/> `Microsoft.Storage/storageAccounts/write` <br/> `Microsoft.Storage/operations/read` | Required to configure storage accounts for AzureFile or AzureDisk. | -> | `Microsoft.Network/routeTables/read` <br/> `Microsoft.Network/routeTables/routes/delete` <br/> `Microsoft.Network/routeTables/routes/read` <br/> `Microsoft.Network/routeTables/routes/write` <br/> `Microsoft.Network/routeTables/write` | Required to configure route tables and routes for nodes. | -> | `Microsoft.Compute/virtualMachines/read` | Required to find information for virtual machines in a VMAS, such as zones, fault domain, size, and data disks. | -> | `Microsoft.Compute/virtualMachines/write` | Required to attach AzureDisks to a virtual machine in a VMAS. | -> | `Microsoft.Compute/virtualMachineScaleSets/read` <br/> `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read` <br/> `Microsoft.Compute/virtualMachineScaleSets/virtualmachines/instanceView/read` | Required to find information for virtual machines in a virtual machine scale set, such as zones, fault domain, size, and data disks. | -> | `Microsoft.Network/networkInterfaces/write` | Required to add a virtual machine in a VMAS to a load balancer backend address pool. | -> | `Microsoft.Compute/virtualMachineScaleSets/write` | Required to add a virtual machine scale set to a load balancer backend address pools and scale out nodes in a virtual machine scale set. | -> | `Microsoft.Compute/virtualMachineScaleSets/delete` | Required to delete a virtual machine scale set to a load balancer backend address pools and scale down nodes in a virtual machine scale set. | -> | `Microsoft.Compute/virtualMachineScaleSets/virtualmachines/write` | Required to attach AzureDisks and add a virtual machine from a virtual machine scale set to the load balancer. | -> | `Microsoft.Network/networkInterfaces/read` | Required to search internal IPs and load balancer backend address pools for virtual machines in a VMAS. | -> | `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/read` | Required to search internal IPs and load balancer backend address pools for a virtual machine in a virtual machine scale set. | -> | `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/ipconfigurations/publicipaddresses/read` | Required to find public IPs for a virtual machine in a virtual machine scale set. | -> | `Microsoft.Network/virtualNetworks/read` <br/> `Microsoft.Network/virtualNetworks/subnets/read` | Required to verify if a subnet exists for the internal load balancer in another resource group. | -> | `Microsoft.Compute/snapshots/delete` <br/> `Microsoft.Compute/snapshots/read` <br/> `Microsoft.Compute/snapshots/write` | Required to configure snapshots for AzureDisk. | -> | `Microsoft.Compute/locations/vmSizes/read` <br/> `Microsoft.Compute/locations/operations/read` | Required to find virtual machine sizes for finding AzureDisk volume limits. | --### Additional cluster identity permissions --When creating a cluster with specific attributes, you will need the following additional permissions for the cluster identity. Since these permissions are not automatically assigned, you must add them to the cluster identity after it's created. 
--> [!div class="mx-tableFixed"] -> | Permission | Reason | -> ||| -> | `Microsoft.Network/networkSecurityGroups/write` <br/> `Microsoft.Network/networkSecurityGroups/read` | Required if using a network security group in another resource group. Required to configure security rules for a LoadBalancer service. | -> | `Microsoft.Network/virtualNetworks/subnets/read` <br/> `Microsoft.Network/virtualNetworks/subnets/join/action` | Required if using a subnet in another resource group such as a custom VNET. | -> | `Microsoft.Network/routeTables/routes/read` <br/> `Microsoft.Network/routeTables/routes/write` | Required if using a subnet associated with a route table in another resource group such as a custom VNET with a custom route table. Required to verify if a subnet already exists for the subnet in the other resource group. | -> | `Microsoft.Network/virtualNetworks/subnets/read` | Required if using an internal load balancer in another resource group. Required to verify if a subnet already exists for the internal load balancer in the resource group. | -> | `Microsoft.Network/privatednszones/*` | Required if using a private DNS zone in another resource group such as a custom privateDNSZone. | --## AKS Node Access --By default Node Access is not required for AKS. The following access is needed for the node if a specific component is leveraged. --| Access | Reason | -||| -| `kubelet` | Required for customer to grant MSI access to ACR. | -| `http app routing` | Required for write permission to "random name".aksapp.io. | -| `container insights` | Required for customer to grant permission to the Log Analytics workspace. | - ## Kubernetes RBAC Kubernetes RBAC provides granular filtering of user actions. With this control mechanism: To bind roles across the entire cluster, or to cluster resources outside a given With a ClusterRoleBinding, you bind roles to users and apply to resources across the entire cluster, not a specific namespace. This approach lets you grant administrators or support engineers access to all resources in the AKS cluster. - > [!NOTE] > Microsoft/AKS performs any cluster actions with user consent under a built-in Kubernetes role `aks-service` and built-in role binding `aks-service-rolebinding`. -> +> > This role enables AKS to troubleshoot and diagnose cluster issues, but can't modify permissions nor create roles or role bindings, or other high privilege actions. Role access is only enabled under active support tickets with just-in-time (JIT) access. Read more about [AKS support policies](support-policies.md). - ### Kubernetes service accounts *Service accounts* are one of the primary user types in Kubernetes. The Kubernetes API holds and manages service accounts. Service account credentials are stored as Kubernetes secrets, allowing them to be used by authorized pods to communicate with the API Server. Most API requests provide an authentication token for a service account or a normal user account. Normal user accounts allow more traditional access for human administrators or d For more information on the identity options in Kubernetes, see [Kubernetes authentication][kubernetes-authentication]. -## Azure AD integration --Enhance your AKS cluster security with Azure AD integration. Built on decades of enterprise identity management, Azure AD is a multi-tenant, cloud-based directory and identity management service that combines core directory services, application access management, and identity protection. 
With Azure AD, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security. -- --With Azure AD-integrated AKS clusters, you can grant users or groups access to Kubernetes resources within a namespace or across the cluster. --1. To obtain a `kubectl` configuration context, a user runs the [az aks get-credentials][az-aks-get-credentials] command. -1. When a user interacts with the AKS cluster with `kubectl`, they're prompted to sign in with their Azure AD credentials. --This approach provides a single source for user account management and password credentials. The user can only access the resources as defined by the cluster administrator. --Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID Connect, see the [Open ID connect documentation][openid-connect]. From inside of the Kubernetes cluster, [Webhook Token Authentication][webhook-token-docs] is used to verify authentication tokens. Webhook token authentication is configured and managed as part of the AKS cluster. --### Webhook and API server -- --As shown in the graphic above, the API server calls the AKS webhook server and performs the following steps: --1. `kubectl` uses the Azure AD client application to sign in users with [OAuth 2.0 device authorization grant flow](../active-directory/develop/v2-oauth2-device-code.md). -2. Azure AD provides an access_token, id_token, and a refresh_token. -3. The user makes a request to `kubectl` with an access_token from `kubeconfig`. -4. `kubectl` sends the access_token to API Server. -5. The API Server is configured with the Auth WebHook Server to perform validation. -6. The authentication webhook server confirms the JSON Web Token signature is valid by checking the Azure AD public signing key. -7. The server application uses user-provided credentials to query group memberships of the logged-in user from the MS Graph API. -8. A response is sent to the API Server with user information such as the user principal name (UPN) claim of the access token, and the group membership of the user based on the object ID. -9. The API performs an authorization decision based on the Kubernetes Role/RoleBinding. -10. Once authorized, the API server returns a response to `kubectl`. -11. `kubectl` provides feedback to the user. - -Learn how to integrate AKS with Azure AD with our [AKS-managed Azure AD integration how-to guide](managed-aad.md). - ## Azure role-based access control Azure role-based access control (RBAC) is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources. AKS provides the following four built-in roles. They are similar to the [Kuberne | Azure Kubernetes Service RBAC Admin | Allows admin access, intended to be granted within a namespace. <br> Allows read/write access to most resources in a namespace (or cluster scope), including the ability to create roles and role bindings within the namespace. <br> Doesn't allow write access to resource quota or to the namespace itself. | | Azure Kubernetes Service RBAC Cluster Admin | Allows super-user access to perform any action on any resource. <br> Gives full control over every resource in the cluster and in all namespaces. | +## Azure AD integration ++Enhance your AKS cluster security with Azure AD integration. 
Built on decades of enterprise identity management, Azure AD is a multi-tenant, cloud-based directory and identity management service that combines core directory services, application access management, and identity protection. With Azure AD, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security. ++ ++With Azure AD-integrated AKS clusters, you can grant users or groups access to Kubernetes resources within a namespace or across the cluster. ++1. To obtain a `kubectl` configuration context, a user runs the [az aks get-credentials][az-aks-get-credentials] command. +1. When a user interacts with the AKS cluster with `kubectl`, they're prompted to sign in with their Azure AD credentials. ++This approach provides a single source for user account management and password credentials. The user can only access the resources as defined by the cluster administrator. ++Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID Connect, see the [Open ID connect documentation][openid-connect]. From inside of the Kubernetes cluster, [Webhook Token Authentication][webhook-token-docs] is used to verify authentication tokens. Webhook token authentication is configured and managed as part of the AKS cluster. ++### Webhook and API server ++ ++As shown in the graphic above, the API server calls the AKS webhook server and performs the following steps: ++1. `kubectl` uses the Azure AD client application to sign in users with [OAuth 2.0 device authorization grant flow](../active-directory/develop/v2-oauth2-device-code.md). +2. Azure AD provides an access_token, id_token, and a refresh_token. +3. The user makes a request to `kubectl` with an access_token from `kubeconfig`. +4. `kubectl` sends the access_token to API Server. +5. The API Server is configured with the Auth WebHook Server to perform validation. +6. The authentication webhook server confirms the JSON Web Token signature is valid by checking the Azure AD public signing key. +7. The server application uses user-provided credentials to query group memberships of the logged-in user from the MS Graph API. +8. A response is sent to the API Server with user information such as the user principal name (UPN) claim of the access token, and the group membership of the user based on the object ID. +9. The API performs an authorization decision based on the Kubernetes Role/RoleBinding. +10. Once authorized, the API server returns a response to `kubectl`. +11. `kubectl` provides feedback to the user. ++Learn how to integrate AKS with Azure AD with our [AKS-managed Azure AD integration how-to guide](managed-aad.md). ++## AKS service permissions ++When creating a cluster, AKS generates or modifies resources it needs (like VMs and NICs) to create and run the cluster on behalf of the user. This identity is distinct from the cluster's identity permission, which is created during cluster creation. ++### Identity creating and operating the cluster permissions ++The following permissions are needed by the identity creating and operating the cluster. ++> [!div class="mx-tableFixed"] +> | Permission | Reason | +> ||| +> | `Microsoft.Compute/diskEncryptionSets/read` | Required to read disk encryption set ID. | +> | `Microsoft.Compute/proximityPlacementGroups/write` | Required for updating proximity placement groups. 
| +> | `Microsoft.Network/applicationGateways/read` <br/> `Microsoft.Network/applicationGateways/write` <br/> `Microsoft.Network/virtualNetworks/subnets/join/action` | Required to configure application gateways and join the subnet. | +> | `Microsoft.Network/virtualNetworks/subnets/join/action` | Required to configure the Network Security Group for the subnet when using a custom VNET.| +> | `Microsoft.Network/publicIPAddresses/join/action` <br/> `Microsoft.Network/publicIPPrefixes/join/action` | Required to configure the outbound public IPs on the Standard Load Balancer. | +> | `Microsoft.OperationalInsights/workspaces/sharedkeys/read` <br/> `Microsoft.OperationalInsights/workspaces/read` <br/> `Microsoft.OperationsManagement/solutions/write` <br/> `Microsoft.OperationsManagement/solutions/read` <br/> `Microsoft.ManagedIdentity/userAssignedIdentities/assign/action` | Required to create and update Log Analytics workspaces and Azure monitoring for containers. | ++### AKS cluster identity permissions ++The following permissions are used by the AKS cluster identity, which is created and associated with the AKS cluster. Each permission is used for the reasons below: ++> [!div class="mx-tableFixed"] +> | Permission | Reason | +> ||| +> | `Microsoft.ContainerService/managedClusters/*` <br/> | Required for creating users and operating the cluster +> | `Microsoft.Network/loadBalancers/delete` <br/> `Microsoft.Network/loadBalancers/read` <br/> `Microsoft.Network/loadBalancers/write` | Required to configure the load balancer for a LoadBalancer service. | +> | `Microsoft.Network/publicIPAddresses/delete` <br/> `Microsoft.Network/publicIPAddresses/read` <br/> `Microsoft.Network/publicIPAddresses/write` | Required to find and configure public IPs for a LoadBalancer service. | +> | `Microsoft.Network/publicIPAddresses/join/action` | Required for configuring public IPs for a LoadBalancer service. | +> | `Microsoft.Network/networkSecurityGroups/read` <br/> `Microsoft.Network/networkSecurityGroups/write` | Required to create or delete security rules for a LoadBalancer service. | +> | `Microsoft.Compute/disks/delete` <br/> `Microsoft.Compute/disks/read` <br/> `Microsoft.Compute/disks/write` <br/> `Microsoft.Compute/locations/DiskOperations/read` | Required to configure AzureDisks. | +> | `Microsoft.Storage/storageAccounts/delete` <br/> `Microsoft.Storage/storageAccounts/listKeys/action` <br/> `Microsoft.Storage/storageAccounts/read` <br/> `Microsoft.Storage/storageAccounts/write` <br/> `Microsoft.Storage/operations/read` | Required to configure storage accounts for AzureFile or AzureDisk. | +> | `Microsoft.Network/routeTables/read` <br/> `Microsoft.Network/routeTables/routes/delete` <br/> `Microsoft.Network/routeTables/routes/read` <br/> `Microsoft.Network/routeTables/routes/write` <br/> `Microsoft.Network/routeTables/write` | Required to configure route tables and routes for nodes. | +> | `Microsoft.Compute/virtualMachines/read` | Required to find information for virtual machines in a VMAS, such as zones, fault domain, size, and data disks. | +> | `Microsoft.Compute/virtualMachines/write` | Required to attach AzureDisks to a virtual machine in a VMAS. | +> | `Microsoft.Compute/virtualMachineScaleSets/read` <br/> `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read` <br/> `Microsoft.Compute/virtualMachineScaleSets/virtualmachines/instanceView/read` | Required to find information for virtual machines in a virtual machine scale set, such as zones, fault domain, size, and data disks. 
| +> | `Microsoft.Network/networkInterfaces/write` | Required to add a virtual machine in a VMAS to a load balancer backend address pool. | +> | `Microsoft.Compute/virtualMachineScaleSets/write` | Required to add a virtual machine scale set to a load balancer backend address pools and scale out nodes in a virtual machine scale set. | +> | `Microsoft.Compute/virtualMachineScaleSets/delete` | Required to delete a virtual machine scale set to a load balancer backend address pools and scale down nodes in a virtual machine scale set. | +> | `Microsoft.Compute/virtualMachineScaleSets/virtualmachines/write` | Required to attach AzureDisks and add a virtual machine from a virtual machine scale set to the load balancer. | +> | `Microsoft.Network/networkInterfaces/read` | Required to search internal IPs and load balancer backend address pools for virtual machines in a VMAS. | +> | `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/read` | Required to search internal IPs and load balancer backend address pools for a virtual machine in a virtual machine scale set. | +> | `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/ipconfigurations/publicipaddresses/read` | Required to find public IPs for a virtual machine in a virtual machine scale set. | +> | `Microsoft.Network/virtualNetworks/read` <br/> `Microsoft.Network/virtualNetworks/subnets/read` | Required to verify if a subnet exists for the internal load balancer in another resource group. | +> | `Microsoft.Compute/snapshots/delete` <br/> `Microsoft.Compute/snapshots/read` <br/> `Microsoft.Compute/snapshots/write` | Required to configure snapshots for AzureDisk. | +> | `Microsoft.Compute/locations/vmSizes/read` <br/> `Microsoft.Compute/locations/operations/read` | Required to find virtual machine sizes for finding AzureDisk volume limits. | ++### Additional cluster identity permissions ++When creating a cluster with specific attributes, you will need the following additional permissions for the cluster identity. Since these permissions are not automatically assigned, you must add them to the cluster identity after it's created. ++> [!div class="mx-tableFixed"] +> | Permission | Reason | +> ||| +> | `Microsoft.Network/networkSecurityGroups/write` <br/> `Microsoft.Network/networkSecurityGroups/read` | Required if using a network security group in another resource group. Required to configure security rules for a LoadBalancer service. | +> | `Microsoft.Network/virtualNetworks/subnets/read` <br/> `Microsoft.Network/virtualNetworks/subnets/join/action` | Required if using a subnet in another resource group such as a custom VNET. | +> | `Microsoft.Network/routeTables/routes/read` <br/> `Microsoft.Network/routeTables/routes/write` | Required if using a subnet associated with a route table in another resource group such as a custom VNET with a custom route table. Required to verify if a subnet already exists for the subnet in the other resource group. | +> | `Microsoft.Network/virtualNetworks/subnets/read` | Required if using an internal load balancer in another resource group. Required to verify if a subnet already exists for the internal load balancer in the resource group. | +> | `Microsoft.Network/privatednszones/*` | Required if using a private DNS zone in another resource group such as a custom privateDNSZone. | ++## AKS Node Access ++By default Node Access is not required for AKS. The following access is needed for the node if a specific component is leveraged. 
++| Access | Reason | +||| +| `kubelet` | Required for customer to grant MSI access to ACR. | +| `http app routing` | Required for write permission to "random name".aksapp.io. | +| `container insights` | Required for customer to grant permission to the Log Analytics workspace. | ## Summary In the Azure portal, you can find: * The Cluster Admin Azure AD Group is shown on the **Configuration** tab. * Also found with parameter name `--aad-admin-group-object-ids` in the Azure CLI. - | Description | Role grant required| Cluster admin Azure AD group(s) | When to use | | -||-|-| | Legacy admin login using client certificate| **Azure Kubernetes Admin Role**. This role allows `az aks get-credentials` to be used with the `--admin` flag, which downloads a [legacy (non-Azure AD) cluster admin certificate](control-kubeconfig-access.md) into the user's `.kube/config`. This is the only purpose of "Azure Kubernetes Admin Role".|n/a|If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster.| |
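As an illustration of the Azure AD integration and Azure RBAC roles described in the AKS identity entry above, the following sketch shows how a cluster operator might fetch a `kubectl` context and grant an Azure AD group one of the AKS built-in roles. The resource group, cluster name, and group object ID are placeholders, and "Azure Kubernetes Service RBAC Reader" is used only as an example role; substitute the role and scope appropriate for your environment.

```azurecli
# Placeholder names: myResourceGroup, myAKSCluster, and the Azure AD group object ID are examples.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Look up the cluster resource ID to use as the role assignment scope.
AKS_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id --output tsv)

# Grant an Azure AD group a built-in AKS role at cluster scope (example role shown).
az role assignment create \
  --assignee <aad-group-object-id> \
  --role "Azure Kubernetes Service RBAC Reader" \
  --scope $AKS_ID
```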
analysis-services | Analysis Services Manage Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage-users.md | Roles at this level apply to users or accounts that need to perform tasks that c By default, when you create a new tabular model project, the model project does not have any roles. Roles can be defined by using the Role Manager dialog box in Visual Studio. When roles are defined during model project design, they are applied only to the model workspace database. When the model is deployed, the same roles are applied to the deployed model. After a model has been deployed, server and database administrators can manage roles and members by using SSMS. To learn more, see [Manage database roles and users](analysis-services-database-users.md). +## Considerations and limitations ++* Azure Analysis Services does not support the use of One-Time Password for B2B users + ## Next steps [Manage access to resources with Azure Active Directory groups](../active-directory/fundamentals/active-directory-manage-groups.md) [Manage database roles and users](analysis-services-database-users.md) [Manage server administrators](analysis-services-server-admins.md) -[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) +[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) |
api-management | Api Management Howto Add Products | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-add-products.md | After you publish a product, developers can access the APIs. Depending on how th * **Open product** - Developers can access an open product's APIs without a subscription key. However, you can configure other mechanisms to secure client access to the APIs, including [OAuth 2.0](api-management-howto-protect-backend-with-aad.md), [client certificates](api-management-howto-mutual-certificates-for-clients.md), and [restricting caller IP addresses](./api-management-access-restriction-policies.md#RestrictCallerIPs). + > [!NOTE] + > Open products aren't listed in the developer portal for developers to learn about or subscribe to. They're visible only to the **Administrators** group. You'll need to use another mechanism to inform developers about APIs that can be accessed without a subscription key. + When a client makes an API request without a subscription key: * API Management checks whether the API is associated with an open product. |
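To illustrate the open-product behavior called out in the note above, the sketch below creates a product that doesn't require a subscription key. The service and product names are placeholders, and the `--subscription-required` and `--state` flag names are assumptions to verify against your Azure CLI version.

```azurecli
# Placeholder names; flag names are assumptions to verify against your Azure CLI version.
az apim product create \
  --resource-group myResourceGroup \
  --service-name myapim \
  --product-id open-sample \
  --product-name "Open Sample" \
  --subscription-required false \
  --state published
```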
api-management | Api Management Howto Deploy Multi Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md | Title: Deploy Azure API Management services to multiple Azure regions + Title: Deploy Azure API Management instance to multiple Azure regions -description: Learn how to deploy an Azure API Management service instance to multiple Azure regions. +description: Learn how to deploy a Premium tier Azure API Management instance to multiple Azure regions to improve API gateway availability. Previously updated : 04/13/2021 Last updated : 09/27/2022 -# How to deploy an Azure API Management service instance to multiple Azure regions +# Deploy an Azure API Management instance to multiple Azure regions -Azure API Management supports multi-region deployment, which enables API publishers to distribute a single Azure API management service across any number of supported Azure regions. Multi-region feature helps reduce request latency perceived by geographically distributed API consumers and improves service availability if one region goes offline. +Azure API Management supports multi-region deployment, which enables API publishers to add regional API gateways to an existing API Management instance in one or more supported Azure regions. Multi-region deployment helps reduce request latency perceived by geographically distributed API consumers and improves service availability if one region goes offline. -A new Azure API Management service initially contains only one [unit][unit] in a single Azure region, the Primary region. Additional units can be added to the Primary or Secondary regions. An API Management gateway component is deployed to every selected Primary and Secondary region. Incoming API requests are automatically directed to the closest region. If a region goes offline, the API requests will be automatically routed around the failed region to the next closest gateway. +When adding a region, you configure: -> [!NOTE] -> Only the gateway component of API Management is deployed to all regions. The service management component and developer portal are hosted in the Primary region only. Therefore, in case of the Primary region outage, access to the developer portal and ability to change configuration (e.g. adding APIs, applying policies) will be impaired until the Primary region comes back online. While the Primary region is offline, available Secondary regions will continue to serve the API traffic using the latest configuration available to them. Optionally enable [zone redundancy](../availability-zones/migrate-api-mgt.md) to improve the availability and resiliency of the Primary or Secondary regions. +* The number of scale [units](upgrade-and-scale.md) that region will host. ++* Optional [zone redundancy](../availability-zones/migrate-api-mgt.md), if that region supports it. ++* [Virtual network](virtual-network-concepts.md) settings in the added region, if networking is configured in the existing region or regions. >[!IMPORTANT] > The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo. For all other regions, customer data is stored in Geo. 
[!INCLUDE [premium.md](../../includes/api-management-availability-premium.md)] +## About multi-region deployment + ## Prerequisites -* If you have not yet created an API Management service instance, see [Create an API Management service instance](get-started-create-service-instance.md). Select the Premium service tier. -* If your API Management instance is deployed in a [virtual network](api-management-using-with-vnet.md), ensure that you set up a virtual network, subnet, and public IP address in the location that you plan to add. +* If you haven't created an API Management service instance, see [Create an API Management service instance](get-started-create-service-instance.md). Select the Premium service tier. +* If your API Management instance is deployed in a virtual network, ensure that you set up a virtual network, subnet, and public IP address in the location that you plan to add. See [virtual network prerequisites](api-management-using-with-vnet.md#prerequisites). -## <a name="add-region"> </a>Deploy API Management service to an additional location +## <a name="add-region"> </a>Deploy API Management service to an additional region -1. In the Azure portal, navigate to your API Management service and select **Locations** in the menu. +1. In the Azure portal, navigate to your API Management service and select **Locations** from the left menu. 1. Select **+ Add** in the top bar.-1. Select the location from the drop-down list. +1. Select the added location from the dropdown list. 1. Select the number of scale **[Units](upgrade-and-scale.md)** in the location.-1. Optionally enable [**Availability zones**](../availability-zones/migrate-api-mgt.md). +1. Optionally select one or more [**Availability zones**](../availability-zones/migrate-api-mgt.md). 1. If the API Management instance is deployed in a [virtual network](api-management-using-with-vnet.md), configure virtual network settings in the location. Select an existing virtual network, subnet, and public IP address that are available in the location. 1. Select **Add** to confirm. 1. Repeat this process until you configure all locations. 1. Select **Save** in the top bar to start the deployment process. -## <a name="remove-region"> </a>Delete an API Management service location +## <a name="remove-region"> </a>Remove an API Management service region -1. In the Azure portal, navigate to your API Management service and click on the **Locations** entry in the menu. -2. For the location you would like to remove, open the context menu using the **...** button at the right end of the table. Select the **Delete** option. -3. Confirm the deletion and click **Save** to apply the changes. +1. In the Azure portal, navigate to your API Management service and select **Locations** from the left menu. +2. For the location you would like to remove, select the context menu using the **...** button at the right end of the table. Select **Delete**. +3. Confirm the deletion and select **Save** to apply the changes. ## <a name="route-backend"> </a>Route API calls to regional backend services -By default, each API routes requests to a single backend service URL. Even though there are Azure API Management instances in various regions, the API gateway will still forward requests to the same backend service, which is deployed in only one region. In this case, the performance gain will come only from responses cached within Azure API Management in a region specific to the request, but contacting the backend across the globe may still cause high latency. 
+By default, each API routes requests to a single backend service URL. Even if you've configured Azure API Management gateways in various regions, the API gateway will still forward requests to the same backend service, which is deployed in only one region. In this case, the performance gain will come only from responses cached within Azure API Management in a region specific to the request; contacting the backend across the globe may still cause high latency. -To fully leverage geographical distribution of your system, you should have backend services deployed in the same regions as Azure API Management instances. Then, using policies and `@(context.Deployment.Region)` property, you can route the traffic to local instances of your backend. +To take advantage of geographical distribution of your system, you should have backend services deployed in the same regions as Azure API Management instances. Then, using policies and `@(context.Deployment.Region)` property, you can route the traffic to local instances of your backend. -1. Navigate to your Azure API Management instance and click on **APIs** from the left menu. +> [!TIP] +> Optionally set the `disableGateway` property in a regional gateway to disable routing of API traffic there. For example, temporarily disable a regional gateway when testing or updating a regional backend service. ++1. Navigate to your Azure API Management instance and select **APIs** from the left menu. 2. Select your desired API.-3. Click **Code editor** from the arrow dropdown in the **Inbound processing**. +3. Select **Code editor** from the arrow dropdown in the **Inbound processing**.  4. Use the `set-backend` combined with conditional `choose` policies to construct a proper routing policy in the `<inbound> </inbound>` section of the file. - For example, the below XML file would work for West US and East Asia regions: + For example, the following XML file would work for West US and East Asia regions: ```xml <policies> To fully leverage geographical distribution of your system, you should have back <base /> <choose> <when condition="@("West US".Equals(context.Deployment.Region, StringComparison.OrdinalIgnoreCase))">- <set-backend-service base-url="http://contoso-us.com/" /> + <set-backend-service base-url="http://contoso-backend-us.com/" /> </when> <when condition="@("East Asia".Equals(context.Deployment.Region, StringComparison.OrdinalIgnoreCase))">- <set-backend-service base-url="http://contoso-asia.com/" /> + <set-backend-service base-url="http://contoso-backend-asia.com/" /> </when> <otherwise>- <set-backend-service base-url="http://contoso-other.com/" /> + <set-backend-service base-url="http://contoso-backend-other.com/" /> </otherwise> </choose> </inbound> To fully leverage geographical distribution of your system, you should have back ## <a name="custom-routing"> </a>Use custom routing to API Management regional gateways -API Management routes the requests to a regional _gateway_ based on [the lowest latency](../traffic-manager/traffic-manager-routing-methods.md#performance). Although it is not possible to override this setting in API Management, you can use your own Traffic Manager with custom routing rules. +API Management routes the requests to a regional gateway based on [the lowest latency](../traffic-manager/traffic-manager-routing-methods.md#performance). Although it isn't possible to override this setting in API Management, you can use your own Traffic Manager with custom routing rules. 1. 
Create your own [Azure Traffic Manager](https://azure.microsoft.com/services/traffic-manager/).-1. If you are using a custom domain, [use it with the Traffic Manager](../traffic-manager/traffic-manager-point-internet-domain.md) instead of the API Management service. +1. If you're using a custom domain, [use it with the Traffic Manager](../traffic-manager/traffic-manager-point-internet-domain.md) instead of the API Management service. 1. [Configure the API Management regional endpoints in Traffic Manager](../traffic-manager/traffic-manager-manage-endpoints.md). The regional endpoints follow the URL pattern of `https://<service-name>-<region>-01.regional.azure-api.net`, for example `https://contoso-westus2-01.regional.azure-api.net`. 1. [Configure the API Management regional status endpoints in Traffic Manager](../traffic-manager/traffic-manager-monitoring.md). The regional status endpoints follow the URL pattern of `https://<service-name>-<region>-01.regional.azure-api.net/status-0123456789abcdef`, for example `https://contoso-westus2-01.regional.azure-api.net/status-0123456789abcdef`. 1. Specify [the routing method](../traffic-manager/traffic-manager-routing-methods.md) of the Traffic Manager. +## Virtual networking ++This section provides considerations for multi-region deployments when the API Management instance is injected in a virtual network. ++* Configure each regional network independently. The [connectivity requirements](virtual-network-reference.md) such as required network security group rules for a virtual network in an added region are the same as those for a network in the primary region. +* Virtual networks in the different regions don't need to be peered. ++### IP addresses ++* A public virtual IP address is created in every region added with a virtual network. For virtual networks in either [external mode](api-management-using-with-vnet.md) or [internal mode](api-management-using-with-internal-vnet.md), this public IP address is required for management traffic on port `3443`. ++ * **External VNet mode** - The public IP addresses are also required to route public HTTP traffic to the API gateways. ++ * **Internal VNet mode** - A private IP address is also created in every region added with a virtual network. Use these addresses to connect within the network to the API Management endpoints in the primary and secondary regions. ++### Routing ++* **External VNet mode** - Routing of public HTTP traffic to the regional gateways is handled automatically, in the same way it is for a non-networked API Management instance. ++* **Internal VNet mode** - Private HTTP traffic isn't routed or load-balanced to the regional gateways by default. Users own the routing and are responsible for bringing their own solution to manage routing and private load balancing across multiple regions. Example solutions include Azure Application Gateway and Azure Traffic Manager. ++## Next steps ++* Learn more about [zone redundancy](../availability-zones/migrate-api-mgt.md) to improve the availability of an API Management instance in a region. 
++* For more information about virtual networks and API Management, see: ++ * [Connect to a virtual network using Azure API Management](api-management-using-with-vnet.md) ++ * [Connect to a virtual network in internal mode using Azure API Management](api-management-using-with-internal-vnet.md) ++ * [IP addresses of API Management](api-management-howto-ip-addresses.md) ++ [create an api management service instance]: get-started-create-service-instance.md [get started with azure api management]: get-started-create-service-instance.md [deploy an api management service instance to a new region]: #add-region |
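As a companion to the custom-routing steps in the multi-region entry above, the following sketch creates a performance-routed Traffic Manager profile that probes the regional status endpoint and registers one regional API Management gateway as an external endpoint. The profile name, resource group, and hostnames are placeholders that follow the regional URL patterns described above.

```azurecli
# Placeholder names; the target hostname follows the regional gateway pattern shown above.
az network traffic-manager profile create \
  --name contoso-apim-tm \
  --resource-group myResourceGroup \
  --unique-dns-name contoso-apim-tm \
  --routing-method Performance \
  --protocol HTTPS \
  --port 443 \
  --path "/status-0123456789abcdef"

# Register one regional gateway; repeat for each additional region.
az network traffic-manager endpoint create \
  --profile-name contoso-apim-tm \
  --resource-group myResourceGroup \
  --name westus2 \
  --type externalEndpoints \
  --endpoint-location "West US 2" \
  --target contoso-westus2-01.regional.azure-api.net
```

Repeat the endpoint command for each region, then point your custom domain at the Traffic Manager profile as described in the linked articles.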
api-management | Front Door Api Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/front-door-api-management.md | + + Title: Configure Azure Front Door in front of Azure API Management +description: Learn how to front your API Management instance with Azure Front Door Standard/Premium to provide global HTTPS load balancing, TLS offloading, dynamic request acceleration, and other capabilities. +++++ Last updated : 09/27/2022+++# Configure Front Door Standard/Premium in front of Azure API Management ++Azure Front Door is a modern application delivery network platform providing a secure, scalable content delivery network (CDN), dynamic site acceleration, and global HTTP(s) load balancing for your global web applications. When used in front of API Management, Front Door can provide TLS offloading, end-to-end TLS, load balancing, response caching of GET requests, and a web application firewall, among other capabilities. For a full list of supported features, see [What is Azure Front Door?](../frontdoor/front-door-overview.md) ++This article shows how to: ++* Set up an Azure Front Door Standard/Premium profile in front of a publicly accessible Azure API Management instance: either non-networked, or injected in a virtual network in [external mode](api-management-using-with-vnet.md). +* Restrict API Management to accept API traffic only from Azure Front Door. ++## Prerequisites ++* An API Management instance. + * If you choose to use a network-injected instance, it must be deployed in an external VNet. (Virtual network injection is supported in the Developer and Premium service tiers.) +* Import one or more APIs to your API Management instance to confirm routing through Front Door. ++## Configure Azure Front Door ++### Create profile ++For steps to create an Azure Front Door Standard/Premium profile, see [Quickstart: Create an Azure Front Door profile - Azure portal](../frontdoor/create-front-door-portal.md). For this article, you may choose a Front Door Standard profile. For a comparison of Front Door Standard and Front Door Premium, see [Tier comparison](../frontdoor/standard-premium/tier-comparison.md). ++Configure the following Front Door settings that are specific to using the gateway endpoint of your API Management instance as a Front Door origin. For an explanation of other settings, see the Front Door quickstart. ++|Setting |Value | +||| +| **Origin type** | Select **API Management** | +| **Origin hostname** | Select the hostname of your API Management instance, for example, *myapim*.azure-api.net | +| **Caching** | Select **Enable caching** for Front Door to [cache static content](../frontdoor/front-door-caching.md?pivots=front-door-standard-premium) | +| **Query string caching behavior** | Select **Use Query String** | +++### Update default origin group ++After the profile is created, update the default origin group to include an API Management health probe. ++1. In the [portal](https://portal.azure.com), go to your Front Door profile. +1. In the left menu, under **Settings** select **Origin groups** > **default-origin-group**. +1. 
In the **Update origin group** window, configure the following **Health probe** settings and select **Update**: ++ + |Setting |Value | + ||| + |**Status** | Select **Enable health probes** | + |**Path** | Enter `/status-0123456789abcdef` | + |**Protocol** | Select **HTTPS** | + |**Method** | Select **GET** | + |**Interval (in seconds)** | Enter **30** | + + :::image type="content" source="media/front-door-api-management/update-origin-group.png" alt-text="Screenshot of updating the default origin group in the portal."::: ++### Update default route ++We recommend updating the default route that's associated with the API Management origin group to use HTTPS as the forwarding protocol. ++1. In the [portal](https://portal.azure.com), go to your Front Door profile. +1. In the left menu, under **Settings** select **Origin groups**. +1. Expand **default-origin-group**. +1. In the context menu (**...**) of **default-route**, select **Configure route**. +1. Set **Accepted protocols** to **HTTP and HTTPS**. +1. Enable **Redirect all traffic to use HTTPS**. +1. Set **Forwarding protocol** to **HTTPS only** and then select **Update**. ++### Test the configuration ++Test the Front Door profile configuration by calling an API hosted by API Management. First, call the API directly through the API Management gateway to ensure that the API is reachable. Then, call the API through Front Door. To test, you can use a command line client such as `curl` for the calls, or a tool such as [Postman](https://www.getpostman.com). ++### Call an API directly through API Management ++In the following example, an operation in the Demo Conference API hosted by the API Management instance is called directly using Postman. In this example, the instance's hostname is in the default `azure-api.net` domain, and a valid subscription key is passed using a request header. A successful response shows `200 OK` and returns the expected data: +++### Call an API through Front Door ++In the following example, the same operation in the Demo Conference API is called using the Front Door endpoint configured for your instance. The endpoint's hostname in the `azurefd.net` domain is shown in the portal on the **Overview** page of your Front Door profile. A successful response shows `200 OK` and returns the same data as in the previous example: +++## Restrict incoming traffic to API Management instance ++Use API Management policies to ensure that your API Management instance accepts traffic only from Azure Front Door. You can accomplish this restriction using one or both of the [following methods](../frontdoor/front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-): ++1. Restrict incoming IP addresses to your API Management instances +1. Restrict traffic based on the value of the `X-Azure-FDID` header ++### Restrict incoming IP addresses ++You can configure an inbound [ip-filter](./api-management-access-restriction-policies.md#RestrictCallerIPs) policy in API Management to allow only Front Door-related traffic, which includes: ++* **Front Door's backend IP address space** - Allow IP addresses corresponding to the *AzureFrontDoor.Backend* section in [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519). ++ > [!NOTE] + > If your API Management instance is deployed in an external virtual network, accomplish the same restriction by adding an inbound network security group rule in the subnet used for your API Management instance. 
Configure the rule to allow HTTPS traffic from source service tag *AzureFrontDoor.Backend* on port 443. ++* **Azure infrastructure services** - Allow IP addresses 168.63.129.16 and 169.254.169.254. ++### Check Front Door header ++Requests routed through Front Door include headers specific to your Front Door configuration. You can configure the [check-header](./api-management-access-restriction-policies.md#CheckHTTPHeader) policy to filter incoming requests based on the unique value of the `X-Azure-FDID` HTTP request header that is sent to API Management. This header value is the **Front Door ID**, which is shown in the portal on the **Overview** page of the Front Door profile. ++In the following policy example, the Front Door ID is specified using a [named value](api-management-howto-properties.md) named `FrontDoorId`. ++```xml +<check-header name="X-Azure-FDID" failed-check-httpcode="403" failed-check-error-message="Invalid request." ignore-case="false"> + <value>{{FrontDoorId}}</value> +</check-header> +``` ++Requests that aren't accompanied by a valid `X-Azure-FDID` header return a `403 Forbidden` response. ++## (Optional) Configure Front Door for developer portal ++Optionally, configure the API Management instance's developer portal as an endpoint in the Front Door profile. While the managed developer portal is already fronted by an Azure-managed CDN, you might want to take advantage of Front Door features such as a WAF. ++The following are the high-level steps to add an endpoint for the developer portal to your profile: ++* To add an endpoint and configure a route, see [Configure an endpoint with Front Door manager](../frontdoor/how-to-configure-endpoints.md). ++* When adding the route, add an origin group and origin settings to represent the developer portal: ++ * **Origin type** - Select **Custom** + * **Host name** - Enter the developer portal's hostname, for example, *myapim*.developer.azure-api.net ++For more information and details about settings, see [How to configure an origin for Azure Front Door](../frontdoor/how-to-configure-origin.md#create-a-new-origin-group). ++> [!NOTE] +> If you've configured an [Azure AD](api-management-howto-aad.md) or [Azure AD B2C](api-management-howto-aad-b2c.md) identity provider for the developer portal, you need to update the corresponding app registration with an additional redirect URL to Front Door. In the app registration, add the URL for the developer portal endpoint configured in your Front Door profile. ++## Next steps ++* To automate deployments of Front Door with API Management, see the template [Front Door Standard/Premium with API Management origin](https://azure.microsoft.com/resources/templates/front-door-standard-premium-api-management-external/) ++* Learn how to deploy [Web Application Firewall (WAF)](../web-application-firewall/afds/afds-overview.md) on Azure Front Door to protect the API Management instance from malicious attacks. |
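The note in the Front Door entry above mentions locking down an externally injected API Management instance with a network security group rule instead of, or in addition to, the `ip-filter` policy. A minimal sketch of such a rule follows, assuming a placeholder NSG attached to the API Management subnet.

```azurecli
# Placeholder NSG and rule names; allows Front Door's backend address space to reach the gateway on 443.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name apim-subnet-nsg \
  --name AllowFrontDoorBackend \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureFrontDoor.Backend \
  --source-port-ranges "*" \
  --destination-port-ranges 443
```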
api-management | High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/high-availability.md | + + Title: Ensure reliability of your Azure API Management instance ++description: Learn how to use Azure reliability features including availability zones and multiregion deployments to make your Azure API Management service instance resilient to cloud failures. +++ Last updated : 09/27/2022++++# Ensure API Management availability and reliability +++This article introduces service capabilities and considerations to ensure that your API Management instance continues to serve API requests if Azure outages occur. ++API Management supports the following key service capabilities that are recommended for [reliable and resilient](../availability-zones/overview.md) Azure solutions. Use them individually, or together, to improve the availability of your API Management solution: ++* **Availability zones**, to provide resilience to datacenter-level outages ++* **Multi-region deployment**, to provide resilience to regional outages ++> [!NOTE] +> API Management supports availability zones and multi-region deployment in the **Premium** service tier. ++## Availability zones ++Azure [availability zones](../availability-zones/az-overview.md) are physically separate locations within an Azure region that are tolerant to datacenter-level failures. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions. +++Enabling [zone redundancy](../availability-zones/migrate-api-mgt.md) for an API Management instance in a supported region provides redundancy for all [service components](api-management-key-concepts.md#api-management-components): gateway, management plane, and developer portal. Azure automatically replicates all service components across the zones that you select. ++When you enable zone redundancy in a region, consider the number of API Management scale [units](upgrade-and-scale.md) that need to be distributed. Minimally, configure the same number of units as the number of availability zones, or a multiple so that the units are distributed evenly across the zones. For example, if you select 3 availability zones in a region, you could have 3 units so that each zone hosts one unit. ++> [!NOTE] +> Use the [capacity](api-management-capacity.md) metric and your own testing to decide on the number of scale units that will provide the gateway performance for your needs. Learn more about [scaling and upgrading](upgrade-and-scale.md) your service instance. ++## Multi-region deployment +++## Combine availability zones and multi-region deployment ++The combination of availability zones for redundancy within a region, and multi-region deployments to improve the gateway availability if there is a regional outage, helps enhance both the reliability and performance of your API Management instance. ++Examples: ++* Use availability zones to improve the resilience of the primary region in a multi-region deployment ++* Distribute scale units across availability zones and regions to enhance regional gateway performance +++## SLA considerations ++API Management provides an SLA of 99.99% when you deploy at least one unit in two or more availability zones or regions. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/api-management/). 
++> [!NOTE] +> While Azure continually strives for highest possible resiliency in SLA for the cloud platform, you must define your own target SLAs for other components of your solution. ++## Backend availability ++Depending on where and how your backend services are hosted, you may need to set up redundant backends in different regions to meet your requirements for service availability. You can manage regional backends and handle failover through API Management to maintain availability. For example: ++* In multi-region deployments, use [policies to route requests](api-management-howto-deploy-multi-region.md#-route-api-calls-to-regional-backend-services) through regional gateways to regional backends. ++* Configure policies to route requests conditionally to different backends if there is backend failure in a particular region. ++* Use caching to reduce failing calls. ++For details, see the blog post [Back-end API redundancy with Azure API Manager](https://devblogs.microsoft.com/premier-developer/back-end-api-redundancy-with-azure-api-manager/). ++## Next steps ++* Learn more about [resiliency in Azure](../availability-zones/overview.md) +* Learn more about [designing reliable Azure applications](/azure/architecture/framework/resiliency/app-design) +* Read [API Management and reliability](/azure/architecture/framework/services/networking/api-management/reliability) in the Azure Well-Architected Framework |
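To make the scale-unit guidance in the reliability entry above concrete, the sketch below creates a Premium instance with three units, matching a selection of three availability zones. The instance name, resource group, and publisher details are placeholders, and zone selection itself is performed as described in the linked zone-redundancy guidance rather than in this command.

```azurecli
# Placeholder names and publisher details; three units to match three availability zones.
az apim create \
  --name contoso-apim \
  --resource-group myResourceGroup \
  --location westus2 \
  --publisher-name Contoso \
  --publisher-email admin@contoso.com \
  --sku-name Premium \
  --sku-capacity 3
```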
api-management | Upgrade And Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md | You can choose between four dedicated tiers: **Developer**, **Basic**, **Standa 1. Select **Apply**. > [!NOTE]-> In the Premium service tier, you can optionally configure availability zones and a virtual network in a selected location. For more information, see [Deploy API Management service to an additional location](api-management-howto-deploy-multi-region.md#-deploy-api-management-service-to-an-additional-location). +> In the Premium service tier, you can optionally configure availability zones and a virtual network in a selected location. For more information, see [Deploy API Management service to an additional location](api-management-howto-deploy-multi-region.md). ## Change your API Management service tier |
api-management | Virtual Network Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md | The following are virtual network resource requirements for API Management. Some The minimum size of the subnet in which API Management can be deployed is /29, which gives three usable IP addresses. Each extra scale [unit](api-management-capacity.md) of API Management requires two more IP addresses. The minimum size requirement is based on the following considerations: -* Azure reserves some IP addresses within each subnet that can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance. Three more addresses are used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets). +* Azure reserves five IP addresses within each subnet that can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance. Three more addresses are used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets). * In addition to the IP addresses used by the Azure VNet infrastructure, each API Management instance in the subnet uses:- * Two IP addresses per unit of Premium SKU, or + * Two IP addresses per unit of Basic, Standard, or Premium SKU, or * One IP address for the Developer SKU. * When deploying into an [internal VNet](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer. +#### Examples ++* For Basic, Standard, or Premium SKUs: ++ * **/29 subnet**: 8 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 0 remaining IP addresses left for scaling units. + + * **/28 subnet**: 16 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 8 remaining IP addresses left for four scale-out units (2 IP addresses/scale-out unit) for a total of five units. **This subnet efficiently maximizes Basic and Standard SKU scale-out limits.** + + * **/27 subnet**: 32 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 24 remaining IP addresses left for twelve scale-out units (2 IP addresses/scale-out unit) for a total of thirteen units. **This subnet efficiently maximizes the soft-limit Premium SKU scale-out limit.** + + * **/26 subnet**: 64 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 56 remaining IP addresses left for twenty-eight scale-out units (2 IP addresses/scale-out unit) for a total of twenty-nine units. It is possible, with an Azure Support ticket, to scale the Premium SKU past twelve units. If you foresee such high demand, consider the /26 subnet. 
+ + * **/25 subnet**: 128 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 120 remaining IP addresses left for sixty scale-out units (2 IP addresses/scale-out unit) for a total of sixty-one units. This is an extremely large, theoretical number of scale-out units. + ### Routing See the Routing guidance when deploying your API Management instance into an [external VNet](./api-management-using-with-vnet.md#routing) or [internal VNet](./api-management-using-with-internal-vnet.md#routing). |
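Following the subnet-sizing examples in the entry above, here is a minimal sketch of creating a /27 subnet for an API Management instance; the virtual network names and address range are placeholders.

```azurecli
# Placeholder names and address space; /27 covers the five reserved Azure addresses plus room to scale out.
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name apim-vnet \
  --name apim-subnet \
  --address-prefixes 10.0.1.0/27
```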
app-service | Deploy Zip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md | az webapp deploy --resource-group <group-name> --name <app-name> --src-path ./<p [!INCLUDE [deploying to network secured sites](../../includes/app-service-deploy-network-secured-sites.md)] -The following example uses the `--src-url` parameter to specify the URL of an Azure Storage account that the web app should pull the ZIP from. +The following example uses the `--src-url` parameter to specify the URL of an Azure Storage account that the web app should pull the WAR from. ```azurecli-interactive-az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.war?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3 +az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.war?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3 --type war ``` The CLI command uses the [Kudu publish API](#kudu-publish-api-reference) to deploy the package and can be fully customized. |
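One way to produce the SAS URL that the `--src-url` example above expects is to generate it with the storage CLI, as sketched below. The storage account, container, and blob names are placeholders, authentication parameters are omitted, and the `--full-uri` flag should be verified against your Azure CLI version.

```azurecli
# Placeholder storage names; generates a read-only SAS URL for the WAR package.
SRC_URL=$(az storage blob generate-sas \
  --account-name storagesample \
  --container-name sample-container \
  --name myapp.war \
  --permissions r \
  --expiry 2023-12-31T00:00:00Z \
  --https-only \
  --full-uri \
  --output tsv)

az webapp deploy --resource-group <group-name> --name <app-name> --src-url "$SRC_URL" --type war
```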
app-service | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md | Title: App Service Environment networking description: App Service Environment networking details Previously updated : 08/01/2022 Last updated : 09/27/2022 You must delegate the subnet to `Microsoft.Web/hostingEnvironments`, and the sub The size of the subnet can affect the scaling limits of the App Service plan instances within the App Service Environment. It's a good idea to use a `/24` address space (256 addresses) for your subnet, to ensure enough addresses to support production scale. +>[!NOTE] +> Windows Containers uses an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If your App Service Environment has, for example, 2 Windows Container App Service plans, each with 25 instances and each with 5 apps running, you will need 300 IP addresses, plus additional addresses to support horizontal (out/in) scaling. + If you use a smaller subnet, be aware of the following: - Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses). |
app-service | Overview Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md | Title: Integrate your app with an Azure virtual network description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 08/01/2022 Last updated : 09/27/2022 When you scale up or down in size, the required address space is doubled for a s <sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point. -Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. +Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. If the subnet already exists before integrating through the portal, you can use a /28 subnet. ++>[!NOTE] +> Windows Containers uses an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If you have, for example, 10 Windows Container App Service plan instances with 4 apps running, you will need 50 IP addresses, plus additional addresses to support horizontal (in/out) scale. When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration. |
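As a rough companion to the Windows Containers note above, the following PowerShell sketch simply restates the sizing rule: one address per plan instance plus one per app per instance, before Azure's five reserved addresses and any scale-out headroom. It's an illustration under those assumptions, not an official calculator.

```powershell
# Illustrative only: estimate addresses consumed by a Windows Container plan.
function Get-WindowsContainerAddressEstimate {
    param(
        [int]$Instances,        # App Service plan instances
        [int]$AppsPerInstance   # apps running on the plan
    )
    # 1 address per instance + 1 per app per instance
    return $Instances * (1 + $AppsPerInstance)
}

Get-WindowsContainerAddressEstimate -Instances 10 -AppsPerInstance 4   # 50, matching the example above
```

Add Azure's five reserved addresses and headroom for scaling before choosing a prefix; the `/26` recommendation above follows from that.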
app-service | Reference App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md | For more information on custom containers, see [Run a custom container in Azure] | Setting name| Description | Example | |-|-|-|-| `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | Set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `false` for custom containers. || +| `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | Set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `true` for custom containers. || | `WEBSITES_CONTAINER_START_TIME_LIMIT` | Amount of time in seconds to wait for the container to complete start-up before restarting the container. Default is `230`. You can increase it up to the maximum of `1800`. || | `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable is not passed on to the container. | `https://<server-name>.azurecr.io` | | `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. || |
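If you need to flip `WEBSITES_ENABLE_APP_SERVICE_STORAGE` (or any setting in the table above) outside the portal, a sketch like the following works with the Az PowerShell module. Because `Set-AzWebApp -AppSettings` replaces the whole app-settings collection, the existing settings are copied first; the resource group and app names are placeholders.

```powershell
# Hypothetical resource names; adjust to your environment.
$rg  = "my-rg"
$app = "my-custom-container-app"

# Copy the existing settings, because Set-AzWebApp -AppSettings replaces the full set.
$site     = Get-AzWebApp -ResourceGroupName $rg -Name $app
$settings = @{}
foreach ($s in $site.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }

# Share /home across scaled instances of a custom container.
$settings["WEBSITES_ENABLE_APP_SERVICE_STORAGE"] = "true"

Set-AzWebApp -ResourceGroupName $rg -Name $app -AppSettings $settings
```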
automation | Automation Graphical Authoring Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-graphical-authoring-intro.md | +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts). + All runbooks in Azure Automation are Windows PowerShell workflows. Graphical runbooks and graphical PowerShell Workflow runbooks generate PowerShell code that the Automation workers run but that you cannot view or modify. You can convert a graphical runbook to a graphical PowerShell Workflow runbook, and vice versa. However, you can't convert these runbooks to a textual runbook. Additionally, the Automation graphical editor can't import a textual runbook. Graphical authoring allows you to create runbooks for Azure Automation without the complexities of the underlying Windows PowerShell or PowerShell Workflow code. You can add activities to the canvas from a library of cmdlets and runbooks, link them together, and configure them to form a workflow. If you have ever worked with System Center Orchestrator or Service Management Automation (SMA), graphical authoring should look familiar. This article provides an introduction to the concepts you need to get started creating a graphical runbook. > [!NOTE] > You can't add a digital signature to a Graphical runbook. This feature is not supported in Azure Automation.-> ## Overview of graphical editor |
automation | Automation Hrw Run Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md | +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts). ++ Runbooks that run on a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md) typically manage resources on the local computer or against resources in the local environment where the worker is deployed. Runbooks in Azure Automation typically manage resources in the Azure cloud. Even though they are used differently, runbooks that run in Azure Automation and runbooks that run on a Hybrid Runbook Worker are identical in structure. When you author a runbook to run on a Hybrid Runbook Worker, you should edit and test the runbook on the machine that hosts the worker. The host machine has all the PowerShell modules and network access required to manage the local resources. Once you test the runbook on the Hybrid Runbook Worker machine, you can then upload it to the Azure Automation environment, where it can be run on the worker. |
automation | Automation Managed Identity Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managed-identity-faq.md | + + Title: Azure Automation migration to managed identity FAQ +description: This article gives answers to frequently asked questions when you're migrating from Run As account to managed identity +++ Last updated : 07/25/2021++#Customer intent: As an implementer, I want answers to various questions. +++# Frequently asked questions when migrating from Run As account to managed identities ++This Microsoft FAQ is a list of commonly asked questions when you're migrating from Run As account to Managed Identity. If you have any other questions about the capabilities, go to the [discussion forum](https://aka.ms/retirement-announcement-automation-runbook-start-using-managed-identities) and post your questions. When a question is frequently asked, we add it to this article so that it benefits all. ++## How long will you support Run As account? + +Automation Run As accounts will be supported for one more year, until **September 30, 2023**. While we continue to support existing users, we recommend that all new users use managed identities as the preferred method of runbook authentication. Existing users can still create a Run As account, see the account properties, and renew the certificate upon expiration until **January 30, 2023**. After this date, you won't be able to create a Run As account from the Azure portal. You will still be able to create a Run As account through a [PowerShell script](/azure/automation/create-run-as-account#create-account-using-powershell) for the remainder of the support period. You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the certificate after **January 30, 2023** and until **September 30, 2023**. This script assesses Automation accounts that have Run As accounts configured and renews the certificate if you choose to do so. On confirmation, it renews the key credentials of the Azure AD app and uploads a new self-signed certificate to the Azure AD app. +++## Will existing runbooks that use the Run As account be able to authenticate? +Yes, they will be able to authenticate and there will be no impact to the existing runbooks using Run As account. ++## How can I renew existing Run As accounts after January 30, 2023, when portal support for renewing the account is removed? +You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the Run As account certificate after January 30, 2023 and until September 30, 2023. ++## Can a Run As account still be created after September 30, 2023, when the Run As account retires? +Yes, you can still create a Run As account using the [PowerShell script](../automation/create-run-as-account.md#create-account-using-powershell). However, this would be an unsupported scenario. ++## Can Run As accounts still be renewed after September 30, 2023, when the Run As account retires? +You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the Run As account certificate after September 30, 2023, when the Run As account retires. However, it would be an unsupported scenario. ++## Will the runbooks that still use the Run As account be able to authenticate even after September 30, 2023? 
+Yes, the runbooks will be able to authenticate until the Run As account certificate expires. ++## What is managed identity? +Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without managing credentials, secrets, certificates, or keys. ++For more information about managed identities in Azure AD, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview). ++## What can I do with a managed identity in Automation accounts? +An Azure Automation managed identity from Azure Active Directory (Azure AD) allows your runbook to access other Azure AD-protected resources easily. This identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. Key benefits are: +- You can use managed identities to authenticate to any Azure service that supports Azure AD authentication. +- Managed identities eliminate the management overhead associated with managing a Run As account in your runbook code. You can access resources via a managed identity of an Automation account from a runbook without worrying about creating the service principal, Run As certificate, Run As connection, and so on. +- You don't have to renew the certificate used by the Automation Run As account. + +## Are managed identities more secure than a Run As account? +A Run As account creates an Azure AD app that manages the resources within the subscription through a certificate that, by default, has Contributor access at the subscription level. A malicious user could use this certificate to perform a privileged operation against resources in the subscription, leading to potential vulnerabilities. Run As accounts also carry management overhead that involves creating a service principal, RunAsCertificate, RunAsConnection, certificate renewal, and so on. ++Managed identities eliminate this overhead by providing a secure method for users to authenticate and access resources that support Azure AD authentication without worrying about any certificate or credential management. ++## Can managed identity be used for both cloud and hybrid jobs? +Azure Automation supports [System-assigned managed identities](/azure/automation/automation-security-overview#managed-identities) for both cloud and hybrid jobs. Currently, Azure Automation [User-assigned managed identities](/azure/automation/automation-security-overview#managed-identities-preview) can be used for cloud jobs only and cannot be used for jobs run on a Hybrid Worker. ++## Can I use a Run As account for a new Automation account? +Yes, but only in scenarios where managed identities aren't supported for specific on-premises resources. We'll allow the creation of Run As accounts through a [PowerShell script](/azure/automation/create-run-as-account#create-account-using-powershell). ++## How can I migrate from existing Run As accounts to managed identities? +Follow the steps mentioned in [Migrate Run As accounts to managed identity](/azure/automation/migrate-run-as-accounts-managed-identity). ++## How do I see which runbooks are using a Run As account, and what permissions are assigned to the Run As account? 
+Use the [script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) here to find out which Automation accounts are using Run As account. If your Azure Automation accounts contain a Run As account, it will by default, have the built-in contributor role assigned to it. You can use this script to check the role assignments of your Azure Automation Run As accounts and determine if their role assignment is the default one or if it has been changed to a different role definition. ++## Next steps ++If your question isn't answered here, you can refer to the following sources for more questions and answers. ++- [Azure Automation](https://docs.microsoft.com/answers/topics/azure-automation.html) +- [Feedback forum](https://feedback.azure.com/d365community/forum/721a322e-bd25-ec11-b6e6-000d3a4f0f1c) |
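The FAQ above points to an assessment script for finding Run As usage. As a lighter-weight, hand-rolled alternative (not the linked script), a sketch like the following lists Automation accounts in the current subscription that still define the default `AzureRunAsConnection`; it assumes the Az.Automation module and an existing `Connect-AzAccount` session.

```powershell
# Minimal sketch: find Automation accounts that still define the default Run As connection.
Get-AzAutomationAccount | ForEach-Object {
    $conn = Get-AzAutomationConnection -ResourceGroupName $_.ResourceGroupName `
        -AutomationAccountName $_.AutomationAccountName `
        -Name "AzureRunAsConnection" -ErrorAction SilentlyContinue

    if ($conn) {
        [pscustomobject]@{
            AutomationAccount = $_.AutomationAccountName
            ResourceGroup     = $_.ResourceGroupName
        }
    }
}
```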
automation | Automation Security Guidelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md | +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts). + This article details the best practices for securely executing automation jobs. [Azure Automation](./overview.md) provides you with a platform to orchestrate frequent, time-consuming, error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as automation runbooks, seamlessly across cloud and hybrid environments. |
automation | Automation Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md | +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts). + Azure Automation allows you to automate tasks against resources in Azure, on-premises, and with other cloud providers such as Amazon Web Services (AWS). You can use runbooks to automate your tasks, or a Hybrid Runbook Worker if you have business or operational processes to manage outside of Azure. Working in any one of these environments requires permissions to securely access the resources with the minimal rights required. This article covers authentication scenarios supported by Azure Automation and tells how to get started based on the environment or environments that you need to manage. + ## Automation account When you start Azure Automation for the first time, you must create at least one Automation account. Automation accounts allow you to isolate your Automation resources, runbooks, assets, and configurations from the resources of other accounts. You can use Automation accounts to separate resources into separate logical environments or delegated responsibilities. For example, you might use one account for development, another for production, and another for your on-premises environment. Or you might dedicate an Automation account to manage operating system updates across all of your machines with [Update Management](update-management/overview.md). |
automation | Create Run As Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/create-run-as-account.md | +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts). ++ Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. This article describes how to create a Run As or Classic Run As account from the Azure portal or Azure PowerShell. When you create the Run As or Classic Run As account in the Azure portal, by default it uses a self-signed certificate. If you want to use a certificate issued by your enterprise or third-party certification authority (CA), you can use the [PowerShell script to create a Run As account](#powershell-script-to-create-a-run-as-account). |
automation | Delete Run As Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-run-as-account.md | +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts). + Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. This article describes how to delete a Run As or Classic Run As account. When you perform this action, the Automation account is retained. After you delete the Run As account, you can re-create it in the Azure portal or with the provided PowerShell script. ## Delete a Run As or Classic Run As account |
automation | Manage Run As Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-run-as-account.md | +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts). +++ Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. In this article we cover how to manage a Run As or Classic Run As account, including: |
automation | Migrate Run As Accounts Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md | + + Title: Migrate Run As accounts to managed identity in Azure Automation account +description: This article describes how to migrate from Run As accounts to managed identity. ++ Last updated : 04/27/2022+++++# Migrate from existing Run As accounts to managed identity ++> [!IMPORTANT] +> Azure Automation Run As Account will retire on **September 30, 2023**, and there will be no support provided beyond this date. From now through **September 30, 2023**, you can continue to use the Azure Automation Run As Account. However, we recommend that you transition to [managed identities](automation-security-overview.md#managed-identities) before **September 30, 2023**. ++See the [frequently asked questions](automation-managed-identity-faq.md) for more information about the migration cadence and the support timeline for Run As account creation and certificate renewal. ++ Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. Whenever a Run As account is created, an Azure AD application is registered, and a self-signed certificate is generated that is valid for one year. This adds the overhead of renewing the certificate every year before it expires to keep the Automation account working. ++Automation accounts can now be configured to use [managed identity](automation-security-overview.md#managed-identities), which is the default option when an Automation account is created. With this feature, an Automation account can authenticate to Azure resources without the need to exchange any credentials, removing the overhead of renewing the certificate or managing the service principal. ++A managed identity can be [system-assigned](enable-managed-identity-for-automation.md) or [user-assigned](add-user-assigned-identity.md). When a new Automation account is created, a system-assigned managed identity is enabled. ++## Prerequisites ++Ensure the following before migrating from Run As accounts to managed identities: ++1. Create a [system-assigned](enable-managed-identity-for-automation.md) managed identity, a [user-assigned](add-user-assigned-identity.md) managed identity, or both. To learn more about the differences between the two types of managed identities, see [Managed identity types](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types). ++ > [!NOTE] + > - User-assigned identities are supported for cloud jobs only. It isn't possible to use the Automation account's user-assigned managed identity on a Hybrid Runbook Worker. To use hybrid jobs, you must create a system-assigned identity. + > - There are two ways to use managed identities in Hybrid Runbook Worker scripts: either the system-assigned managed identity for the Automation account **OR** the VM managed identity for an Azure VM running as a Hybrid Runbook Worker. + > - Neither the VM's user-assigned managed identity nor the VM's system-assigned managed identity will work in an Automation account that is configured with an Automation account managed identity. When you enable the Automation account managed identity, you can only use the Automation account system-assigned managed identity and not the VM managed identity. 
For more information, see [Use runbook authentication with managed identities](/automation/automation-hrw-run-runbooks?tabs=sa-mi#runbook-auth-managed-identities). ++1. Assign the same role to the managed identity to access the Azure resources, matching the Run As account. Follow the steps in [Check role assignment for Azure Automation Run As account](/automation/manage-run-as-account#check-role-assignment-for-azure-automation-run-as-account). +Ensure that you don't assign high-privilege permissions like Contributor, Owner, and so on to the Run As account. Follow the RBAC guidelines to limit the permissions from the default Contributor permissions assigned to the Run As account using this [script](/azure/automation/manage-runas-account#limit-run-as-account-permissions). ++ For example, if the Automation account is only required to start or stop an Azure VM, then the permissions assigned to the Run As account need to be only for starting or stopping the VM. Similarly, assign read-only permissions if a runbook is reading from blob storage. Read more about [Azure Automation security guidelines](/azure/automation/automation-security-guidelines#authentication-certificate-and-identities). ++## Migrate from Automation Run As account to Managed Identity ++To migrate from an Automation Run As account to a managed identity for your runbook authentication, follow the steps below: + +1. Change the runbook code to use managed identity. We recommend that you test the managed identity by creating a copy of your production runbook and verifying that it works as expected. Update your test runbook code to authenticate by using the managed identity. This ensures that you don't override the AzureRunAsConnection in your production runbook and break the existing automation. After you are sure that the runbook code executes as expected using the managed identity, update your production runbook to use managed identities. ++ For managed identity support, use the `Connect-AzAccount` cmdlet from the Az module. See [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) in the PowerShell reference. ++ - If you are using Az modules, update to the latest version following the steps in the [Update Azure PowerShell modules](automation-update-azure-modules.md#update-az-modules) article. + - If you are using AzureRM modules, update `AzureRM.Profile` to the latest version and replace the `Add-AzureRMAccount` cmdlet with `Connect-AzureRMAccount -Identity`. + + Follow the sample scripts below to see the changes required to make runbook code use managed identities. ++1. Once you are sure that the runbook is executing successfully by using managed identities, you can safely [delete the Run As account](/azure/automation/delete-run-as-account) if the Run As account is not used by any other runbook. ++## Sample scripts ++The following are examples of a runbook that fetches ARM resources using the Run As account (service principal) and using managed identity. ++# [Run As account](#tab/run-as-account) ++```powershell + $connectionName = "AzureRunAsConnection" + try + { + # Get the connection "AzureRunAsConnection " + $servicePrincipalConnection=Get-AutomationConnection -Name $connectionName ++ "Logging in to Azure..." 
+ Add-AzureRmAccount ` + -ServicePrincipal ` + -TenantId $servicePrincipalConnection.TenantId ` + -ApplicationId $servicePrincipalConnection.ApplicationId ` + -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint + } + catch { + if (!$servicePrincipalConnection) + { + $ErrorMessage = "Connection $connectionName not found." + throw $ErrorMessage + } else { + Write-Error -Message $_.Exception + throw $_.Exception + } + } ++ #Get all ARM resources from all resource groups + $ResourceGroups = Get-AzureRmResourceGroup ++ foreach ($ResourceGroup in $ResourceGroups) + { + Write-Output ("Showing resources in resource group " + $ResourceGroup.ResourceGroupName) + $Resources = Find-AzureRmResource -ResourceGroupNameContains $ResourceGroup.ResourceGroupName | Select ResourceName, ResourceType + ForEach ($Resource in $Resources) + { + Write-Output ($Resource.ResourceName + " of type " + $Resource.ResourceType) + } + Write-Output ("") + } + ``` ++# [System-assigned Managed identity](#tab/sa-managed-identity) ++>[!NOTE] +> Enable appropriate RBAC permissions for the system identity of this Automation account. Otherwise, the runbook may fail. ++ ```powershell + try + { + "Logging in to Azure..." + Connect-AzAccount -Identity + } + catch { + Write-Error -Message $_.Exception + throw $_.Exception + } ++ #Get all ARM resources from all resource groups + $ResourceGroups = Get-AzResourceGroup ++ foreach ($ResourceGroup in $ResourceGroups) + { + Write-Output ("Showing resources in resource group " + $ResourceGroup.ResourceGroupName) + $Resources = Get-AzResource -ResourceGroupName $ResourceGroup.ResourceGroupName + foreach ($Resource in $Resources) + { + Write-Output ($Resource.Name + " of type " + $Resource.ResourceType) + } + Write-Output ("") + } + ``` +# [User-assigned Managed identity](#tab/ua-managed-identity) ++```powershell +try +{ ++ "Logging in to Azure..." ++$identity = Get-AzUserAssignedIdentity -ResourceGroupName <myResourceGroup> -Name <myUserAssignedIdentity> +Connect-AzAccount -Identity -AccountId $identity.ClientId +} +catch { + Write-Error -Message $_.Exception + throw $_.Exception +} +#Get all ARM resources from all resource groups +$ResourceGroups = Get-AzResourceGroup +foreach ($ResourceGroup in $ResourceGroups) +{ + Write-Output ("Showing resources in resource group " + $ResourceGroup.ResourceGroupName) + $Resources = Get-AzResource -ResourceGroupName $ResourceGroup.ResourceGroupName + foreach ($Resource in $Resources) + { + Write-Output ($Resource.Name + " of type " + $Resource.ResourceType) + } + Write-Output ("") +} +``` +++## Graphical runbooks ++### How to check if Run As account is used in Graphical Runbooks ++To check if Run As account is used in Graphical Runbooks: ++1. Check each of the activities within the runbook to see if they use the Run As Account when calling any logon cmdlets/aliases. For example, `Add-AzRmAccount/Connect-AzRmAccount/Add-AzAccount/Connect-AzAccount` + + :::image type="content" source="./media/migrate-run-as-account-managed-identity/check-graphical-runbook-use-run-as-inline.png" alt-text="Screenshot to check if graphical runbook uses Run As." lightbox="./media/migrate-run-as-account-managed-identity/check-graphical-runbook-use-run-as-expanded.png"::: ++1. Examine the parameters used by the cmdlet. ++ :::image type="content" source="./medilet"::: ++1. For use with the Run As account, it will use the *ServicePrincipalCertificate* parameter set; the *ApplicationId* and *Certificate Thumbprint* values will come from the RunAsAccountConnection. 
++ :::image type="content" source="./media/migrate-run-as-account-managed-identity/parameter-sets-inline.png" alt-text="Screenshot to check the parameter sets." lightbox="./media/migrate-run-as-account-managed-identity/parameter-sets-expanded.png"::: + ++### How to edit graphical Runbook to use managed identity ++You must test the managed identity to verify if the Graphical runbook is working as expected by creating a copy of your production runbook to use the managed identity and updating your test graphical runbook code to authenticate by using the managed identity. You can add this functionality to a graphical runbook by adding `Connect-AzAccount` cmdlet. ++Listed below is an example to guide on how a graphical runbook that uses Run As account uses managed identities: ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Open the Automation account and select **Process Automation**, **Runbooks**. +1. Here, select a runbook. For example, select *Start Azure V2 VMs* runbook either from the list and select **Edit** or go to **Browse Gallery** and select *start Azure V2 VMs*. ++ :::image type="content" source="./media/migrate-run-as-account-managed-identity/edit-graphical-runbook-inline.png" alt-text="Screenshot of edit graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/edit-graphical-runbook-expanded.png"::: ++1. Replace, Run As connection that uses `AzureRunAsConnection`and connection asset that internally uses PowerShell `Get-AutomationConnection` cmdlet with `Connect-AzAccount` cmdlet. ++1. Connect to Azure that uses `Connect-AzAccount` to add the identity support for use in the runbook using `Connect-AzAccount` activity from the `Az.Accounts` cmdlet that uses the PowerShell code to connect to identity. ++ :::image type="content" source="./media/migrate-run-as-account-managed-identity/add-functionality-inline.png" alt-text="Screenshot of add functionality to graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/add-functionality-expanded.png"::: ++1. Select **Code** to enter the following code to pass the identity. ++```powershell-interactive +try +{ + Write-Output ("Logging in to Azure...") + Connect-AzAccount -Identity +} +catch { + Write-Error -Message $_.Exception + throw $_.Exception +} +``` ++For example, in the runbook `Start Azure V2 VMs` in the runbook gallery, you must replace `Get Run As Connection` and `Connect to Azure` activities with `Connect-AzAccount` cmdlet activity. ++For more information, see sample runbook name *AzureAutomationTutorialWithIdentityGraphical* that gets created with the Automation account. +++## Next steps ++- Review the Frequently asked questions for [Migrating to Managed Identities](automation-managed-identity-faq.md). ++- If your runbooks aren't completing successfully, review [Troubleshoot Azure Automation managed identity issues](troubleshoot/managed-identity.md). ++- Learn more about system assigned managed identity, see [Using a system-assigned managed identity for an Azure Automation account](enable-managed-identity-for-automation.md) ++- Learn more about user assigned managed identity, see [Using a user-assigned managed identity for an Azure Automation account]( add-user-assigned-identity.md) ++- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md). ++ |
automation | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md | Azure Automation receives improvements on an ongoing basis. To stay up to date w This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md). ++## July 2022 ++### Support for Run As accounts ++**Type:** Plan for change +++Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts). + ## August 2022 ### Azure Automation Hybrid Worker Extension (preview) now supports Arc-enabled VMware VMs This page is updated monthly, so revisit it regularly. If you're looking for ite In addition to the support for Azure VMs and Arc-enabled Servers, Azure Automation Hybrid Worker Extension (preview) now supports Arc-enabled VMware VMs as a target. You can now orchestrate management tasks using PowerShell and Python runbooks on Azure VMs, Arc-enabled Servers, and Arc-enabled VMWare VMs with an identical experience. Read [here](extension-based-hybrid-runbook-worker-install.md) for more information. + ## March 2022 ### Forward diagnostic audit data to Azure Monitor logs |
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 09/14/2022 Last updated : 09/27/2022 This page is updated monthly, so revisit it regularly. If you're looking for ite - The default login flow for Windows computers now loads the local web browser to authenticate with Azure Active Directory instead of providing a device code. You can use the `--use-device-code` flag to return to the old behavior or [provide service principal credentials](onboard-service-principal.md) for a non-interactive authentication experience. - If the resource group provided to `azcmagent connect` does not exist, the agent will try to create it and continue connecting the server to Azure.+- Added support for Ubuntu 22.04 - Added `--no-color` flag for all azcmagent commands to suppress the use of colors in terminals that do not support ANSI codes. ### Fixed |
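For reference alongside the release notes above, a typical interactive onboarding call looks like the sketch below; by default the agent now opens the local browser to sign in, and `--use-device-code` restores the previous device-code flow. All identifiers are placeholders.

```powershell
# Placeholders only; substitute your own subscription, tenant, and resource group.
azcmagent connect `
    --subscription-id "00000000-0000-0000-0000-000000000000" `
    --tenant-id "00000000-0000-0000-0000-000000000000" `
    --resource-group "arc-servers-rg" `
    --location "eastus"

# Fall back to the previous device-code login behavior if no local browser is available:
# azcmagent connect --use-device-code --subscription-id "..." --tenant-id "..." --resource-group "arc-servers-rg" --location "eastus"
```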
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 09/14/2022 Last updated : 09/27/2022 The following versions of the Windows and Linux operating system are officially * Azure Editions are supported when running as a virtual machine on Azure Stack HCI * Windows IoT Enterprise * Azure Stack HCI-* Ubuntu 16.04, 18.04, and 20.04 LTS +* Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS * Debian 10 * CentOS Linux 7 and 8 * SUSE Linux Enterprise Server (SLES) 12 and 15 The following versions of the Windows and Linux operating system are officially * Amazon Linux 2 * Oracle Linux 7 and 8 -> [!NOTE] +> [!NOTE] > On Linux, Azure Arc-enabled servers install several daemon processes. We only support using systemd to manage these processes. In some environments, systemd may not be installed or available, in which case Arc-enabled servers are not supported, even if the distribution is otherwise supported. These environments include **Windows Subsystem for Linux** (WSL) and most container-based systems, such as Kubernetes or Docker. The Azure Connected Machine agent can be installed on the node that runs the containers but not inside the containers themselves. - > [!WARNING] > If the Linux hostname or Windows computer name uses a reserved word or trademark, attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md). |
azure-cache-for-redis | Cache Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md | The **Virtual Network** section allows you to configure the virtual network sett The **Private Endpoint** section allows you to configure the private endpoint settings for your cache. Private endpoint is supported on all cache tiers Basic, Standard, Premium, and Enterprise. We recommend using private endpoint instead of VNets. Private endpoints are easy to set up or remove, are supported on all tiers, and can connect your cache to multiple different VNets at once. -For more information, see [Azure Cache for Redis with Azure Private Link](./cache-private-link.md). +For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md). ### Firewall -Firewall rules configuration is available for all Azure Cache for Redis tiers. +- Firewall rules configuration is available for all Basic, Standard, and Premium tiers. +- Firewall rules configuration isn't available for Enterprise nor Enterprise Flash tiers. Select **Firewall** to view and configure firewall rules for cache. For more information about databases, see [What are Redis databases?](cache-deve ## Redis commands not supported in Azure Cache for Redis -Configuration and management of Azure Cache for Redis instances is managed by Microsoft, which makes disables the following commands. If you try to invoke them, you receive an error message similar to `"(error) ERR unknown command"`. +Configuration and management of Azure Cache for Redis instances is managed by Microsoft, which disables the following commands. If you try to invoke them, you receive an error message similar to `"(error) ERR unknown command"`. - ACL - BGREWRITEAOF For more information about Redis commands, see [https://redis.io/commands](https ## Next steps - [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)-- [Monitor Azure Cache for Redis](cache-how-to-monitor.md)+- [Monitor Azure Cache for Redis](cache-how-to-monitor.md) |
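To go with the firewall note above, here is a small Az PowerShell sketch for adding and listing a firewall rule on a Basic, Standard, or Premium cache (the tiers the article says support firewall rules). The cache name, resource group, and IP range are placeholders.

```powershell
# Hypothetical names and IP range; adjust to your environment.
New-AzRedisCacheFirewallRule -ResourceGroupName "my-rg" -Name "my-cache" `
    -RuleName "AllowCorpNetwork" -StartIP "203.0.113.0" -EndIP "203.0.113.31"

# List the rules currently applied to the cache.
Get-AzRedisCacheFirewallRule -ResourceGroupName "my-rg" -Name "my-cache"
```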
azure-cache-for-redis | Cache Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-managed-identity.md | Managed identity can be enabled either when you create a cache instance or after ### Prerequisites and limitations -Because managed identity for storage is only used with the import/export feature and persistence feature, it's currently only useful when used with the Premium tier of Azure Cache for Redis. +Managed identity for storage is only used with the import/export feature and persistence feature at present, which limits its use to the Premium tier of Azure Cache for Redis. ++Managed identity for storage is not supported on caches that have a dependency on Cloud Services (classic). For more information on how to check whether your cache is using Cloud Services (classic), see [How do I know if a cache is affected?](cache-faq.yml#how-do-i-know-if-a-cache-is-affected). ## Create a new cache with managed identity using the portal |
azure-maps | Webgl Custom Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md | + + Title: Add a custom WebGL layer to a map ++description: How to add a custom WebGL layer to a map using the Azure Maps Web SDK. ++ Last updated : 09/23/2022++++++# Add a custom WebGL layer to a map ++The Azure Maps Web SDK supports creating custom layers +using [WebGL][getting_started_with_webgl]. WebGL is based +on [OpenGL ES][OpenGL ES] and enables rendering 2D and 3D +graphics in web browsers. ++Using WebGL, you can build high-performance interactive +graphics that render in the browser in real-time that support +scenarios like simulations, data visualization, animations and +3D modeling. ++Developers can access the WebGL context of the map during +rendering and use custom WebGL layers to integrate with other +libraries such as [three.js][threejs] and [deck.gl][deckgl] +to provide enriched and interactive content on the map. ++## Add a WebGL layer ++Before you can add a WebGL layer to a map, you need to have an object +that implements the `WebGLRenderer` interface. First, create a WebGL +layer by providing an `id` and `renderer` object to the constructor, +then add the layer to the map to have it rendered. ++The following sample code demonstrates how to add a WebGL layer to a map: ++```js +var myRenderer = { + /** + * Either "2d" or "3d". Defaults to "2d". + * - "3d" to use the depth buffer and share it with other layers + * - "2d" to add a layer with no depth. If you need to use the depth buffer for a "2d" + * layer you must use an offscreen framebuffer and the prerender method. + */ + renderingMode: "2d", ++ /** + * Optional method called when the layer has been added to the Map. + * This gives the layer a chance to initialize gl resources and register event listeners. + * @param map The Map this custom layer was just added to. + * @param gl The gl context for the map. + */ + onAdd: function (map, gl) {}, ++ /** + * Optional method called when the layer has been removed from the Map. + * This gives the layer a chance to clean up gl resources and event listeners. + * @param map The Map this custom layer was just added to. + * @param gl The gl context for the map. + */ + onRemove: function (map, gl) {}, ++ /** + * Optional method called during a render frame to allow a layer to prepare resources + * or render into a texture. + * + * The layer cannot make any assumptions about the current GL state and must bind a framebuffer before rendering. + * + * @param gl The map's gl context. + * @param matrix The map's camera matrix. + */ + prerender: function (gl, matrix) {}, ++ /** + * Required. Called during a render frame allowing the layer to draw into the GL context. + * + * The layer can assume blending and depth state is set to allow the layer to + * properly blend and clip other layers. The layer cannot make any other + * assumptions about the current GL state. + * + * If the layer needs to render to a texture, it should implement the prerender + * method to do this and only use the render method for drawing directly into the + * main framebuffer. + * + * The blend function is set to gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA). + * This expects colors to be provided in premultiplied alpha form where the r, g and b + * values are already multiplied by the a value. 
If you are unable to provide colors in + * premultiplied form you may want to change the blend function to + * gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE_MINUS_SRC_ALPHA). + * + * @param gl The map's gl context. + * @param matrix The map's camera matrix. + */ + render: function (gl, matrix) {} +}; + +//Add the layer to the map. +map.layers.add(new atlas.layer.WebGLLayer("layerId", { renderer: myRenderer })); +``` ++> [!NOTE] +> The `WebGLLayer` class supports the `minZoom`, `maxZoom`, and `visible` layer options. ++```js +//Add the layer to the map with layer options. +map.layers.add(new atlas.layer.WebGLLayer("layerId", + { + renderer: myRenderer, + minZoom: 10, + maxZoom: 22, + visible: true + } +)); +``` ++This sample renders a triangle on the map using a WebGL layer. ++<!-- Insert example here --> ++ ++The map's camera matrix is used to project spherical Mercator point to +gl coordinates. Mercator point \[0, 0\] represents the top left corner +of the Mercator world and \[1, 1\] represents the bottom right corner. +When the `renderingMode` is `"3d"`, the z coordinate is conformal. +A box with identical x, y, and z lengths in Mercator units would be +rendered as a cube. ++The `MercatorPoint` class has `fromPosition`, `fromPositions`, and +`toFloat32Array` static methods that can be used to convert a geospatial +Position to a Mercator point. Similarly the `toPosition` and `toPositions` +methods can be used to project a Mercator point to a Position. ++## Render a 3D model ++Use a WebGL layer to render 3D models. The following example shows how +to load a [glTF][glTF] file and render it on the map using [three.js][threejs]. ++You need to add the following script files. ++```html +<script src="https://unpkg.com/three@0.102.0/build/three.min.js"></script> ++<script src="https://unpkg.com/three@0.102.0/examples/js/loaders/GLTFLoader.js"></script> +``` ++This sample renders an animated 3D parrot on the map. ++<!-- Insert example here --> ++ ++The `onAdd` function loads a `.glb` file into memory and instantiates +three.js objects such as Camera, Scene, Light, and a `THREE.WebGLRenderer`. ++The `render` function calculates the projection matrix of the camera +and renders the model to the scene. ++>[!TIP] +> +> - To have a continuous and smooth animation, you can trigger the repaint of +a single frame by calling `map.triggerRepaint()` in the `render` function. +> - To enable anti-aliasing simply set `antialias` to `true` as +one of the style options while creating the map. ++## Render a deck.gl layer ++A WebGL layer can be used to render layers from the [deck.gl][deckgl] +library. The following sample demonstrates the data visualization of +people migration flow in the United States from county to county +within a certain time range. ++You need to add the following script file. ++```html +<script src="https://unpkg.com/deck.gl@8.8.9/dist.min.js"></script> +``` ++Define a layer class that extends `atlas.layer.WebGLLayer`. 
++```js +class DeckGLLayer extends atlas.layer.WebGLLayer { ++ constructor(options) { + super(options.id); ++ //Create an instance of deck.gl layer + this._mbLayer = new deck.MapboxLayer(options); ++ //Create a renderer + const deckGLRenderer = { + renderingMode: "3d", + onAdd: (map, gl) => { + this._mbLayer.onAdd?.(map["map"], gl); + }, + onRemove: (map, gl) => { + this._mbLayer.onRemove?.(map["map"], gl); + }, + prerender: (gl, matrix) => { + this._mbLayer.prerender?.(gl, matrix); + }, + render: (gl, matrix) => { + this._mbLayer.render(gl, matrix); + } + }; + this.setOptions({ renderer: deckGLRenderer }); + } +} +``` ++This sample renders an arc-layer from the [deck.gl][deckgl] library. ++ ++## Next steps ++Learn more about the classes and methods used in this article: ++> [!div class="nextstepaction"] +> [WebGLLayer][WebGLLayer] ++> [!div class="nextstepaction"] +> [WebGLLayerOptions][WebGLLayerOptions] ++> [!div class="nextstepaction"] +> [WebGLRenderer interface][WebGLRenderer interface] ++> [!div class="nextstepaction"] +> [MercatorPoint][MercatorPoint] ++[getting_started_with_webgl]: https://developer.mozilla.org/en-US/docs/web/api/webgl_api/tutorial/getting_started_with_webgl +[threejs]: https://threejs.org/ +[deckgl]: https://deck.gl/ +[glTF]: https://www.khronos.org/gltf/ +[OpenGL ES]: https://www.khronos.org/opengles/ +[WebGLLayer]: /javascript/api/azure-maps-control/atlas.layer.webgllayer +[WebGLLayerOptions]: /javascript/api/azure-maps-control/atlas.webgllayeroptions +[WebGLRenderer interface]: /javascript/api/azure-maps-control/atlas.webglrenderer +[MercatorPoint]: /javascript/api/azure-maps-control/atlas.data.mercatorpoint |
azure-monitor | Agents Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md | The following tables list the operating systems that Azure Monitor Agent and the | Debian 9 | X | X | X | | Debian 8 | | X | | | Debian 7 | | | X |-| OpenSUSE 15 | X | | | +| OpenSUSE 15 | X | | | | OpenSUSE 13.1+ | | | X | | Oracle Linux 8 | X | X | | | Oracle Linux 7 | X | X | X | The following tables list the operating systems that Azure Monitor Agent and the <sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br> <sup>2</sup> Requires Python 2 to be installed on the machine and aliased to the `python` command.<br> <sup>3</sup> Also supported on Arm64-based machines.++>[!NOTE] +>Machines and appliances that run heavily customized or stripped-down versions of the above distributions and hosted solutions that disallow customization by the user are not supported. Azure Monitor and legacy agents rely on various packages and other baseline functionality that is often removed from such systems, and their installation may require some environmental modifications considered to be disallowed by the appliance vendor. For instance, [GitHub Enterprise Server](https://docs.github.com/en/enterprise-server/admin/overview/about-github-enterprise-server) is not supported due to heavy customization as well as [documented, license-level disallowance](https://docs.github.com/en/enterprise-server/admin/overview/system-overview#operating-system-software-and-patches) of operating system modification. + ## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines. |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Post coding questions to [Stack Overflow]() using an Application Insights tag. ### User Voice -Leave product feedback for the engineering team on [UserVoice](https://feedback.azure.com/d365community/forum/8849e04d-1325-ec11-b6e6-000d3a4f09d0). +Leave product feedback for the engineering team on [UserVoice](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). |
azure-monitor | Java Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md | ms.devlang: java -# Configure Azure Monitor Application Insights for Spring Boot +# Using Azure Monitor Application Insights with Spring Boot -You can enable the Azure Monitor Application Insights agent for Java by adding an argument to the JVM. When you can't do this, you can use a programmatic configuration. We detail these two configurations below. +There are two options for enabling Application Insights Java with Spring Boot: JVM argument and programmatically. -## Addition of a JVM argument --### Usual case +## Enabling with JVM argument Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` somewhere before `-jar`, for example: If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicati ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.1.jar" -jar <myapp.jar> ``` -## Programmatic configuration +### Configuration ++See [configuration options](./java-standalone-config.md). ++## Enabling programmatically ++To enable Application Insights Java programmatically, you must add the following dependency: -To use the programmatic configuration and attach the Application Insights agent for Java during the application startup, you must add the following dependency. ```xml <dependency> <groupId>com.microsoft.azure</groupId> To use the programmatic configuration and attach the Application Insights agent </dependency> ``` -And invoke the `attach()` method of the `com.microsoft.applicationinsights.attach.ApplicationInsights` class. +And invoke the `attach()` method of the `com.microsoft.applicationinsights.attach.ApplicationInsights` class +in the first line of your `main()` method. > [!WARNING] > And invoke the `attach()` method of the `com.microsoft.applicationinsights.attac > [!WARNING] > -> The invocation must be requested at the beginning of the `main` method. +> The invocation must be at the beginning of the `main` method. Example: public class SpringBootApp { } ``` -If you want to use a JSON configuration: -* The `applicationinsights.json` file has to be in the classpath -* Or you can use an environmental variable or a system property, more in the _Configuration file path_ part on [this page](../app/java-standalone-config.md). Spring properties defined in a Spring _.properties_ file are not supported. +### Configuration ++> [!NOTE] +> Spring's `application.properties` or `application.yaml` files are not supported as +> as sources for Application Insights Java configuration. ++Programmatic enablement supports all the same [configuration options](./java-standalone-config.md) +as the JVM argument enablement, with the following differences below. ++#### Configuration file location ++By default, when enabling Application Insights Java programmatically, the configuration file `applicationinsights.json` +will be read from the classpath. ++See [configuration file path configuration options](./java-standalone-config.md#configuration-file-path) +to change this location. ++#### Self-diagnostic log file location +By default, when enabling Application Insights Java programmatically, the `applicationinsights.log` file containing +the agent logs will be located in the directory from where the JVM is launched (user directory). -> [!TIP] -> With a programmatic configuration, the `applicationinsights.log` file containing the agent logs is located in the directory from where the JVM is launched (user directory). 
This default behavior can be changed (see the _Self-diagnostics_ part of [this page](../app/java-standalone-config.md)). +See [self-diagnostic configuration options](./java-standalone-config.md#self-diagnostics) to change this location. |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | Connection string and role name are the most common settings needed to get start ```json {- "connectionString": "InstrumentationKey=...", + "connectionString": "...", "role": { "name": "my cloud role name" } Connection string is required. You can find your connection string in your Appli ```json {- "connectionString": "InstrumentationKey=..." + "connectionString": "..." } ``` If you specify a relative path, it will be resolved relative to the directory wh } ``` -The file should contain only the connection string, for example: --``` -InstrumentationKey=...;IngestionEndpoint=...;LiveEndpoint=... -``` +The file should contain only the connection string and nothing else. Not setting the connection string will disable the Java agent. Connection string overrides allow you to override the [default connection string "connectionStringOverrides": [ { "httpPathPrefix": "/myapp1",- "connectionString": "12345678-0000-0000-0000-0FEEDDADBEEF" + "connectionString": "..." }, { "httpPathPrefix": "/myapp2",- "connectionString": "87654321-0000-0000-0000-0FEEDDADBEEF" + "connectionString": "..." } ] } Please configure specific options based on your needs. ```json {- "connectionString": "InstrumentationKey=...", + "connectionString": "...", "role": { "name": "my cloud role name" }, |
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | To provide feedback: - Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform). - Tell Microsoft about yourself by joining the [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/). - Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).-- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/8849e04d-1325-ec11-b6e6-000d3a4f09d0).+- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). ## Next steps |
azure-monitor | Metrics Supported | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md | -Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI. +Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface). This article is a complete list of all platform (that is, automatically collected) metrics currently available with the consolidated metric pipeline in Azure Monitor. Metrics changed or added after the date at the top of this article might not yet appear in the list. To query for and access the list of metrics programmatically, use the [2018-01-01 api-version](/rest/api/monitor/metricdefinitions). Other metrics not in this list might be available in the portal or through legacy APIs. |
azure-monitor | Metrics Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-troubleshoot.md | Azure metrics charts use dashed line style to indicate that there is a missing v > [!NOTE] > If you still prefer a line chart for your metric, moving the mouse over the chart may help to assess the time granularity by highlighting the data point at the location of the mouse pointer. +## Units of measure in metrics charts ++Azure Monitor metrics use SI-based prefixes. Metrics use IEC prefixes only if the resource provider has chosen an appropriate unit for the metric. +For example, the resource provider Network interface (resource name: rarana-vm816) has no metric unit defined for "Packets Sent". The prefix used for the metric value here is k, representing kilo (1000), an SI prefix. + ++The resource provider Storage account (resource name: ibabichvm) has the metric unit for "Blob Capacity" defined as bytes. Hence, the prefix used is mebi (1024^2), an IEC prefix. + ++SI prefixes are decimal (base 1000): ++| Value | Abbreviation | SI | +|:-:|:-:|:-:| +| 1000 | k | kilo | +| 1000^2 | M | mega | +| 1000^3 | G | giga | +| 1000^4 | T | tera | +| 1000^5 | P | peta | +| 1000^6 | E | exa | +| 1000^7 | Z | zetta | +| 1000^8 | Y | yotta | ++IEC prefixes are binary (base 1024): ++| Value | Abbreviation | IEC | Legacy abbreviation | Legacy name | +|:-:|:-:|:-:|:-:|:-:| +|1024 | Ki |kibi | K | kilo| +|1024^2| Mi |mebi | M | mega| +|1024^3| Gi |gibi | G | giga| +|1024^4| Ti |tebi | T | tera| +|1024^5| Pi |pebi | - | | +|1024^6| Ei |exbi | - | | +|1024^7| Zi |zebi | - | | +|1024^8| Yi |yobi | - | | ++ ## Chart shows unexpected drop in values In many cases, the perceived drop in the metric values is a misunderstanding of the data shown on the chart. You can be misled by a drop in sums or counts when the chart shows the most-recent minutes because the last metric data points haven't been received or processed by Azure yet. Depending on the service, the latency of processing metrics can be within a couple of minutes. For charts showing a recent time range with a 1- or 5- minute granularity, a drop of the value over the last few minutes becomes more noticeable: |
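To make the two prefix families concrete, the following C# sketch (illustrative only, not part of any Azure Monitor SDK; the helper names are invented here) formats a raw metric value with the SI (base 1000) and IEC (base 1024) prefixes from the tables above.

```csharp
using System;

class PrefixDemo
{
    static readonly string[] SiPrefixes  = { "", "k", "M", "G", "T", "P", "E", "Z", "Y" };
    static readonly string[] IecPrefixes = { "", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi" };

    // Repeatedly divide by the base (1000 for SI, 1024 for IEC) and pick the matching prefix.
    static string Format(double value, int baseStep, string[] prefixes, string unit)
    {
        int i = 0;
        while (value >= baseStep && i < prefixes.Length - 1)
        {
            value /= baseStep;
            i++;
        }
        return $"{value:0.##} {prefixes[i]}{unit}";
    }

    static void Main()
    {
        double packetsSent = 12_500;        // count metric with no unit defined -> SI prefix
        double blobBytes = 3_221_225_472;   // capacity metric whose unit is bytes -> IEC prefix

        Console.WriteLine(Format(packetsSent, 1000, SiPrefixes, ""));   // 12.5 k
        Console.WriteLine(Format(blobBytes, 1024, IecPrefixes, "B"));   // 3 GiB
    }
}
```

The same raw number therefore reads differently depending on the prefix family: a unitless count of 12,500 displays as "12.5 k", while 3,221,225,472 bytes displays as "3 GiB".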
azure-netapp-files | Configure Network Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md | Two settings are available for network features: * If the Standard volume capability is not available for the region, the Network Features field of the Create a Volume page defaults to *Basic*, and you cannot modify the setting. * The ability to locate storage compatible with the desired type of network features depends on the VNet specified. If you cannot create a volume because of insufficient resources, you can try a different VNet for which compatible storage is available.++* You cannot create a standard volume from the snapshot of a basic volume. ++* Conversion between Basic and Standard networking features in either direction is not currently supported. ## Register the feature |
azure-netapp-files | Create Active Directory Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md | Several features of Azure NetApp Files require that you have an Active Directory * It must have the permission to create machine accounts (for example, AD domain join) in the AD DS organizational unit path specified in the **Organizational unit path option** of the AD connection. * It cannot be a [Group Managed Service Account](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview). -* The AD connection admin account supports DES, Kerberos AES-128, and Kerberos AES-256 encryption types for authentication with AD DS for Azure NetApp Files machine account creation (for example, AD domain join operations). +* The AD connection admin account supports Kerberos AES-128 and Kerberos AES-256 encryption types for authentication with AD DS for Azure NetApp Files machine account creation (for example, AD domain join operations). * To enable the AES encryption on the Azure NetApp Files AD connection admin account, you must use an AD domain user account that is a member of one of the following AD DS groups: Several features of Azure NetApp Files require that you have an Active Directory >[!NOTE] >It's not recommended or required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the AD admin account. - If you set both AES-128 and AES-256 Kerberos encryption on the admin account of the AD connection, the highest level of encryption supported by your AD DS will be used. If AES encryption is not set, DES encryption will be used by default. + If you set both AES-128 and AES-256 Kerberos encryption on the admin account of the AD connection, the highest level of encryption supported by your AD DS will be used. * To enable AES encryption support for the admin account in the AD connection, run the following Active Directory PowerShell commands: Several features of Azure NetApp Files require that you have an Active Directory `KerberosEncryptionType` is a multivalued parameter that supports AES-128 and AES-256 values. -* For more information, see the [Set-ADUser documentation](/powershell/module/activedirectory/set-aduser). +* If you have a requirement to enable and disable certain Kerberos encryption types for Active Directory computer accounts for domain-joined Windows hosts used with Azure NetApp Files, you must use the Group Policy `Network Security: Configure Encryption types allowed for Kerberos`. + Do not set the registry key `HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\SupportedEncryptionTypes`. Doing this will break Kerberos authentication with Azure NetApp Files for the Windows host where this registry key was manually set. ++ >[!NOTE] + >The default policy setting for `Network Security: Configure Encryption types allowed for Kerberos` is `Not Defined`. When this policy setting is set to `Not Defined`, all encryption types except DES will be available for Kerberos encryption. You have the option to enable support for only certain Kerberos encryption types (for example, `AES128_HMAC_SHA1` or `AES256_HMAC_SHA1`). However, the default policy should be sufficient in most cases when enabling AES encryption support with Azure NetApp Files. 
++ For more information, refer to [Network security: Configure encryption types allowed for Kerberos](/windows/security/threat-protection/security-policy-settings/network-security-configure-encryption-types-allowed-for-kerberos) or [Windows Configurations for Kerberos Supported Encryption Types](/archive/blogs/openspecification/windows-configurations-for-kerberos-supported-encryption-type). ++* For more information, refer to the [Set-ADUser documentation](/powershell/module/activedirectory/set-aduser). ## Create an Active Directory connection |
azure-relay | Relay Hybrid Connections Http Requests Dotnet Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-http-requests-dotnet-get-started.md | Title: Azure Relay Hybrid Connections - HTTP requests in .NET description: Write a C# console application for Azure Relay Hybrid Connections HTTP requests in .NET. Previously updated : 06/21/2022 Last updated : 09/26/2022 # Get started with Relay Hybrid Connections HTTP requests in .NET In this quickstart, you take the following steps: To complete this tutorial, you need the following prerequisites: -* [Visual Studio 2015 or later](https://www.visualstudio.com). The examples in this tutorial use Visual Studio 2017. +* [Visual Studio 2019 or later](https://www.visualstudio.com). The examples in this tutorial use Visual Studio 2022. * An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Create a namespace |
azure-resource-manager | Bicep Functions Lambda | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md | Last updated 09/20/2022 # Lambda functions for Bicep -This article describes the lambda functions to use in Bicep. Lambda expressions (or lambda functions) are essentially blocks of code that can be passed as an argument. In Bicep, lambda expression is in this format: +This article describes the lambda functions to use in Bicep. [Lambda expressions (or lambda functions)](https://learn.microsoft.com/dotnet/csharp/language-reference/operators/lambda-expressions) are essentially blocks of code that can be passed as an argument. They can take multiple parameters, but are restricted to a single line of code. In Bicep, lambda expression is in this format: ```bicep <lambda variable> => <expression> |
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md | There are some important factors to consider when defining your resource group: The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region. - If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them. This condition doesn't apply to global resources like Azure Content Delivery Network, Azure DNS, Azure Traffic Manager, and Azure Front Door. + If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them. This condition doesn't apply to global resources like Azure Content Delivery Network, Azure DNS, Azure DNS Private Zones, Azure Traffic Manager, and Azure Front Door. For more information about building reliable applications, see [Designing reliable Azure applications](/azure/architecture/checklist/resiliency-per-service). |
azure-signalr | Howto Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-custom-domain.md | $ curl -vvv https://contoso.example.com/api/health It should return `200` status code without any certificate error. ++## Key Vault in private network ++If you have configured [Private Endpoint](../private-link/private-endpoint-overview.md) to your Key Vault, Azure SignalR Service cannot access the Key Vault via public network. You need to set up a [Shared Private Endpoint](./howto-shared-private-endpoints-key-vault.md) to let Azure SignalR Service access your Key Vault via private network. ++After you create a Shared Private Endpoint, you can create a custom certificate as usual. **You don't have to change the domain in Key Vault URI**. For example, if your Key Vault base URI is `https://contoso.vault.azure.net`, you still use this URI to configure custom certificate. ++You don't have to explicitly allow Azure SignalR Service IPs in Key Vault firewall settings. For more info, see [Key Vault private link diagnostics](../key-vault/general/private-link-diagnostics.md). + ## Next steps + [How to enable managed identity for Azure SignalR Service](howto-use-managed-identity.md) |
azure-signalr | Howto Shared Private Endpoints Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints-key-vault.md | + + Title: Access Key Vault in private network through Shared Private Endpoints ++description: How to access key vault in private network through Shared Private Endpoints ++++ Last updated : 09/23/2022++++# Access Key Vault in private network through Shared Private Endpoints ++Azure SignalR Service can access your Key Vault in private network through Shared Private Endpoints. In this way you don't have to expose your Key Vault on public network. ++ :::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\shared-private-endpoint-overview.png" ::: ++## Shared Private Link Resources Management ++Private endpoints of secured resources that are created through Azure SignalR Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Key Vault, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure SignalR Service execution environment and aren't directly visible to you. ++> [!NOTE] +> The examples in this article are based on the following assumptions: +> * The resource ID of this Azure SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_. +> * The resource ID of Azure Key Vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_. ++The rest of the examples show how the *contoso-signalr* service can be configured so that its outbound calls to Key Vault go through a private endpoint rather than public network. ++### Step 1: Create a shared private link resource to the Key Vault ++#### [Azure portal](#tab/azure-portal) ++1. In the Azure portal, go to your Azure SignalR Service resource. +1. In the menu pane, select **Networking**. Switch to **Private access** tab. +1. Click **Add shared private endpoint**. ++ :::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" ::: ++1. Fill in a name for the shared private endpoint. +1. Select the target linked resource either by selecting from your owned resources or by filling a resource ID. +1. Click **Add**. ++ :::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-add.png" ::: ++1. The shared private endpoint resource will be in **Succeeded** provisioning state. The connection state is **Pending** approval at target resource side. ++ :::image type="content" alt-text="Screenshot of an added shared private endpoint." 
source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" ::: ++#### [Azure CLI](#tab/azure-cli) ++You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource: ++```dotnetcli +az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/kv-pe?api-version=2021-06-01-preview --body @create-pe.json +``` ++The contents of the *create-pe.json* file, which represent the request body to the API, are as follows: ++```json +{ + "name": "contoso-kv", + "properties": { + "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv", + "groupId": "vault", + "requestMessage": "please approve" + } +} +``` ++The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following: ++```plaintext +"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview" +``` ++You can poll this URI periodically to obtain the status of the operation. ++If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value, ++```dotnetcli +az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview +``` ++Wait until the status changes to "Succeeded" before proceeding to the next steps. ++-- ++### Step 2a: Approve the private endpoint connection for the Key Vault ++#### [Azure portal](#tab/azure-portal) ++1. In the Azure portal, select the **Networking** tab of your Key Vault and navigate to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call. ++ :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approve-private-endpoint.png" ::: ++1. Select the private endpoint that Azure SignalR Service created. Click **Approve**. ++ Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal. ++ :::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approved-private-endpoint.png" ::: ++#### [Azure CLI](#tab/azure-cli) ++1. List private endpoint connections. 
++ ```dotnetcli + az network private-endpoint-connection list -n <key-vault-resource-name> -g <key-vault-resource-group-name> --type 'Microsoft.KeyVault/vaults' + ``` ++ There should be a pending private endpoint connection. Note down its ID. ++ ```json + [ + { + "id": "<id>", + "location": "", + "name": "", + "properties": { + "privateLinkServiceConnectionState": { + "actionRequired": "None", + "description": "Please approve", + "status": "Pending" + } + } + } + ] + ``` ++1. Approve the private endpoint connection. ++ ```dotnetcli + az network private-endpoint-connection approve --id <private-endpoint-connection-id> + ``` ++-- ++### Step 2b: Query the status of the shared private link resource ++It takes a few minutes for the approval to be propagated to Azure SignalR Service. You can check the state using either the Azure portal or the Azure CLI. ++#### [Azure portal](#tab/azure-portal) ++ :::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" ::: ++#### [Azure CLI](#tab/azure-cli) ++```dotnetcli +az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/kv-pe?api-version=2021-06-01-preview +``` ++This would return a JSON, where the connection state would show up as "status" under the "properties" section. ++```json +{ + "name": "contoso-kv", + "properties": { + "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv", + "groupId": "vault", + "requestMessage": "please approve", + "status": "Approved", + "provisioningState": "Succeeded" + } +} ++``` ++If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint. ++-- ++At this point, the private endpoint between Azure SignalR Service and Azure Key Vault is established. ++Now you can configure features like custom domain as usual. **You don't have to use a special domain for Key Vault**. DNS resolution is automatically handled by Azure SignalR Service. ++## Next steps ++Learn more: +++ [What are private endpoints?](../private-link/private-endpoint-overview.md)++ [Configure custom domain](howto-custom-domain.md) |
azure-signalr | Howto Shared Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints.md | Private endpoints of secured resources that are created through Azure SignalR Se > [!NOTE] > The examples in this article are based on the following assumptions: > * The resource ID of this Azure SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.-> * The resource ID of upstream Azure Function is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func. +> * The resource ID of upstream Azure Function is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func_. The rest of the examples show how the *contoso-signalr* service can be configured so that its upstream calls to function go through a private endpoint rather than public network. |
azure-vmware | Concepts Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md | Now that you've covered Azure VMware Solution storage concepts, you may want to - [Scale clusters in the private cloud][tutorial-scale-private-cloud] - You can scale the clusters and hosts in a private cloud as required for your application workload. Performance and availability limitations for specific services should be addressed on a case by case basis. -- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. Azure NetApp Files volumes can be attached to virtual machines and can also be connected as data stores directly to Azure VMware Solution. This functionality is in preview.+- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file-workloads in the cloud: databases, and general purpose computing applications, with no code changes. Azure NetApp Files volumes can be attached to virtual machines and can also be connected as data stores directly to Azure VMware Solution. This functionality is in preview. - [vSphere role-based access control for Azure VMware Solution](concepts-identity.md) - You use vCenter Server to manage VM workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter Server and restricted administrator rights for NSX-T Manager. |
azure-vmware | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md | Azure VMware Solution private clouds use vSphere role-based access control for e vSAN data-at-rest encryption, by default, is enabled and is used to provide vSAN datastore security. For more information, see [Storage concepts](concepts-storage.md). +## Data Residency and Customer Data ++Azure VMware Solution does not store customer data. + ## VMware software versions [!INCLUDE [vmware-software-versions](includes/vmware-software-versions.md)] |
cognitive-services | Overview Ocr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md | This documentation contains the following types of articles: <!--* The [conceptual articles](how-to/call-read-api.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. --> -For a more structured approach, follow a Learn module for OCR. -* [Read Text in Images and Documents with the Computer Vision Service](/training/modules/read-text-images-documents-with-computer-vision-service/) - ## Read API The Computer Vision [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) is Azure's latest OCR technology ([learn what's new](./whats-new.md)) that extracts printed text (in several languages), handwritten text (in several languages), digits, and currency symbols from images and multi-page PDF documents. It's optimized to extract text from text-heavy images and multi-page PDF documents with mixed languages. It supports extracting both printed and handwritten text in the same image or document. The **Read** call takes images and documents as its input. They have the followi * Supported file formats: JPEG, PNG, BMP, PDF, and TIFF * For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.-* The file size of images must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a size limit. +* The file size of images must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a dimensions limit. * The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8 font point text at 150 DPI. ## Supported languages |
cognitive-services | Data Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/data-limits.md | The following limit specifies the maximum number of characters that can be in a | Feature | Value | ||| | Conversation summarization | 7,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements).|+| Conversation PII | 40,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements).| | Text Analytics for health | 30,720 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). | | All other pre-configured features (synchronous) | 5,120 as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). | | All other pre-configured features ([asynchronous](use-asynchronously.md)) | 125,000 characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements) (maximum of 25 documents). | |
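Because these limits are counted with `StringInfo.LengthInTextElements` rather than `String.Length`, a client can pre-check documents with a short .NET snippet like the sketch below. It is illustrative only; the 5,120 value is the synchronous per-document limit from the table above, and the method name is invented here.

```csharp
using System;
using System.Globalization;

class DocumentLimitCheck
{
    // Count characters the way the Language service measures them: text elements, not UTF-16 code units.
    static int CountTextElements(string text) => new StringInfo(text).LengthInTextElements;

    static void Main()
    {
        const int syncLimit = 5120; // per-document limit for most synchronous pre-configured features
        string document = "Résumé screening notes for today's interviews…";

        int length = CountTextElements(document);
        Console.WriteLine($"Text elements: {length}, UTF-16 length: {document.Length}");

        if (length > syncLimit)
        {
            Console.WriteLine("Document exceeds the synchronous limit; use the asynchronous API or split the document.");
        }
    }
}
```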
cognitive-services | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md | Add any out of scope utterances to the [none intent](./concepts/none-intent.md). ## How do I control the none intent? -You can control the none intent threshhold from UI through the project settings, by changing the none inten threshold value. The values can be between 0.0 and 1.0. Also, you can change this threshold from the APIs by changing the *confidenceThreshold* in settings object. Learn more about [none intent](./concepts/none-intent.md#none-score-threshold) +You can control the none intent threshold from UI through the project settings, by changing the none intent threshold value. The values can be between 0.0 and 1.0. Also, you can change this threshold from the APIs by changing the *confidenceThreshold* in settings object. Learn more about [none intent](./concepts/none-intent.md#none-score-threshold) ## Is there any SDK support? |
cognitive-services | Connect Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/tutorials/connect-services.md | Now your orchestration project is ready to be used. Any incoming request will be ```powershell dotnet add package Azure.AI.Language.Conversations ```+Alternatively, you can search for "Azure.AI.Language.Conversations" in the NuGet package manager and install the latest release. 3. In `Program.cs`, replace `{api-key}` and the `{endpoint}` variables. Use the key and endpoint for the Language resource you created earlier. You can find them in the **Keys and Endpoint** tab in your Language resource in Azure. Uri endpoint = new Uri("{endpoint}"); AzureKeyCredential credential = new AzureKeyCredential("{api-key}"); ``` -4. Replace the orchestrationProject parameters to **Orchestrator** and **Testing** as below if they are not set already. +4. Replace the project and deployment parameters to **Orchestrator** and **Testing** as below if they are not set already. ```csharp-ConversationsProject orchestrationProject = new ConversationsProject("Orchestrator", "Testing"); +string projectName = "Orchestrator"; +string deploymentName = "Testing"; ``` 5. Run the project or press F5 in Visual Studio. |
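For context, a complete request using the updated `projectName` and `deploymentName` variables might look like the sketch below. It assumes the protocol-style `AnalyzeConversation(RequestContent)` method exposed by the `Azure.AI.Language.Conversations` package and uses an arbitrary placeholder utterance; verify the exact request shape against the SDK version you installed.

```csharp
using System;
using Azure;
using Azure.AI.Language.Conversations;
using Azure.Core;

// Endpoint, key, project, and deployment values come from the steps above.
Uri endpoint = new Uri("{endpoint}");
AzureKeyCredential credential = new AzureKeyCredential("{api-key}");
string projectName = "Orchestrator";
string deploymentName = "Testing";

ConversationAnalysisClient client = new ConversationAnalysisClient(endpoint, credential);

// Request body following the service's REST contract; the utterance is a placeholder.
var data = new
{
    kind = "Conversation",
    analysisInput = new
    {
        conversationItem = new { id = "1", participantId = "1", text = "Book me a flight to Seattle" }
    },
    parameters = new { projectName, deploymentName, stringIndexType = "Utf16CodeUnit" }
};

Response response = client.AnalyzeConversation(RequestContent.Create(data));
Console.WriteLine(response.Content.ToString());
```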
cognitive-services | Concepts Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md | JSON objects can include nested JSON objects and simple property/values. An arra Personalizer can help you to understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model into the Rank API response, so your application receives this information at the time of inference. Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to further analyze how the data is being used by the underlying model. -Setting the service configuration flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: “IsInferenceExplainabilityEnabled”: true. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP) +Setting the service configuration flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: `"IsInferenceExplainabilityEnabled": true`. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP) ```JSON { Enabling inference explainability will add a collection to the JSON response fro } ``` -In the example above, three action IDs are returned in the _ranking_ collection along with their respective probabilities scores. The action with the largest probability is the_ best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID can be seen again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API. +In the example above, three action IDs are returned in the _ranking_ collection along with their respective probability scores. The action with the largest probability is the _best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID can be seen again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API. Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. 
The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](/azure/cognitive-services/personalizer/concepts-exploration). |
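As an illustration of how an application might consume these scores, the following C# sketch parses a Rank response body with `System.Text.Json`. The `ranking`, `id`, `probability`, and `inferenceExplanation` property names follow the example response described in this article; the inner shape of each explanation entry is not shown above, so confirm it against the API version you call before relying on it.

```csharp
using System;
using System.Text.Json;

class RankResponseReader
{
    // 'json' is a Rank API response body returned while IsInferenceExplainabilityEnabled is true.
    static void Summarize(string json)
    {
        using JsonDocument doc = JsonDocument.Parse(json);
        JsonElement root = doc.RootElement;

        // Every ranked action comes back with its probability score.
        foreach (JsonElement action in root.GetProperty("ranking").EnumerateArray())
        {
            string id = action.GetProperty("id").GetString();
            double probability = action.GetProperty("probability").GetDouble();
            Console.WriteLine($"{id}: {probability:0.###}");
        }

        // Feature scores describe the best action; do not use them to explain exploratory actions.
        if (root.TryGetProperty("inferenceExplanation", out JsonElement explanations))
        {
            Console.WriteLine($"Feature score explanations returned for {explanations.GetArrayLength()} action(s).");
        }
    }
}
```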
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/whats-new.md | Learn what's new in Azure Personalizer. These items may include release notes, v ## Release notes +### September 2022 +* Personalizer Inference Explainability is now available as a Public Preview. Enabling inference explainability returns feature scores on every Rank API call, providing insight into how influential each feature is to the actions chosen by your Personalizer model. [Learn more about Inference Explainability](concepts-features.md#inference-explainability). +* The Personalizer SDK is now available in [Java](https://search.maven.org/artifact/com.azure/azure-ai-personalizer/1.0.0-beta.1/jar) and [JavaScript](https://www.npmjs.com/package/@azure-rest/ai-personalizer). + ### April 2022 * Local inference SDK (Preview): Personalizer now supports near-realtime (sub-10ms) inference without the need to wait for network API calls. Your Personalizer models can be used locally for lightning fast Rank calls using the [C# SDK (Preview)](https://www.nuget.org/packages/Azure.AI.Personalizer/2.0.0-beta.2), empowering your applications to personalize quickly and efficiently. Your model continues to train in Azure while your local model is seamlessly updated. |
container-registry | Container Registry Tutorial Build Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-build-task.md | az acr task create \ --registry $ACR_NAME \ --name taskhelloworld \ --image helloworld:{{.Run.ID}} \- --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#main \ + --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#master \ --file Dockerfile \ --git-access-token $GIT_PAT ``` Output from a successful [az acr task create][az-acr-task-create] command is sim "step": { "arguments": [], "baseImageDependencies": null,- "contextPath": "https://github.com/gituser/acr-build-helloworld-node#main", + "contextPath": "https://github.com/gituser/acr-build-helloworld-node#master", "dockerFilePath": "Dockerfile", "imageNames": [ "helloworld:{{.Run.ID}}" Output from a successful [az acr task create][az-acr-task-create] command is sim "name": "defaultSourceTriggerName", "sourceRepository": { "branch": "main",- "repositoryUrl": "https://github.com/gituser/acr-build-helloworld-node#main", + "repositoryUrl": "https://github.com/gituser/acr-build-helloworld-node#master", "sourceControlAuthProperties": null, "sourceControlType": "GitHub" }, Next, execute the following commands to create, commit, and push a new file to y echo "Hello World!" > hello.txt git add hello.txt git commit -m "Testing ACR Tasks"-git push origin main +git push origin master ``` You may be asked to provide your GitHub credentials when you execute the `git push` command. Provide your GitHub username, and enter the personal access token (PAT) that you created earlier for the password. |
cosmos-db | Consistency Levels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md | Each level provides availability and performance tradeoffs. The following image :::image type="content" source="./media/consistency-levels/five-consistency-levels.png" alt-text="Consistency as a spectrum" border="false" ::: -The consistency levels are region-agnostic and are guaranteed for all operations regardless of the region from which the reads and writes are served, the number of regions associated with your Azure Cosmos account, or whether your account is configured with a single or multiple write regions. - ## Consistency levels and Azure Cosmos DB APIs Azure Cosmos DB provides native support for wire protocol-compatible APIs for popular databases. These include MongoDB, Apache Cassandra, Gremlin, and Azure Table storage. When using Gremlin API and Table API, the default consistency level configured on the Azure Cosmos account is used. For details on consistency level mapping between Cassandra API or the API for MongoDB and Azure Cosmos DB's consistency levels see, [Cassandra API consistency mapping](cassandr). Clients outside of the session performing writes will see the following guarante ### Consistent prefix consistency -In consistent prefix option, updates that are returned contain some prefix of all the updates, with no gaps. Consistent prefix consistency level guarantees that reads never see out-of-order writes. +In consistent prefix, updates made as single document writes see eventual consistency. Updates made as a batch within a transaction are returned consistent with the transaction in which they were committed. Write operations within a transaction of multiple documents are always visible together. -If writes were performed in the order `A, B, C`, then a client sees either `A`, `A,B`, or `A,B,C`, but never out-of-order permutations like `A,C` or `B,A,C`. Consistent Prefix provides write latencies, availability, and read throughput comparable to that of eventual consistency, but also provides the order guarantees that suit the needs of scenarios where order is important. +Assume two write operations are performed on documents Doc1 and Doc2, within transactions T1 and T2. When a client does a read on any replica, the user will see either “Doc1 v1 and Doc2 v1” or “Doc1 v2 and Doc2 v2”, but never “Doc1 v1 and Doc2 v2” or “Doc1 v2 and Doc2 v1” for the same read or query operation. -Below are the consistency guarantees for Consistent Prefix: +Below are the consistency guarantees for Consistent Prefix within a transaction context (single document writes see eventual consistency): - Consistency for clients in same region for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency) - Consistency for clients in different regions for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency) |
cosmos-db | How To Setup Cross Tenant Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys.md | + + Title: Configure cross-tenant customer-managed keys for your Azure Cosmos DB account with Azure Key Vault (preview) +description: Learn how to configure encryption with customer-managed keys for Azure Cosmos DB using an Azure Key Vault that resides in a different tenant. ++++ Last updated : 09/27/2022++++# Configure cross-tenant customer-managed keys for your Azure Cosmos DB account with Azure Key Vault (preview) +++Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with service-managed keys managed by Microsoft. However, you can choose to add a second layer of encryption with keys you manage. These keys are known as customer-managed keys (or CMK). Customer-managed keys are stored in an Azure Key Vault instance. ++This article walks through how to configure encryption with customer-managed keys at the time that you create an Azure Cosmos DB account. In this example cross-tenant scenario, the Azure Cosmos DB account resides in a tenant managed by an Independent Software Vendor (ISV) referred to as the service provider. The key used for encryption of the Azure Cosmos DB account resides in a key vault in a different tenant that is managed by the customer. ++## About the preview ++To use the preview, you must register for the Azure Active Directory federated client identity feature in the service provider's tenant. Follow these instructions to register with Azure PowerShell or Azure CLI: ++### [Portal](#tab/azure-portal) ++Not yet supported. ++### [PowerShell](#tab/azure-powershell) ++To register with Azure PowerShell, use the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) cmdlet. ++```azurepowershell +$parameters = @{ + FeatureName = "FederatedClientIdentity" + ProviderNamespace = "Microsoft.Storage" +} +Register-AzProviderFeature @parameters +``` ++To check the status of your registration, use [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature). ++```azurepowershell +$parameters = @{ + FeatureName = "FederatedClientIdentity" + ProviderNamespace = "Microsoft.Storage" +} +Get-AzProviderFeature @parameters +``` ++After your registration is approved, you must re-register the Azure Storage resource provider. To re-register the resource provider with PowerShell, use [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider). ++```azurepowershell +$parameters = @{ + ProviderNamespace = "Microsoft.Storage" +} +Register-AzResourceProvider @parameters +``` ++### [Azure CLI](#tab/azure-cli) ++To register with Azure CLI, use the [az feature register](/cli/azure/feature#az-feature-register) command. ++```azurecli +az feature register \ + --name FederatedClientIdentity \ + --namespace Microsoft.Storage +``` ++To check the status of your registration with Azure CLI, use [az feature show](/cli/azure/feature#az-feature-show). ++```azurecli +az feature show \ + --name FederatedClientIdentity \ + --namespace Microsoft.Storage +``` ++After your registration is approved, you must re-register the Azure Storage resource provider. To re-register the resource provider with Azure CLI, use [az provider register](/cli/azure/provider#az-provider-register). 
++```azurecli +az provider register \ + --namespace 'Microsoft.Storage' +``` ++++> [!IMPORTANT] +> Using cross-tenant customer-managed keys with Azure Cosmos DB encryption is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++++## Create a new Azure Cosmos DB account encrypted with a key from a different tenant ++> [!NOTE] +> Cross-tenant customer-managed keys with Azure Cosmos DB encryption PREVIEW is not compatible with Continuous Backup or Azure Synapse link features. ++Up to this point, you've configured the multi-tenant application on the service provider's tenant. You've also installed the application on the customer's tenant and configured the key vault and key on the customer's tenant. Next you can create an Azure Cosmos DB account on the service provider's tenant and configure customer-managed keys with the key from the customer's tenant. ++When creating an Azure Cosmos DB account with customer-managed keys, we must ensure that it has access to the keys the customer used. In single-tenant scenarios, either give direct key vault access to the Azure Cosmos DB principal or use a specific managed identity. In a cross-tenant scenario, we can no longer depend on direct access to the key vault as it is in another tenant managed by the customer. This constraint is the reason in the previous sections we created a cross-tenant application and registered a managed identity inside the application to give it access to the customer's key vault. This managed identity, coupled with the cross-tenant application ID, is what we'll use when creating the cross-tenant CMK Azure Cosmos DB Account. For more information, see the [Phase 3 - The service provider encrypts data in an Azure resource using the customer-managed key](#phase-3the-service-provider-encrypts-data-in-an-azure-resource-using-the-customer-managed-key) section of this article. ++Whenever a new version of the key is available in the key vault, it will be automatically updated on the Azure Cosmos DB account. ++## Using Azure Resource Manager JSON templates ++Deploy an ARM template with the following specific parameters: ++> [!NOTE] +> If you are recreating this sample in one of your Azure Resource Manager templates, use an `apiVersion` of `2022-05-15`. ++| Parameter | Description | Example value | +| | | | +| `keyVaultKeyUri` | Identifier of the customer-managed key residing in the service provider's key vault. | `https://my-vault.vault.azure.com/keys/my-key` | +| `identity` | Object specifying that the managed identity should be assigned to the Azure Cosmos DB account. | `"identity":{"type":"UserAssigned","userAssignedIdentities":{"/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity":{}}}` | +| `defaultIdentity` | Combination of the resource ID of the managed identity and the application ID of the multi-tenant Azure Active Directory application. 
| `UserAssignedIdentity=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity&FederatedClientId=11111111-1111-1111-1111-111111111111` | ++Here's an example of a template segment with the three parameters configured: ++```json +{ + "kind": "GlobalDocumentDB", + "location": "East US 2", + "identity": { + "type": "UserAssigned", + "userAssignedIdentities": { + "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity": {} + } + }, + "properties": { + "locations": [ + { + "locationName": "East US 2", + "failoverPriority": 0, + "isZoneRedundant": false + } + ], + "databaseAccountOfferType": "Standard", + "keyVaultKeyUri": "https://my-vault.vault.azure.com/keys/my-key", + "defaultIdentity": "UserAssignedIdentity=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity&FederatedClientId=11111111-1111-1111-1111-111111111111" + } +} +``` ++> [!IMPORTANT] +> This feature is not yet supported in Azure PowerShell, Azure CLI, or the Azure portal. ++You can't configure customer-managed keys with a specific version of the key version when you create a new Azure Cosmos DB account. The key itself must be passed with no versions and no trailing backslashes. ++To Revoke or Disable customer-managed keys, see [configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault](how-to-setup-customer-managed-keys.md) ++## See also ++- [Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault](how-to-setup-cmk.md) |
cosmos-db | Set Throughput | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/set-throughput.md | As described in the [Current provisioned throughput](#current-provisioned-throug This can be a concern in situations where you need to store large amounts of data, but have low throughput requirements in comparison. To better accommodate these scenarios, Azure Cosmos DB has introduced a **"high storage / low throughput" program** that decreases the RU/s per GB constraint on eligible accounts. -To join this program and assess your full eligibility, all you have to do is to fill [this survey](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRzBPrdEMjvxPuDm8fCLUtXpUREdDU0pCR0lVVFY5T1lRVEhWNUZITUJGMC4u). The Azure Cosmos DB team will then follow up and proceed with your onboarding. +To join this program and assess your full eligibility, all you have to do is to fill [this survey](https://aka.ms/cosmosdb-high-storage-low-throughput-program). The Azure Cosmos DB team will then follow up and proceed with your onboarding. ## Comparison of models This table shows a comparison between provisioning standard (manual) throughput on a database vs. on a container. |
cosmos-db | How To Write Stored Procedures Triggers Udfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-write-stored-procedures-triggers-udfs.md | function updateMetadata() { function updateMetadataCallback(err, items, responseOptions) { if(err) throw new Error("Error" + err.message);- if(items.length != 1) throw 'Unable to find metadata document'; - var metadataItem = items[0]; + if(items.length != 1) throw 'Unable to find metadata document'; - // update metadata - metadataItem.createdItems += 1; - metadataItem.createdNames += " " + createdItem.id; - var accept = container.replaceDocument(metadataItem._self, - metadataItem, function(err, itemReplaced) { - if(err) throw "Unable to update metadata, abort"; - }); - if(!accept) throw "Unable to update metadata, abort"; - return; + var metadataItem = items[0]; ++ // update metadata + metadataItem.createdItems += 1; + metadataItem.createdNames += " " + createdItem.id; + var accept = container.replaceDocument(metadataItem._self, + metadataItem, function(err, itemReplaced) { + if(err) throw "Unable to update metadata, abort"; + }); ++ if(!accept) throw "Unable to update metadata, abort"; + return; } } ``` |
cosmos-db | Manage With Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-terraform.md | + + Title: Create and manage Azure Cosmos DB with terraform +description: Use terraform to create and configure Azure Cosmos DB for Core (SQL) API ++++ Last updated : 09/16/2022+++++# Manage Azure Cosmos DB Core (SQL) API resources with terraform +++In this article, you learn how to use terraform to deploy and manage your Azure Cosmos DB accounts, databases, and containers. ++This article shows terraform samples for Core (SQL) API accounts. ++> [!IMPORTANT] +> +> * Account names are limited to 44 characters, all lowercase. +> * To change the throughput (RU/s) values, redeploy the terraform file with updated RU/s. +> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately. +> * To provision throughput at the database level and share across all containers, apply the throughput values to the database options property. ++To create any of the Azure Cosmos DB resources below, copy the example into a new terraform file (main.tf) or alternatively, have 2 separate files for resources (main.tf) and variables (variables.tf). Be sure to include the azurerm provider either in the main terraform file or split out to a separate providers file. All examples can be found on the [terraform samples repository](https://github.com/Azure/terraform). +++<a id="create-autoscale"></a> ++## Azure Cosmos account with autoscale throughput ++Create an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for autoscale throughput that has most index policy options enabled. ++### main.tf +++### variables.tf +++<a id="create-analytical-store"></a> ++## Azure Cosmos account with analytical store ++Create an Azure Cosmos account in one region with a container with Analytical TTL enabled and options for manual or autoscale throughput. ++### main.tf +++### variables.tf +++<a id="create-manual"></a> ++## Azure Cosmos account with standard provisioned throughput ++Create an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for standard throughput that has most policy options enabled. ++### main.tf +++### variables.tf +++<a id="create-sproc"></a> ++## Azure Cosmos DB container with server-side functionality ++Create an Azure Cosmos account, database and container with a stored procedure, trigger, and user-defined function. ++### main.tf +++### variables.tf +++<a id="create-rbac"></a> ++## Azure Cosmos DB account with Azure AD and RBAC ++Create an Azure Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure Active Directory identity. ++### main.tf +++### variables.tf +++<a id="free-tier"></a> ++## Free tier Azure Cosmos DB account ++Create a free-tier Azure Cosmos account and a database with shared throughput that can be shared with up to 25 containers. 
++### main.tf +++### variables.tf +++## Next steps ++Here are some additional resources: ++* [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) +* [Terraform Azure Tutorial](https://learn.hashicorp.com/collections/terraform/azure-get-started) +* [Terraform tools](https://www.terraform.io/docs/terraform-tools) +* [Azure Provider Terraform documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) +* [Terraform documentation](https://www.terraform.io/docs) |
cosmos-db | Quick Create Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-terraform.md | + + Title: Quickstart - Create an Azure Cosmos DB and a container using Terraform +description: Quickstart showing how to create an Azure Cosmos database and a container using Terraform +++tags: azure-resource-manager, terraform +++ Last updated : 09/22/2022++#Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data. +++# Quickstart: Create an Azure Cosmos DB and a container using Terraform +++Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deployments via Terraform to create an Azure Cosmos database and a container within that database. You can later store data in this container. ++## Prerequisites ++An Azure subscription or free Azure Cosmos DB trial account ++- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ++Terraform should be installed on your local computer. Installation instructions can be found [here](https://learn.hashicorp.com/tutorials/terraform/install-cli). ++## Review the Terraform File ++The Terraform files used in this quickstart can be found on the [terraform samples repository](https://github.com/Azure/terraform). Create the following three files: providers.tf, main.tf, and variables.tf. Variables can be set on the command line or alternatively with a terraform.tfvars file. ++### Provider Terraform File +++### Main Terraform File +++### Variables Terraform File ++++Three Cosmos DB resources are defined in the main terraform file. ++- [Microsoft.DocumentDB/databaseAccounts](/azure/templates/microsoft.documentdb/databaseaccounts): Create an Azure Cosmos account. ++- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases): Create an Azure Cosmos database. ++- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos container. ++## Deploy via Terraform ++1. Save the terraform files as main.tf, variables.tf and providers.tf to your local computer. +2. Log in from your terminal via the Azure CLI or PowerShell +3. Deploy via Terraform commands + - terraform init + - terraform plan + - terraform apply ++## Validate the deployment ++Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group. ++# [CLI](#tab/CLI) ++```azurecli-interactive +az resource list --resource-group "your resource group name" +``` ++# [PowerShell](#tab/PowerShell) ++```azurepowershell-interactive +Get-AzResource -ResourceGroupName "your resource group name" +``` ++++## Clean up resources ++If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. +When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources. 
++# [CLI](#tab/CLI) ++```azurecli-interactive +az group delete --name "your resource group name" +``` ++# [PowerShell](#tab/PowerShell) ++```azurepowershell-interactive +Remove-AzResourceGroup -Name "your resource group name" +``` ++++## Next steps ++In this quickstart, you created an Azure Cosmos account, a database, and a container via Terraform and validated the deployment. To learn more about Azure Cosmos DB and Terraform, continue on to the following articles. ++- Read an [Overview of Azure Cosmos DB](../introduction.md). +- Learn more about [Terraform](https://www.terraform.io/intro). +- Learn more about the [Azure Terraform Provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs). +- [Manage Cosmos DB with Terraform](manage-with-terraform.md) +- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. + - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md). + - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md). |
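A minimal sketch of the Terraform workflow described in the quickstart above, assuming the three files (providers.tf, main.tf, variables.tf) are saved in the current directory; the `location` variable name and the plan file name are illustrative:

```bash
# Sign in first; the Azure CLI is shown, Azure PowerShell works as well.
az login

# Initialize the working directory and download the azurerm provider.
terraform init

# Preview the changes; -var overrides a variable declared in variables.tf
# (the variable name here is an assumption about the sample files).
terraform plan -var "location=eastus" -out main.tfplan

# Apply the saved plan to create the Cosmos DB account, database, and container.
terraform apply main.tfplan
```

Once the apply finishes, the `az resource list` command shown in the quickstart should list the Cosmos DB account in the resource group.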
cosmos-db | Terraform Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/terraform-samples.md | + + Title: Terraform samples for Azure Cosmos DB Core (SQL API) +description: Use Terraform to create and configure Azure Cosmos DB. ++++ Last updated : 09/16/2022+++++# Terraform for Azure Cosmos DB +++This article shows Terraform samples for Core (SQL) API accounts. ++## Core (SQL) API ++|**Sample**|**Description**| +||| +|[Create an Azure Cosmos account, database, container with autoscale throughput](manage-with-terraform.md#create-autoscale) | Create a Core (SQL) API account in two regions, a database and container with autoscale throughput. | +|[Create an Azure Cosmos account, database, container with analytical store](manage-with-terraform.md#create-analytical-store) | Create a Core (SQL) API account in one region with a container configured with Analytical TTL enabled and option to use manual or autoscale throughput. | +|[Create an Azure Cosmos account, database, container with standard (manual) throughput](manage-with-terraform.md#create-manual) | Create a Core (SQL) API account in two regions, a database and container with standard throughput. | +|[Create an Azure Cosmos account, database and container with a stored procedure, trigger and UDF](manage-with-terraform.md#create-sproc) | Create a Core (SQL) API account in two regions with a stored procedure, trigger and UDF for a container. | +|[Create an Azure Cosmos account with Azure AD identity, Role Definitions and Role Assignment](manage-with-terraform.md#create-rbac) | Create a Core (SQL) API account with Azure AD identity, Role Definitions and Role Assignment on a Service Principal. | +|[Create a free-tier Azure Cosmos account](manage-with-terraform.md#free-tier) | Create an Azure Cosmos DB Core (SQL) API account on free-tier. | ++## Next steps ++Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. ++* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) +* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) |
cosmos-db | Troubleshoot Sdk Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-sdk-availability.md | Title: Diagnose and troubleshoot the availability of Azure Cosmos SDKs in multir description: Learn all about the Azure Cosmos SDK availability behavior when operating in multi-regional environments. Previously updated : 09/07/2022 Last updated : 09/27/2022 All the Azure Cosmos SDKs give you an option to customize the regional preferenc * The [CosmosClient.preferred_locations](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) parameter in Python SDK. * The [CosmosClientOptions.ConnectionPolicy.preferredLocations](/javascript/api/@azure/cosmos/connectionpolicy#preferredlocations) parameter in JS SDK. -When you set the regional preference, the client will connect to a region as mentioned in the following table: +When the SDK initializes with a configuration that specifies a regional preference, it will first obtain the account information, including the available regions, from the global endpoint. It then intersects the configured regional preference with the account's available regions and uses the order in the regional preference to prioritize the result. ++If the regional preference configuration contains regions that aren't available regions in the account, those values are ignored. If these invalid regions are [added later to the account](#adding-a-region-to-an-account), the SDK will use them if they're higher in the preference configuration. |Account type |Reads |Writes | ||--|--|-| Single write region | Preferred region | Primary region | -| Multiple write regions | Preferred region | Preferred region | +| Single write region | Preferred region with highest order | Primary region | +| Multiple write regions | Preferred region with highest order | Preferred region with highest order | If you **don't set a preferred region**, the SDK client defaults to the primary region: |
data-factory | Tutorial Bulk Copy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy-portal.md | |
data-factory | Tutorial Bulk Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy.md | |
data-factory | Tutorial Control Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-control-flow.md | |
data-factory | Tutorial Copy Data Dot Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-dot-net.md | |
data-factory | Tutorial Copy Data Portal Private | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal-private.md | In this tutorial, you start by creating a pipeline. Then you create linked servi 1. On the home page, select **Orchestrate**. - :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page."::: + :::image type="content" source="media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the data factory home page with the Orchestrate button highlighted."::: + 1. In the properties pane for the pipeline, enter **CopyPipeline** for the pipeline name. 1. In the **Activities** tool box, expand the **Move and Transform** category, and drag the **Copy data** activity from the tool box to the pipeline designer surface. Enter **CopyFromBlobToSql** for the name. You can debug a pipeline before you publish artifacts (linked services, datasets The pipeline in this sample copies data from Blob storage to SQL Database by using private endpoints in Data Factory Managed Virtual Network. You learned how to: * Create a data factory.-* Create a pipeline with a copy activity. +* Create a pipeline with a copy activity. |
data-factory | Tutorial Copy Data Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal.md | |
data-factory | Tutorial Copy Data Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-tool.md | |
data-factory | Tutorial Data Flow Adventure Works Retail Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-adventure-works-retail-template.md | Last updated 10/18/2021 This document explains how to setup and use Microsoft's AdventureWorks pipeline template to jump start the exploration of the AdventureWorks dataset using Azure Synapse Analytics and the Retail database template. ## Overview-AdventureWorks is a fictional sports equipment retailer that is used to demo Microsoft applications. In this case, they are being used as an example for how to use Synapse Pipelines to map retail data to the Retail database template for further analysis within Azure Synapse. +AdventureWorks is a fictional sports equipment retailer that is used to demo Microsoft applications. In this case, they're being used as an example for how to use Synapse Pipelines to map retail data to the Retail database template for further analysis within Azure Synapse. ## Prerequisites Follow these steps to locate the template. These steps open the template overview page. ## Configure the template-The template is designed to require minimal configuration. From the template overview page you can see a preview of the initial starting configuration of the pipeline, and click **Open pipeline** to create the resources in your own workspace. You will get a notification that all 31 resources in the template have been created, and can review these before committing or publishing them. You will find the below components of the template: +The template is designed to require minimal configuration. From the template overview page you can see a preview of the initial starting configuration of the pipeline, and select **Open pipeline** to create the resources in your own workspace. You'll get a notification that all 31 resources in the template have been created, and can review these before committing or publishing them. You'll find the below components of the template: * 17 pipelines: These are scheduled to ensure the data loads into the target tables correctly, and include one pipeline per source table plus the scheduling ones. * 14 data flows: These contain the logic to load from the source system and land the data into the target database. If you have the AdventureWorks dataset loaded into a different database, you can ## Dataset and source/target models-The AdventureWorks dataset in Excel format can be downloaded from this [GitHub site](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledat). +The AdventureWorks dataset in Excel format can be downloaded from this [GitHub site](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledat). With the databases created, ensure the dataflows are pointing to the correct tables by editing the dropdowns in the Workspace DB source and sink settings. You can load the data into the source model by placing the CSV files provided in the example dataset in the correct folders specified by the tables. Once that is done, all that's required is to run the pipelines. With the databases created, ensure the dataflows are pointing to the correct tab If the pipeline fails to run successfully, there's a few main things to check for errors. * Dataset schema. Make sure the data settings for the CSV files are accurate. If you included row headers, make sure the how headers option is checked on the database table.-* Data flow sources. 
If you used different column or table names than what were provided in the example schema, you will need to step through the data flows to verify that the columns are mapped correctly. +* Data flow sources. If you used different column or table names than what were provided in the example schema, you'll need to step through the data flows to verify that the columns are mapped correctly. * Data flow sink. The schema and data format configurations on the target database will need to match the data flow template. Like above, if any changes were made, those items will need to be aligned. ## Next steps |
data-factory | Tutorial Data Flow Delta Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-delta-lake.md | |
data-factory | Tutorial Data Flow Dynamic Columns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-dynamic-columns.md | -Many times, when processing data for ETL jobs, you will need to change the column names before writing the results. Sometimes this is needed to align column names to a well-known target schema. Other times, you may need to set column names at runtime based on evolving schemas. In this tutorial, you'll learn how to use data flows to set column names for your destination files and database tables dynamically using external configuration files and parameters. +Many times, when processing data for ETL jobs, you'll need to change the column names before writing the results. Sometimes this is needed to align column names to a well-known target schema. Other times, you may need to set column names at runtime based on evolving schemas. In this tutorial, you'll learn how to use data flows to set column names for your destination files and database tables dynamically using external configuration files and parameters. If you're new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md). In this step, you'll create a pipeline that contains a data flow activity. 1. In the **Activities** pane, expand the **Move and Transform** accordion. Drag and drop the **Data Flow** activity from the pane to the pipeline canvas. :::image type="content" source="media/tutorial-data-flow/activity1.png" alt-text="Screenshot that shows the pipeline canvas where you can drop the Data Flow activity.":::-1. In the **Adding Data Flow** pop-up, select **Create new Data Flow** and then name your data flow **DynaCols**. Click Finish when done. +1. In the **Adding Data Flow** pop-up, select **Create new Data Flow** and then name your data flow **DynaCols**. Select Finish when done. ## Build dynamic column mapping in data flows You'll learn how to dynamically set column names using a data flow First, let's set up the data flow environment for each of the mechanisms described below for landing data in ADLS Gen2. -1. Click on the source transformation and call it ```movies1```. -1. Click the new button next to dataset in the bottom panel. +1. Select on the source transformation and call it ```movies1```. +1. Select the new button next to dataset in the bottom panel. 1. Choose either Blob or ADLS Gen2 depending on where you stored the moviesDB.csv file from above.-1. Add a 2nd source, which we will use to source the configuration JSON file to lookup field mappings. +1. Add a second source, which we'll use to source the configuration JSON file to look up field mappings. 1. Call this as ```columnmappings```. 1. For the dataset, point to a new JSON file that will store a configuration for column mapping. You can paste the into the JSON file for this tutorial example: ``` First, let's set up the data flow environment for each of the mechanisms describ ] ``` -1. Set this source settings to ```array of documents```. -1. Add a 3rd source and call it ```movies2```. Configure this exactly the same as ```movies1```. +1. Set this source setting to ```array of documents```. +1. Add a third source and call it ```movies2```. Configure this exactly the same as ```movies1```. 
### Parameterized column mapping -In this first scenario, you will set output column names in you data flow by setting the column mapping based on matching incoming fields with a parameter that is a string array of columns and match each array index with the incoming column ordinal position. When executing this data flow from a pipeline, you will be able to set different column names on each pipeline execution by sending in this string array parameter to the data flow activity. +In this first scenario, you'll set output column names in your data flow by setting the column mapping based on matching incoming fields with a parameter that is a string array of columns, matching each array index to the incoming column's ordinal position. When executing this data flow from a pipeline, you'll be able to set different column names on each pipeline execution by sending in this string array parameter to the data flow activity. :::image type="content" source="media/data-flow/dynacols-3.png" alt-text="Parameters"::: 1. Go back to the data flow designer and edit the data flow created above.-1. Click on the parameters tab +1. Select the Parameters tab 1. Create a new parameter and choose string array data type 1. For the default value, enter ```['a','b','c']``` 1. Use the top ```movies1``` source to modify the column names to map to these array values 1. Add a Select transformation. The Select transformation will be used to map incoming columns to new column names for output.-1. We're going to change the first 3 column names to the new names defined in the parameter -1. To do this, add 3 rule-based mapping entries in the bottom pane +1. We're going to change the first three column names to the new names defined in the parameter +1. To do this, add three rule-based mapping entries in the bottom pane 1. For the first column, the matching rule will be ```position==1``` and the name will be ```$parameter1[1]``` 1. Follow the same pattern for column 2 and 3 :::image type="content" source="media/data-flow/dynacols-4.png" alt-text="Select transformation"::: -1. Click on the Inspect and Data Preview tabs of the Select transformation to view the new column name values ```(a,b,c)``` replace the original movie, title, genres column names +1. Select the Inspect and Data Preview tabs of the Select transformation to see the new column name values ```(a,b,c)``` replace the original movie, title, and genres column names ### Create a cached lookup of external column mappings Next, we'll create a cached sink for a later lookup. The cache will read an exte 1. Set sink type to ```Cache```. 1. Under Settings, choose ```prevcolumn``` as the key column. -### Lookup columns names from cached sink +### Look up column names from the cached sink Now that you've stored the configuration file contents in memory, you can dynamically map incoming column names to new outgoing column names. -1. Go back to the data flow designer and edit the data flow create above. Click on the ```movies2``` source transformation. +1. Go back to the data flow designer and edit the data flow created above. Select the ```movies2``` source transformation. 1. Add a Select transformation. This time, we'll use the Select transformation to rename column names based on the target name in the JSON configuration file that is being stored in the cached sink. 1. Add a rule-based mapping. For the Matching Condition, use this formula: ```!isNull(cachedSink#lookup(name).prevcolumn)```. 1. 
For the output column name, use this formula: ```cachedSink#lookup($$).newcolumn```. 1. What we've done is find all column names that match the ```prevcolumn``` property from the external JSON configuration file and rename each match to the new ```newcolumn``` name.-1. Click on the Data Preview and Inspect tabs in the Select transformation and you should now see the new column names from the external mapping file. +1. Select the Data Preview and Inspect tabs in the Select transformation and you should now see the new column names from the external mapping file. :::image type="content" source="media/data-flow/dynacols-2.png" alt-text="Source 2"::: |
data-factory | Tutorial Data Flow Private | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-private.md | In this step, you create a data factory and open the Data Factory UI to create a 1. Select **Create**. 1. After the creation is finished, you see the notice in the Notifications center. Select **Go to resource** to go to the **Data Factory** page.-1. Select **Author & Monitor** to launch the Data Factory UI in a separate tab. +1. Select **Open Azure Data Factory Studio** to launch the Data Factory UI in a separate tab. ## Create an Azure IR in Data Factory Managed Virtual Network In this step, you'll create a pipeline that contains a data flow activity. 1. On the home page of Azure Data Factory, select **Orchestrate**. - :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows creating a pipeline."::: + :::image type="content" source="./media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the data factory home page with the Orchestrate button highlighted."::: 1. In the properties pane for the pipeline, enter **TransformMovies** for the pipeline name. 1. In the **Activities** pane, expand **Move and Transform**. Drag the **Data Flow** activity from the pane to the pipeline canvas. |
data-factory | Tutorial Data Flow Write To Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-write-to-lake.md | In this step, you'll create a pipeline that contains a data flow activity. 1. On the home page of Azure Data Factory, select **Orchestrate**. - :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that show the ADF home page."::: + :::image type="content" source="./media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the data factory home page with the Orchestrate button highlighted."::: 1. In the **General** tab for the pipeline, enter **DeltaLake** for **Name** of the pipeline. 1. In the factory top bar, slide the **Data Flow debug** slider on. Debug mode allows for interactive testing of transformation logic against a live Spark cluster. Data Flow clusters take 5-7 minutes to warm up and users are recommended to turn on debug first if they plan to do Data Flow development. For more information, see [Debug Mode](concepts-data-flow-debug-mode.md). |
data-factory | Tutorial Data Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow.md | |
data-factory | Tutorial Deploy Ssis Packages Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md | |
data-factory | Tutorial Deploy Ssis Packages Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure.md | In this tutorial, you complete the following steps: To create your data factory via the Azure portal, follow the step-by-step instructions in [Create a data factory via the UI](./quickstart-create-data-factory-portal.md#create-a-data-factory). Select **Pin to dashboard** while doing so, to allow quick access after its creation. -After your data factory is created, open its overview page in the Azure portal. Select the **Author & Monitor** tile to open the **Let's get started** page on a separate tab. There, you can continue to create your Azure-SSIS IR. +After your data factory is created, open its overview page in the Azure portal. Select the **Open Azure Data Factory Studio** tile to open the **Let's get started** page on a separate tab. There, you can continue to create your Azure-SSIS IR. ## Create an Azure-SSIS integration runtime After your data factory is created, open its overview page in the Azure portal. 1. On the home page, select the **Configure SSIS** tile. - :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the Azure Data Factory home page."::: + :::image type="content" source="./media/doc-common-process/configure-ssis-button.png" alt-text="Screenshot that shows the Azure Data Factory home page."::: 1. For the remaining steps to set up an Azure-SSIS IR, see the [Provision an Azure-SSIS integration runtime](#provision-an-azure-ssis-integration-runtime) section. |
data-factory | Tutorial Deploy Ssis Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-virtual-network.md | After you've configured a virtual network, you can join your Azure-SSIS IR to th :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/data-factories-list.png" alt-text="List of data factories"::: -1. Select your ADF with Azure-SSIS IR in the list. You see the home page for your ADF. Select the **Author & Monitor** tile. You see ADF UI on a separate tab. +1. Select your ADF with Azure-SSIS IR in the list. You see the home page for your ADF. Select the **Open Azure Data Factory Studio** tile. You see ADF UI on a separate tab. :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/data-factory-home-page.png" alt-text="Data factory home page"::: |
data-factory | Tutorial Enable Remote Access Intranet Tls Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-enable-remote-access-intranet-tls-ssl-certificate.md | |
data-factory | Tutorial Hybrid Copy Data Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-data-tool.md | |
data-factory | Tutorial Hybrid Copy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-portal.md | In this step, you create a data factory and start the Data Factory UI to create 1. On the Azure Data Factory home page, select **Orchestrate**. A pipeline is automatically created for you. You see the pipeline in the tree view, and its editor opens. - :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the Azure Data Factory home page."::: + :::image type="content" source="./media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the data factory home page with the Orchestrate button highlighted."::: 1. In the General panel under **Properties**, specify **SQLServerToBlobPipeline** for **Name**. Then collapse the panel by clicking the Properties icon in the top-right corner. |
data-factory | Tutorial Hybrid Copy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-powershell.md | |
data-factory | Tutorial Incremental Copy Change Data Capture Feature Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md | |
data-factory | Tutorial Incremental Copy Lastmodified Copy Data Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-lastmodified-copy-data-tool.md | |
data-factory | Tutorial Incremental Copy Multiple Tables Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-portal.md | |
data-factory | Tutorial Incremental Copy Multiple Tables Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md | |
data-factory | Tutorial Incremental Copy Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-overview.md | |
data-factory | Tutorial Incremental Copy Partitioned File Name Copy Data Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md | |
data-factory | Tutorial Incremental Copy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-portal.md | In this tutorial, you create a pipeline with two Lookup activities, one Copy act 1. On the home page of Data Factory UI, click the **Orchestrate** tile. - :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the home page of Data Factory UI."::: + :::image type="content" source="./media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the data factory home page with the Orchestrate button highlighted."::: + 3. In the General panel under **Properties**, specify **IncrementalCopyPipeline** for **Name**. Then collapse the panel by clicking the Properties icon in the top-right corner. 4. Let's add the first lookup activity to get the old watermark value. In the **Activities** toolbox, expand **General**, and drag-drop the **Lookup** activity to the pipeline designer surface. Change the name of the activity to **LookupOldWaterMarkActivity**. |
data-factory | Tutorial Incremental Copy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-powershell.md | |
data-factory | Tutorial Managed Virtual Network Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-migrate.md | |
data-factory | Tutorial Managed Virtual Network On Premise Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md | |
data-factory | Tutorial Managed Virtual Network Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md | |
data-factory | Tutorial Operationalize Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-operationalize-pipelines.md | |
data-factory | Tutorial Pipeline Failure Error Handling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-failure-error-handling.md | |
data-factory | Tutorial Push Lineage To Purview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-push-lineage-to-purview.md | |
data-factory | Tutorial Transform Data Hive Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network.md | |
data-factory | Tutorial Transform Data Spark Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-powershell.md | description: 'This tutorial provides step-by-step instructions for transforming Previously updated : 01/28/2022 Last updated : 09/26/2022 |
data-factory | Update Machine Learning Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/update-machine-learning-models.md | |
data-factory | Whitepapers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whitepapers.md | |
data-factory | Wrangling Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-functions.md | |
data-factory | Wrangling Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-overview.md | |
data-factory | Wrangling Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-tutorial.md | |
energy-data-services | Concepts Entitlements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md | For each group, you can either add a user as an OWNER or a MEMBER. The only diff ## Group naming -All group identifiers (emails) will be of form {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}.com. A group naming convention has been adopted such that the group's name should start with the word "data." for data groups; "service." for service groups; and "users." for user groups. An exception is when a data partition is provisioned. When a data partition is created, so is a corresponding group: users (for example, for data partition `opendes`, the group `users@opendes.dataservices.energy` is created). +All group identifiers (emails) will be of the form {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}.com. A group naming convention has been adopted such that the group's name should start with the word "data." for data groups; "service." for service groups; and "users." for user groups. An exception is when a data partition is provisioned. When a data partition is created, so is a corresponding group. For example, for data partition `opendes`, the group `users@opendes.dataservices.energy` is created. -## Permissions/roles +## Permissions and roles The OSDU™ Data Ecosystem user groups provide an abstraction from permission and user management and--without a user creating their own groups--the following user groups exist by default: |
energy-data-services | How To Manage Legal Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-legal-tags.md | Last updated 08/19/2022 -# How to manage legal tags? -A Legal tag is the entity that represents the legal status of data in the Microsoft Energy Data Services Preview instance. Legal tag is a collection of properties that governs how data can be ingested and consumed. A legal tag is necessarily required for data to be [ingested](concepts-csv-parser-ingestion.md) into your Microsoft Energy Data Services Preview instance. It's also required for the [consumption](concepts-index-and-search.md) of the data from your Microsoft Energy Data Services Preview instance. Legal tags are defined at a data partition level individually. +# How to manage legal tags +In this article, you'll learn how to manage legal tags in your Microsoft Energy Data Services Preview instance. A legal tag is the entity that represents the legal status of data in the Microsoft Energy Data Services Preview instance. A legal tag is a collection of properties that governs how data can be ingested and consumed. A legal tag is required for data to be [ingested](concepts-csv-parser-ingestion.md) into your Microsoft Energy Data Services Preview instance. It's also required for the [consumption](concepts-index-and-search.md) of the data from your Microsoft Energy Data Services Preview instance. Legal tags are defined at a data partition level individually. -While in Microsoft Energy Data Services Preview instance, [entitlement service](concepts-entitlements.md) defines access to data for a given user(s), legal tag defines the overall access to the data across users. A user may have access to manage the data within a data partition however, they may not be able to do so, until certain legal requirements are fulfilled. +While the [entitlement service](concepts-entitlements.md) in a Microsoft Energy Data Services Preview instance defines access to data for given users, a legal tag defines the overall access to the data across users. A user may have access to manage the data within a data partition; however, they may not be able to do so until certain legal requirements are fulfilled. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] Run the below curl command in Azure Cloud Bash to create a legal tag for a given The country of origin should follow [ISO Alpha2 format](https://www.nationsonline.org/oneworld/country_code_list.htm). -> [!NOTE] -> Create Legal Tag api, internally appends data-partition-id to legal tag name if it isn't already present. For instance, if request has name as: ```legal-tag```, then the create legal tag name would be ```<instancename>-<data-partition-id>-legal-tag``` +The Create Legal Tag API internally appends the data-partition-id to the legal tag name if it isn't already present. For instance, if the request has the name ```legal-tag```, then the created legal tag name would be ```<instancename>-<data-partition-id>-legal-tag``` (an illustrative request follows this entry). ```bash curl --location --request POST 'https://<instance>.energy.azure.com/api/legal/v1/legaltags' \ |
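A hedged sketch of the legal tag request described above. The endpoint is the one shown in the article, while the property names and values in the body follow the common OSDU legal service schema and are illustrative only; check the API reference of your instance for the exact required fields:

```bash
# ACCESS_TOKEN is the bearer token generated for your instance; all values in
# the body are placeholders following the OSDU legal tag schema (assumption).
curl --location --request POST 'https://<instance>.energy.azure.com/api/legal/v1/legaltags' \
  --header 'data-partition-id: <data-partition-id>' \
  --header 'Authorization: Bearer <ACCESS_TOKEN>' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "name": "demo-legal-tag",
    "description": "A demonstration legal tag",
    "properties": {
      "countryOfOrigin": ["US"],
      "contractId": "A1234",
      "expirationDate": "2025-12-31",
      "originator": "MyCompany",
      "dataType": "Public Domain Data",
      "securityClassification": "Public",
      "personalData": "No Personal Data",
      "exportClassification": "EAR99"
    }
  }'
```

With the example name above, the created tag would surface as `<instancename>-<data-partition-id>-demo-legal-tag`, per the naming behavior described in the entry.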
energy-data-services | How To Manage Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md | Last updated 08/19/2022 -# How to manage users? -This article describes how to manage users in Microsoft Energy Data Services Preview. It uses the [entitlements API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) and acts as a group-based authorization system for data partitions within Microsoft Energy Data Service instance. For more information about Microsoft Energy Data Services Preview entitlements, see [entitlement services](concepts-entitlements.md). +# How to manage users +In this article, you'll learn how to manage users in Microsoft Energy Data Services Preview. It uses the [entitlements API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) and acts as a group-based authorization system for data partitions within a Microsoft Energy Data Services instance. For more information about Microsoft Energy Data Services Preview entitlements, see [entitlement services](concepts-entitlements.md). [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Prerequisites -Create a Microsoft Energy Data Services Preview instance using guide at [How to create Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). +Create a Microsoft Energy Data Services Preview instance using the tutorial at [How to create Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). -Keep the following values handy. These values will be used to: --* Generate the access token, which you'll need to make valid calls to the Entitlements API of your Microsoft Energy Data Services Preview instance -* Pass as parameters for different user management requests to the Entitlements API. +You'll need to pass parameters to generate the access token, which you'll need to make valid calls to the Entitlements API of your Microsoft Energy Data Services Preview instance. You'll also need these parameters for different user management requests to the Entitlements API. Keep the following values handy for these actions. #### Find `tenant-id` Navigate to the Azure Active Directory account for your organization. One way to do so is by searching for "Azure Active Directory" in the Azure portal's search bar. Once there, locate `tenant-id` under the basic information section in the *Overview* tab. Copy the `tenant-id` and paste in an editor to be used later. Navigate to the Azure Active Directory account for your organization. One way to #### Find `client-id` Often called `app-id`, it's the same value that you used to register your application during the provisioning of your [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). You'll find the `client-id` in the *Essentials* pane of Microsoft Energy Data Services Preview *Overview* page. Copy the `client-id` and paste in an editor to be used later. 
-> [!NOTE] +> [!IMPORTANT] > The 'client-id' that is passed as a value in the entitlement API calls needs to be the same one that was used for provisioning your Microsoft Energy Data Services Preview instance.+ :::image type="content" source="media/how-to-manage-users/client-id-or-app-id.png" alt-text="Screenshot of finding the client-id for your registered App."::: #### Find `client-secret`-Sometimes called an application password, a `client-secret` is a string value your app can use in place of a certificate to identity itself. Navigate to *App Registrations*. Once there, open 'Certificates & secrets' under the *Manage* section.Create a `client-secret` for the `client-id` that you used to create your Microsoft Energy Data Services Preview instance, you can add one now by clicking on *New Client Secret*. Record the secret's `value` for use in your client application code. +Sometimes called an application password, a `client-secret` is a string value your app can use in place of a certificate to identify itself. Navigate to *App Registrations*. Once there, open 'Certificates & secrets' under the *Manage* section. Create a `client-secret` for the `client-id` that you used to create your Microsoft Energy Data Services Preview instance; you can add one now by selecting *New Client Secret*. Record the secret's `value` for use in your client application code. -> [!NOTE] +> [!CAUTION] > Don't forget to record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page at the time of creation of 'client secret'.+ :::image type="content" source="media/how-to-manage-users/client-secret.png" alt-text="Screenshot of finding the client secret."::: #### Find the `url` for your Microsoft Energy Data Services Preview instance Navigate to your Microsoft Energy Data Services Preview *Overview* page on Azure #### Find the `data-partition-id` for your group You have two ways to get the list of data-partitions in your Microsoft Energy Data Services Preview instance. -- By navigating *Data Partitions* menu-item under the Advanced section of your Microsoft Energy Data Services Preview UI.+- One option is to navigate to the *Data Partitions* menu item under the Advanced section of your Microsoft Energy Data Services Preview UI. :::image type="content" source="media/how-to-manage-users/data-partition-id.png" alt-text="Screenshot of finding the data-partition-id from the Microsoft Energy Data Services Preview instance."::: -- By clicking on the *view* below the *data partitions* field in the essentials pane of your Microsoft Energy Data Services Preview *Overview* page. +- Another option is to select *view* below the *data partitions* field in the essentials pane of your Microsoft Energy Data Services Preview *Overview* page. :::image type="content" source="media/how-to-manage-users/data-partition-id-second-option.png" alt-text="Screenshot of finding the data-partition-id from the Microsoft Energy Data Services Preview instance overview page."::: curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements API of your Microsoft Energy Data Services Preview instance. ## User management activities-You can manage user's access to your Microsoft Energy Data Services instance or data partitions. As a prerequisite for the same, you need to find the 'object-id' (OID) of the user(s) first. 
+You can manage users' access to your Microsoft Energy Data Services instance or data partitions. As a prerequisite for this step, you need to find the 'object-id' (OID) of the user(s) first. You'll need to input the `object-id` (OID) of the users as parameters in the calls to the Entitlements API of your Microsoft Energy Data Services Preview Instance. `object-id` (OID) is the Azure Active Directory User Object ID. Run the below curl command in Azure Cloud Bash to add user(s) to the "Users" gro "role": "MEMBER" }' ```-> [!NOTE] -> The value to be sent for the param "email" is the Object ID of the user and not the user's email ++The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email. **Sample request** Run the below curl command in Azure Cloud Bash to add user(s) to an entitlement "role": "MEMBER" }' ```-> [!NOTE] -> The value to be sent for the param "email" is the Object ID of the user and not the user's email +The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email. + **Sample request** ```bash Run the below curl command in Azure Cloud Bash to get all the groups associated Run the below curl command in Azure Cloud Bash to delete a given user from your Microsoft Energy Data Services instance data partition. -> [!NOTE] -> As stated above, **DO NOT** delete the OWNER of a group unless you have another OWNER that can manage users in that group. +As stated above, **DO NOT** delete the OWNER of a group unless you have another OWNER that can manage users in that group. + ```bash curl --location --request DELETE 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>' \ --header 'data-partition-id: <data-partition-id>' \ |
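A consolidated sketch of the token and entitlements calls described in this entry. The v2.0 client-credentials endpoint, the `<client-id>/.default` scope, and the `/groups` route are assumptions based on the standard Azure AD and OSDU entitlements APIs, and `jq` is assumed to be available for parsing the response:

```bash
# Request an access token with the values gathered in the prerequisites
# (tenant-id, client-id, client-secret); the endpoint shape is an assumption.
ACCESS_TOKEN=$(curl --silent --request POST \
  "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=<client-id>" \
  --data-urlencode "client_secret=<client-secret>" \
  --data-urlencode "scope=<client-id>/.default" | jq -r '.access_token')

# List the entitlement groups visible to the caller in a data partition.
curl --location --request GET 'https://<URI>/api/entitlements/v2/groups' \
  --header 'data-partition-id: <data-partition-id>' \
  --header "Authorization: Bearer ${ACCESS_TOKEN}"
```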
event-grid | Communication Services Voice Video Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md | This section contains an example of what that data would look like for each even { "id": "d5546be8-227a-4db8-b2c3-4f06fd675fd6", "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",- "subject": "/caller/4:+16041234567/recipient/8:acs:ff4181e1-324c-4cd1-9c4f-bda3e5d348f5_00000000-0000-0000-0000-000000000000", + "subject": "/caller/8:acs:98e4cbef-70e7-4733-8594-063c4a72d711_00000014-2033-438d-1000-343a0d006e10/recipient/8:acs:98e4cbef-70e7-4733-8594-063c4a72d711_00000014-1889-f3a7-6a0b-343a0d0061f3", "data": {- "to": { - "kind": "PhoneNumber", - "rawId": "4:+18331234567", - "phoneNumber": { - "value": "+18331234567" - } - }, + "to": { + "kind": "communicationUser", + "rawId": "8:acs:98e4cbef-70e7-4733-8594-063c4a72d711_00000014-1889-f3a7-6a0b-343a0d0061f3", + "communicationUser": { + "id": "8:acs:98e4cbef-70e7-4733-8594-063c4a72d711_00000014-1889-f3a7-6a0b-343a0d0061f3" + } + }, "from": {- "kind": "PhoneNumber", - "rawId": "4:+16041234567", - "phoneNumber": { - "value": "+16041234567" - } - }, + "kind": "communicationUser", + "rawId": "8:acs:98e4cbef-70e7-4733-8594-063c4a72d711_00000014-2033-438d-1000-343a0d006e10", + "communicationUser": { + "id": "8:acs:98e4cbef-70e7-4733-8594-063c4a72d711_00000014-2033-438d-1000-343a0d006e10" + } + }, "callerDisplayName": "", "incomingCallContext": "eyJhbGciOiJub25lIiwidHliSldUIn0.eyJjYyI6Ikg0c0lBQi9iT0JiOUs0SVhtQS9UMGhJbFVaUUlHQVBIc1J1M1RlbzgyNW4xcmtHJNa2hCNVVTQkNUbjFKTVo1NCt3ZDk1WFY0ZnNENUg0VDV2dk5VQ001NWxpRkpJb0pDUWlXS0F3OTJRSEVwUWo4aFFleDl4ZmxjRi9lMTlaODNEUmN6QUpvMVRWVXoxK1dWYm1lNW5zNmF5cFRyVGJ1KzMxU3FMY3E1SFhHWHZpc3FWd2kwcUJWSEhta0xjVFJEQ0hlSjNhdzA5MHE2T0pOaFNqS0pFdXpCcVdidzRoSmJGMGtxUkNaOFA4T3VUMTF0MzVHN0kvS0w3aVQyc09aS2F0NHQ2cFV5d0UwSUlEYm4wQStjcGtiVjlUK0E4SUhLZ2JKUjc1Vm8vZ0hFZGtRT3RCYXl1akc4cUt2U1dITFFCR3JFYjJNY3RuRVF0TEZQV1JEUzJHMDk3TGU5VnhhTktob2JIV0wzOHdab3dWcGVWZmsrL2QxYVZnQ2U1bVVLQTh1T056YmpvdXdnQjNzZTlnTEhjNFlYem5BVU9nRGY5dUFQMndsMXA0WU5nK1cySVRxSEtZUzJDV25IcEUySkhVZzd2UnVHOTBsZ081cU81MngvekR0OElYWHBFSi9peUxtNkdibmR1eEdZREozRXNWWXh4ZzZPd1hqc0pCUjZvR1U3NDIrYTR4M1RpQXFaV245UVIrMHNaVDg3YXpRQzbDNUR3BuZFhST1FTMVRTRzVVTkRGeU5UVjNORTFHU2kxck1UTk9VMUF0TWtWNVNreFRUVVI0YlMxRk1VdEVabnBRTjFsQ1EwWkVlVTQxZURCc1IyaHljVTVYTFROeWVTMVJNVjgyVFhrdGRFNUJZV3hrZW5SSVUwMTFVVE5GWkRKUkluMTlmUS5hMTZ0eXdzTDhuVHNPY1RWa2JnV3FPbTRncktHZmVMaC1KNjZUZXoza0JWQVJmYWYwOTRDWDFJSE5tUXRJeDN1TWk2aXZ3QXFFQWV1UlNGTjhlS3gzWV8yZXppZUN5WDlaSHp6Q1ZKemdZUVprc0RjYnprMGJoR09laWkydkpEMnlBMFdyUW1SeGFxOGZUM25EOUQ1Z1ZSUVczMGRheGQ5V001X1ZuNFNENmxtLVR5TUSVEifQ.", "correlationId": "d732db64-4803-462d-be9c-518943ea2b7a" |
event-grid | Subscribe To Graph Api Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md | The common steps to subscribe to events published by any partner, including Grap ### Enable Microsoft Graph API events to flow to your partner topic > [!IMPORTANT]-> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and[.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask-graph-and-grid@microsoft.com?subject=Please allow my application ID">mailto:ask.graph.and.grid@service.microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to allow list to use this new capability. +> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and[.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask-graph-and-grid@microsoft.com?subject=Please allow my application ID">mailto:ask-graph-and-grid@service.microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to allow list to use this new capability. You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the http request should look like the following sample: |
event-hubs | Event Hubs Quickstart Kafka Enabled Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md | To complete this quickstart, make sure you have the following prerequisites: * Read through the [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md) article. * An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.-* [Java Development Kit (JDK) 1.7+](/azure/developer/java/fundamentals/java-support-on-azure). -* [Download](https://maven.apache.org/download.cgi) and [install](https://maven.apache.org/install.html) a Maven binary archive. -* [Git](https://www.git-scm.com/) -* To run this quickstart using managed identity, you need to run it on an Azure virtual machine. +* Create a Windows virtual machine and install the following components: + * [Java Development Kit (JDK) 1.7+](/azure/developer/java/fundamentals/java-support-on-azure). + * [Download](https://maven.apache.org/download.cgi) and [install](https://maven.apache.org/install.html) a Maven binary archive. + * [Git](https://www.git-scm.com/) ## Create an Event Hubs namespace When you create an Event Hubs namespace, the Kafka endpoint for the namespace is ## Send and receive messages with Kafka in Event Hubs ### [Passwordless (Recommended)](#tab/passwordless)--1. Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code. -- Azure Event Hubs supports using Azure Active Directory (Azure AD) to authorize requests to Event Hubs resources. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, or an application service principal. -- To use Managed Identity, you can create or configure a virtual machine using a system-assigned managed identity. For more information about configuring managed identity on a VM, see [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). --1. In the virtual machine that you configure managed identity, clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka). --1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*. --1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows: -- After you configure the virtual machine with managed identity, you need to add managed identity to Event Hubs namespace. For that you need to follow these steps. -- * In the Azure portal, navigate to your Event Hubs namespace. Go to **Access Control (IAM)** in the left navigation. -- * Select **Add** and select `Add role assignment`. -- * In the **Role** tab, select **Azure Event Hubs Data Owner**, then select **Next**=. -- * In the **Members** tab, select the **Managed Identity** radio button for the type to assign access to. -- * Select the **Select members** link. In the **Managed Identity** dropdown, select **Virtual Machine**, then select your virtual machine's managed identity. -- * Select **Review + Assign**. --1. 
After you configure managed identity, you can update *src/main/resources/producer.config* as shown below. -- ```xml - bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093 - security.protocol=SASL_SSL - sasl.mechanism=OAUTHBEARER - sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required; - sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler; - ``` -- You can find the source code for the sample handler class CustomAuthenticateCallbackHandler on GitHub [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/appsecret/producer/src/main/java). --1. Run the producer code and stream events into Event Hubs: -- ```shell - mvn clean package - mvn exec:java -Dexec.mainClass="TestProducer" - ``` --1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*. --1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows: --1. Make sure you configure managed identity as mentioned in step 3 and use the following consumer configuration. -- ```xml - bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093 - security.protocol=SASL_SSL - sasl.mechanism=OAUTHBEARER - sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required; - sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler; - ``` -- You can find the source code for the sample handler class CustomAuthenticateCallbackHandler on GitHub [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/appsecret/consumer/src/main/java). -- You can find all the OAuth samples for Event Hubs for Kafka [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth). --1. Run the consumer code and process events from event hub using your Kafka clients: -- ```java - mvn clean package - mvn exec:java -Dexec.mainClass="TestConsumer" - ``` -- If your Event Hubs Kafka cluster has events, you now start receiving them from the consumer. +1. Enable a system-assigned managed identity for the virtual machine. For more information about configuring managed identity on a VM, see [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code. ++ :::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/enable-identity-vm.png" alt-text="Screenshot of the Identity tab of a virtual machine page in the Azure portal."::: +1. Using the **Access control** page of the Event Hubs namespace you created, assign **Azure Event Hubs Data Owner** role to the VM's managed identity. +Azure Event Hubs supports using Azure Active Directory (Azure AD) to authorize requests to Event Hubs resources. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, or an application service principal. + 1. In the Azure portal, navigate to your Event Hubs namespace. Go to "Access Control (IAM)" in the left navigation. + 2. Select + Add and select `Add role assignment`. 
+ + :::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/add-role-assignment-menu.png" alt-text="Screenshot of the Access Control page of an Event Hubs namespace."::: + 1. In the Role tab, select **Azure Event Hubs Data Owner**, and select the **Next** button. + + :::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/select-event-hubs-owner-role.png" alt-text="Screenshot showing the selection of the Azure Event Hubs Data Owner role."::: + 1. In the **Members** tab, select **Managed Identity** in the **Assign access to** section. + 1. Select the **+Select members** link. + 1. On the **Select managed identities** page, follow these steps: + 1. Select the **Azure subscription** that has the VM. + 1. For **Managed identity**, select **Virtual machine**. + 1. Select your virtual machine's managed identity. + 1. Click **Select** at the bottom of the page. + + :::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/add-vm-identity.png" alt-text="Screenshot showing the Add role assignment -> Select managed identities page."::: + 1. Select **Review + Assign**. ++ :::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/review-assign.png" alt-text="Screenshot showing the Add role assignment page with role assigned to VM's managed identity."::: +1. Restart the VM and log back in to the VM for which you configured the managed identity. +1. Clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka). +1. Navigate to `azure-event-hubs-for-kafka/tutorials/oauth/java/managedidentity/consumer`. +1. Switch to the `src/main/resources/` folder, and open `consumer.config`. Replace `namespacename` with the name of your Event Hubs namespace. ++ ```xml + bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093 + security.protocol=SASL_SSL + sasl.mechanism=OAUTHBEARER + sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required; + sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler; + ``` ++ > [!NOTE] + > You can find all the OAuth samples for Event Hubs for Kafka [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth). +7. Switch back to the **Consumer** folder where the pom.xml file is, and run the consumer code to process events from the event hub using your Kafka clients: ++ ```java + mvn clean package + mvn exec:java -Dexec.mainClass="TestConsumer" + ``` +1. Launch another command prompt window, and navigate to `azure-event-hubs-for-kafka/tutorials/oauth/java/managedidentity/producer`. +1. Switch to the `src/main/resources/` folder, and open `producer.config`. Replace `mynamespace` with the name of your Event Hubs namespace. +4. Switch back to the **Producer** folder where the `pom.xml` file is, and run the producer code to stream events into Event Hubs: + + ```shell + mvn clean package + mvn exec:java -Dexec.mainClass="TestProducer" + ``` ++ You should see messages about events sent in the producer window. Now, check the consumer app window to see the messages that it receives from the event hub. ++ :::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/producer-consumer-output.png" alt-text="Screenshot showing the Producer and Consumer app windows showing the events."::: ### [Connection string](#tab/connection-string) |
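The role assignment walked through in the portal steps above can also be scripted. A minimal sketch with the Azure CLI, assuming placeholder resource group, VM, and namespace names:

```bash
# Object ID of the VM's system-assigned managed identity.
PRINCIPAL_ID=$(az vm identity show \
  --resource-group <resource-group> --name <vm-name> \
  --query principalId --output tsv)

# Resource ID of the Event Hubs namespace (the scope of the assignment).
NAMESPACE_ID=$(az eventhubs namespace show \
  --resource-group <resource-group> --name <namespace-name> \
  --query id --output tsv)

# Grant the identity the Azure Event Hubs Data Owner role on the namespace.
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Azure Event Hubs Data Owner" \
  --scope "$NAMESPACE_ID"
```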
event-hubs | Explore Captured Avro Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/explore-captured-avro-files.md | Title: Exploring captured Avro files in Azure Event Hubs description: This article provides the schema of Avro files captured by Azure Event Hubs and a list of tools to explore them. Previously updated : 07/06/2022 Last updated : 09/26/2022 # Exploring captured Avro files in Azure Event Hubs The Avro files produced by Event Hubs Capture have the following Avro schema: :::image type="content" source="./media/event-hubs-capture-overview/event-hubs-capture3.png" alt-text="Image showing the schema of Avro files captured by Azure Event Hubs."::: ## Azure Storage Explorer-You can view captured files in any tool such as [Azure Storage Explorer][Azure Storage Explorer]. You can download files locally to work on them. +You can verify that captured files were created in the Azure Storage account using tools such as [Azure Storage Explorer][Azure Storage Explorer]. You can download files locally to work on them. An easy way to explore Avro files is by using the [Avro Tools][Avro Tools] jar from Apache. You can also use [Apache Drill][Apache Drill] for a lightweight SQL-driven experience or [Apache Spark][Apache Spark] to perform complex distributed processing on the ingested data. |
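As a quick illustration of the Avro Tools approach mentioned in the row above, a downloaded capture file can be dumped to JSON from the command line. The jar version and file names below are assumptions for the example, not values from the article:

```shell
# Sketch only: assumes avro-tools-1.11.1.jar and a downloaded capture file in the working directory.
java -jar avro-tools-1.11.1.jar tojson --pretty ./capture.avro > capture.json
```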
external-attack-surface-management | Discovering Your Attack Surface | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/discovering-your-attack-surface.md | Custom discoveries are organized into Discovery Groups. They are independent see  - Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, certificate common names, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but do not belong to your organization. + Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but do not belong to your organization. Once your seeds have been selected, select **Review + Create**. You will then be taken back to the main Discovery page that displays your Discov ## Next steps - [Understanding asset details](understanding-asset-details.md)-- [Understanding dashboards](understanding-dashboards.md)+- [Understanding dashboards](understanding-dashboards.md) |
external-attack-surface-management | What Is Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/what-is-discovery.md | To create a comprehensive mapping of your organization's attack surface, the s - Pages - Host Name - Domain-- SSL Cert - Contact Email Address - IP Block - IP Address Asset details are continuously refreshed and updated over time to maintain an ac ## Next steps - [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md) - [Using and managing discovery](using-and-managing-discovery.md)-- [Understanding asset details](understanding-asset-details.md)+- [Understanding asset details](understanding-asset-details.md) |
firewall | Firewall Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md | Policy Analytics provides insights, centralized visibility, and control to Azure For large, geographically dispersed organizations, manually managing Firewall rules and policies is a complex and sometimes error-prone process. The new Policy Analytics feature is the answer to this common challenge faced by IT teams. -You can now refine and update Firewall rules and policies with confidence in just a few steps in the Azure portal. You have granular control to define your own custom rules for an enhanced security and compliance posture. You can automate rule and policy management to reduce the risks associated with a manual process. +You can now refine and update Firewall rules and policies with confidence in just a few steps in the Azure portal. You have granular control to define your own custom rules for an enhanced security and compliance posture. You can automate rule and policy management to reduce the risks associated with a manual process.<br><br> ++> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE57NCC] #### Pricing |
governance | Definition Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md | see [Tag support for Azure resources](../../../azure-resource-manager/management The following Resource Provider modes are fully supported: - `Microsoft.Kubernetes.Data` for managing your Kubernetes clusters on or off Azure. Definitions- using this Resource Provider mode use effects _audit_, _deny_, and _disabled_. This mode supports - custom definitions as a _public preview_. See - [Create policy definition from constraint template](../how-to/extension-for-vscode.md#create-policy-definition-from-constraint-template) to create a - custom definition from an existing [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) - GateKeeper v3 - [constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates). Use + using this Resource Provider mode use effects _audit_, _deny_, and _disabled_. Use of the [EnforceOPAConstraint](./effects.md#enforceopaconstraint) effect is _deprecated_. - `Microsoft.KeyVault.Data` for managing vaults and certificates in [Azure Key Vault](../../../key-vault/general/overview.md). For more information on these policy The following Resource Provider modes are fully supported: The following Resource Provider modes are currently supported as a **[preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)**: - `Microsoft.Network.Data` for managing [Azure Virtual Network Manager](../../../virtual-network-manager/overview.md) custom membership policies using Azure Policy.-- `Microsoft.Kubernetes.Data` for Azure Policy components that target [Azure Kubernetes Service (AKS)](../../../aks/intro-kubernetes.md) resources such as pods, namespaces, and ingresses.+- `Microsoft.Kubernetes.Data` for Azure Policy components that target [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md) resources such as pods, containers, and ingresses. > [!NOTE] >Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component-level. For more information and examples, see - Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md). |
healthcare-apis | Store Profiles In Fhir | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md | Profiles are also specified by various Implementation Guides (IGs). Some common ### Storing profiles -To store profiles in Azure API for FHIR, you can `POST` the `StructureDefinition` with the profile content in the body of the request. +To store profiles in Azure API for FHIR, you can `PUT` the `StructureDefinition` with the profile content in the body of the request. An update or a conditional update are both good methods to store profiles on the FHIR service. Use the conditional update if you are unsure which to use. +Standard `PUT`: `PUT http://<your Azure API for FHIR base URL>/StructureDefinition/profile-id` -`POST http://<your Azure API for FHIR base URL>/StructureDefinition` +**or** ++Conditional update: `PUT http://<your Azure API for FHIR base URL>/StructureDefinition?url=http://sample-profile-url` ``` { "resourceType" : "StructureDefinition", "id" : "profile-id",+"url": "http://sample-profile-url" … } ``` To store profiles in Azure API for FHIR, you can `POST` the `StructureDefinition For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We've included a snippet of this profile for the example. ```rest-POST https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance +PUT https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance ``` ```json The `Capability Statement` lists all possible behaviors of Azure API for FHIR. A - `CapabilityStatement.rest.resource.profile` - `CapabilityStatement.rest.resource.supportedProfile` -For example, if you `POST` a US Core Patient profile, which starts like this: +For example, if you save a US Core Patient profile, which starts like this: ```json { |
healthcare-apis | Store Profiles In Fhir | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/store-profiles-in-fhir.md | Profiles are also specified by various Implementation Guides (IGs). Some common ### Storing profiles -To store profiles to the FHIR server, you can `POST` the `StructureDefinition` with the profile content in the body of the request. +To store profiles in Azure API for FHIR, you can `PUT` the `StructureDefinition` with the profile content in the body of the request. A standard `PUT` or a conditional update are both good methods to store profiles on the FHIR service. Use the conditional update if you are unsure which to use. +Standard `PUT`: `PUT http://<your Azure API for FHIR base URL>/StructureDefinition/profile-id` -`POST http://<your FHIR service base URL>/StructureDefinition` +**or** ++Conditional update: `PUT http://<your Azure API for FHIR base URL>/StructureDefinition?url=http://sample-profile-url` ``` { "resourceType" : "StructureDefinition", "id" : "profile-id",+"url": "http://sample-profile-url" … } ``` To store profiles to the FHIR server, you can `POST` the `StructureDefinition` w For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We've included a snippet of this profile for the example. ```rest-POST https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance +PUT https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance ``` ```json |
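For both FHIR rows above, the conditional update can be issued with any HTTP client. The following curl sketch assumes a bearer token in `$TOKEN` and the profile saved locally as `us-core-allergyintolerance.json`; the service URL is the example one shown in the article, and the content type is an assumption that may need to be adjusted:

```shell
# Sketch only: conditional update of a profile by its canonical URL.
# $TOKEN and the local file name are assumptions for the example.
curl -X PUT \
  "https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/fhir+json" \
  -d @us-core-allergyintolerance.json
```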
iot-central | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md | In IoT Central, you can configure and manage security in the following areas: - Device access to your application. - Programmatic access to your application. - Authentication to other services from your application.-- Audit logs track activity in your application. To learn more, see the [IoT Central security guide](overview-iot-central-security.md). |
iot-central | Concepts Iiot Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iiot-architecture.md | Secure your IIoT solution by using the following IoT Central features: - Ensure safe, secure data exports with Azure Active Directory managed identities. -- Use audit logs to track activity in your IoT Central application.- ## Patterns :::image type="content" source="media/concepts-iiot-architecture/automation-pyramid.svg" alt-text="Diagram that shows the five levels of the automation pyramid." border="false"::: |
iot-central | Howto Authorize Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md | Title: Authorize REST API in Azure IoT Central description: How to authenticate and authorize IoT Central REST API calls Previously updated : 07/25/2022 Last updated : 06/22/2022 To get a bearer token for a service principal, see [Service principal authentica To get an API token, you can use the IoT Central UI or a REST API call. Administrators associated with the root organization and users assigned to the correct role can create API tokens. -> [!TIP] -> Create and delete operations on API tokens are recorded in the [audit log](howto-use-audit-logs.md). - In the IoT Central UI: 1. Navigate to **Permissions > API tokens**. |
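For the bearer-token option referenced in the row above, a minimal sketch of acquiring a token and calling the REST API might look like the following. The application subdomain and the API version are assumptions for the example:

```shell
# Sketch only: "myapp" and the api-version value are placeholders/assumptions.
token=$(az account get-access-token --resource https://apps.azureiotcentral.com --query accessToken -o tsv)

curl -H "Authorization: Bearer $token" \
  "https://myapp.azureiotcentral.com/api/apiTokens?api-version=2022-07-31"
```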
iot-central | Howto Manage Iot Central From Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md | You can configure role assignments in the Azure portal or use the Azure CLI: You can use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports. -> [!NOTE] -> IoT Central applications have an internal [audit log](howto-use-audit-logs.md) to track activity within the application. - Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/essentials/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI. Access to metrics in the Azure portal is managed by [Azure role based access control](../../role-based-access-control/overview.md). Use the Azure portal to add users to the IoT Central application/resource group/subscription to grant them access. You must add a user in the portal even they're already added to the IoT Central application. Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) for finer grained access control. |
iot-central | Howto Manage Users Roles With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md | The IoT Central REST API lets you develop client applications that integrate wit Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md). -> [!NOTE] -> Operations on users and roles are recorded in the IoT Central [audit log](howto-use-audit-logs.md). - For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/). [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] |
iot-central | Howto Manage Users Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md | Title: Manage users and roles in Azure IoT Central application | Microsoft Docs description: As an administrator, how to manage users and roles in your Azure IoT Central application Previously updated : 08/01/2022 Last updated : 06/22/2022 Every user must have a user account before they can sign in and access an applic 1. To add a user to an IoT Central application, go to the **Users** page in the **Permissions** section. - :::image type="content" source="media/howto-manage-users-roles/manage-users-pnp.png" alt-text="Screenshot of manage users page in IoT Central."::: + :::image type="content" source="media/howto-manage-users-roles/manage-users.png" alt-text="Screenshot of manage users page in IoT Central." lightbox="media/howto-manage-users-roles/manage-users.png"::: 1. To add a user on the **Users** page, choose **+ Assign user**. To add a service principal on the **Users** page, choose **+ Assign service principal**. To add an Azure Active Directory group on the **Users** page, choose **+ Assign group**. Start typing the name of the Active Directory group or service principal to auto-populate the form. Every user must have a user account before they can sign in and access an applic 1. Choose a role for the user from the **Role** drop-down menu. Learn more about roles in the [Manage roles](#manage-roles) section of this article. - :::image type="content" source="media/howto-manage-users-roles/add-user-pnp.png" alt-text="Screenshot to add a user and select a role."::: + :::image type="content" source="media/howto-manage-users-roles/add-user.png" alt-text="Screenshot to add a user and select a role." lightbox="media/howto-manage-users-roles/add-user.png"::: The available roles depend on the organization the user is associated with. You can assign **App** roles to users associated with the root organization, and **Org** roles to users associated with any other organization in the hierarchy. To delete users, select one or more check boxes on the **Users** page. Then sele Roles enable you to control who within your organization is allowed to do various tasks in IoT Central. There are three built-in roles you can assign to users of your application. You can also [create custom roles](#create-a-custom-role) if you require finer-grained control. ### App Administrator If your solution requires finer-grained access controls, you can create roles wi - Select **+ New**, add a name and description for your role, and select **Application** or **Organization** as the role type. This option lets you create a role definition from scratch. - Navigate to an existing role and select **Copy**. This option lets you start with an existing role definition that you can customize. > [!WARNING] > You can't change the role type after you create a role. When you define a custom role, you choose the set of permissions that a user is | Manage | None | | Full Control | Manage | -**Audit log permissions** --| Name | Dependencies | -| - | -- | -| View | None | -| Full Control | View | --> [!CAUTION] -> Any user granted permission to view the audit log can see all log entries even if they don't have permission to view or modify the entities listed in the log. Therefore, any user who can view the log can view the identity of and changes made to any modified entity. - #### Managing users and roles **Custom roles permissions** |
iot-central | Howto Use Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-audit-logs.md | - Title: Use Azure IoT Central audit logs | Microsoft Docs -description: Learn how to use audit logs in IoT Central to track changes made in an IoT Central application -- Previously updated : 07/25/2022-----# Administrator ---# Use audit logs to track activity in your IoT Central application --This article describes how to use audit logs to track who made what changes at what time in your IoT Central applications. You can: --- Sort the audit log.-- Filter the audit log.-- Customize the audit log.-- Manage access to the audit log.--The audit log records information about who made a change, information about the modified entity, the action that made change, and when the change was made. The log tracks changes made through the UI, programatically with the REST API, and through the CLI. --The log records changes to the following IoT Central entities: --- [Users](howto-manage-users-roles.md#add-users)-- [Roles](howto-manage-users-roles.md#manage-roles)-- [API tokens](howto-authorize-rest-api.md#token-types)-- [Application template export](howto-create-iot-central-application.md#create-and-use-a-custom-application-template)-- [File upload configuration](howto-configure-file-uploads.md#configure-device-file-uploads)-- [Application customization](howto-customize-ui.md)-- [Device enrollment groups](concepts-device-authentication.md)-- [Device templates](howto-set-up-template.md)-- [Device lifecycle events](howto-export-to-blob-storage.md#device-lifecycle-changes-format)--The log records changes made by the following types of user: --- IoT Central user - the log shows the user's email.-- API token - the log shows the token name.-- Azure Active Directory user - the log shows the user email or ID.-- Service principal - the log shows the service principal name.--The log stores data for 30 days, after which it's no longer available. --The following screenshot shows the audit log view with the location of the sorting and filtering controls highlighted: ---## Customize the log --Select **Column options** to customize the audit log view. You can add and remove columns, reorder the columns, and change the column widths: ---## Sort the log --You can sort the log into ascending or descending timestamp order. To sort, select **Timestamp**: ---## Filter the log --To focus on a specific time, filter the log by time range. Select **Edit time range** and specify the range you're interested in: ---To focus on specific entries, filter by entity type or action. Select **Filter** and use the multi-select drop-downs to specify your filter conditions: ---## Manage access --The built-in **App Administrator** role has access to the audit logs by default. The administrator can grant access to other roles. An administrator can assign either **Full control** or **View** audit log permissions to other roles. To learn more, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md). --> [!IMPORTANT] -> Any user granted permission to view the audit log can see all log entries even if they don't have permission to view or modify the entities listed in the log. Therefore, any user who can view the log can view the identity of and changes made to any modified entity. 
--## Next steps --Now that you've learned how to manage users and roles in your IoT Central application, the suggested next step is to learn how to [Manage IoT Central organizations](howto-create-organizations.md). |
iot-central | Overview Iot Central Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md | In IoT Central, you can configure and manage security in the following areas: - Device access to your application. - Programmatic access to your application. - Authentication to other services from your application.-- Use audit logs to track activity in your IoT Central application. To learn more, see the [IoT Central security guide](overview-iot-central-security.md). An administrator can: To learn more, see [Create and use a custom application template](howto-create-iot-central-application.md#create-and-use-a-custom-application-template). -## Integrate with Azure Pipelines +## Integrate with DevOps pipelines -Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. You can use Azure Pipelines to automate the build, test, and deployment of IoT Central application configurations. +Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. You can use Azure DevOps pipelines to automate the build, test, and deployment of IoT Central application configurations. Just as IoT Central is a part of your larger IoT solution, make IoT Central a part of your CI/CD pipeline. -To learn more, see [Integrate IoT Central into your Azure CI/CD pipeline](howto-integrate-with-devops.md). +To learn more, see [Integrate IoT Central into your Azure DevOps CI/CD pipeline](howto-integrate-with-devops.md). ## Monitor application health |
iot-central | Overview Iot Central Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-security.md | Title: Azure IoT Central application security guide description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to secure your IoT Central application. IoT Central security includes users, devices, API access, and authentication to other services for data export. Previously updated : 07/25/2022 Last updated : 04/12/2022 In IoT Central, you can configure and manage security in the following areas: - Device access to your application. - Programmatic access to your application. - Authentication to other services from your application.-- Use a secure virtual network.-- Audit logs track activity in the application. ## Manage user access To learn more, see: Data export in IoT Central lets you continuously stream device data to destinations such as Azure Blob Storage, Azure Event Hubs, Azure Service Bus Messaging. You may choose to lock down these destinations by using an Azure Virtual Network (VNet) and private endpoints. To enable IoT Central to connect to a destination on a secure VNet, configure a firewall exception. To learn more, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md). -## Audit logs --Audit logs let administrators track activity within your IoT Central application. Administrators can see who made what changes at what times. To learn more, see [Use audit logs to track activity in your IoT Central application](howto-use-audit-logs.md). - ## Next steps Now that you've learned about security in your Azure IoT Central application, the suggested next step is to learn about [Manage users and roles](howto-manage-users-roles.md) in Azure IoT Central. |
iot-central | Overview Iot Central Tour | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md | You launch your IoT Central application by navigating to the URL you chose durin Once you're inside your IoT application, use the left pane to access various features. You can expand or collapse the left pane by selecting the three-lined icon on top of the pane: > [!NOTE]-> The items you see in the left pane depend on your user role. Learn more about [managing users and roles](howto-manage-users-roles.md). --<!-- TODO: Needs a new screenshot and entry. --> +> The items you see in the left pane depend on your user role. Learn more about [managing users and roles](howto-manage-users-roles.md). :::row::: :::column span="":::-- :::image type="content" source="media/overview-iot-central-tour/navigation-bar.png" alt-text="left pane"::: + :::image type="content" source="media/overview-iot-central-tour/navigation-bar.png" alt-text="left pane"::: :::column-end::: :::column span="2":::+ + **Devices** lets you manage all your devices. - **Devices** lets you manage all your devices. -- **Device groups** lets you view and create collections of devices specified by a query. Device groups are used through the application to perform bulk operations. + **Device groups** lets you view and create collections of devices specified by a query. Device groups are used through the application to perform bulk operations. - **Device templates** lets you create and manage the characteristics of devices that connect to your application. + **Device templates** lets you create and manage the characteristics of devices that connect to your application. - **Data explorer** exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices. + **Data explorer** exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices. - **Dashboards** displays all application and personal dashboards. + **Dashboards** displays all application and personal dashboards. - **Jobs** lets you manage your devices at scale by running bulk operations. + **Jobs** lets you manage your devices at scale by running bulk operations. - **Rules** lets you create and edit rules to monitor your devices. Rules are evaluated based on device data and trigger customizable actions. + **Rules** lets you create and edit rules to monitor your devices. Rules are evaluated based on device data and trigger customizable actions. - **Data export** lets you configure a continuous export to external services such as storage and queues. + **Data export** lets you configure a continuous export to external services such as storage and queues. - **Audit logs** lets you view changes made to entities in your application. + **Permissions** lets you manage an organization's users, devices and data. - **Permissions** lets you manage an organization's users, devices and data. -- **Application** lets you manage your application's settings, billing, users, and roles. + **Application** lets you manage your application's settings, billing, users, and roles. - **Customization** lets you customize your application appearance. -- **IoT Central Home** lets you jump back to the IoT Central app manager. + **Customization** lets you customize your application appearance. - :::column-end::: + **IoT Central Home** lets you jump back to the IoT Central app manager. + + :::column-end::: :::row-end::: ### Search, help, theme, and support |
iot-central | Overview Iot Central | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md | Build IoT solutions such as: ## Administer your application -IoT Central applications are fully hosted by Microsoft, which reduces the administration overhead of managing your applications. Administrators manage access to your application with [user roles and permissions](howto-administer.md) and track activity by using [audit logs](howto-use-audit-logs.md). +IoT Central applications are fully hosted by Microsoft, which reduces the administration overhead of managing your applications. Administrators manage access to your application with [user roles and permissions](howto-administer.md). ## Pricing |
iot-hub-device-update | Device Update Control Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md | A combination of roles can be used to provide the right level of access. For exa Device Update for IoT Hub communicates with the IoT Hub to deploy and manage updates at scale. In order to enable Device Update to do this, users need to set IoT Hub Data Contributor access for the Azure Device Update Service Principal in the IoT Hub permissions. -Below actions will be blocked, after 9/28/22, if these permissions are not set: +The following actions will be blocked with an upcoming release if these permissions are not set: * Create Deployment * Cancel Deployment * Retry Deployment |
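If you prefer to script the permission described above rather than set it in the portal, a hedged Azure CLI sketch follows. The service principal display name used for the lookup and the hub name are assumptions; confirm the Device Update principal in your tenant before assigning the role:

```azurecli-interactive
# Sketch only: the display name "Azure Device Update" and the hub name "my-iot-hub" are assumptions.
spId=$(az ad sp list --display-name "Azure Device Update" --query "[0].id" -o tsv)
hubId=$(az iot hub show --name my-iot-hub --query id -o tsv)

az role assignment create --assignee "$spId" --role "IoT Hub Data Contributor" --scope "$hubId"
```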
iot-hub-device-update | Device Update Ubuntu Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-ubuntu-agent.md | Read the license terms before you use a package. Your installation and use of a ## Import the update -1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and select the **Assets** dropdown list. Download `Edge.package.update.samples.zip` by selecting it. Extract the contents of the folder to discover a sample APT manifest (sample-1.0.1-aziot-edge-apt-manifest.json) and its corresponding import manifest (sample-1.0.1-aziot-edge-importManifest.json). +1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and select the **Assets** dropdown list. Download `Tutorial_IoTEdge_PackageUpdate.zip` by selecting it. Extract the contents of the folder to discover a sample APT manifest (sample-1.0.2-aziot-edge-apt-manifest.json) and its corresponding import manifest (sample-1.0.2-aziot-edge-importManifest.json). 1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, under **Automatic Device Management**, select **Updates**. 1. Select the **Updates** tab. 1. Select **+ Import New Update**. |
iot-hub | Iot Hub Rm Template Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template-powershell.md | -Learn how to use an Azure Resource Manager template to create an IoT Hub and a consumer group. Resource Manager templates are JSON files that define the resources you need to deploy for your solution. For more information about developing Resource Manager templates, see [Azure Resource Manager documentation](../azure-resource-manager/index.yml). +This article shows you how to use an Azure Resource Manager template to create an IoT Hub and a [consumer group](https://learn.microsoft.com/azure/event-hubs/event-hubs-features#consumer-groups), using Azure PowerShell. Resource Manager templates are JSON files that define the resources you need to deploy for your solution. For more information about developing Resource Manager templates, see the [Azure Resource Manager documentation](../azure-resource-manager/index.yml). -## Create an IoT hub +## Prerequisites ++[Azure PowerShell module](/powershell/azure/install-az-ps) or [Azure Cloud Shell](https://learn.microsoft.com/azure/cloud-shell/overview) -The following [Resource Manager JSON template](https://azure.microsoft.com/resources/templates/iothub-with-consumergroup-create/) used in this article is one of many templates from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/). This template creates an Azure Iot hub with three endpoints (eventhub, cloud-to-device, and messaging) and a consumer group. For more information on the Iot Hub template schema, see [Microsoft.Devices (IoT Hub) resource types](/azure/templates/microsoft.devices/iothub-allversions). +Azure Cloud Shell is useful if you don't want to install the PowerShell module locally, as Cloud Shell runs in a browser. -[!code-json[iothub-creation](~/quickstart-templates/quickstarts/microsoft.devices/iothub-with-consumergroup-create/azuredeploy.json)] +## Create an IoT hub -There are several methods for deploying a template. You use Azure PowerShell in this article. +The [Resource Manager JSON template](https://azure.microsoft.com/resources/templates/iothub-with-consumergroup-create/) used in this article is one of many templates from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/). The JSON template creates an Azure IoT hub with three endpoints (eventhub, cloud-to-device, and messaging) and a consumer group. For more information on the IoT Hub template schema, see [Microsoft.Devices (IoT Hub) resource types](https://learn.microsoft.com/azure/templates/microsoft.devices/iothub-allversions). -To run the following PowerShell script, select **Try it** to open the Azure Cloud Shell. Copy the script, paste it into the shell, and answer the prompts to create a new resource, choose a region, and create a new IoT hub. +Use the following PowerShell command to create a resource group, which is then used to create an IoT hub. The JSON template is used in `-TemplateUri`. ++To run the following PowerShell script, select **Try it** to open the Azure Cloud Shell. Copy the script, paste it into your shell, and then press Enter. Answer the prompts. These prompts will help you to create a new resource, choose a region, and create a new IoT hub. Once answered, a confirmation of your IoT hub prints to the console.
```azurepowershell-interactive $resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"-$location = Read-Host -Prompt "Enter the location (i.e. centralus)" +$location = Read-Host -Prompt "Enter the location (for example: centralus)" $iotHubName = Read-Host -Prompt "Enter the IoT Hub name" New-AzResourceGroup -Name $resourceGroupName -Location "$location" New-AzResourceGroupDeployment ` > [!NOTE] > To use your own template, upload your template file to the Cloud Shell, and then use the `-TemplateFile` switch to specify the file name. For example, see [Deploy the template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md?tabs=PowerShell#deploy-the-template). + ## Next steps Since you've deployed an IoT hub, using an Azure Resource Manager template, you may want to explore: |
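For readers who prefer the Azure CLI over the PowerShell deployment described above, an equivalent sketch is shown below. The raw template URI and the parameter name are assumptions based on the quickstart path referenced in the article; adjust the names and location as needed:

```azurecli-interactive
# Sketch only: the template URI and the iotHubName parameter are assumptions for the example.
az group create --name my-rg --location centralus

az deployment group create \
  --resource-group my-rg \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.devices/iothub-with-consumergroup-create/azuredeploy.json" \
  --parameters iotHubName=my-iot-hub
```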
load-balancer | Load Balancer Basic Upgrade Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md | + + Title: Upgrading from basic Load Balancer - Guidance +description: Upgrade guidance for migrating basic Load Balancer to standard Load Balancer ++++ Last updated : 09/19/2022+#customer-intent: As an cloud engineer with basic Load Balancer services, I need guidance and direction on migrating my workloads off basic to standard SKUs +++# Upgrading from basic Load Balancer - Guidance ++In this article, we'll discuss guidance for upgrading your Basic Load Balancer instances to Standard Load Balancer. Standard Load Balancer is recommended for all production instances and provides many [key differences](#basic-load-balancer-sku-vs-standard-load-balancer-sku) to your infrastructure. +## Steps to complete the upgrade ++We recommend the following approach for upgrading to Standard Load Balancer: ++1. Learn about some of the [key differences](#basic-load-balancer-sku-vs-standard-load-balancer-sku) between Basic Load Balancer and Standard Load Balancer. +1. Identify the Basic Load Balancer to upgrade. +1. Create a migration plan for planned downtime. +1. Perform migration with [automated PowerShell scripts](#upgrade-using-automated-scripts) for your scenario or create a new Standard Load Balancer with the Basic Load Balancer configurations. +1. Verify your application and workloads are receiving traffic through the Standard Load Balancer. Then delete your Basic Load Balancer resource. ++## Basic Load Balancer SKU vs. standard Load Balancer SKU ++This section lists out some key differences between these two Load Balancer SKUs. ++| Feature | Standard Load Balancer SKU | Basic Load Balancer SKU | +| - | - | - | +| **Backend type** | IP based, NIC based | NIC based | +| **Protocol** | TCP, UDP | TCP, UDP | +| **[Frontend IP configurations](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 600 configurations | Supports up to 200 configurations | +| **[Backend pool size](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 1000 instances | Supports up to 300 instances | +| **Backend pool endpoints** | Any virtual machines or virtual machine scale sets in a single virtual network | Virtual machines in a single availability set or virtual machine scale set | +| **[Health probe types](load-balancer-custom-probe-overview.md#probe-types)** | TCP, HTTP, HTTPS | TCP, HTTP | +| **[Health probe down behavior](load-balancer-custom-probe-overview.md#probe-down-behavior)** | TCP connections stay alive on an instance probe down and on all probes down | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down | +| **Availability zones** | Zone-redundant and zonal frontends for inbound and outbound traffic | Not available | +| **Diagnostics** | [Azure Monitor multi-dimensional metrics](load-balancer-standard-diagnostics.md) | Not supported | +| **HA Ports** | [Available for Internal Load Balancer](load-balancer-ha-ports-overview.md) | Not available | +| **Secure by default** | Closed to inbound flows unless allowed by a network security group. Internal traffic from the virtual network to the internal load balancer is allowed. | Open by default. Network security group optional. 
| +| **Outbound Rules** | [Declarative outbound NAT configuration](load-balancer-outbound-connections.md#outboundrules) | Not available | +| **TCP Reset on Idle** | Available on any rule | Not available | +| **[Multiple front ends](load-balancer-multivip-overview.md)** | Inbound and [outbound](load-balancer-outbound-connections.md) | Inbound only | +| **Management Operations** | Most operations < 30 seconds | Most operations 60-90+ seconds | +| **SLA** | [99.99%](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) | Not available | +| **Global VNet Peering Support** | Standard ILB is supported via Global VNet Peering | Not supported | +| **[NAT Gateway Support](../virtual-network/nat-gateway/nat-overview.md)** | Both Standard ILB and Standard Public Load Balancer are supported via Nat Gateway | Not supported | +| **[Private Link Support](../private-link/private-link-overview.md)** | Standard ILB is supported via Private Link | Not supported | +| **[Global tier (Preview)](cross-region-overview.md)** | Standard Load Balancer supports the Global tier for Public LBs enabling cross-region load balancing | Not supported | ++For information on limits, see [Load Balancer limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer). ++## Upgrade using automated scripts ++Use these PowerShell scripts to help with upgrading from Basic to Standard SKU: ++- [Upgrading a basic to standard public load balancer](upgrade-basic-standard.md) +- [Upgrade from Basic Internal to Standard Internal](upgrade-basicInternal-standard.md) +- [Upgrade an internal basic load balancer - Outbound connections required](upgrade-internalbasic-to-publicstandard.md) ++## Next Steps ++For guidance on upgrading basic Public IP addresses to Standard SKUs, see: ++> [!div class="nextstepaction"] +> [Upgrading a Basic Public IP to Standard Public IP - Guidance](../virtual-network/ip-services/public-ip-basic-upgrade-guidance.md) |
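Before planning the upgrade steps described above, it can help to inventory which load balancers in a subscription still use the Basic SKU. A minimal Azure CLI sketch:

```azurecli-interactive
# Sketch only: lists Basic SKU load balancers in the current subscription.
az network lb list --query "[?sku.name=='Basic'].{name:name, resourceGroup:resourceGroup}" -o table
```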
load-balancer | Load Balancer Migrate Nic To Ip Based Backend Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-migrate-nic-to-ip-based-backend-pools.md | + + Title: Migrating from NIC to IP-based backend pools ++description: This article covers migrating a load balancer from NIC-based backend pools to IP-based backend pools for virtual machines and virtual machine scale sets. +++++ Last updated : 09/22/2022++++# Migrating from NIC to IP-based backend pools ++In this article, you'll learn how to migrate a load balancer with NIC-based backend pools to use IP-based backend pools with virtual machines and virtual machine scale sets. ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- An existing standard Load Balancer in the subscription, with NIC-based backend pools. ++## What is an IP-based Load Balancer? ++IP-based load balancers reference the private IP address of the resource in the backend pool rather than the resource's NIC. IP-based load balancers enable the pre-allocation of private IP addresses in a backend pool, without having to create the backend resources themselves in advance. ++## Migrating NIC-based virtual machine backend pools to IP-based ++To migrate a load balancer with NIC-based backend pools to IP-based with VMs (not virtual machine scale set instances) in the backend pool, you can use the following migration REST API. ++```http ++POST URL: https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Network/loadBalancers/{lbName}/migrateToIpBased?api-version=2022-01-01 ++``` +### URI Parameters ++| Name | In | Required | Type | Description | +|- | - | - | - | - | +|Sub | Path | True | String | The subscription credentials which uniquely identify the Microsoft Azure subscription. The subscription ID forms part of the URI for every service call. | +| Rg | Path | True | String | The name of the resource group. | +| LbName | Path | True | String | The name of the load balancer. | +| api-version | Query | True | String | Client API Version | ++### Request Body ++| Name | Type | Description | +| - | - | - | +| Backend Pools | String | A list of backend pools to migrate. Note that if no request body is specified, all backend pools will be migrated. | ++A full example using the CLI to migrate all backend pools in a load balancer is shown here: ++```azurecli-interactive ++az rest -m post -u "https://management.azure.com/subscriptions/MySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLB/migrateToIpBased?api-version=2022-01-01" ++``` +++A full example using the CLI to migrate a set of specific backend pools in a load balancer is shown below. To migrate a specific group of backend pools from NIC-based to IP-based, you can pass in a list of the backend pool names in the request body: ++```azurecli-interactive ++az rest -m post -u "https://management.azure.com/subscriptions/MySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLB/migrateToIpBased?api-version=2022-01-01" ++-b {\"Pools\":[\"MyBackendPool\"]} +``` +## Upgrading LB with virtual machine scale sets attached ++To upgrade a NIC-based load balancer to an IP-based load balancer with virtual machine scale sets in the backend pool, follow these steps: +1. Configure the upgrade policy of the virtual machine scale sets to be automatic. If the upgrade policy isn't set to automatic, all virtual machine scale set instances must be upgraded after calling the migration API.
+1. Using the Azure migration REST API, upgrade the NIC-based LB to an IP-based LB. If a manual upgrade policy is in place, upgrade all VMs in the virtual machine scale sets before step 3. +1. Remove the reference of the load balancer from the network profile of the virtual machine scale sets, and update the VM instances to reflect the changes. ++A full example using the CLI is shown here: ++```azurecli-interactive ++az rest -m post -u "https://management.azure.com/subscriptions/MySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLB/migrateToIpBased?api-version=2022-01-01" ++az vmss update --resource-group MyResourceGroup --name MyVMSS --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools ++``` ++## Next Steps |
load-balancer | Monitor Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md | For more information on Load Balancer insights, see [Using Insights to monitor a Load Balancer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). -See [Monitoring Load Balancer data reference](monitor-load-balancer.md) for detailed information on the metrics and logs metrics created by Load Balancer. +See [Monitoring Load Balancer data reference](monitor-load-balancer-reference.md) for detailed information on the metrics and logs metrics created by Load Balancer. Load Balancer provides additional monitoring data through: |
load-balancer | Upgrade Basic Standard Virtual Machine Scale Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md | + + Title: Upgrade from Basic to Standard for Virtual Machine Scale Sets ++description: This article shows you how to upgrade a load balancer from basic to standard SKU for Virtual Machine Scale Sets. ++++ Last updated : 09/22/2022+++# Upgrade a basic load balancer used with Virtual Machine Scale Sets +[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus). ++This article introduces a PowerShell module that creates a Standard Load Balancer with the same configuration as the Basic Load Balancer along with the associated Virtual Machine Scale Set. ++## Upgrade Overview ++An Azure PowerShell module is available to upgrade from Basic load balancer to a Standard load balancer along with moving the associated virtual machine scale set. The PowerShell module performs the following functions: ++- Verifies that the provided Basic load balancer scenario is supported for upgrade. +- Backs up the Basic load balancer and virtual machine scale set configuration, enabling retry on failure or if errors are encountered. +- For public load balancers, updates the front end public IP address(es) to Standard SKU and static assignment as required. +- Upgrades the Basic load balancer configuration to a new Standard load balancer, ensuring configuration and feature parity. +- Upgrades virtual machine scale set backend pool members from the Basic load balancer to the standard load balancer. +- Creates and associates a network security group with the virtual machine scale set to ensure load balanced traffic reaches backend pool members, following Standard load balancer's move to a default-deny network policy. +- Logs the upgrade operation for easy audit and failure recovery. ++### Unsupported Scenarios ++- Basic load balancers with a virtual machine scale set backend pool member that is also a member of a backend pool on a different load balancer +- Basic load balancers with backend pool members that aren't a virtual machine scale set +- Basic load balancers with only empty backend pools +- Basic load balancers with IPV6 frontend IP configurations +- Basic load balancers with a virtual machine scale set backend pool member configured with 'Flexible' orchestration mode +- Basic load balancers with a virtual machine scale set backend pool member where one or more virtual machine scale set instances have ProtectFromScaleSetActions Instance Protection policies enabled +- Migrating a Basic load balancer to an existing Standard load balancer ++### Prerequisites ++- Install the latest version of [PowerShell](/powershell/scripting/install/installing-powershell) +- Determine whether you have the latest Az PowerShell module installed (8.2.0) + - [Install the latest Az PowerShell module](/powershell/azure/install-az-ps) ++## Install the 'AzureBasicLoadBalancerUpgrade' module ++Install the module from [PowerShell gallery](https://www.powershellgallery.com/packages/AzureBasicLoadBalancerUpgrade) ++```powershell +PS C:\> Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -Repository PSGallery -Force +``` ++## Use the module ++1. 
Use `Connect-AzAccount` to connect to the required Azure AD tenant and Azure subscription. ++ ```powershell + PS C:\> Connect-AzAccount -Tenant <TenantId> -Subscription <SubscriptionId> + ``` ++2. Find the Load Balancer you wish to upgrade. Record its name and resource group name. ++3. Examine the module parameters: + - *BasicLoadBalancerName [string] Required* - This parameter is the name of the existing Basic load balancer you would like to upgrade + - *ResourceGroupName [string] Required* - This parameter is the name of the resource group containing the Basic load balancer + - *RecoveryBackupPath [string] Optional* - This parameter allows you to specify an alternative path in which to store the Basic load balancer ARM template backup file (defaults to the current working directory) + - *FailedMigrationRetryFilePathLB [string] Optional* - This parameter allows you to specify a path to a Basic load balancer backup state file when retrying a failed upgrade (defaults to current working directory) + - *FailedMigrationRetryFilePathVMSS [string] Optional* - This parameter allows you to specify a path to a virtual machine scale set backup state file when retrying a failed upgrade (defaults to current working directory) ++4. Run the Upgrade command. ++### Example: upgrade a basic load balancer to a standard load balancer with the same name, providing the basic load balancer name and resource group ++```powershell +PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <load balancer resource group name> -BasicLoadBalancerName <existing basic load balancer name> +``` ++### Example: upgrade a basic load balancer to a standard load balancer with the same name, providing the basic load balancer object through the pipeline ++```powershell +PS C:\> Get-AzLoadBalancer -Name <basic load balancer name> -ResourceGroup <Basic load balancer resource group name> | Start-AzBasicLoadBalancerUpgrade +``` ++### Example: upgrade a basic load balancer to a standard load balancer with the specified name, displaying logged output on screen ++```powershell +PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <load balancer resource group name> -BasicLoadBalancerName <existing basic load balancer name> -StandardLoadBalancerName <new standard load balancer name> -FollowLog +``` ++### Example: upgrade a basic load balancer to a standard load balancer with the specified name and store the basic load balancer backup file at the specified path ++```powershell +PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <load balancer resource group name> -BasicLoadBalancerName <existing basic load balancer name> -StandardLoadBalancerName <new standard load balancer name> -RecoveryBackupPath C:\BasicLBRecovery +``` ++### Example: retry a failed upgrade (due to error or script termination) by providing the Basic load balancer and virtual machine scale set backup state file ++```powershell +PS C:\> Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json -FailedMigrationRetryFilePathVMSS C:\RecoveryBackups\VMSS_myVMSS_rg-basiclbrg_20220912T1740032148.json +``` ++## Common Questions ++### Will the module migrate my frontend IP address to the new Standard load balancer? ++Yes, for both public and internal load balancers, the module ensures that front end IP addresses are maintained. For public IPs, the IP is converted to a static IP prior to migration (if necessary). 
For internal front ends, the module will attempt to reassign the same IP address freed up when the Basic load balancer was deleted; if the private IP isn't available the script will fail. In this scenario, remove the virtual network connected device that has claimed the intended front end IP and rerun the module with the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters specified. ++### How long does the Upgrade take? ++The upgrade normally takes a few minutes for the script to finish. The following factors may lead to longer upgrade times: +- Complexity of your load balancer configuration +- Number of backend pool members +- Instance count of associated Virtual Machine Scale Sets. +Keep the downtime in mind and plan for failover if necessary. ++### Does the script migrate my backend pool members from my basic load balancer to the newly created standard load balancer? ++Yes. The Azure PowerShell script migrates the virtual machine scale set to the newly created public or private standard load balancer. ++### Which load balancer components are migrated? ++The script migrates the following from the Basic load balancer to the Standard load balancer: ++**Public Load Balancer:** ++- Public frontend IP configuration + - Converts the public IP to a static IP, if dynamic + - Updates the public IP SKU to Standard, if Basic + - Upgrade all associated public IPs to the new Standard load balancer +- Health Probes: + - All probes will be migrated to the new Standard load balancer +- Load balancing rules: + - All load balancing rules will be migrated to the new Standard load balancer +- Inbound NAT Rules: + - All NAT rules will be migrated to the new Standard load balancer +- Outbound Rules: + - Basic load balancers don't support configured outbound rules. The script will create an outbound rule in the Standard load balancer to preserve the outbound behavior of the Basic load balancer. For more information about outbound rules, see [Outbound rules](/azure/load-balancer/outbound-rules). +- Network security group + - Basic load balancer doesn't require a network security group to allow outbound connectivity. In case there's no network security group associated with the virtual machine scale set, a new network security group will be created to preserve the same functionality. This new network security group will be associated to the virtual machine scale set backend pool member network interfaces. It will allow the same load balancing rules ports and protocols and preserve the outbound connectivity. +- Backend pools: + - All backend pools will be migrated to the new Standard load balancer + - All virtual machine scale set network interfaces and IP configurations will be migrated to the new Standard load balancer + - If a virtual machine scale set is using Rolling Upgrade policy, the script will update the virtual machine scale set upgrade policy to "Manual" during the migration process and revert it back to "Rolling" after the migration is completed. 
++**Internal Load Balancer:** ++- Private frontend IP configuration + - Converts the private IP to a static IP, if dynamic + - Updates the public IP SKU to Standard, if Basic +- Health Probes: + - All probes will be migrated to the new Standard load balancer +- Load balancing rules: + - All load balancing rules will be migrated to the new Standard load balancer +- Inbound NAT Rules: + - All NAT rules will be migrated to the new Standard load balancer +- Backend pools: + - All backend pools will be migrated to the new Standard load balancer + - All virtual machine scale set network interfaces and IP configurations will be migrated to the new Standard load balancer + - If there's a virtual machine scale set using a Rolling Upgrade policy, the script will update the virtual machine scale set upgrade policy to "Manual" during the migration process and revert it to "Rolling" after the migration is completed. ++>[!NOTE] +> Network security groups aren't configured as part of an internal load balancer upgrade. To learn more about NSGs, see [Network security groups](/azure/virtual-network/network-security-groups-overview). ++### What happens if my upgrade fails mid-migration? ++The module is designed to accommodate failures, whether due to unhandled errors or unexpected script termination. The failure design is a 'fail forward' approach: instead of attempting to move back to the Basic load balancer, you should correct the issue causing the failure (see the error output or log file) and retry the migration, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters. For public load balancers, because the Public IP Address SKU has been updated to Standard, moving the same IP back to a Basic load balancer won't be possible. The basic failure recovery procedure is: ++ 1. Address the cause of the migration failure. Check the log file `Start-AzBasicLoadBalancerUpgrade.log` for details. + 1. [Remove the new Standard load balancer](/azure/load-balancer/update-load-balancer-with-vm-scale-set) (if created). Depending on which stage of the migration failed, you may have to remove the Standard load balancer reference from the virtual machine scale set network interfaces (IP configurations) and health probes in order to remove the Standard load balancer and try again. + 1. Locate the Basic load balancer state backup file. This will either be in the directory where the script was executed, or at the path specified with the `-RecoveryBackupPath` parameter during the failed execution. The file will be named: `State_<basicLBName>_<basicLBRGName>_<timestamp>.json` + 1. Rerun the migration script, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters instead of `-BasicLoadBalancerName`, or passing the Basic load balancer over the pipeline. ++## Next steps ++[Learn about Azure Load Balancer](load-balancer-overview.md) |
machine-learning | Concept Azure Machine Learning V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md | An Azure Machine Learning [component](concept-component.md) is a self-contained ## Next steps * [How to migrate from v1 to v2](how-to-migrate-from-v1.md)-* [Train models with the CLI (v2)](how-to-train-cli.md) -* [Train models with the Azure ML Python SDK v2 (preview)](how-to-train-sdk.md) +* [Train models with the v2 CLI and SDK (preview)](how-to-train-model.md) |
machine-learning | Concept Data Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md | For more information on creating and using a deployment configuration, see the f * [Where and how to deploy](how-to-deploy-managed-online-endpoints.md) -For more information on using a customer-managed key with ACI, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#encrypt-data-with-a-customer-managed-key). +For more information on using a customer-managed key with ACI, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md). ### Azure Kubernetes Service |
machine-learning | Concept Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md | To create a batch deployment, you need to specify the following elements: - Scoring script - code needed to do the scoring/inferencing - Environment - a Docker image with Conda dependencies -If you're deploying [MLFlow models](how-to-train-cli.md#model-tracking-with-mlflow), there's no need to provide a scoring script and execution environment, as both are autogenerated. +If you're deploying [MLFlow models](how-to-train-model.md), there's no need to provide a scoring script and execution environment, as both are autogenerated. Learn how to [deploy and use batch endpoints with the Azure CLI](how-to-use-batch-endpoint.md) and the [studio web portal](how-to-use-batch-endpoints-studio.md) |
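To make the batch-deployment elements above concrete, here is a minimal, hedged Python (SDK v2) sketch of deploying an MLflow model to a batch endpoint. The endpoint, compute, and model names are assumptions for illustration only, and no scoring script or environment is supplied because, as noted above, both are autogenerated for MLflow models.

```python
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import BatchDeployment, Model

# All names and paths below are hypothetical; ml_client is assumed to be an
# already-authenticated MLClient for the target workspace.
mlflow_model = Model(path="./my-mlflow-model", type=AssetTypes.MLFLOW_MODEL)

deployment = BatchDeployment(
    name="mlflow-batch-deployment",     # assumed deployment name
    endpoint_name="my-batch-endpoint",  # assumed existing batch endpoint
    model=mlflow_model,
    compute="cpu-cluster",              # assumed compute cluster
)

# Create or update the deployment on the endpoint.
ml_client.batch_deployments.begin_create_or_update(deployment)
```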
machine-learning | Concept Train Machine Learning Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md | The machine learning CLI is an extension for the Azure CLI. It provides cross-pl * [Use the CLI extension for Azure Machine Learning](how-to-configure-cli.md) * [MLOps on Azure](https://github.com/microsoft/MLOps)-* [Train models with the CLI (v2)](how-to-train-cli.md) +* [Train models](how-to-train-model.md) ## VS Code |
machine-learning | Concept Train Model Git Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md | The logged information contains text similar to the following JSON: } ``` +### Python SDK ++After submitting a training run, a [Run](/python/api/azureml-core/azureml.core.run%28class%29) object is returned. The `properties` attribute of this object contains the logged git information. For example, the following code retrieves the commit hash: +++```python +run.properties['azureml.git.commit'] +``` ## Next steps |
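Building on the `run.properties` snippet above, the following short sketch gathers every git-related property the service logged for a run; `run` is the same Run object returned when the training run was submitted.

```python
# Illustrative only: collect all git-related properties recorded on the run.
git_properties = {
    key: value
    for key, value in run.properties.items()
    if key.startswith("azureml.git.")
}
print(git_properties)
```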
machine-learning | Concept V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md | The Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date * Get started with CLI v2 * [Install and set up CLI (v2)](how-to-configure-cli.md)- * [Train models with the CLI (v2)](how-to-train-cli.md) + * [Train models with the CLI (v2)](how-to-train-model.md) * [Deploy and score models with managed online endpoint](how-to-deploy-managed-online-endpoints.md) * Get started with SDK v2 * [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install)- * [Train models with the Azure ML Python SDK v2 (preview)](how-to-train-sdk.md) + * [Train models with the Azure ML Python SDK v2 (preview)](how-to-train-model.md) * [Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook](tutorial-pipeline-python-sdk.md) |
machine-learning | How To Administrate Data Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md | In general, data access from studio involves the following checks: * Who is accessing? - There are multiple different types of authentication depending on the storage type. For example, account key, token, service principal, managed identity, and user identity.- - If authentication is made using a user identity, then it's important to know *which* user is trying to access storage. Learn more about [identity-based data access](how-to-identity-based-data-access.md). + - If authentication is made using a user identity, then it's important to know *which* user is trying to access storage. For more information on authenticating a _user_, see [authentication for Azure Machine Learning](how-to-setup-authentication.md). For more information on service-level authentication, see [authentication between AzureML and other services](how-to-identity-based-service-authentication.md). * Do they have permission? - Are the credentials correct? If so, does the service principal, managed identity, etc., have the necessary permissions on the storage? Permissions are granted using Azure role-based access controls (Azure RBAC). - [Reader](../role-based-access-control/built-in-roles.md#reader) of the storage account reads metadata of the storage. |
machine-learning | How To Configure Auto Train | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md | In this guide, learn how to set up an automated machine learning, AutoML, traini If you prefer a no-code experience, you can also [Set up no-code AutoML training in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md). -If you prefer to submit training jobs with the Azure Machine learning CLI v2 extension, see [Train models with the CLI (v2)](how-to-train-cli.md). +If you prefer to submit training jobs with the Azure Machine learning CLI v2 extension, see [Train models](how-to-train-model.md). ## Prerequisites For this article you need: [!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)] -## Setup your workspace +## Set up your workspace -To connect to a workspace, you need to provide a subscription, resource group and workspace name. These details are used in the MLClient from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. +To connect to a workspace, you need to provide a subscription, resource group and workspace name. These details are used in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. -In the following example, the default Azure authentication is used along with the default workspace configuration or from any `config.json` file you might have copied into the folders structure. If no `config.json` is found, then you need to manually introduce the subscription_id, resource_group and workspace when creating MLClient. +In the following example, the default Azure authentication is used along with the default workspace configuration or from any `config.json` file you might have copied into the folders structure. If no `config.json` is found, then you need to manually introduce the subscription_id, resource_group and workspace when creating `MLClient`. ```Python from azure.identity import DefaultAzureCredential transformations: Therefore, the MLTable folder would have the MLTable definition file plus the data file (the bank_marketing_train_data.csv file in this case). The following shows two ways of creating an MLTable.-- A. Providing your training data and MLTable definition file from your local folder and it'll be automatically uploaded into the cloud (default Workspace Datastore)+- A. Providing your training data and MLTable definition file from your local folder and it will be automatically uploaded into the cloud (default Workspace Datastore) - B. Providing a MLTable already registered and uploaded into the cloud. ```Python my_training_data_input = Input(type=AssetTypes.MLTABLE, path="azureml://datasto You can specify separate **training data and validation data sets**, however training data must be provided to the `training_data` parameter in the factory function of your automated ML job. -If you do not explicitly specify a `validation_data` or `n_cross_validation` parameter, automated ML applies default techniques to determine how validation is performed. This determination depends on the number of rows in the dataset assigned to your `training_data` parameter. +If you don't explicitly specify a `validation_data` or `n_cross_validation` parameter, automated ML applies default techniques to determine how validation is performed. This determination depends on the number of rows in the dataset assigned to your `training_data` parameter. 
|Training data size| Validation technique | ||--| If you do not explicitly specify a `validation_data` or `n_cross_validation` par Automated ML jobs with the Python SDK v2 (or CLI v2) are currently only supported on Azure ML remote compute (cluster or compute instance). -[Learn more about creating compute with the Python SDKv2 (or CLIv2).](./how-to-train-sdk.md#2-create-compute). +[Learn more about creating compute with the Python SDKv2 (or CLIv2).](./how-to-train-model.md). <a name='configure-experiment'></a> classification_job.set_training( ### Select your machine learning task type (ML problem) -Before you can submit your automated ML job, you need to determine the kind of machine learning problem you are solving. This problem determines which function your automated ML job uses and what model algorithms it applies. +Before you can submit your automated ML job, you need to determine the kind of machine learning problem you're solving. This problem determines which function your automated ML job uses and what model algorithms it applies. Automated ML supports tabular data based tasks (classification, regression, forecasting), computer vision tasks (such as Image Classification and Object Detection), and natural language processing tasks (such as Text classification and Entity Recognition tasks). Learn more about [task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp). ### Supported algorithms -Automated machine learning tries different models and algorithms during the automation and tuning process. As a user, there is no need for you to specify the algorithm. +Automated machine learning tries different models and algorithms during the automation and tuning process. As a user, there's no need for you to specify the algorithm. The task method determines the list of algorithms/models, to apply. Use the `allowed_algorithms` or `blocked_training_algorithms` parameters in the `set_training()` setter function to further modify iterations with the available models to include or exclude. Learn about the specific definitions of these metrics in [Understand automated m These metrics apply for all classification scenarios, including tabular data, images/computer-vision and NLP-Text. -Threshold-dependent metrics, like `accuracy`, `recall_score_weighted`, `norm_macro_recall`, and `precision_score_weighted` may not optimize as well for datasets that are small, have very large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, `AUC_weighted` can be a better choice for the primary metric. After automated ML completes, you can choose the winning model based on the metric best suited to your business needs. +Threshold-dependent metrics, like `accuracy`, `recall_score_weighted`, `norm_macro_recall`, and `precision_score_weighted` may not optimize as well for datasets that are small, have large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, `AUC_weighted` can be a better choice for the primary metric. After automated ML completes, you can choose the winning model based on the metric best suited to your business needs. 
| Metric | Example use case(s) | | | - | Threshold-dependent metrics, like `accuracy`, `recall_score_weighted`, `norm_mac #### Metrics for classification multi-label scenarios -- For Text classification multi-label currently 'Accuracy' is the only primary metric supported.+- For Text classification multi-label, 'Accuracy' is currently the only primary metric supported. - For Image classification multi-label, the primary metrics supported are defined in the ClassificationMultilabelPrimaryMetrics Enum Threshold-dependent metrics, like `accuracy`, `recall_score_weighted`, `norm_mac `r2_score`, `normalized_mean_absolute_error` and `normalized_root_mean_squared_error` are all trying to minimize prediction errors. `r2_score` and `normalized_root_mean_squared_error` are both minimizing average squared errors while `normalized_mean_absolute_error` is minimizing the average absolute value of errors. Absolute value treats errors at all magnitudes alike and squared errors will have a much larger penalty for errors with larger absolute values. Depending on whether larger errors should be punished more or not, one can choose to optimize squared error or absolute error. -The main difference between `r2_score` and `normalized_root_mean_squared_error` is the way they are normalized and their meanings. `normalized_root_mean_squared_error` is root mean squared error normalized by range and can be interpreted as the average error magnitude for prediction. `r2_score` is mean squared error normalized by an estimate of variance of data. It is the proportion of variation that can be captured by the model. +The main difference between `r2_score` and `normalized_root_mean_squared_error` is the way they're normalized and their meanings. `normalized_root_mean_squared_error` is root mean squared error normalized by range and can be interpreted as the average error magnitude for prediction. `r2_score` is mean squared error normalized by an estimate of variance of data. It's the proportion of variation that can be captured by the model. > [!Note] > `r2_score` and `normalized_root_mean_squared_error` also behave similarly as primary metrics. If a fixed validation set is applied, these two metrics are optimizing the same target, mean squared error, and will be optimized by the same model. When only a training set is available and cross-validation is applied, they would be slightly different as the normalizer for `normalized_root_mean_squared_error` is fixed as the range of training set, but the normalizer for `r2_score` would vary for every fold as it's the variance for each fold. If the rank, instead of the exact value, is of interest, `spearman_correlation` can be a better choice as it measures the rank correlation between real values and predictions. -However, currently no primary metrics for regression addresses relative difference. All of `r2_score`, `normalized_mean_absolute_error`, and `normalized_root_mean_squared_error` treat a $20k prediction error the same for a worker with a $30k salary as a worker making $20M, if these two data points belongs to the same dataset for regression, or the same time series specified by the time series identifier. While in reality, predicting only $20k off from a $20M salary is very close (a small 0.1% relative difference), whereas $20k off from $30k is not close (a large 67% relative difference). 
To address the issue of relative difference, one can train a model with available primary metrics, and then select the model with best `mean_absolute_percentage_error` or `root_mean_squared_log_error`. +However, currently no primary metric for regression addresses relative difference. All of `r2_score`, `normalized_mean_absolute_error`, and `normalized_root_mean_squared_error` treat a $20k prediction error the same for a worker with a $30k salary as a worker making $20M, if these two data points belong to the same dataset for regression, or the same time series specified by the time series identifier. In reality, predicting only $20k off from a $20M salary is very close (a small 0.1% relative difference), whereas $20k off from $30k isn't close (a large 67% relative difference). To address the issue of relative difference, one can train a model with available primary metrics, and then select the model with the best `mean_absolute_percentage_error` or `root_mean_squared_log_error`. | Metric | Example use case(s) | | | - | There are a few options you can define in the `set_limits()` function to end you |Criteria| description |-|--No criteria | If you do not define any exit parameters the experiment continues until no further progress is made on your primary metric. -`timeout`| Defines how long, in minutes, your experiment should continue to run.If not specified, the default job's total timeout is 6 days (8,640 minutes). To specify a timeout less than or equal to 1 hour (60 minutes), make sure your dataset's size is not greater than 10,000,000 (rows times column) or an error results. <br><br> This timeout includes setup, featurization and training runs but does not include the ensembling and model explainability runs at the end of the process since those actions need to happen once all the trials (children jobs) are done. +No criteria | If you don't define any exit parameters, the experiment continues until no further progress is made on your primary metric. +`timeout`| Defines how long, in minutes, your experiment should continue to run. If not specified, the default job's total timeout is 6 days (8,640 minutes). To specify a timeout less than or equal to 1 hour (60 minutes), make sure your dataset's size isn't greater than 10,000,000 (rows times columns) or an error results. <br><br> This timeout includes setup, featurization and training runs but doesn't include the ensembling and model explainability runs at the end of the process since those actions need to happen once all the trials (children jobs) are done. `trial_timeout_minutes` | Maximum time in minutes that each trial (child job) can run for before it terminates. If not specified, a value of 1 month or 43200 minutes is used `enable_early_termination`|Whether to end the job if the score is not improving in the short term `max_trials`| The maximum number of trials/runs each with a different combination of algorithm and hyperparameters to try during an AutoML job. If not specified, the default is 1000 trials. If using `enable_early_termination`, the number of trials used can be smaller. No criteria | If you do not define any exit parameters the experiment conti > [!WARNING] > If you have set rules in firewall and/or Network Security Group over your workspace, verify that required permissions are given to inbound and outbound network traffic as defined in [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md). -Submit the experiment to run and generate a model. 
With the MLClient created in the prerequisites,you can run the following command in the workspace. +Submit the experiment to run and generate a model. With the `MLClient` created in the prerequisites, you can run the following command in the workspace. ```python |
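The submission code block is truncated in the excerpt above. As a hedged sketch of what it typically looks like with the SDK v2, assuming `ml_client` and the `classification_job` configured in the earlier steps of the article:

```python
# Sketch only: ml_client and classification_job are the objects configured earlier.
returned_job = ml_client.jobs.create_or_update(classification_job)

# Optionally stream the job logs until the run completes.
ml_client.jobs.stream(returned_job.name)
```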
machine-learning | How To Configure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md | Check the Azure CLI extensions you've installed: :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_list"::: -Remove any existing installation of the of `ml` extension and also the CLI v1 `azure-cli-ml` extension: +Remove any existing installation of the `ml` extension and also the CLI v1 `azure-cli-ml` extension: :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_remove"::: If your Azure Machine Learning workspace uses a private endpoint and virtual net ## Next steps -- [Train models using CLI (v2)](how-to-train-cli.md)+- [Train models using CLI (v2)](how-to-train-model.md) - [Set up the Visual Studio Code Azure Machine Learning extension](how-to-setup-vs-code.md) - [Train an image classification TensorFlow model using the Azure Machine Learning Visual Studio Code extension](tutorial-train-deploy-image-classification-model-vscode.md) - [Explore Azure Machine Learning with examples](samples-notebooks.md) |
machine-learning | How To Create Attach Compute Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md | If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0 Use your compute cluster to: -* [Submit a training run](./how-to-train-sdk.md) +* [Submit a training run](./how-to-train-model.md) * [Run batch inference](./tutorial-pipeline-batch-scoring-classification.md). |
machine-learning | How To Create Attach Compute Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md | To detach your compute use the following steps: ## Next steps -* Use the compute resource to [submit a training run](how-to-train-sdk.md). +* Use the compute resource to [submit a training run](how-to-train-model.md). * Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models. * Once you have a trained model, learn [how and where to deploy models](how-to-deploy-managed-online-endpoints.md). * [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md) |
machine-learning | How To Create Component Pipelines Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md | ms.devlang: azurecli, cliv2 [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] -In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure CLI and components (for more, see [What is an Azure Machine Learning component?](concept-component.md)). You can [create pipelines without using components](how-to-train-cli.md#build-a-training-pipeline), but components offer the greatest amount of flexibility and reuse. AzureML Pipelines may be defined in YAML and run from the CLI, authored in Python, or composed in AzureML Studio Designer with a drag-and-drop UI. This document focuses on the CLI. +In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure CLI and components (for more, see [What is an Azure Machine Learning component?](concept-component.md)). You can create pipelines without using components, but components offer the greatest amount of flexibility and reuse. AzureML Pipelines may be defined in YAML and run from the CLI, authored in Python, or composed in AzureML Studio Designer with a drag-and-drop UI. This document focuses on the CLI. ## Prerequisites |
machine-learning | How To Create Component Pipelines Ui | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md | -In this article, you'll learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure Machine Learning studio and [Components](concept-component.md). You can [create pipelines without using components](how-to-train-cli.md#build-a-training-pipeline), but components offer better amount of flexibility and reuse. Azure ML Pipelines may be defined in YAML and [run from the CLI](how-to-create-component-pipelines-cli.md), [authored in Python](how-to-create-component-pipeline-python.md), or composed in Azure ML Studio Designer with a drag-and-drop UI. This document focuses on the AzureML studio designer UI. +In this article, you'll learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure Machine Learning studio and [Components](concept-component.md). You can create pipelines without using components, but components offer better amount of flexibility and reuse. Azure ML Pipelines may be defined in YAML and [run from the CLI](how-to-create-component-pipelines-cli.md), [authored in Python](how-to-create-component-pipeline-python.md), or composed in Azure ML Studio Designer with a drag-and-drop UI. This document focuses on the AzureML studio designer UI. [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] |
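The paragraph above also notes that Azure ML pipelines can be authored in Python. Purely as an illustrative sketch of that path (the component files, port names, and compute name are assumptions, and exact SDK v2 signatures may vary between preview versions):

```python
from azure.ai.ml import Input, load_component
from azure.ai.ml.dsl import pipeline

# Hypothetical component definitions, loaded from local YAML files.
prep = load_component("./prep_component.yml")
train = load_component("./train_component.yml")

@pipeline()
def training_pipeline(raw_data: Input):
    # Port names (input_data, clean_data, training_data, model_output) are assumed.
    prep_step = prep(input_data=raw_data)
    train_step = train(training_data=prep_step.outputs.clean_data)
    return {"trained_model": train_step.outputs.model_output}

pipeline_job = training_pipeline(raw_data=Input(path="./data"))
pipeline_job.settings.default_compute = "cpu-cluster"  # assumed compute cluster name
```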
machine-learning | How To Datastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md | ml_client.create_or_update(store) ```python from azure.ai.ml.entities import AzureBlobDatastore+from azure.ai.ml.entities._datastore.credentials import AccountKeyCredentials from azure.ai.ml import MLClient ml_client = MLClient.from_config() store = AzureBlobDatastore(- name="blob-protocol-example", + name="blob_protocol_example", description="Datastore pointing to a blob container using wasbs protocol.", account_name="mytestblobstore", container_name="data-container", protocol="wasbs",- credentials={ - "account_key": "XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX" - }, + credentials=AccountKeyCredentials( + account_key="XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX" + ), ) ml_client.create_or_update(store) ml_client.create_or_update(store) ```python from azure.ai.ml.entities import AzureBlobDatastore+from azure.ai.ml.entities._datastore.credentials import SasTokenCredentials from azure.ai.ml import MLClient ml_client = MLClient.from_config() store = AzureBlobDatastore(- name="blob-sas-example", + name="blob_sas_example", description="Datastore pointing to a blob container using SAS token.", account_name="mytestblobstore", container_name="data-container",- credentials={ - "sas_token": "?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX" - }, + credentials=SasTokenCredentials( + sas_token= "?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX" + ), ) ml_client.create_or_update(store) ml_client.create_or_update(store) ```python from azure.ai.ml.entities import AzureDataLakeGen2Datastore+from azure.ai.ml.entities._datastore.credentials import ServicePrincipalCredentials + from azure.ai.ml import MLClient ml_client = MLClient.from_config() store = AzureDataLakeGen2Datastore(- name="adls-gen2-example", + name="adls_gen2_example", description="Datastore pointing to an Azure Data Lake Storage Gen2.", account_name="mytestdatalakegen2", filesystem="my-gen2-container",- credentials={ - "tenant_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", - "client_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", - "client_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", - }, + credentials=ServicePrincipalCredentials( + tenant_id= "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", + client_id= "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", + client_secret= "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", + ), ) ml_client.create_or_update(store) az ml datastore create --file my_files_datastore.yml ```python from azure.ai.ml.entities import AzureFileDatastore+from azure.ai.ml.entities._datastore.credentials import AccountKeyCredentials from azure.ai.ml import MLClient ml_client = MLClient.from_config() store = AzureFileDatastore(- name="file-example", + name="file_example", description="Datastore pointing to an Azure File Share.", account_name="mytestfilestore", file_share_name="my-share",- credentials={ - "account_key": "XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX" - }, + credentials=AccountKeyCredentials( + account_key= "XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX" + ), ) ml_client.create_or_update(store) 
ml_client.create_or_update(store) ```python from azure.ai.ml.entities import AzureFileDatastore+from azure.ai.ml.entities._datastore.credentials import SasTokenCredentials from azure.ai.ml import MLClient ml_client = MLClient.from_config() store = AzureFileDatastore(- name="file-sas-example", + name="file_sas_example", description="Datastore pointing to an Azure File Share using SAS token.", account_name="mytestfilestore", file_share_name="my-share",- credentials={ - "sas_token": "?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX" - }, + credentials=SasTokenCredentials( + sas_token="?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX" + ), ) ml_client.create_or_update(store) ml_client.create_or_update(store) ```python from azure.ai.ml.entities import AzureDataLakeGen1Datastore+from azure.ai.ml.entities._datastore.credentials import ServicePrincipalCredentials from azure.ai.ml import MLClient ml_client = MLClient.from_config() store = AzureDataLakeGen1Datastore(- name="adls-gen1-example", + name="adls_gen1_example", description="Datastore pointing to an Azure Data Lake Storage Gen1.", store_name="mytestdatalakegen1",- credentials={ - "tenant_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", - "client_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", - "client_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", - }, + credentials=ServicePrincipalCredentials( + tenant_id= "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", + client_id= "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", + client_secret= "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", + ), ) ml_client.create_or_update(store) |
machine-learning | How To Deploy Mlflow Models Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md | Once you're done with the endpoint, use the following command to delete it: This example shows how you can deploy an MLflow model to an online endpoint using [Azure Machine Learning studio](https://ml.azure.com). -1. Models need to be registered in the Azure Machine Learning workspace to be deployed. Deployment of unregistered models is not supported. To create a model in Azure Machine Learning, open the Models page in Azure Machine Learning. Click **Register model** and select where your model is located. Fill out the required fields, and then select __Register__. +1. Models need to be registered in the Azure Machine Learning workspace to be deployed. Deployment of unregistered models isn't supported. To create a model in Azure Machine Learning, open the Models page in Azure Machine Learning. Select **Register model** and select where your model is located. Fill out the required fields, and then select __Register__. :::image type="content" source="./media/how-to-manage-models/register-model-as-asset.png" alt-text="Screenshot of the UI to register a model." lightbox="./media/how-to-manage-models/register-model-as-asset.png"::: -2. To create an endpoint deployment, use either the __endpoints__ or __models__ page : +2. To create an endpoint deployment, use either the __endpoints__ or __models__ page: # [Endpoints page](#tab/endpoint) This example shows how you can deploy an MLflow model to an online endpoint usin ## Deploy models after a training job -This section helps you understand how to deploy models to an online endpoint once you have completed your [training job](how-to-train-cli.md). Models logged in a run are stored as artifacts. If you have used `mlflow.autolog()` in your training script, you will see model artifacts generated in the job's output. You can use `mlflow.autolog()` for several common ML frameworks to log model parameters, performance metrics, model artifacts, and even feature importance graphs. +This section helps you understand how to deploy models to an online endpoint once you've completed your [training job](how-to-train-model.md). Models logged in a run are stored as artifacts. If you have used `mlflow.autolog()` in your training script, you'll see model artifacts generated in the job's output. You can use `mlflow.autolog()` for several common ML frameworks to log model parameters, performance metrics, model artifacts, and even feature importance graphs. -For more information, see [Train models with CLI](how-to-train-cli.md#model-tracking-with-mlflow). Also see the [training job samples](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step) in the GitHub repository. +For more information, see [Train models](how-to-train-model.md). Also see the [training job samples](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step) in the GitHub repository. -1. Models need to be registered in the Azure Machine Learning workspace to be deployed. Deployment of unregistered models is not supported. You can register the model directly from the job's output using the Azure ML CLI (v2), the Azure ML SDK for Python (v2) or Azure Machine Learning studio. +1. Models need to be registered in the Azure Machine Learning workspace to be deployed. Deployment of unregistered models isn't supported. 
You can register the model directly from the job's output using the Azure ML CLI (v2), the Azure ML SDK for Python (v2), or Azure Machine Learning studio. > [!TIP] > To register the model, you will need to know the location where the model has been stored. If you are using the `autolog` feature of MLflow, the path will depend on the type and framework of the model being used. We recommend checking the job's output to identify the name of this folder. You can look for the folder that contains a file named `MLmodel`. If you are logging your models manually using `log_model`, then the path is the argument you pass to that method. As an example, if you log the model using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is `classifier`. For more information, see [Train models with CLI](how-to-train-cli.md#model-trac # [Azure ML CLI (v2)](#tab/cli) - Use the Azure ML CLI v2 to create a model from a training job output. In the following example a model named `$MODEL_NAME` is registered using the artifacts of a job with ID `$RUN_ID`. The path where the model is stored is `$MODEL_PATH`. + Use the Azure ML CLI v2 to create a model from a training job output. In the following example, a model named `$MODEL_NAME` is registered using the artifacts of a job with ID `$RUN_ID`. The path where the model is stored is `$MODEL_PATH`. ```bash az ml model create --name $MODEL_NAME --path azureml://jobs/$RUN_ID/outputs/artifacts/$MODEL_PATH |
machine-learning | How To Identity Based Service Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md | Azure Machine Learning is composed of multiple Azure services. There are multipl * The Azure ML compute cluster uses a __managed identity__ to retrieve connection information for datastores from Azure Key Vault and to pull Docker images from ACR. You can also configure identity-based access to datastores, which will instead use the managed identity of the compute cluster. * Data access can happen along multiple paths depending on the data storage service and your configuration. For example, authentication to the datastore may use an account key, token, security principal, managed identity, or user identity. - For information on how data access is authenticated, see the [Data administration](how-to-administrate-data-authentication.md) article. + For more information on how data access is authenticated, see the [Data administration](how-to-administrate-data-authentication.md) article. * Managed online endpoints can use a managed identity to access Azure resources when performing inference. For more information, see [Access Azure resources from an online endpoint](how-to-access-resources-from-endpoints-managed-identities.md). |
machine-learning | How To Manage Environments V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md | To use an environment for a training job, specify the `environment` field of the When you submit a training job, the building of a new environment can take several minutes. The duration depends on the size of the required dependencies. The environments are cached by the service. So as long as the environment definition remains unchanged, you incur the full setup time only once. -For more information on how to use environments in jobs, see [Train models with the CLI (v2)](how-to-train-cli.md). +For more information on how to use environments in jobs, see [Train models](how-to-train-model.md). ## Use environments for model deployments For more information on how to use environments in deployments, see [Deploy and ## Next steps -- [Train models (create jobs) with the CLI (v2)](how-to-train-cli.md)+- [Train models (create jobs)](how-to-train-model.md) - [Deploy and score a machine learning model by using a managed online endpoint](how-to-deploy-managed-online-endpoints.md) - [Environment YAML schema reference](reference-yaml-environment.md) |
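To illustrate the `environment` field for a training job mentioned above, here is a hedged SDK v2 sketch of a command job that references a registered environment; the workspace details, environment name, and compute name are placeholders rather than values from the article.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command

# Placeholder workspace details; replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    "<subscription-id>",
    "<resource-group>",
    "<workspace-name>",
)

job = command(
    code="./src",                             # assumed local source folder
    command="python train.py",
    environment="azureml:my-training-env:1",  # assumed registered environment (name:version)
    compute="cpu-cluster",                    # assumed compute cluster
)

# Submit the job; the environment is built (or reused from cache) before the run starts.
ml_client.jobs.create_or_update(job)
```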
machine-learning | How To Migrate From V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md | To migrate, you'll need to change your code for submitting jobs to v2. We recomm What you run *within* the job does not need to be migrated to v2. However, it is recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more details, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md). -We recommend migrating the code for creating jobs to v2. You can see [how to train models with the CLI (v2)](how-to-train-cli.md) and the [job YAML references](reference-yaml-job-command.md) for authoring jobs in v2 YAMLs. +We recommend migrating the code for creating jobs to v2. You can see [how to train models](how-to-train-model.md) and the [job YAML references](reference-yaml-job-command.md) for authoring jobs in v2 YAMLs. For a comparison of SDK v1 and v2 code, see [Migrate script run from SDK v1 to SDK v2](migrate-to-v2-command-job.md). For a comparison of SDK v1 and v2 code, see [Migrate data management from SDK v1 Models created from v1 can be used in v2. In v2, explicit model types are introduced. Similar to data assets, it may be easier to re-create a v1 model as a v2 model, setting the type appropriately. -We recommend migrating the code for creating models with [SDK](how-to-train-sdk.md) or [CLI](how-to-train-cli.md) to v2. +We recommend migrating the code for creating models. For more information, see [How to train models](how-to-train-model.md). For a comparison of SDK v1 and v2 code, see |
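As a hedged illustration of the guidance above to remove `azureml.*` logging from training scripts, a v1-style `run.log()` call might be replaced with plain MLflow calls along these lines (the parameter and metric names are examples only):

```python
import mlflow

# Framework autologging records parameters, metrics, and model artifacts where supported.
mlflow.autolog()

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)  # example parameter
    mlflow.log_metric("accuracy", 0.92)      # example metric
```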
machine-learning | How To Read Write Data V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md | The following example defines a pipeline containing three nodes and moves data b ## Next steps -* [Train models with the Python SDK v2 (preview)](how-to-train-sdk.md) +* [Train models](how-to-train-model.md) * [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md) * Learn more about [Data in Azure Machine Learning](concept-data.md) |
machine-learning | How To Setup Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md | For more information on creating and using a deployment configuration, see the f * [Where and how to deploy](how-to-deploy-managed-online-endpoints.md) * [Deploy a model to Azure Container Instances](v1/how-to-deploy-azure-container-instance.md) -For more information on using a customer-managed key with ACI, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#encrypt-data-with-a-customer-managed-key). +For more information on using a customer-managed key with ACI, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md). ### Azure Kubernetes Service |
machine-learning | How To Train Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-cli.md | - Title: 'Train models with the CLI (v2)'- -description: Learn how to train models (create jobs) using Azure CLI extension for Machine Learning. ------ Previously updated : 05/26/2022-----# Train models with the CLI (v2) ----The Azure Machine Learning CLI (v2) is an Azure CLI extension enabling you to accelerate the model training process while scaling up and out on Azure compute, with the model lifecycle tracked and auditable. --Training a machine learning model is typically an iterative process. Modern tooling makes it easier than ever to train larger models on more data faster. Previously tedious manual processes like hyperparameter tuning and even algorithm selection are often automated. With the Azure Machine Learning CLI (v2), you can track your jobs (and models) in a [workspace](concept-workspace.md) with hyperparameter sweeps, scale-up on high-performance Azure compute, and scale-out utilizing distributed training. --## Prerequisites --- To use the CLI (v2), you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.-- [Install and set up CLI (v2)](how-to-configure-cli.md).--> [!TIP] -> For a full-featured development environment with schema validation and autocompletion for job YAMLs, use Visual Studio Code and the [Azure Machine Learning extension](how-to-setup-vs-code.md). --### Clone examples repository --To run the training examples, first clone the examples repository and change into the `cli` directory: ---Using `--depth 1` clones only the latest commit to the repository, which reduces time to complete the operation. --### Create compute --You can create an Azure Machine Learning compute cluster from the command line. For instance, the following commands will create one cluster named `cpu-cluster` and one named `gpu-cluster`. ---You are not charged for compute at this point as `cpu-cluster` and `gpu-cluster` will remain at zero nodes until a job is submitted. Learn more about how to [manage and optimize cost for AmlCompute](how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute). --The following example jobs in this article use one of `cpu-cluster` or `gpu-cluster`. Adjust these names in the example jobs throughout this article as needed to the name of your cluster(s). Use `az ml compute create -h` for more details on compute create options. --## Hello world --For the Azure Machine Learning CLI (v2), jobs are authored in YAML format. A job aggregates: --- What to run-- How to run it-- Where to run it--The "hello world" job has all three: ---> [!WARNING] -> Python must be installed in the environment used for jobs. Run `apt-get update -y && apt-get install python3 -y` in your Dockerfile to install if needed, or derive from a base image with Python installed already. --> [!TIP] -> The `$schema:` throughout examples allows for schema validation and autocompletion if authoring YAML files in [VSCode with the Azure Machine Learning extension](how-to-setup-vs-code.md). --Which you can run: ---> [!TIP] -> The `--web` parameter will attempt to open your job in the Azure Machine Learning studio using your default web browser. The `--stream` parameter can be used to stream logs to the console and block further commands. 
--## Overriding values on create or update --YAML job specification values can be overridden using `--set` when creating or updating a job. For instance: ---## Job names --Most `az ml job` commands other than `create` and `list` require `--name/-n`, which is a job's name or "Run ID" in the studio. You typically should not directly set a job's `name` property during creation as it must be unique per workspace. Azure Machine Learning generates a random GUID for the job name if it is not set that can be obtained from the output of job creation in the CLI or by copying the "Run ID" property in the studio and MLflow APIs. --To automate jobs in scripts and CI/CD flows, you can capture a job's name when it is created by querying and stripping the output by adding `--query name -o tsv`. The specifics will vary by shell, but for Bash: ---Then use `$run_id` in subsequent commands like `update`, `show`, or `stream`: ---## Organize jobs --To organize jobs, you can set a display name, experiment name, description, and tags. Descriptions support markdown syntax in the studio. These properties are mutable after a job is created. A full example: ---You can run this job, where these properties will be immediately visible in the studio: ---Using `--set` you can update the mutable values after the job is created: ---## Environment variables --You can set environment variables for use in your job: ---You can run this job: ---> [!WARNING] -> You should use `inputs` for parameterizing arguments in the `command`. See [inputs and outputs](#inputs-and-outputs). --## Track models and source code --Production machine learning models need to be auditable (if not reproducible). It is crucial to keep track of the source code for a given model. Azure Machine Learning takes a snapshot of your source code and keeps it with the job. Additionally, the source repository and commit are tracked if you are running jobs from a Git repository. --> [!TIP] -> If you're following along and running from the examples repository, you can see the source repository and commit in the studio on any of the jobs run so far. --You can specify the `code` field in a job with the value as the path to a source code directory. A snapshot of the directory is taken and uploaded with the job. The contents of the directory are directly available from the working directory of the job. --> [!WARNING] -> The source code should not include large data inputs for model training. Instead, [use data inputs](#data-inputs). You can use a `.gitignore` file in the source code directory to exclude files from the snapshot. The limits for snapshot size are 300 MB or 2000 files. --Let's look at a job that specifies code: ---The Python script is in the local source code directory. The command then invokes `python` to run the script. The same pattern can be applied for other programming languages. --> [!WARNING] -> The "hello" family of jobs shown in this article are for demonstration purposes and do not necessarily follow recommended best practices. Using `&&` or similar to run many commands in a sequence is not recommended -- instead, consider writing the commands to a script file in the source code directory and invoking the script in your `command`. Installing dependencies in the `command`, as shown above via `pip install`, is not recommended -- instead, all job dependencies should be specified as part of your environment. See [how to manage environments with the CLI (v2)](how-to-manage-environments-v2.md) for details. 
--### Model tracking with MLflow --While iterating on models, data scientists need to be able to keep track of model parameters and training metrics. Azure Machine Learning integrates with MLflow tracking to enable the logging of models, artifacts, metrics, and parameters to a job. To use MLflow in your Python scripts add `import mlflow` and call `mlflow.log_*` or `mlflow.autolog()` APIs in your training code. --> [!WARNING] -> The `mlflow` and `azureml-mlflow` packages must be installed in your Python environment for MLflow tracking features. --> [!TIP] -> The `mlflow.autolog()` call is supported for many popular frameworks and takes care of the majority of logging for you. --Let's take a look at Python script invoked in the job above that uses `mlflow` to log a parameter, a metric, and an artifact: ---You can run this job in the cloud via Azure Machine Learning, where it is tracked and auditable: ---### Query metrics with MLflow --After running jobs, you might want to query the jobs' run results and their logged metrics. Python is better suited for this task than a CLI. You can query runs and their metrics via `mlflow` and load into familiar objects like Pandas dataframes for analysis. --First, retrieve the MLflow tracking URI for your Azure Machine Learning workspace: ---Use the output of this command in `mlflow.set_tracking_uri(<YOUR_TRACKING_URI>)` from a Python environment with MLflow imported. MLflow calls will now correspond to jobs in your Azure Machine Learning workspace. --## Inputs and outputs --Jobs typically have inputs and outputs. Inputs can be model parameters, which might be swept over for hyperparameter optimization, or cloud data inputs that are mounted or downloaded to the compute target. Outputs (ignoring metrics) are artifacts that can be written or copied to the default outputs or a named data output. --### Literal inputs --Literal inputs are directly resolved in the command. You can modify our "hello world" job to use literal inputs: ---You can run this job: ---You can use `--set` to override inputs: ---Literal inputs to jobs can be [converted to search space inputs](#search-space-inputs) for hyperparameter sweeps on model training. --### Search space inputs --For a sweep job, you can specify a search space for literal inputs to be chosen from. For the full range of options for search space inputs, see the [sweep job YAML syntax reference](reference-yaml-job-sweep.md). --Let's demonstrate the concept with a simple Python script that takes in arguments and logs a random metric: ---And create a corresponding sweep job: ---And run it: ---### Data inputs --Data inputs are resolved to a path on the job compute's local filesystem. Let's demonstrate with the classic Iris dataset, which is hosted publicly in a blob container at `https://azuremlexamples.blob.core.windows.net/datasets/iris.csv`. --You can author a Python script that takes the path to the Iris CSV file as an argument, reads it into a dataframe, prints the first 5 lines, and saves it to the `outputs` directory. ---Azure storage URI inputs can be specified, which will mount or download data to the local filesystem. You can specify a single file: ---And run: ---Or specify an entire folder: ---And run: ---Make sure you accurately specify the input `type` field to either `type: uri_file` or `type: uri_folder` corresponding to whether the data points to a single file or a folder. The default if the `type` field is omitted is `uri_folder`. 
--#### Private data --For private data in Azure Blob Storage or Azure Data Lake Storage connected to Azure Machine Learning through a datastore, you can use Azure Machine Learning URIs of the format `azureml://datastores/<DATASTORE_NAME>/paths/<PATH_TO_DATA>` for input data. For instance, if you upload the Iris CSV to a directory named `/example-data/` in the Blob container corresponding to the datastore named `workspaceblobstore` you can modify a previous job to use the file in the datastore: --> [!WARNING] -> Running these jobs will fail for you if you have not copied the Iris CSV to the same location in `workspaceblobstore`. ---Or the entire directory: ---### Default outputs --The `./outputs` and `./logs` directories receive special treatment by Azure Machine Learning. If you write any files to these directories during your job, these files will get uploaded to the job so that you can still access them once the job is complete. The `./outputs` folder is uploaded at the end of the job, while the files written to `./logs` are uploaded in real time. Use the latter if you want to stream logs during the job, such as TensorBoard logs. --In addition, any files logged from MLflow via autologging or `mlflow.log_*` for artifact logging will get automatically persisted as well. Collectively with the aforementioned `./outputs` and `./logs` directories, this set of files and directories will be persisted to a directory that corresponds to that job's default artifact location. --You can modify the "hello world" job to output to a file in the default outputs directory instead of printing to `stdout`: ---You can run this job: ---And download the logs, where `helloworld.txt` will be present in the `<RUN_ID>/outputs/` directory: ---### Data outputs --You can specify named data outputs. This will create a directory in the default datastore which will be read/write mounted by default. --You can modify the earlier "hello world" job to write to a named data output: ---## Hello pipelines --Pipeline jobs can run multiple jobs in parallel or in sequence. If there are input/output dependencies between steps in a pipeline, the dependent step will run after the other completes. --You can split a "hello world" job into two jobs: ---And run it: ---The "hello" and "world" jobs respectively will run in parallel if the compute target has the available resources to do so. --To pass data between steps in a pipeline, define a data output in the "hello" job and a corresponding input in the "world" job, which refers to the prior's output: ---And run it: ---This time, the "world" job will run after the "hello" job completes. --To avoid duplicating common settings across jobs in a pipeline, you can set them outside the jobs: ---You can run this: ---The corresponding setting on an individual job will override the common settings for a pipeline job. The concepts so far can be combined into a three-step pipeline job with jobs "A", "B", and "C". The "C" job has a data dependency on the "B" job, while the "A" job can run independently. The "A" job will also use an individually set environment and bind one of its inputs to a top-level pipeline job input: ---You can run this: ---## Train a model --In Azure Machine Learning you basically have two possible ways to train a model: --1. Leverage automated ML to train models with your data and get the best model for you. This approach maximizes productivity by automating the iterative process of tuning hyperparameters and trying out different algorithms. -1. 
Train a model with your own custom training script. This approach offers the most control and allows you to customize your training. ---### Train a model with automated ML --Automated ML is the easiest way to train a model because you don't need to know how training algorithms work exactly but you just need to provide your training/validation/test datasets and some basic configuration parameters such as 'ML Task', 'target column', 'primary metric, 'timeout' etc, and the service will train multiple models and try out various algorithms and hyperparameter combinations for you. --When you train with automated ML via the CLI (v2), you just need to create a .YAML file with an AutoML configuration and provide it to the CLI for training job creation and submission. --The following example shows an AutoML configuration file for training a classification model where, -* The primary metric is `accuracy` -* The training has a time out of 180 minutes -* The data for training is in the folder "./training-mltable-folder". Automated ML jobs only accept data in the form of an `MLTable`. ---That mentioned MLTable definition is what points to the training data file, in this case a local .csv file that will be uploaded automatically: ---Finally, you can run it (create the AutoML job) with this CLI command: --``` -/> az ml job create --file ./hello-automl-job-basic.yml -``` --Or like the following if providing workspace IDs explicitly instead of using the by default workspace: --``` -/> az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION] -``` --To investigate additional AutoML model training examples using other ML-tasks such as regression, time-series forecasting, image classification, object detection, NLP text-classification, etc., see the complete list of [AutoML CLI examples](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/automl-standalone-jobs). --### Train a model with a custom script --When training by using your own custom script, the first thing you need is that python script (.py), so let's add some `sklearn` code into a Python script with MLflow tracking to train a model on the Iris CSV: ---The scikit-learn framework is supported by MLflow for autologging, so a single `mlflow.autolog()` call in the script will log all model parameters, training metrics, model artifacts, and some extra artifacts (in this case a confusion matrix image). --To run this in the cloud, specify as a job: ---And run it: ---To register a model, you can upload the model files from the run to the model registry: ---For the full set of configurable options for running command jobs, see the [command job YAML schema reference](reference-yaml-job-command.md). --## Sweep hyperparameters --You can modify the previous job to sweep over hyperparameters: ---And run it: ---> [!TIP] -> Check the "Child runs" tab in the studio to monitor progress and view parameter charts.. --For the full set of configurable options for sweep jobs, see the [sweep job YAML schema reference](reference-yaml-job-sweep.md). --## Distributed training --Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed training. See the [distributed section of the command job YAML syntax reference](reference-yaml-job-command.md#distribution-configurations) for details. --As an example, you can train a convolutional neural network (CNN) on the CIFAR-10 dataset using distributed PyTorch. 
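Before looking at the full example (linked next), here is a generic sketch of the process-group setup a distributed PyTorch training script typically performs. This is not the example script itself; the environment-variable-driven initialization shown here follows standard `torch.distributed` conventions, and the exact variable names should be treated as an assumption to verify against the example repository.

```python
# Minimal sketch of DDP setup inside a distributed training script.
# Assumes the launcher populates the standard torch.distributed environment
# variables (RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT).
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_distributed(model: torch.nn.Module) -> torch.nn.Module:
    # Initialize the process group from environment variables.
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend=backend, init_method="env://")

    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    if torch.cuda.is_available():
        torch.cuda.set_device(local_rank)
        model = model.cuda(local_rank)
        return DDP(model, device_ids=[local_rank])
    return DDP(model)
```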
The full script is [available in the examples repository](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/pytorch/cifar-distributed). --The CIFAR-10 dataset in `torchvision` expects as input a directory that contains the `cifar-10-batches-py` directory. You can download the zipped source and extract into a local directory: ---Then create an Azure Machine Learning data asset from the local directory, which will be uploaded to the default datastore: ---Optionally, remove the local file and directory: ---Registered data assets can be used as inputs to job using the `path` field for a job input. The format is `azureml:<data_name>:<data_version>`, so for the CIFAR-10 dataset just created, it is `azureml:cifar-10-example:1`. You can optionally use the `azureml:<data_name>@latest` syntax instead if you want to reference the latest version of the data asset. Azure ML will resolve that reference to the explicit version. --With the data asset in place, you can author a distributed PyTorch job to train our model: ---And run it: ---## Build a training pipeline --The CIFAR-10 example above translates well to a pipeline job. The previous job can be split into three jobs for orchestration in a pipeline: --- "get-data" to run a Bash script to download and extract `cifar-10-batches-py`-- "train-model" to take the data and train a model with distributed PyTorch-- "eval-model" to take the data and the trained model and evaluate accuracy--Both "train-model" and "eval-model" will have a dependency on the "get-data" job's output. Additionally, "eval-model" will have a dependency on the "train-model" job's output. Thus the three jobs will run sequentially. -<!-- -You can orchestrate these three jobs within a pipeline job: ---And run: ---Pipelines can also be written using reusable components. For more, see [Create and run components-based machine learning pipelines with the Azure Machine Learning CLI (Preview)](how-to-create-component-pipelines-cli.md). --## Next steps --- [Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md) |
machine-learning | How To Train Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md | + + Title: Train ML models ++description: Configure and submit Azure Machine Learning jobs to train your models using the SDK, CLI, etc. ++++++ Last updated : 08/25/2022+++++# Train models with Azure Machine Learning CLI, SDK, and REST API +++Azure Machine Learning provides multiple ways to submit ML training jobs. In this article, you'll learn how to submit jobs using the following methods: ++* Azure CLI extension for machine learning: The `ml` extension, also referred to as CLI v2. +* Python SDK v2 for Azure Machine Learning. +* REST API: The API that the CLI and SDK are built on. ++> [!IMPORTANT] +> SDK v2 is currently in public preview. +> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. +> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ++## Prerequisites ++* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). +* An Azure Machine Learning workspace. If you don't have one, you can use the steps in the [Quickstart: Create Azure ML resources](quickstart-create-resources.md) article. ++# [Python SDK](#tab/python) ++To use the __SDK__ information, install the Azure Machine Learning [SDK v2 for Python](https://aka.ms/sdk-v2-install). ++# [Azure CLI](#tab/azurecli) ++To use the __CLI__ information, install the [Azure CLI and extension for machine learning](how-to-configure-cli.md). ++# [REST API](#tab/restapi) ++To use the __REST API__ information, you need the following items: ++- A __service principal__ in your workspace. Administrative REST requests use [service principal authentication](how-to-setup-authentication.md#use-service-principal-authentication). +- A service principal __authentication token__. Follow the steps in [Retrieve a service principal authentication token](./how-to-manage-rest.md#retrieve-a-service-principal-authentication-token) to retrieve this token. +- The __curl__ utility. The curl program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. ++ > [!TIP] + > In PowerShell, `curl` is an alias for `Invoke-WebRequest` and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`. + > + > While it is possible to call the REST API from PowerShell, the examples in this article assume you are using Bash. ++- The [jq](https://stedolan.github.io/jq/) utility for processing JSON. This utility is used to extract values from the JSON documents that are returned from REST API calls. ++++### Clone the examples repository ++The code snippets in this article are based on examples in the [Azure ML examples GitHub repo](https://github.com/azure/azureml-examples). To clone the repository to your development environment, use the following command: ++```bash +git clone --depth 1 https://github.com/Azure/azureml-examples +``` ++> [!TIP] +> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation. ++## Example job ++The examples in this article use the iris flower dataset to train an MLFlow model. 
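The training script itself isn't reproduced in this article; it lives in the examples repository. As a simplified, non-authoritative sketch, a script of this kind might look like the following. The argument names (`--iris-csv`, `--learning-rate`, `--boosting`) match the examples described elsewhere in these docs, while the label column name and the train/test split are assumptions for illustration:

```python
# main.py - simplified sketch of an iris training script with MLflow autologging.
# Assumes lightgbm, mlflow, pandas, and scikit-learn are installed, and that the
# CSV has a "species" label column (an assumption; adjust to your data).
import argparse

import lightgbm as lgb
import mlflow
import pandas as pd
from sklearn.model_selection import train_test_split

parser = argparse.ArgumentParser()
parser.add_argument("--iris-csv", type=str)
parser.add_argument("--learning-rate", type=float, default=0.1)
parser.add_argument("--boosting", type=str, default="gbdt")
args = parser.parse_args()

# Autologging captures parameters, metrics, and the trained model as an MLflow model.
mlflow.autolog()

df = pd.read_csv(args.iris_csv)
X, y = df.drop("species", axis=1), df["species"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = lgb.LGBMClassifier(learning_rate=args.learning_rate, boosting_type=args.boosting)
model.fit(X_train, y_train, eval_set=[(X_test, y_test)])
print("test accuracy:", model.score(X_test, y_test))
```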
++## Train in the cloud ++When training in the cloud, you must connect to your Azure Machine Learning workspace and select a compute resource that will be used to run the training job. ++### 1. Connect to the workspace ++> [!TIP] +> Use the tabs below to select the method you want to use to train a model. Selecting a tab will automatically switch all the tabs in this article to the same tab. You can select another tab at any time. ++# [Python SDK](#tab/python) ++To connect to the workspace, you need identifier parameters - a subscription, resource group, and workspace name. You'll use these details in the `MLClient` from the `azure.ai.ml` namespace to get a handle to the required Azure Machine Learning workspace. To authenticate, you use the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true). Check this [example](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace. ++```python +#import required libraries +from azure.ai.ml import MLClient +from azure.identity import DefaultAzureCredential ++#Enter details of your AzureML workspace +subscription_id = '<SUBSCRIPTION_ID>' +resource_group = '<RESOURCE_GROUP>' +workspace = '<AZUREML_WORKSPACE_NAME>' ++#connect to the workspace +ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace) +``` ++# [Azure CLI](#tab/azurecli) ++When using the Azure CLI, you need identifier parameters - a subscription, resource group, and workspace name. While you can specify these parameters for each command, you can also set defaults that will be used for all the commands. Use the following commands to set default values. Replace `<subscription ID>`, `<AzureML workspace name>`, and `<resource group>` with the values for your configuration: ++```azurecli +az account set --subscription <subscription ID> +az configure --defaults workspace=<AzureML workspace name> group=<resource group> +``` ++# [REST API](#tab/restapi) ++The REST API examples in this article use `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, `$LOCATION`, and `$WORKSPACE` placeholders. Replace the placeholders with your own values as follows: ++* `$SUBSCRIPTION_ID`: Your Azure subscription ID. +* `$RESOURCE_GROUP`: The Azure resource group that contains your workspace. +* `$LOCATION`: The Azure region where your workspace is located. +* `$WORKSPACE`: The name of your Azure Machine Learning workspace. +* `$COMPUTE_NAME`: The name of your Azure Machine Learning compute cluster. ++Administrative REST requests a [service principal authentication token](how-to-manage-rest.md#retrieve-a-service-principal-authentication-token). You can retrieve a token with the following command. The token is stored in the `$TOKEN` environment variable: ++```azurecli +TOKEN=$(az account get-access-token --query accessToken -o tsv) +``` ++The service provider uses the `api-version` argument to ensure compatibility. The `api-version` argument varies from service to service. Set the API version as a variable to accommodate future versions: +++++### 2. Create a compute resource for training ++An AzureML compute cluster is a fully managed compute resource that can be used to run the training job. In the following examples, a compute cluster named `cpu-compute` is created. 
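The tab-specific snippets follow. For the Python SDK, the actual cell is pulled in from the referenced notebook; a minimal sketch of creating such a cluster with SDK v2 might look like this, where the VM size and scale settings are illustrative assumptions rather than recommendations:

```python
# Sketch of creating an AmlCompute cluster with the Python SDK v2.
# The cluster name matches the surrounding text; VM size and scale settings are assumptions.
from azure.ai.ml.entities import AmlCompute

cpu_compute = AmlCompute(
    name="cpu-compute",
    size="Standard_DS3_v2",            # illustrative VM size
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=120,   # seconds before idle nodes scale down
)

# ml_client is the MLClient handle created when connecting to the workspace.
ml_client.compute.begin_create_or_update(cpu_compute)
```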
++# [Python SDK](#tab/python) ++[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/configuration.ipynb?name=create-cpu-compute)] ++# [Azure CLI](#tab/azurecli) ++```azurecli +az ml compute create -n cpu-cluster --type amlcompute --min-instances 0 --max-instances 4 +``` ++# [REST API](#tab/restapi) ++```bash +curl -X PUT \ + "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/computes/$COMPUTE_NAME?api-version=$API_VERSION" \ + -H "Authorization:Bearer $TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "location": "'$LOCATION'", + "properties": { + "computeType": "AmlCompute", + "properties": { + "vmSize": "Standard_D2_V2", + "vmPriority": "Dedicated", + "scaleSettings": { + "maxNodeCount": 4, + "minNodeCount": 0, + "nodeIdleTimeBeforeScaleDown": "PT30M" + } + } + } +}' +``` ++> [!TIP] +> While a response is returned after a few seconds, this only indicates that the creation request has been accepted. It can take several minutes for the cluster creation to finish. ++++### 4. Submit the training job ++# [Python SDK](#tab/python) ++To run this script, you'll use a `command`. The command will be run by submitting it as a `job` to Azure ML. ++[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=create-command)] ++[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-command)] ++In the above examples, you configured: +- `code` - path where the code to run the command is located +- `command` - command that needs to be run +- `environment` - the environment needed to run the training script. In this example, we use a curated or ready-made environment provided by AzureML called `AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu`. We use the latest version of this environment by using the `@latest` directive. You can also use custom environments by specifying a base docker image and specifying a conda yaml on top of it. +- `inputs` - dictionary of inputs using name value pairs to the command. The key is a name for the input within the context of the job and the value is the input value. Inputs are referenced in the `command` using the `${{inputs.<input_name>}}` expression. To use files or folders as inputs, you can use the `Input` class. ++For more information, see the [reference documentation](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command). ++When you submit the job, a URL is returned to the job status in the AzureML studio. Use the studio UI to view the job progress. You can also use `returned_job.status` to check the current status of the job. ++# [Azure CLI](#tab/azurecli) ++The `az ml job create` command used in this example requires a YAML job definition file. The contents of the file used in this example are: ++In the above, you configured: +- `code` - path where the code to run the command is located +- `command` - command that needs to be run +- `inputs` - dictionary of inputs using name value pairs to the command. The key is a name for the input within the context of the job and the value is the input value. Inputs are referenced in the `command` using the `${{inputs.<input_name>}}` expression. +- `environment` - the environment needed to run the training script. In this example, we use a curated or ready-made environment provided by AzureML called `AzureML-sklearn-0.24-ubuntu18.04-py37-cpu`. 
We use the latest version of this environment by using the `@latest` directive. You can also use custom environments by specifying a base docker image and specifying a conda yaml on top of it. +To submit the job, use the following command. The run ID (name) of the training job is stored in the `$run_id` variable: ++```azurecli +run_id=$(az ml job create -f jobs/single-step/scikit-learn/iris/job.yml --query name -o tsv) +``` ++You can use the stored run ID to return information about the job. The `--web` parameter opens the AzureML studio web UI where you can drill into details on the job: +++# [REST API](#tab/restapi) ++As part of job submission, the training scripts and data must be uploaded to a cloud storage location that your AzureML workspace can access. These examples don't cover the uploading process. For information on using the Blob REST API to upload files, see the [Put Blob](/rest/api/storageservices/put-blob) reference. ++1. Create a versioned reference to the training data. In this example, the data is located at `https://azuremlexamples.blob.core.windows.net/datasets/iris.csv`. In your workspace, you might upload the file to the default storage for your workspace: ++ ```bash + DATA_VERSION=$RANDOM + curl --location --request PUT "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/data/iris-data/versions/$DATA_VERSION?api-version=$API_VERSION" \ + --header "Authorization: Bearer $TOKEN" \ + --header "Content-Type: application/json" \ + --data-raw "{ + \"properties\": { + \"description\": \"Iris dataset\", + \"dataType\": \"uri_file\", + \"dataUri\": \"https://azuremlexamples.blob.core.windows.net/datasets/iris.csv\" + } + }" + ``` ++1. Register a versioned reference to the training script for use with a job. In this case, the script would be located at `https://azuremlexamples.blob.core.windows.net/testjob`. This `testjob` is the folder in Blob storage that contains the training script and any dependencies needed by the script. In the following example, the ID of the versioned training code is returned and stored in the `$TRAIN_CODE` variable: ++ ```bash + TRAIN_CODE=$(curl --location --request PUT "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/codes/train-lightgbm/versions/1?api-version=$API_VERSION" \ + --header "Authorization: Bearer $TOKEN" \ + --header "Content-Type: application/json" \ + --data-raw "{ + \"properties\": { + \"description\": \"Train code\", + \"codeUri\": \"https://larrystore0912.blob.core.windows.net/azureml-blobstore-c8e832ae-e49c-4084-8d28-5e6c88502655/testjob\" + } + }" | jq -r '.id') + ``` ++1. Create the environment that the cluster will use to run the training script. In this example, we use a curated or ready-made environment provided by AzureML called `AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu`. The following command retrieves a list of the environment versions, with the newest being at the top of the collection. `jq` is used to retrieve the ID of the latest (`[0]`) version, which is then stored into the `$ENVIRONMENT` variable. 
++ ```bash + ENVIRONMENT=$(curl --location --request GET "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/environments/AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu/versions?api-version=$API_VERSION" --header "Authorization: Bearer $TOKEN" | jq -r .value[0].id) + ``` ++1. Finally, submit the job. The following example shows how to submit the job, reference the training code ID, environment ID, URL for the input data, and the ID of the compute cluster. The job output location will be stored in the `$JOB_OUTPUT` variable: ++ > [!TIP] + > The job name must be unique. In this example, `uuidgen` is used to generate a unique value for the name. ++ ```bash + run_id=$(uuidgen) + curl --location --request PUT "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/jobs/$run_id?api-version=$API_VERSION" \ + --header "Authorization: Bearer $TOKEN" \ + --header "Content-Type: application/json" \ + --data-raw "{ + \"properties\": { + \"jobType\": \"Command\", + \"codeId\": \"$TRAIN_CODE\", + \"command\": \"python main.py --iris-csv \$AZURE_ML_INPUT_iris\", + \"environmentId\": \"$ENVIRONMENT\", + \"inputs\": { + \"iris\": { + \"jobInputType\": \"uri_file\", + \"uri\": \"https://azuremlexamples.blob.core.windows.net/datasets/iris.csv\" + } + }, + \"experimentName\": \"lightgbm-iris\", + \"computeId\": \"/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/computes/$COMPUTE_NAME\" + } + }" + ``` +++++## Register the trained model ++The following examples demonstrate how to register a model in your AzureML workspace. ++# [Python SDK](#tab/python) ++> [!TIP] +> The `name` property returned by the training job is used as part of the path to the model. ++```python +from azure.ai.ml.entities import Model +from azure.ai.ml.constants import ModelType ++run_model = Model( + path="azureml://jobs/{}/outputs/artifacts/paths/model/".format(returned_job.name), + name="run-model-example", + description="Model created from run.", + type=ModelType.MLFLOW +) ++ml_client.models.create_or_update(run_model) +``` ++# [Azure CLI](#tab/azurecli) ++> [!TIP] +> The name (stored in the `$run_id` variable) is used as part of the path to the model. +++# [REST API](#tab/restapi) ++> [!TIP] +> The name (stored in the `$run_id` variable) is used as part of the path to the model. ++```bash +curl --location --request PUT "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/models/sklearn/versions/1?api-version=$API_VERSION" \ +--header "Authorization: Bearer $TOKEN" \ +--header "Content-Type: application/json" \ +--data-raw "{ + \"properties\": { + \"modelType\": \"mlflow_model\", + \"modelUri\":\"runs:/$run_id/model\" + } +}" +``` ++++## Next steps ++Now that you have a trained model, learn [how to deploy it using an online endpoint](how-to-deploy-managed-online-endpoints.md). ++For more examples, see the [AzureML examples](https://github.com/azure/azureml-examples) GitHub repository. 
++For more information on the Azure CLI commands, Python SDK classes, or REST APIs used in this article, see the following reference documentation: ++* [Azure CLI `ml` extension](/cli/azure/ml) +* [Python SDK](/python/api/azure-ai-ml/azure.ai.ml) +* [REST API](/rest/api/azureml/) |
machine-learning | How To Train Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md | - Title: Train models with the Azure ML Python SDK v2 (preview)- -description: Configure and submit Azure Machine Learning jobs to train your models with SDK v2. ------ Previously updated : 06/10/2022-----# Train models with the Azure ML Python SDK v2 (preview) --> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] -> * [v1](v1/how-to-attach-compute-targets.md) -> * [v2 (preview)](how-to-train-sdk.md) ---> [!IMPORTANT] -> SDK v2 is currently in public preview. -> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). --In this article, you learn how to configure and submit Azure Machine Learning jobs to train your models. Snippets of code explain the key parts of configuration and submission of a training job. Then use one of the [example notebooks](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk) to find the full end-to-end working examples. --## Prerequisites --* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today -* The Azure Machine Learning [SDK v2 for Python](https://aka.ms/sdk-v2-install) -* An Azure Machine Learning workspace --### Clone examples repository --To run the training examples, first clone the examples repository and change into the `sdk` directory: --```bash -git clone --depth 1 https://github.com/Azure/azureml-examples -cd azureml-examples/sdk -``` --> [!TIP] -> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation. --## Start on your local machine --Start by running a script, which trains a model using `lightgbm`. The script file is available [here](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/src/main.py). The script needs three inputs --* _input data_: You'll use data from a web location for your run - [web location](https://azuremlexamples.blob.core.windows.net/datasets/iris.csv). In this example, we're using a file in a remote location for brevity, but you can use a local file as well. -* _learning-rate_: You'll use a learning rate of _0.9_ -* _boosting_: You'll use the Gradient Boosting _gdbt_ --Run this script file as follows --```bash -cd jobs/single-step/lightgbm/iris --python src/main.py --iris-csv https://azuremlexamples.blob.core.windows.net/datasets/iris.csv --learning-rate 0.9 --boosting gbdt -``` --The output expected is as follows: --```terminal -2022/04/21 15:02:44 INFO mlflow.tracking.fluent: Autologging successfully enabled for lightgbm. -2022/04/21 15:02:44 INFO mlflow.tracking.fluent: Autologging successfully enabled for sklearn. -2022/04/21 15:02:45 INFO mlflow.utils.autologging_utils: Created MLflow autologging run with ID 'a1d5f652796e4d88961176166de52253', which will track hyperparameters, performance metrics, model artifacts, and lineage information for the current lightgbm workflow -lightgbm\engine.py:177: UserWarning: Found `num_iterations` in params. 
Will use it instead of argument -[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000164 seconds. -You can set `force_col_wise=true` to remove the overhead. -[LightGBM] [Warning] No further splits with positive gain, best gain: -inf -[LightGBM] [Warning] No further splits with positive gain, best gain: -inf -[LightGBM] [Warning] No further splits with positive gain, best gain: -inf -``` --## Move to the cloud --Now that the local run works, move this run to an Azure Machine Learning workspace. To run this on Azure ML, you need: --* A workspace to run -* A compute on which to run it -* An environment on the compute to ensure you have the required packages to run your script --Let us tackle these steps below --### 1. Connect to the workspace --To connect to the workspace, you need identifier parameters - a subscription, resource group and workspace name. You'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. To authenticate, you use the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true). Check this [example](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace. --```python -#import required libraries -from azure.ai.ml import MLClient -from azure.identity import DefaultAzureCredential --#Enter details of your AzureML workspace -subscription_id = '<SUBSCRIPTION_ID>' -resource_group = '<RESOURCE_GROUP>' -workspace = '<AZUREML_WORKSPACE_NAME>' --#connect to the workspace -ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace) -``` --### 2. Create compute --You'll create a compute called `cpu-cluster` for your job, with this code: ---[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/configuration.ipynb?name=create-cpu-compute)] ---### 3. Environment to run the script --To run your script on `cpu-cluster`, you need an environment, which has the required packages and dependencies to run your script. There are a few options available for environments: --* Use a curated environment in your workspace - Azure ML offers several curated [environments](https://ml.azure.com/environments), which cater to various needs. -* Use a custom environment - Azure ML allows you to create your own environment using - * A docker image - * A base docker image with a conda YAML to customize further - * A docker build context -- Check this [example](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/assets/environment/environment.ipynb) on how to create custom environments. --You'll use a curated environment provided by Azure ML for `lightgm` called `AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu` --### 4. Submit a job to run the script --To run this script, you'll use a `command`. The command will be run by submitting it as a `job` to Azure ML. 
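The actual job definition is in the notebook cells referenced below. As a rough sketch only, a command job for this example might be configured as follows; the environment and compute names come from the surrounding text, and the remaining values are illustrative assumptions:

```python
# Sketch of a command job for the lightgbm iris example (SDK v2).
# Environment, compute, and argument names follow the surrounding text;
# input values and structure are illustrative assumptions.
from azure.ai.ml import Input, command

command_job = command(
    code="./src",  # folder containing main.py
    command=(
        "python main.py "
        "--iris-csv ${{inputs.iris_csv}} "
        "--learning-rate ${{inputs.learning_rate}} "
        "--boosting ${{inputs.boosting}}"
    ),
    environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest",
    inputs={
        "iris_csv": Input(
            type="uri_file",
            path="https://azuremlexamples.blob.core.windows.net/datasets/iris.csv",
        ),
        "learning_rate": 0.9,
        "boosting": "gbdt",
    },
    compute="cpu-cluster",
)

# Submit the job to the workspace; ml_client is the MLClient created earlier.
returned_job = ml_client.jobs.create_or_update(command_job)
print(returned_job.name)
```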
--[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=create-command)] --[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-command)] ---In the above, you configured: -- `code` - path where the code to run the command is located-- `command` - command that needs to be run-- `inputs` - dictionary of inputs using name value pairs to the command. The key is a name for the input within the context of the job and the value is the input value. Inputs are referenced in the `command` using the `${{inputs.<input_name>}}` expression. To use files or folders as inputs, you can use the `Input` class.--For more details, refer to the [reference documentation](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command). --## Improve the model using hyperparameter sweep --Now that you have run a job on Azure, let us make it better using Hyperparameter tuning. Also called hyperparameter optimization, this is the process of finding the configuration of hyperparameters that results in the best performance. Azure Machine Learning provides a `sweep` function on the `command` to do hyperparameter tuning. --To perform a sweep, there needs to be input(s) against which the sweep needs to be performed. These inputs can have a discrete or continuous value. The `sweep` function will run the `command` multiple times using different combination of input values specified. Each input is a dictionary of name value pairs. The key is the name of the hyperparameter and the value is the parameter expression. --Let us improve our model by sweeping on `learning_rate` and `boosting` inputs to the script. In the previous step, you used a specific value for these parameters, but now you'll use a range or choice of values. --[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=search-space)] ---Now that you've defined the parameters, run the sweep --[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=configure-sweep)] --[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-sweep)] ---As seen above, the `sweep` function allows user to configure the following key aspects: --* `sampling_algorithm`- The hyperparameter sampling algorithm to use over the search_space. Allowed values are `random`, `grid` and `bayesian`. -* `objective` - the objective of the sweep - * `primary_metric` - The name of the primary metric reported by each trial job. The metric must be logged in the user's training script using `mlflow.log_metric()` with the same corresponding metric name. - * `goal` - The optimization goal of the objective.primary_metric. The allowed values are `maximize` and `minimize`. -* `compute` - Name of the compute target to execute the job on. -* `limits` - Limits for the sweep job --Once this job completes, you can look at the metrics and the job details in the [Azure ML Portal](https://ml.azure.com/). The job details page will identify the best performing child run. -- -## Distributed training --Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed training. 
Let us look at how to configure a command for distribution for the `command_job` you created earlier --```python -# Distribute using PyTorch -from azure.ai.ml import PyTorchDistribution -command_job.distribution = PyTorchDistribution(process_count_per_instance=4) --# Distribute using TensorFlow -from azure.ai.ml import TensorFlowDistribution -command_job.distribution = TensorFlowDistribution(parameter_server_count=1, worker_count=2) --# Distribute using MPI -from azure.ai.ml import MpiDistribution -job.distribution = MpiDistribution(process_count_per_instance=3) -``` --## Next steps --Try these next steps to learn how to use the Azure Machine Learning SDK (v2) for Python: --* Use pipelines with the Azure ML Python SDK (v2) |
machine-learning | How To Train With Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-rest.md | - Title: 'Train models with REST (preview)'- -description: Learn how to train models and create jobs with REST APIs. ------ Previously updated : 07/28/2022-----# Train models with REST (preview) --Learn how to use the Azure Machine Learning REST API to create and manage training jobs (preview). ----The REST API uses standard HTTP verbs to create, retrieve, update, and delete resources. The REST API works with any language or tool that can make HTTP requests. REST's straightforward structure makes it a good choice in scripting environments and for MLOps automation. --In this article, you learn how to use the new REST APIs to: --> [!div class="checklist"] -> * Create machine learning assets -> * Create a basic training job -> * Create a hyperparameter tuning sweep job --## Prerequisites --- An **Azure subscription** for which you have administrative rights. If you don't have such a subscription, try the [free or paid personal subscription](https://azure.microsoft.com/free/).-- An [Azure Machine Learning workspace](quickstart-create-resources.md).-- A service principal in your workspace. Administrative REST requests use [service principal authentication](how-to-setup-authentication.md#use-service-principal-authentication).-- A service principal authentication token. Follow the steps in [Retrieve a service principal authentication token](./how-to-manage-rest.md#retrieve-a-service-principal-authentication-token) to retrieve this token. -- The **curl** utility. The **curl** program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. In PowerShell, **curl** is an alias for **Invoke-WebRequest** and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`. --## Azure Machine Learning jobs -A job is a resource that specifies all aspects of a computation job. It aggregates three things: --- What to run?-- How to run it?-- Where to run it?--There are many ways to submit an Azure Machine Learning job including the SDK, Azure CLI, and visually with the studio. The following example submits a LightGBM training job with the REST API. --## Create machine learning assets --First, set up your Azure Machine Learning assets to configure your job. --In the following REST API calls, we use `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, `$LOCATION`, and `$WORKSPACE` as placeholders. Replace the placeholders with your own values. --Administrative REST requests a [service principal authentication token](how-to-manage-rest.md#retrieve-a-service-principal-authentication-token). Replace `$TOKEN` with your own value. You can retrieve this token with the following command: --```azurecli -TOKEN=$(az account get-access-token --query accessToken -o tsv) -``` --The service provider uses the `api-version` argument to ensure compatibility. The `api-version` argument varies from service to service. Set the API version as a variable to accommodate future versions: ---### Compute --Running machine learning jobs requires compute resources. 
You can list your workspace's compute resources: --```bash -curl "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/computes?api-version=$API_VERSION" \ header "Authorization: Bearer $TOKEN"-``` --For this example, we use an existing compute cluster named `cpu-cluster`. We set the compute name as a variable for encapsulation: --```bash -COMPUTE_NAME="cpu-cluster" -``` --> [!TIP] -> You can [create or overwrite a named compute resource with a PUT request](./how-to-manage-rest.md#create-and-modify-resources-using-put-and-post-requests). --### Environment --The LightGBM example needs to run in a LightGBM environment. Create the environment with a PUT request. Use a docker image from Microsoft Container Registry. --You can configure the docker image with `Docker` and add conda dependencies with `condaFile`: ---### Datastore --The training job needs to run on data, so you need to specify a datastore. In this example, you get the default datastore and Azure Storage account for your workspace. Query your workspace with a GET request to return a JSON file with the information. --You can use the tool [jq](https://stedolan.github.io/jq/) to parse the JSON result and get the required values. You can also use the Azure portal to find the same information. --```bash -response=$(curl --location --request GET "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/datastores?api-version=$API_VERSION&isDefault=true" \ header "Authorization: Bearer $TOKEN")--AZURE_STORAGE_ACCOUNT=$(echo $response | jq '.value[0].properties.contents.accountName') -AZUREML_DEFAULT_DATASTORE=$(echo $response | jq '.value[0].name') -AZUREML_DEFAULT_CONTAINER=$(echo $response | jq '.value[0].properties.contents.containerName') -AZURE_STORAGE_KEY=$(az storage account keys list --account-name $AZURE_STORAGE_ACCOUNT | jq '.[0].value') -``` --### Data --Now that you have the datastore, you can create a dataset. For this example, use the common dataset `iris.csv`. ---### Code --Now that you have the dataset and datastore, you can upload the training script that will run on the job. Use the Azure Storage CLI to upload a blob into your default container. You can also use other methods to upload, such as the Azure portal or Azure Storage Explorer. ---```azurecli -az storage blob upload-batch -d $AZUREML_DEFAULT_CONTAINER/src \ - -s jobs/train/lightgbm/iris/src --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_KEY -``` --Once you upload your code, you can specify your code with a PUT request and reference the url through `codeUri`. ---## Submit a training job --Now that your assets are in place, you can run the LightGBM job, which outputs a trained model and metadata. You need the following information to configure the training job: --- **run_id**: [Optional] The name of the job, which must be unique across all jobs. Unless a name is specified either in the YAML file via the `name` field or the command line via `--name/-n`, a GUID/UUID is automatically generated and used for the name.-- **jobType**: The job type. For a basic training job, use `Command`.-- **codeId**: The ARMId reference of the name and version of your training script.-- **command**: The command to execute. Input data can be written into the command and can be referred to with data binding. 
-- **environmentId**: The ARMId reference of the name and version of your environment.-- **inputDataBindings**: Data binding can help you reference input data. Create an environment variable and the name of the binding will be added to AZURE_ML_INPUT_, which you can refer to in `command`. You can directly reference a public blob url file as a `UriFile` through the `uri` parameter. -- **experimentName**: [Optional] Tags the job to help you organize jobs in Azure Machine Learning studio. Each job's run record is organized under the corresponding experiment in the studio "Experiment" tab. If omitted, tags default to the name of the working directory when the job is created.-- **computeId**: The `computeId` specifies the compute target name through an ARMId.--Use the following commands to submit the training job: ---## Submit a hyperparameter sweep job --Azure Machine Learning also lets you efficiently tune training hyperparameters. You can create a hyperparameter tuning suite, with the REST APIs. For more information on Azure Machine Learning's hyperparameter tuning options, see [Hyperparameter tuning a model](how-to-tune-hyperparameters.md). Specify the hyperparameter tuning parameters to configure the sweep: --- **jobType**: The job type. For a sweep job, it will be `Sweep`. -- **algorithm**: The sampling algorithm class - class "random" is often a good place to start. See the sweep job [schema](https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json) for the enumeration of options. -- **trial**: The command job configuration for each trial to be run. -- **objective**: The `primaryMetric` is the optimization metric, which must match the name of a metric logged from the training code. The `goal` specifies the direction (minimize or maximize). See the [schema](https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json) for the full enumeration of options. -- **searchSpace**: A generic object of hyperparameters to sweep over. The key is a name for the hyperparameter, for example, `learning_rate`. The value is the hyperparameter distribution. See the [schema](https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json) for the enumeration of options.-- **Limits**: `JobLimitsType` of type `sweep` is an object definition of the sweep job limits parameters. `maxTotalTrials` [Optional] is the maximum number of individual trials to run. `maxConcurrentTrials` is the maximum number of trials to run concurrently on your compute cluster.--To create a sweep job with the same LightGBM example, use the following commands: ---## Next steps --Now that you have a trained model, learn [how to deploy your model](how-to-deploy-managed-online-endpoints.md). |
machine-learning | How To Train With Ui | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md | -There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs) with the CLI (v2)](how-to-train-cli.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with the job creation UI in Azure Machine Learning studio. +There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs)](how-to-train-model.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with the job creation UI in Azure Machine Learning studio. ## Prerequisites There are many ways to create a training job with Azure Machine Learning. You ca * An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md). -* Understanding of what a job is in Azure Machine Learning. See [how to train models with the CLI (v2)](how-to-train-cli.md). +* Understanding of what a job is in Azure Machine Learning. See [how to train models](how-to-train-model.md). ## Get started To launch the job, choose **Create**. Once the job is created, Azure will show y * [Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md). -* [Train models (create jobs) with the CLI (v2)](how-to-train-cli.md) +* [Train models (create jobs) with the CLI, SDK, and REST API](how-to-train-model.md) |
machine-learning | How To Use Mlflow Cli Runs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md | Copy this code into the file: ### Submitting the job -Use the [Azure Machine Learning CLI (v2)](how-to-train-cli.md) to submit a remote run. When using the Azure Machine Learning CLI (v2), the MLflow tracking URI and experiment name are set automatically and directs the logging from MLflow to your workspace. Learn more about [logging Azure Machine Learning CLI (v2) experiments with MLflow](how-to-train-cli.md#model-tracking-with-mlflow) +Use [Azure Machine Learning](how-to-train-model.md) to submit a remote run. When using the Azure Machine Learning CLI (v2), the MLflow tracking URI and experiment name are set automatically and direct the logging from MLflow to your workspace. Learn more about [logging Azure Machine Learning experiments with MLflow](how-to-use-mlflow-cli-runs.md) Create a YAML file with your job definition in a `job.yml` file. This file should be created outside the `src` directory. Copy this code into the file: |
machine-learning | Overview What Happened To Workbench | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-happened-to-workbench.md | Start with [Quickstart: Get started with Azure Machine Learning](quickstart-crea + [Use a Jupyter notebook to train image classification models](tutorial-train-deploy-notebook.md) + [Use automated machine learning](tutorial-designer-automobile-price-train-score.md) + [Use the designer's drag & drop capabilities](tutorial-first-experiment-automated-ml.md) - + [Use the ML extension to the CLI](how-to-train-cli.md) + + [Train models](how-to-train-model.md) |
machine-learning | Overview What Is Azure Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md | Also, Azure Machine Learning includes features for monitoring and auditing: ## Next steps Start using Azure Machine Learning:-* [Set up an Azure Machine Learning workspace](quickstart-create-resources.md) -* [Tutorial: Build a first machine learning project](tutorial-1st-experiment-hello-world.md) -* [Preview: Run model training jobs with the v2 CLI](how-to-train-cli.md) +- [Set up an Azure Machine Learning workspace](quickstart-create-resources.md) +- [Tutorial: Build a first machine learning project](tutorial-1st-experiment-hello-world.md) +- [How to run training jobs](how-to-train-model.md) |
machine-learning | Reference Automl Images Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md | Example of a JSONL file for Instance Segmentation:  -## Data format for inference +## Data schema for online scoring -In this section, we document the input data format required to make predictions when using a deployed model. Any aforementioned image format is accepted with content type `application/octet-stream`. +In this section, we document the input data format required to make predictions using a deployed model. ### Input format -The following is the input format needed to generate predictions on any task using task-specific model endpoint. After we [deploy the model](how-to-auto-train-image-models.md#register-and-deploy-model), we can use the following code snippet to get predictions for all tasks. +The following is the input format needed to generate predictions on any task using task-specific model endpoint. -```python -# input image for inference -sample_image = './test_image.jpg' -# load image data -data = open(sample_image, 'rb').read() -# set the content type -headers = {'Content-Type': 'application/octet-stream'} -# if authentication is enabled, set the authorization header -headers['Authorization'] = f'Bearer {key}' -# make the request and display the response -response = requests.post(scoring_uri, data, headers=headers) +```json +{ + "input_data": { + "columns": [ + "image" + ], + "data": [ + "image_in_base64_string_format" + ] + } +} ```++This json is a dictionary with outer key `input_data` and inner keys `columns`, `data` as described in the following table. The endpoint accepts a json string in the above format and converts it into a dataframe of samples required by the scoring script. Each input image in the `request_json["input_data"]["data"]` section of the json is a [base64 encoded string](https://docs.python.org/3/library/base64.html#base64.encodebytes). +++| Key | Description | +| -- |-| +| `input_data`<br> (outer key) | It is an outer key in json request. `input_data` is a dictionary that accepts input image samples <br>`Required, Dictionary` | +| `columns`<br> (inner key) | Column names to use to create dataframe. It accepts only one column with `image` as column name.<br>`Required, List` | +| `data`<br> (inner key) | List of base64 encoded images <br>`Required, List`| +++After we [deploy the mlflow model](how-to-auto-train-image-models.md#register-and-deploy-model), we can use the following code snippet to get predictions for all tasks. ++[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_inference_request)] ++[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=dump_inference_request)] ++[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=invoke_inference)] + ### Output format -Predictions made on model endpoints follow different structure depending on the task type. This section explores the output data formats for multi-class, multi-label image classification, object detection, and instance segmentation tasks. 
+Predictions made on model endpoints follow different structure depending on the task type. This section explores the output data formats for multi-class, multi-label image classification, object detection, and instance segmentation tasks. ++The following schemas are applicable when the input request contains one image. #### Image classification Endpoint for image classification returns all the labels in the dataset and their probability scores for the input image in the following format. ```json-{ - "filename":"/tmp/tmppjr4et28", - "probs":[ - 2.098e-06, - 4.783e-08, - 0.999, - 8.637e-06 - ], - "labels":[ - "can", - "carton", - "milk_bottle", - "water_bottle" - ] -} +[ + { + "filename": "/tmp/tmppjr4et28", + "probs": [ + 2.098e-06, + 4.783e-08, + 0.999, + 8.637e-06 + ], + "labels": [ + "can", + "carton", + "milk_bottle", + "water_bottle" + ] + } +] ``` #### Image classification multi-label Endpoint for image classification returns all the labels in the dataset and thei For image classification multi-label, model endpoint returns labels and their probabilities. ```json-{ - "filename":"/tmp/tmpsdzxlmlm", - "probs":[ - 0.997, - 0.960, - 0.982, - 0.025 - ], - "labels":[ - "can", - "carton", - "milk_bottle", - "water_bottle" - ] -} +[ + { + "filename": "/tmp/tmpsdzxlmlm", + "probs": [ + 0.997, + 0.960, + 0.982, + 0.025 + ], + "labels": [ + "can", + "carton", + "milk_bottle", + "water_bottle" + ] + } +] ``` #### Object detection For image classification multi-label, model endpoint returns labels and their pr Object detection model returns multiple boxes with their scaled top-left and bottom-right coordinates along with box label and confidence score. ```json-{ - "filename":"/tmp/tmpdkg2wkdy", - "boxes":[ - { - "box":{ - "topX":0.224, - "topY":0.285, - "bottomX":0.399, - "bottomY":0.620 - }, - "label":"milk_bottle", - "score":0.937 - }, - { - "box":{ - "topX":0.664, - "topY":0.484, - "bottomX":0.959, - "bottomY":0.812 +[ + { + "filename": "/tmp/tmpdkg2wkdy", + "boxes": [ + { + "box": { + "topX": 0.224, + "topY": 0.285, + "bottomX": 0.399, + "bottomY": 0.620 + }, + "label": "milk_bottle", + "score": 0.937 },- "label":"can", - "score":0.891 - }, - { - "box":{ - "topX":0.423, - "topY":0.253, - "bottomX":0.632, - "bottomY":0.725 + { + "box": { + "topX": 0.664, + "topY": 0.484, + "bottomX": 0.959, + "bottomY": 0.812 + }, + "label": "can", + "score": 0.891 },- "label":"water_bottle", - "score":0.876 - } - ] -} + { + "box": { + "topX": 0.423, + "topY": 0.253, + "bottomX": 0.632, + "bottomY": 0.725 + }, + "label": "water_bottle", + "score": 0.876 + } + ] + } +] ``` #### Instance segmentation In instance segmentation, output consists of multiple boxes with their scaled top-left and bottom-right coordinates, labels, confidence scores, and polygons (not masks). Here, the polygon values are in the same format that we discussed in the schema section. 
```json-{ - "filename":"/tmp/tmpi8604s0h", - "boxes":[ - { - "box":{ - "topX":0.679, - "topY":0.491, - "bottomX":0.926, - "bottomY":0.810 - }, - "label":"can", - "score":0.992, - "polygon":[ - [ - 0.82, 0.811, 0.771, 0.810, 0.758, 0.805, 0.741, 0.797, 0.735, 0.791, 0.718, 0.785, 0.715, 0.778, 0.706, 0.775, 0.696, 0.758, 0.695, 0.717, 0.698, 0.567, 0.705, 0.552, 0.706, 0.540, 0.725, 0.520, 0.735, 0.505, 0.745, 0.502, 0.755, 0.493 - ] - ] - }, - { - "box":{ - "topX":0.220, - "topY":0.298, - "bottomX":0.397, - "bottomY":0.601 - }, - "label":"milk_bottle", - "score":0.989, - "polygon":[ - [ - 0.365, 0.602, 0.273, 0.602, 0.26, 0.595, 0.263, 0.588, 0.251, 0.546, 0.248, 0.501, 0.25, 0.485, 0.246, 0.478, 0.245, 0.463, 0.233, 0.442, 0.231, 0.43, 0.226, 0.423, 0.226, 0.408, 0.234, 0.385, 0.241, 0.371, 0.238, 0.345, 0.234, 0.335, 0.233, 0.325, 0.24, 0.305, 0.586, 0.38, 0.592, 0.375, 0.598, 0.365 - ] - ] - }, - { - "box":{ - "topX":0.433, - "topY":0.280, - "bottomX":0.621, - "bottomY":0.679 - }, - "label":"water_bottle", - "score":0.988, - "polygon":[ - [ - 0.576, 0.680, 0.501, 0.680, 0.475, 0.675, 0.460, 0.625, 0.445, 0.630, 0.443, 0.572, 0.440, 0.560, 0.435, 0.515, 0.431, 0.501, 0.431, 0.433, 0.433, 0.426, 0.445, 0.417, 0.456, 0.407, 0.465, 0.381, 0.468, 0.327, 0.471, 0.318 - ] - ] - } - ] -} +[ + { + "filename": "/tmp/tmpi8604s0h", + "boxes": [ + { + "box": { + "topX": 0.679, + "topY": 0.491, + "bottomX": 0.926, + "bottomY": 0.810 + }, + "label": "can", + "score": 0.992, + "polygon": [ + [ + 0.82, 0.811, 0.771, 0.810, 0.758, 0.805, 0.741, 0.797, 0.735, 0.791, 0.718, 0.785, 0.715, 0.778, 0.706, 0.775, 0.696, 0.758, 0.695, 0.717, 0.698, 0.567, 0.705, 0.552, 0.706, 0.540, 0.725, 0.520, 0.735, 0.505, 0.745, 0.502, 0.755, 0.493 + ] + ] + }, + { + "box": { + "topX": 0.220, + "topY": 0.298, + "bottomX": 0.397, + "bottomY": 0.601 + }, + "label": "milk_bottle", + "score": 0.989, + "polygon": [ + [ + 0.365, 0.602, 0.273, 0.602, 0.26, 0.595, 0.263, 0.588, 0.251, 0.546, 0.248, 0.501, 0.25, 0.485, 0.246, 0.478, 0.245, 0.463, 0.233, 0.442, 0.231, 0.43, 0.226, 0.423, 0.226, 0.408, 0.234, 0.385, 0.241, 0.371, 0.238, 0.345, 0.234, 0.335, 0.233, 0.325, 0.24, 0.305, 0.586, 0.38, 0.592, 0.375, 0.598, 0.365 + ] + ] + }, + { + "box": { + "topX": 0.433, + "topY": 0.280, + "bottomX": 0.621, + "bottomY": 0.679 + }, + "label": "water_bottle", + "score": 0.988, + "polygon": [ + [ + 0.576, 0.680, 0.501, 0.680, 0.475, 0.675, 0.460, 0.625, 0.445, 0.630, 0.443, 0.572, 0.440, 0.560, 0.435, 0.515, 0.431, 0.501, 0.431, 0.433, 0.433, 0.426, 0.445, 0.417, 0.456, 0.407, 0.465, 0.381, 0.468, 0.327, 0.471, 0.318 + ] + ] + } + ] + } +] ``` > [!NOTE] |
machine-learning | Reference Yaml Core Syntax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md | python train.py --training_data some_input_path --max_epocs 10 --learning_rate 0 ## Next steps * [Install and use the CLI (v2)](how-to-configure-cli.md)-* [Train models with the CLI (v2)](how-to-train-cli.md) +* [Train models with the CLI (v2)](how-to-train-model.md) * [CLI (v2) YAML schemas](reference-yaml-overview.md) |
machine-learning | Resource Limits Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-limits-capacity.md | + + Title: Service limits ++description: Service limits used for capacity planning and maximum limits on requests and responses for Azure Machine Learning. +++++++ Last updated : 09/27/2022+ms.metadata: product-dependency +++# Service limits in Azure Machine Learning ++This section lists basic limits and throttling thresholds in Azure Machine Learning. ++> [!IMPORTANT] +> Azure Machine Learning doesn't store or process your data outside of the region where you deploy. ++## Workspaces ++| Limit | Value | +| | | +| Workspace name | 2-32 characters | ++## Runs +| Limit | Value | +| | | +| Runs per workspace | 10 million | +| RunId/ParentRunId | 256 characters | +| DataContainerId | 261 characters | +| DisplayName |256 characters| +| Description |5,000 characters| +| Number of properties |50 | +| Length of property key |100 characters | +| Length of property value |1,000 characters | +| Number of tags |50 | +| Length of tag key |100 characters | +| Length of tag value |1,000 characters | +| CancelUri / CompleteUri / DiagnosticsUri |1,000 characters | +| Error message length |3,000 characters | +| Warning message length |300 characters | +| Number of input datasets |200 | +| Number of output datasets |20 | +++## Metrics +| Limit | Value | +| | | +| Metric names per run |50| +| Metric rows per metric name |10 million| +| Columns per metric row |15| +| Metric column name length |255 characters | +| Metric column value length |255 characters | +| Metric rows per batch uploaded | 250 | ++> [!NOTE] +> If you're hitting the limit of metric names per run because you're formatting variables into the metric name, consider using a row metric instead, where one column is the variable value and the second column is the metric value (a minimal sketch follows this entry). ++## Artifacts ++| Limit | Value | +| | | +| Number of artifacts per run |10 million| +| Max length of artifact path |5,000 characters | ++## Limit increases ++Some limits can be increased for individual workspaces. To learn how to increase these limits, see ["Manage and increase quotas for resources"](how-to-manage-quotas.md). ++## Next steps ++- Learn how to increase resource quotas in ["Manage and increase quotas for resources"](how-to-manage-quotas.md). |
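The note above about the limit of 50 metric names per run suggests switching to a row metric; the sketch below is one hedged way to do that with the SDK v1 `Run` object, assuming the code executes inside a submitted run and using made-up metric and column names for illustration.

```python
from azureml.core import Run

run = Run.get_context()  # assumes this executes inside a submitted run

# Anti-pattern: formatting the variable into the metric name creates a new
# metric name per value and quickly reaches the 50-names-per-run limit:
#   run.log(f"accuracy_lr_{learning_rate}", accuracy)

# Preferred: one row metric where the variable is a column and the measured
# value is another column.
for learning_rate, accuracy in [(0.1, 0.81), (0.01, 0.87), (0.001, 0.84)]:
    run.log_row(
        "accuracy_by_learning_rate",   # single metric name
        learning_rate=learning_rate,   # column 1: the variable value
        accuracy=accuracy,             # column 2: the metric value
    )
```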
machine-learning | How To Attach Compute Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md | -> * [v2 (preview)](../how-to-train-sdk.md) +> * [v2 (preview)](../how-to-train-model.md) Learn how to attach Azure compute resources to your Azure Machine Learning workspace with SDK v1. Then you can use these resources as training and inference [compute targets](../concept-compute-target.md) in your machine learning tasks. |
machine-learning | How To Machine Learning Interpretability Automl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-machine-learning-interpretability-automl.md | + + Title: Model explainability in automated ML (preview) ++description: Learn how to get explanations for how your automated ML model determines feature importance and makes predictions when using the Azure Machine Learning SDK. +++++++ Last updated : 10/21/2021+++# Interpretability: Model explainability in automated ML (preview) +++In this article, you learn how to get explanations for automated machine learning (automated ML) models in Azure Machine Learning using the Python SDK. Automated ML helps you understand feature importance of the models that are generated. ++All SDK versions after 1.0.85 set `model_explainability=True` by default. In SDK version 1.0.85 and earlier versions users need to set `model_explainability=True` in the `AutoMLConfig` object in order to use model interpretability. +++In this article, you learn how to: ++- Perform interpretability during training for best model or any model. +- Enable visualizations to help you see patterns in data and explanations. +- Implement interpretability during inference or scoring. ++## Prerequisites ++- Interpretability features. Run `pip install azureml-interpret` to get the necessary package. +- Knowledge of building automated ML experiments. For more information on how to use the Azure Machine Learning SDK, complete this [object detection model tutorial](../tutorial-auto-train-image-models.md) or see how to [configure automated ML experiments](../how-to-configure-auto-train.md). ++## Interpretability during training for the best model ++Retrieve the explanation from the `best_run`, which includes explanations for both raw and engineered features. ++> [!NOTE] +> Interpretability, model explanation, is not available for the TCNForecaster model recommended by Auto ML forecasting experiments. ++### Download the engineered feature importances from the best run ++You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`. ++```python +from azureml.interpret import ExplanationClient ++client = ExplanationClient.from_run(best_run) +engineered_explanations = client.download_model_explanation(raw=False) +print(engineered_explanations.get_feature_importance_dict()) +``` ++### Download the raw feature importances from the best run ++You can use `ExplanationClient` to download the raw feature explanations from the artifact store of the `best_run`. ++```python +from azureml.interpret import ExplanationClient ++client = ExplanationClient.from_run(best_run) +raw_explanations = client.download_model_explanation(raw=True) +print(raw_explanations.get_feature_importance_dict()) +``` ++## Interpretability during training for any model ++When you compute model explanations and visualize them, you're not limited to an existing model explanation for an AutoML model. You can also get an explanation for your model with different test data. The steps in this section show you how to compute and visualize engineered feature importance based on your test data. ++### Retrieve any other AutoML model from training ++```python +automl_run, fitted_model = local_run.get_output(metric='accuracy') +``` ++### Set up the model explanations ++Use `automl_setup_model_explanations` to get the engineered and raw explanations. 
The `fitted_model` can generate the following items: ++- Featured data from trained or test samples +- Engineered feature name lists +- Findable classes in your labeled column in classification scenarios ++The `automl_explainer_setup_obj` contains all the structures from above list. ++```python +from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations ++automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, + X_test=X_test, y=y_train, + task='classification') +``` ++### Initialize the Mimic Explainer for feature importance ++To generate an explanation for automated ML models, use the `MimicWrapper` class. You can initialize the MimicWrapper with these parameters: ++- The explainer setup object +- Your workspace +- A surrogate model to explain the `fitted_model` automated ML model ++The MimicWrapper also takes the `automl_run` object where the engineered explanations will be uploaded. ++```python +from azureml.interpret import MimicWrapper ++# Initialize the Mimic Explainer +explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, + explainable_model=automl_explainer_setup_obj.surrogate_model, + init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, + features=automl_explainer_setup_obj.engineered_feature_names, + feature_maps=[automl_explainer_setup_obj.feature_map], + classes=automl_explainer_setup_obj.classes, + explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) +``` ++### Use Mimic Explainer for computing and visualizing engineered feature importance ++You can call the `explain()` method in MimicWrapper with the transformed test samples to get the feature importance for the generated engineered features. You can also sign in to [Azure Machine Learning studio](https://ml.azure.com/) to view the explanations dashboard visualization of the feature importance values of the generated engineered features by automated ML featurizers. ++```python +engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) +print(engineered_explanations.get_feature_importance_dict()) +``` +For models trained with automated ML, you can get the best model using the `get_output()` method and compute explanations locally. You can visualize the explanation results with `ExplanationDashboard` from the `raiwidgets` package. 
++```python +best_run, fitted_model = remote_run.get_output() ++from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations +automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, + X_test=X_test, y=y_train, + task='regression') ++from interpret.ext.glassbox import LGBMExplainableModel +from azureml.interpret.mimic_wrapper import MimicWrapper ++explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, + init_dataset=automl_explainer_setup_obj.X_transform, run=best_run, + features=automl_explainer_setup_obj.engineered_feature_names, + feature_maps=[automl_explainer_setup_obj.feature_map], + classes=automl_explainer_setup_obj.classes) + +pip install interpret-community[visualization] ++engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) +print(engineered_explanations.get_feature_importance_dict()), +from raiwidgets import ExplanationDashboard +ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, datasetX=automl_explainer_setup_obj.X_test_transform) ++ ++raw_explanations = explainer.explain(['local', 'global'], get_raw=True, + raw_feature_names=automl_explainer_setup_obj.raw_feature_names, + eval_dataset=automl_explainer_setup_obj.X_test_transform) +print(raw_explanations.get_feature_importance_dict()), +from raiwidgets import ExplanationDashboard +ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, datasetX=automl_explainer_setup_obj.X_test_raw) +``` ++### Use Mimic Explainer for computing and visualizing raw feature importance ++You can call the `explain()` method in MimicWrapper with the transformed test samples to get the feature importance for the raw features. In the [Machine Learning studio](https://ml.azure.com/), you can view the dashboard visualization of the feature importance values of the raw features. ++```python +raw_explanations = explainer.explain(['local', 'global'], get_raw=True, + raw_feature_names=automl_explainer_setup_obj.raw_feature_names, + eval_dataset=automl_explainer_setup_obj.X_test_transform, + raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) +print(raw_explanations.get_feature_importance_dict()) +``` ++## Interpretability during inference ++In this section, you learn how to operationalize an automated ML model with the explainer that was used to compute the explanations in the previous section. ++### Register the model and the scoring explainer ++Use the `TreeScoringExplainer` to create the scoring explainer that'll compute the engineered feature importance values at inference time. You initialize the scoring explainer with the `feature_map` that was computed previously. ++Save the scoring explainer, and then register the model and the scoring explainer with the Model Management Service. 
Run the following code: ++```python +from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer, save ++# Initialize the ScoringExplainer +scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) ++# Pickle scoring explainer locally +save(scoring_explainer, exist_ok=True) ++# Register trained automl model present in the 'outputs' folder in the artifacts +original_model = automl_run.register_model(model_name='automl_model', + model_path='outputs/model.pkl') ++# Register scoring explainer +automl_run.upload_file('scoring_explainer.pkl', 'scoring_explainer.pkl') +scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='scoring_explainer.pkl') +``` ++### Create the conda dependencies for setting up the service ++Next, create the necessary environment dependencies in the container for the deployed model. Please note that azureml-defaults with version >= 1.0.45 must be listed as a pip dependency, because it contains the functionality needed to host the model as a web service. ++```python +from azureml.core.conda_dependencies import CondaDependencies ++azureml_pip_packages = [ + 'azureml-interpret', 'azureml-train-automl', 'azureml-defaults' +] ++myenv = CondaDependencies.create(conda_packages=['scikit-learn', 'pandas', 'numpy', 'py-xgboost<=0.80'], + pip_packages=azureml_pip_packages, + pin_sdk_version=True) ++with open("myenv.yml","w") as f: + f.write(myenv.serialize_to_string()) ++with open("myenv.yml","r") as f: + print(f.read()) ++``` ++### Create the scoring script ++Write a script that loads your model and produces predictions and explanations based on a new batch of data. ++```python +%%writefile score.py +import joblib +import pandas as pd +from azureml.core.model import Model +from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations +++def init(): + global automl_model + global scoring_explainer ++ # Retrieve the path to the model file using the model name + # Assume original model is named automl_model + automl_model_path = Model.get_model_path('automl_model') + scoring_explainer_path = Model.get_model_path('scoring_explainer') ++ automl_model = joblib.load(automl_model_path) + scoring_explainer = joblib.load(scoring_explainer_path) +++def run(raw_data): + data = pd.read_json(raw_data, orient='records') + # Make prediction + predictions = automl_model.predict(data) + # Setup for inferencing explanations + automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, + X_test=data, task='classification') + # Retrieve model explanations for engineered explanations + engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) + # Retrieve model explanations for raw explanations + raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) + # You can return any data type as long as it is JSON-serializable + return {'predictions': predictions.tolist(), + 'engineered_local_importance_values': engineered_local_importance_values, + 'raw_local_importance_values': raw_local_importance_values} +``` ++### Deploy the service ++Deploy the service using the conda file and the scoring file from the previous steps. 
++```python +from azureml.core.webservice import Webservice +from azureml.core.webservice import AciWebservice +from azureml.core.model import Model, InferenceConfig +from azureml.core.environment import Environment ++aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, + memory_gb=1, + tags={"data": "Bank Marketing", + "method" : "local_explanation"}, + description='Get local explanations for Bank marketing test data') +myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") +inference_config = InferenceConfig(entry_script="score_local_explain.py", environment=myenv) ++# Use configs and models generated above +service = Model.deploy(ws, + 'model-scoring', + [scoring_explainer_model, original_model], + inference_config, + aciconfig) +service.wait_for_deployment(show_output=True) +``` ++### Inference with test data ++Inference with some test data to see the predicted value from AutoML model, currently supported only in Azure Machine Learning SDK. View the feature importances contributing towards a predicted value. ++```python +if service.state == 'Healthy': + # Serialize the first row of the test data into json + X_test_json = X_test[:1].to_json(orient='records') + print(X_test_json) + # Call the service to get the predictions and the engineered explanations + output = service.run(X_test_json) + # Print the predicted value + print(output['predictions']) + # Print the engineered feature importances for the predicted value + print(output['engineered_local_importance_values']) + # Print the raw feature importances for the predicted value + print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) +``` ++## Visualize to discover patterns in data and explanations at training time ++You can visualize the feature importance chart in your workspace in [Azure Machine Learning studio](https://ml.azure.com). After your AutoML run is complete, select **View model details** to view a specific run. Select the **Explanations** tab to see the visualizations in the explanation dashboard. ++[](./media/how-to-machine-learning-interpretability-automl/automl-explanation.png#lightbox) ++For more information on the explanation dashboard visualizations and specific plots, please refer to the [how-to doc on interpretability](../how-to-machine-learning-interpretability-aml.md). ++## Next steps ++For more information about how you can enable model explanations and feature importance in areas other than automated ML, see [more techniques for model interpretability](../how-to-machine-learning-interpretability.md). |
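To inspect the explanation values returned by the service above, the following minimal sketch ranks the engineered features by the magnitude of their local importance for the first scored row. It assumes the `output` dictionary returned by `service.run` and the `automl_explainer_setup_obj` from the earlier setup are still in scope; the unwrapping step is a defensive assumption because the nesting of local importance values can differ between classification and regression.

```python
# Assumes `output` and `automl_explainer_setup_obj` from the preceding steps.
feature_names = automl_explainer_setup_obj.engineered_feature_names
local_importances = output['engineered_local_importance_values']

# Local importance values may be nested per class (classification) or only
# per row (regression); unwrap until we reach the per-feature values for
# the first scored row.
first_row = local_importances[0]
while isinstance(first_row[0], list):
    first_row = first_row[0]

ranked = sorted(
    zip(feature_names, first_row),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)

print("Top engineered features for the first test row:")
for name, value in ranked[:10]:
    print(f"  {name}: {value:.4f}")
```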
machine-learning | How To Use Private Python Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-private-python-packages.md | + + Title: Use private Python packages ++description: Learn how to securely work with private Python packages from your Azure Machine Learning environments. +++++++ Last updated : 10/21/2021++++# Use private Python packages with Azure Machine Learning +++In this article, learn how to use private Python packages securely within Azure Machine Learning. Use cases for private Python packages include: ++ * You've developed a private package that you don't want to share publicly. + * You want to use a curated repository of packages stored within an enterprise firewall. ++The recommended approach depends on whether you have few packages for a single Azure Machine Learning workspace, or an entire repository of packages for all workspaces within an organization. ++The private packages are used through [Environment](/python/api/azureml-core/azureml.core.environment.environment) class. Within an environment, you declare which Python packages to use, including private ones. To learn about environment in Azure Machine Learning in general, see [How to use environments](how-to-use-environments.md). ++## Prerequisites ++ * The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install) + * An [Azure Machine Learning workspace](../quickstart-create-resources.md) ++## Use small number of packages for development and testing ++For a small number of private packages for a single workspace, use the static [`Environment.add_private_pip_wheel()`](/python/api/azureml-core/azureml.core.environment.environment#add-private-pip-wheel-workspace--file-path--exist-ok-false-) method. This approach allows you to quickly add a private package to the workspace, and is well suited for development and testing purposes. ++Point the file path argument to a local wheel file and run the ```add_private_pip_wheel``` command. The command returns a URL used to track the location of the package within your Workspace. Capture the storage URL and pass it the `add_pip_package()` method. ++```python +whl_url = Environment.add_private_pip_wheel(workspace=ws,file_path = "my-custom.whl") +myenv = Environment(name="myenv") +conda_dep = CondaDependencies() +conda_dep.add_pip_package(whl_url) +myenv.python.conda_dependencies=conda_dep +``` ++Internally, Azure Machine Learning service replaces the URL by secure SAS URL, so your wheel file is kept private and secure. ++## Use a repository of packages from Azure DevOps feed ++If you're actively developing Python packages for your machine learning application, you can host them in an Azure DevOps repository as artifacts and publish them as a feed. This approach allows you to integrate the DevOps workflow for building packages with your Azure Machine Learning Workspace. To learn how to set up Python feeds using Azure DevOps, read [Get Started with Python Packages in Azure Artifacts](/azure/devops/artifacts/quickstarts/python-packages) ++This approach uses Personal Access Token to authenticate against the repository. The same approach is applicable to other repositories +with token based authentication, such as private GitHub repositories. ++ 1. [Create a Personal Access Token (PAT)](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?tabs=preview-page#create-a-pat) for your Azure DevOps instance. Set the scope of the token to __Packaging > Read__. ++ 2. 
Add the Azure DevOps URL and PAT as workspace properties, using the [Workspace.set_connection](/python/api/azureml-core/azureml.core.workspace.workspace#set-connection-name--category--target--authtype--value-) method. ++ ```python + from azureml.core import Workspace + + pat_token = input("Enter secret token") + ws = Workspace.from_config() + ws.set_connection(name="connection-1", + category = "PythonFeed", + target = "https://pkgs.dev.azure.com/<MY-ORG>", + authType = "PAT", + value = pat_token) + ``` ++ 3. Create an Azure Machine Learning environment and add Python packages from the feed. + + ```python + from azureml.core import Environment + from azureml.core.conda_dependencies import CondaDependencies + + env = Environment(name="my-env") + cd = CondaDependencies() + cd.add_pip_package("<my-package>") + cd.set_pip_option("--extra-index-url https://pkgs.dev.azure.com/<MY-ORG>/_packaging/<MY-FEED>/pypi/simple")") + env.python.conda_dependencies=cd + ``` ++The environment is now ready to be used in training runs or web service endpoint deployments. When building the environment, Azure Machine Learning service uses the PAT to authenticate against the feed with the matching base URL. ++## Use a repository of packages from private storage ++You can consume packages from an Azure storage account within your organization's firewall. The storage account can hold a curated set of packages or an internal mirror of publicly available packages. ++To set up such private storage, see [Secure an Azure Machine Learning workspace and associated resources](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). You must also [place the Azure Container Registry (ACR) behind the VNet](../how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). ++> [!IMPORTANT] +> You must complete this step to be able to train or deploy models using the private package repository. ++After completing these configurations, you can reference the packages in the Azure Machine Learning environment definition by their full URL in Azure blob storage. ++## Next steps ++ * Learn more about [enterprise security in Azure Machine Learning](../concept-enterprise-security.md) |
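The last section above says that packages in the secured storage account can be referenced by their full URL in the environment definition, but no example is given. The following is a minimal sketch under that assumption; the storage account, container, and wheel file name are placeholders, and pip resolves the direct URL to the wheel at image build time.

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# Placeholder URL of a wheel hosted in the secured storage account; replace
# the account, container, and file name with your own values.
private_wheel_url = (
    "https://mystorageaccount.blob.core.windows.net/"
    "python-packages/my_private_package-1.0.0-py3-none-any.whl"
)

env = Environment(name="private-storage-env")
conda_dep = CondaDependencies()
conda_dep.add_pip_package(private_wheel_url)  # pip accepts direct wheel URLs
env.python.conda_dependencies = conda_dep
```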
migrate | Resources Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/resources-faq.md | -This article answers common questions about Azure Migrate. If you have questions after you read this article, you can post them in the [Azure Migrate forum](https://aka.ms/AzureMigrateForum). You also can review these articles: +This article answers common questions about Azure Migrate. If you've questions after you read this article, you can post them in the [Azure Migrate forum](https://aka.ms/AzureMigrateForum). You also can review these articles: - Questions about the [Azure Migrate appliance](common-questions-appliance.md) - Questions about [discovery, assessment, and dependency visualization](common-questions-discovery-assessment.md) The Azure Migrate: Server Migration tool uses some back-end Site Recovery functi ## I have a project with the previous classic experience of Azure Migrate. How do I start using the new version? -Classic Azure Migrate is retiring in Feb 2024. After Feb 2024, classic version of Azure Migrate will no longer be supported and the inventory metadata in the classic project will be deleted. You can't upgrade projects or components in the previous version to the new version. You need to [create a new Azure Migrate project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a classic project, you can attach it to a project of current version after you delete the classic project. +Classic Azure Migrate is retiring in Feb 2024. After Feb 2024, classic version of Azure Migrate will no longer be supported, and the inventory metadata in the classic project will be deleted. You can't upgrade projects or components in the previous version to the new version. You need to [create a new Azure Migrate project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a classic project, you can attach it to a project of current version after you delete the classic project. ## What's the difference between Azure Migrate: Discovery and assessment and the MAP Toolkit? Choose your tool based on what you want to do: Review the supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government). +## What does Azure Migrate do to ensure data residency? ++When you create a project, you select a geography of your choice. The project and related resources are created in one of the regions in the geography, as allocated by the Azure Migrate service. +See the metadata storage locations for each geography [here](migrate-support-matrix.md#public-cloud). ++Azure Migrate doesn't move or store customer data outside of the region allocated, guaranteeing data residency and resiliency in the same geography. ++## Does Azure Migrate offer Backup and Disaster Recovery? ++Azure Migrate is classified as customer managed Disaster Recovery, which means Azure Migrate doesn't offer to recover data from an alternate region and offer it to customers when the project region isn't available. 
++While using different capabilities, it's recommended that you export the software inventory, dependency analysis, and assessment report for an offline backup. ++In the event of a regional failure or outage in the Azure region that your project is created in: +- You may not be able to access your Azure Migrate projects, assessments, and other reports for the duration of the outage. However, you can use the offline copies that you've exported. +- Any in-progress replication and/or migration will be paused and you might have to restart it post the outage. + ## How do I get started? Identify the tool you need, and then add the tool to an Azure Migrate project. |
mysql | Concepts High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md | If you don't choose a zone, one will be randomly selected. It won't be the one u If there's a database crash or node failure, the Flexible Server VM is restarted on the same node. At the same time, an automatic failover is triggered. If the Flexible Server VM restart is successful before the failover finishes, the failover operation will be canceled. The determination of which server to use as the primary replica depends on the process that finishes first.</br> - **Is there a performance impact when I use HA?**</br>-For zone-redundant HA, there might be a 5-10 percent drop in latency if the application is connecting to the database server across availability zones where network latency is relatively higher (2-4 ms). For same-zone HA, because the primary and the standby replica is in the same zone, the replication lag is lower. There's less latency between the application server and the database server when they're in the same Azure availability zone.</br> +For zone-redundant HA, there's no major performance impact for read workloads across availability zones, but there might be up to a 40 percent impact on write-query latency. The increase in write latency is caused by synchronous replication across availability zones, and the write-latency impact of zone-redundant HA is generally about twice that of same-zone HA. For same-zone HA, because the primary and the standby replica are in the same zone, the replication latency, and consequently the synchronous write latency, is lower. In summary, if write latency is more critical to you than availability, choose same-zone HA; if availability and resiliency of your data are more critical, at the expense of higher write latency, choose zone-redundant HA. To measure the actual latency impact of an HA setup, we recommend that you run performance tests against your own workload before deciding (a minimal timing sketch follows this entry).</br> - **How does maintenance of my HA server happen?**</br> Planned events like scaling of compute and minor version upgrades happen on the primary and the standby at the same time. You can set the [scheduled maintenance window](./concepts-maintenance.md) for HA servers as you do for flexible servers. The amount of downtime will be the same as the downtime for the Azure Database for MySQL flexible server when HA is disabled. Using the failover mechanism to reduce downtime for HA servers is on our roadmap and will be available soon. </br> |
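The updated answer above recommends performance testing to measure the real write-latency impact before choosing an HA mode. As a rough, hedged sketch only (a purpose-built tool such as sysbench is better for real benchmarking), the following times a series of single-row committed inserts with the community `pymysql` driver; the host name, credentials, database, and table name are placeholders.

```python
import time
import pymysql

# Placeholder connection details; replace with your flexible server values.
connection = pymysql.connect(
    host="myserver.mysql.database.azure.com",
    user="myadmin",
    password="<password>",
    database="testdb",
    ssl={"ca": "/path/to/DigiCertGlobalRootCA.crt.pem"},
)

try:
    with connection.cursor() as cursor:
        cursor.execute(
            "CREATE TABLE IF NOT EXISTS latency_probe "
            "(id INT AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(64))"
        )
        connection.commit()

        latencies = []
        for i in range(100):
            start = time.perf_counter()
            cursor.execute(
                "INSERT INTO latency_probe (payload) VALUES (%s)", (f"row-{i}",)
            )
            connection.commit()  # each commit is a synchronous write round trip
            latencies.append(time.perf_counter() - start)

    latencies.sort()
    median_ms = latencies[len(latencies) // 2] * 1000
    print(f"median committed-insert latency: {median_ms:.2f} ms")
finally:
    connection.close()
```

Running the same script against a server with HA disabled, with same-zone HA, and with zone-redundant HA gives a rough feel for the relative write-latency differences described above.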
mysql | How To Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md | - Title: Azure Database for MySQL - flexible server - major version upgrade -description: Learn how to upgrade major version for an Azure Database for MySQL - Flexible server. ----- Previously updated : 9/26/2022---# Major version upgrade in Azure Database for MySQL flexible server preview --->[!Note] -> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article. --This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL Flexible server. -This feature will enable customers to perform in-place upgrades of their MySQL 5.7 servers to MySQL 8.0 with a click of button without any data movement or the need of any application connection string changes. -->[!Important] -> - Major version upgrade for Azure database for MySQL Flexible Server is available in public preview. -> - Major version upgrade is currently not available for Burstable SKU 5.7 servers. -> - Duration of downtime will vary based on the size of your database instance and the number of tables on the database. -> - Upgrading major MySQL version is irreversible. Your deployment might fail if validation identifies the server is configured with any features that are [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) or [deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations). You can make necessary configuration changes on the server and try upgrade again --## Prerequisites --- Read Replicas with MySQL version 5.7 should be upgraded before Primary Server for replication to be compatible between different MySQL versions, read more on [Replication Compatibility between MySQL versions](https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication-compatibility.html).-- Before you upgrade your production servers, we strongly recommend you to test your application compatibility and verify your database compatibility with features [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals)/[deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations) in the new MySQL version.-- Trigger [on-demand backup](./how-to-trigger-on-demand-backup.md) before you perform major version upgrade on your production server, which can be used to [rollback to version 5.7](./how-to-restore-server-portal.md) from the full on-demand backup taken.---## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure portal --1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.7 server. - >[!Important] - > We recommend performing upgrade first on restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](./how-to-restore-server-portal.md). --2. From the overview page, click the Upgrade button in the toolbar -- >[!Important] - > Before upgrading visit link for list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0. 
- > Verify deprecated [sql_mode](/https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using Server Parameters Blade on your Azure Portal to avoid deployment failure. - > [sql_mode](/https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) with values NO_AUTO_CREATE_USER, NO_FIELD_OPTIONS, NO_KEY_OPTIONS and NO_TABLE_OPTIONS are no longer supported in MySQL 8.0. -- :::image type="content" source="./media/how-to-upgrade/1-how-to-upgrade.png" alt-text="Screenshot showing Azure Database for MySQL Upgrade."::: --3. In the Upgrade sidebar, verify Major Upgrade version to upgrade i.e 8.0. -- :::image type="content" source="./media/how-to-upgrade/2-how-to-upgrade.png" alt-text="Screenshot showing Upgrade."::: --4. For Primary Server, click on confirmation checkbox, to confirm that all your replica servers are upgraded before primary server. Once confirmed that all your replicas are upgraded, Upgrade button will be enabled. For your read-replicas and standalone servers, Upgrade button will be enabled by default. -- :::image type="content" source="./media/how-to-upgrade/3-how-to-upgrade.png" alt-text="Screenshot showing confirmation."::: --5. Once Upgrade button is enabled, you can click on Upgrade button to proceed with deployment. -- :::image type="content" source="./media/how-to-upgrade/4-how-to-upgrade.png" alt-text="Screenshot showing upgrade."::: ---## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure CLI --Follow these steps to perform major version upgrade for your Azure Database of MySQL 5.7 server using Azure CLI. --1. Install [Azure CLI](/cli/azure/install-azure-cli) for Windows or use [Azure CLI](../../cloud-shell/overview.md) in Azure Cloud Shell to run the upgrade commands. -- This upgrade requires version 2.40.0 or later of the Azure CLI. If you are using Azure Cloud Shell, the latest version is already installed. Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade. ---2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command. -- ```azurecli - az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --version 8.0 - ``` --3. Under confirmation prompt, type ΓÇ£yΓÇ¥ for confirming or ΓÇ£nΓÇ¥ to stop the upgrade process and enter. ---## Perform major version upgrade from MySQL 5.7 to MySQL 8.0 on read replica using Azure portal --1. In the Azure portal, select your existing Azure Database for MySQL 5.7 read replica server. --2. From the Overview page, click the Upgrade button in the toolbar. ->[!Important] -> Before upgrading visit link for list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0. ->Verify deprecated [sql_mode](/https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using Server Parameters Blade on your Azure Portal to avoid deployment failure. --3. In the Upgrade section, select Upgrade button to upgrade Azure database for MySQL 5.7 read replica server to 8.0 server. --4. A notification will confirm that upgrade is successful. --5. From the Overview page, confirm that your Azure database for MySQL read replica server version is 8.0. --6. 
Now go to your primary server and perform major version upgrade on it. ---## Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas --1. In the Azure portal, select your existing Azure Database for MySQL 5.7. -2. Create a [read replica](./how-to-read-replicas-portal.md) from your primary server. -3. Upgrade your [read replica to version](#perform-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-azure-cli) 8.0. -4. Once you confirm that the replica server is running on version 8.0, stop your application from connecting to your primary server. -5. Check replication status, and make sure replica is all caught up with primary, so all the data is in sync and ensure there are no new operations performed in primary. -Confirm with the show slave status command on the replica server to view the replication status. - ```azurecli - SHOW SLAVE STATUS\G - ``` - If the state of Slave_IO_Running and Slave_SQL_Running are "yes" and the value of Seconds_Behind_Master is "0", replication is working well. Seconds_Behind_Master indicates how late the replica is. If the value isn't "0", it means that the replica is processing updates. Once you confirm Seconds_Behind_Master is "0" it's safe to stop replication. --6. Promote your read replica to primary by stopping replication. -7. Set Server Parameter read_only to 0 i.e., OFF to start writing on promoted primary. -- Point your application to the new primary (former replica) which is running server 8.0. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source. -->[!Note] -> This scenario will have downtime during steps 4, 5 and 6 only. ---## Frequently asked questions -- Will this cause downtime of the server and if so, how long?-- To have minimal downtime during upgrades, follow the steps mentioned under - [Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas). - The server will be unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, storage size provisioned (IOPs provisioned), and the number of tables on the database. The upgrade time is directly proportional to the number of tables on the server. To estimate the downtime for your server environment, we recommend to first perform upgrade on restored copy of the server. ---- When will this upgrade feature be GA?- - The GA of this feature will be planned by December 2022. However, the feature is production ready and fully supported by Azure so you should run it with confidence in your environment. As a recommended best practice, we strongly suggest you run and test it first on a restored copy of the server so you can estimate the downtime during upgrade and perform application compatibility test before you run it on production. - -- What happens to my backups after upgrade?- - All backups (automated/on-demand) taken before major version upgrade, when used for restoration will always restore to a server with older version (5.7). - All the backups (automated/on-demand) taken after major version upgrade will restore to server with upgraded version (8.0). It's highly recommended to take on-demand backup before you perform the major version upgrade for an easy rollback. 
--- ## Next steps - - Learn more on [how to configure scheduled maintenance](./how-to-maintenance-portal.md) for your Azure Database for MySQL flexible server. - - Learn about what's new in [MySQL version 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html). |
mysql | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md | This article summarizes new releases and features in Azure Database for MySQL - ## September 2022 -- **Major version upgrade in Azure Database for MySQL - Flexible Server (Preview)**- You can now upgrade your MySQL major version, in-place in Azure Database for MySQL Flexible server from MySQL 5.7 servers to MySQL 8.0 with a click of button without any data movement or the need of any application connection string changes.[Learn more](./how-to-upgrade.md) -- - **Read replica for HA enabled Azure Database for MySQL - Flexible Server (General Availability)** The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate the source server to up to 10 replicas. This functionality is now extended to support HA enabled servers within same region.[Learn more](concepts-read-replicas.md) |
purview | Concept Self Service Data Access Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-self-service-data-access-policy.md | Whenever a data consumer requests access to a dataset, the notification is sent Data consumer can access the requested dataset using tools such as PowerBI or Azure Synapse Analytics workspace. >[!NOTE]-> Users will not be able to browse to the asset using the Azure Portal or Storage explorer if the only permission granted is read/modify access at the file or folder -> level of the storage account. +> Users will not be able to browse to the asset using the Azure Portal or Storage explorer if the only permission granted is read/modify access at the file or folder level of the storage account. ++> [!CAUTION] +> Folder-level permission is required to access data in ADLS Gen2 using Power BI. +> Additionally, resource sets are not supported by self-service policies, so folder-level permission must be granted to access resource set files such as CSV or Parquet. ## Next steps |
purview | How To Manage Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-manage-quotas.md | This article highlights the limits that currently exist in the Microsoft Purview |Maximum length of asset property name and value|32 KB|32 KB| |Maximum length of classification attribute name and value|32 KB|32 KB| |Maximum number of glossary terms, per account|100K|100K|+|Maximum number of self-service policies, per account|3K|3K| \* Self-hosted integration runtime scenarios aren't included in the limits defined in the above table. |
sentinel | Indicators Bulk File Import | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/indicators-bulk-file-import.md | Monitor your imports and view error reports for partially imported or failed imp :::image type="content" source="media/indicators-bulk-file-import/manage-file-imports.png" alt-text="Screenshot of the menu option to manage file imports."::: -1. Review the status of imported files and the number of invalid indicator entries. +1. Review the status of imported files and the number of invalid indicator entries. The valid and invalid indicator counts are updated only after the file is processed, so wait for the import to complete to see the final counts. :::image type="content" source="media/indicators-bulk-file-import/manage-file-imports-pane.png" alt-text="Screenshot of the manage file imports pane with example ingestion data. The columns show sorted by imported number with various sources."::: |
service-bus-messaging | Service Bus Integrate With Rabbitmq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-integrate-with-rabbitmq.md | |
spring-apps | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md | Azure Spring Apps intelligently schedules your applications on the underlying Ku ### In which regions is Azure Spring Apps Basic/Standard tier available? -East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, Switzerland North, China East 2 (Mooncake), China North 2 (Mooncake), and China North 3 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud) +East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, Canada East, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, Switzerland North, China East 2 (Mooncake), China North 2 (Mooncake), and China North 3 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud) ### In which regions is Azure Spring Apps Enterprise tier available? -East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, and Switzerland North. +East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, Canada East, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, and Switzerland North. ### Is any customer data stored outside of the specified region? |
spring-apps | How To Bind Postgres | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md | With Azure Spring Apps, you can bind select Azure services to your applications ## Prepare your Java project +Use the following steps to prepare your project. + 1. In your project's *pom.xml* file, add the following dependency: ```xml With Azure Spring Apps, you can bind select Azure services to your applications ### [Using admin credentials](#tab/Secrets) +Use the following steps to bind your app. + 1. Note the admin username and password of your Azure Database for PostgreSQL account. 1. Connect to the server, create a database named **testdb** from a PostgreSQL client, and then create a new non-admin account. |
storage | Blobfuse2 Commands Mount | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount.md | The supported subcommands for `blobfuse2 mount` are: | Command | Description | |--|--|-| [all](blobfuse2-commands-mount-all.md) | Mounts all azure blob containers in a specified storage account | +| [all](blobfuse2-commands-mount-all.md) | Mounts all Azure blob containers in a specified storage account | | [list](blobfuse2-commands-mount-list.md) | Lists all BlobFuse2 mount points | Select one of the command links in the table above to view the documentation for the individual subcommands, including the arguments and flags they support. The following flags apply only to command `blobfuse2 mount`: | allow-other | boolean | false | Allow other users to access this mount point | | attr-cache-timeout | uint32 | 120 | Attribute cache timeout<br /><sub>(in seconds)</sub> | | attr-timeout | uint32 | | Attribute timeout <br /><sub>(in seconds)</sub> |-| config-file | string | ./config.yaml | The path for the file where the account credentials are provided Default is config.yaml in current directory. | +| config-file | string | ./config.yaml | The path to the configuration file where the account credentials are provided. | | container-name | string | | The name of the container to be mounted | | entry-timeout | uint32 | | Entry timeout <br /><sub>(in seconds)</sub> | | file-cache-timeout | uint32 | 120 | File cache timeout <br /><sub>(in seconds)</sub>| |
storage | Blobfuse2 Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands.md | The supported commands for BlobFuse2 are: | Command | Description | |--|--|-| [mount](blobfuse2-commands-mount.md) | Mounts an Azure blob storage container as a filesystem in Linux or lists mounted file systems | -| [mountv1](blobfuse2-commands-mountv1.md) | Mounts a blob container using legacy BlobFuse configuration and CLI parameters | -| [unmount](blobfuse2-commands-unmount.md) | Unmounts a BlobFuse2-mounted file system | -| [completion](blobfuse2-commands-completion.md) | Generates an autocompletion script for BlobFuse2 for the specified shell | -| [secure](blobfuse2-commands-secure.md) | Encrypts or decrypts a configuration file, or gets or sets values in an encrypted configuration file | -| [version](blobfuse2-commands-version.md) | Displays the current version of BlobFuse2 | -| [help](blobfuse2-commands-help.md) | Gives help information about any command | +| [mount](blobfuse2-commands-mount.md) | Mounts an Azure blob storage container as a filesystem in Linux or lists mounted file systems | +| [mountv1](blobfuse2-commands-mountv1.md) | Mounts a blob container using legacy BlobFuse configuration and CLI parameters | +| [unmount](blobfuse2-commands-unmount.md) | Unmounts a BlobFuse2-mounted file system | +| [completion](blobfuse2-commands-completion.md) | Generates an autocompletion script for BlobFuse2 for the specified shell | +| [secure](blobfuse2-commands-secure.md) | Encrypts or decrypts a configuration file, or gets or sets values in an encrypted configuration file | +| [version](blobfuse2-commands-version.md) | Displays the current version of BlobFuse2 | +| [help](blobfuse2-commands-help.md) | Gives help information about any command | ## Arguments |
storage | Blobfuse2 Health Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-health-monitor.md | + + Title: How to use BlobFuse2 Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview) | Microsoft Docs ++description: How to use BlobFuse2 Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview). ++++ Last updated : 09/26/2022+++++# Use Health Monitor to gain insights into BlobFuse2 mounts (preview) ++This article provides references to assist in deploying and using BlobFuse2 Health Monitor to gain insights into BlobFuse2 mount activities and resource usage. ++> [!IMPORTANT] +> BlobFuse2 is the next generation of BlobFuse and is currently in preview. +> This preview version is provided without a service level agreement, and might not be suitable for production workloads. Certain features might not be supported or might have constrained capabilities. +> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> +> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see: +> +> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md) +> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master) ++You can use BlobFuse2 Health Monitor to: ++- Get statistics about internal activities related to BlobFuse2 mounts +- Monitor CPU, memory, and network usage by BlobFuse2 mount processes +- Track file cache usage and events ++## BlobFuse2 Health Monitor resources ++During the preview of BlobFuse2, refer to [the BlobFuse2 Health Monitor README on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/tools/health-monitor/README.md) for full details on how to deploy and use Health Monitor. The README file describes: ++- What Health Monitor collects +- How to set it up in the configuration file used for mounting a storage container +- The name, location, and contents of the output ++## See also ++- [What is BlobFuse2? (preview)](blobfuse2-what-is.md) +- [BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md) +- [How to mount an Azure blob storage container on Linux with BlobFuse2 (preview)](blobfuse2-how-to-deploy.md) |
storage | Blobfuse2 How To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md | To install BlobFuse2: 1. Retrieve the latest Blobfuse2 binary for your distro from GitHub, for example: ```bash- wget https://github.com/Azure/azure-storage-fuse/releases/download/blobfuse2-2.0.0-preview2/blobfuse2-2.0.0-preview.2-ubuntu-20.04-x86-64.deb + wget https://github.com/Azure/azure-storage-fuse/releases/download/blobfuse2-2.0.0-preview.3/blobfuse2-2.0.0-preview.3-Ubuntu-22.04-x86-64.deb ``` 1. Install BlobFuse2. For example, on an Ubuntu distribution run: ```bash sudo apt-get install libfuse3-dev fuse3 - sudo apt install blobfuse2-2.0.0-preview.2-ubuntu-20.04-x86-64.deb + sudo dpkg -i blobfuse2-2.0.0-preview.3-Ubuntu-22.04-x86-64.deb ``` ### Option 2: Build from source This table shows how this feature is supported in your account and the impact on - [Blobfuse2 Migration Guide (from BlobFuse v1)](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md) - [BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md) - [BlobFuse2 command reference (preview)](blobfuse2-commands.md)+- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview)](blobfuse2-health-monitor.md) - [How to troubleshoot BlobFuse2 issues (preview)](blobfuse2-troubleshooting.md) |
storage | Blobfuse2 Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-troubleshooting.md | -This articles provides references to assist in troubleshooting BlobFuse2 issues during the public preview. +This article provides references to assist in troubleshooting BlobFuse2 issues during the preview. -## The troubleshooting guide +## The troubleshooting guide (TSG) -During the preview of BlobFuse2, refer to [The BlobFuse2 Troubleshoot Guide (TSG) on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/TSG.md) +During the preview of BlobFuse2, refer to [The BlobFuse2 Troubleshooting Guide (TSG) on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/TSG.md) > [!IMPORTANT] > BlobFuse2 is the next generation of BlobFuse and is currently in preview. During the preview of BlobFuse2, refer to [The BlobFuse2 Troubleshoot Guide (TSG ## See also +- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview)](blobfuse2-health-monitor.md) - [What is BlobFuse2? (preview)](blobfuse2-what-is.md) - [How to mount an Azure blob storage container on Linux with BlobFuse2 (preview)](blobfuse2-how-to-deploy.md) - [BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md) |
storage | Blobfuse2 What Is | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md | Blobfuse2 has more feature support and improved performance in multiple user sce - Improved caching - More management support through new Azure CLI commands - Additional logging support+- Gain insights into mount activities and resource usage using BlobFuse2 Health Monitor - Compatibility and upgrade options for existing BlobFuse v1 users - Version checking and upgrade prompting - Support for configuration file encryption This table shows how this feature is supported in your account and the impact on - [BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md) - [BlobFuse2 command reference (preview)](blobfuse2-commands.md)+- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview)](blobfuse2-health-monitor.md) - [How to troubleshoot BlobFuse2 issues (preview)](blobfuse2-troubleshooting.md) |
storage | Secure File Transfer Protocol Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md | The following clients are known to be incompatible with SFTP for Azure Blob Stor The unsupported client list above is not exhaustive and may change over time. +## Client settings ++To transfer files to or from Azure storage via client applications, see the following recommended client settings. ++- WinSCP ++ - Under the **Preferences** dialog, under **Transfer** - **Endurance**, select **Disable** to disable the **Enable transfer resume/transfer to temporary filename** option. + + > [!CAUTION] + > Leaving this option enabled can cause failures or degraded performance during large file uploads. + ## Unsupported operations | Category | Unsupported operations | |
storage | Secure File Transfer Protocol Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-performance.md | For WinSCP, you can use a maximum of 9 concurrent connections to upload multiple > [!IMPORTANT] > Concurrent uploads will only improve performance when uploading multiple files at the same time. Using multiple connections to upload a single file is not supported.+ +- Under the **Preferences** dialog, under **Logging**, if the **Enable session logging on level** option is selected, set the level to **Reduced** or **Normal**. ++> [!CAUTION] +> Logging level **Debug 1** or **Debug 2** significantly reduces session operation performance. ## Use premium block blob storage accounts |
storage | Storage Blob Static Website | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md | If the server returns a 404 error, and you have not specified an error document If you set up [redundancy in a secondary region](../common/storage-redundancy.md#redundancy-in-a-secondary-region), you can also access website content by using a secondary endpoint. Because data is replicated to secondary regions asynchronously, the files that are available at the secondary endpoint aren't always in sync with the files that are available on the primary endpoint. -## Impact of the setting the public access level of the web container +## Impact of setting the access level on the web container You can modify the public access level of the **$web** container, but this has no impact on the primary static website endpoint because these files are served through anonymous access requests. That means public (read-only) access to all files. |
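As a hedged illustration of the distinction described above (the account name is a placeholder), the static website endpoint is controlled by the account's static website property, while the **$web** container's public access level only affects requests made directly against the blob endpoint:

```azurecli
# Enable the static website feature; files in $web are then served anonymously
# from the web endpoint regardless of the container's public access level
az storage blob service-properties update \
    --account-name <storage-account-name> \
    --static-website \
    --index-document index.html \
    --404-document 404.html

# Inspect or change the $web container's public access level (affects the blob endpoint only)
az storage container show-permission --account-name <storage-account-name> --name '$web'
az storage container set-permission --account-name <storage-account-name> --name '$web' --public-access off
```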
storage | Storage How To Mount Container Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md | Title: How to mount Azure Blob storage as a file system on Linux with BlobFuse | Microsoft Docs + Title: How to mount Azure Blob storage as a file system on Linux with BlobFuse v1 | Microsoft Docs -description: Learn how to mount an Azure Blob storage container with BlobFuse, a virtual file system driver on Linux. +description: Learn how to mount an Azure Blob storage container with BlobFuse v1, a virtual file system driver on Linux. Previously updated : 08/02/2022 Last updated : 09/26/2022 -# How to mount Blob storage as a file system with BlobFuse +# How to mount Blob storage as a file system with BlobFuse v1 ## Overview -> [!NOTE] -> This article is about the original version of BlobFuse. It is simply referred to as "BlobFuse" in many cases, but is also referred to as "BlobFuse v1" in this and other articles to distinguish it from the next generation of BlobFuse, BlobFuse2. BlobFuse2 is currently in preview and might not be suitable for production workloads. +> [!IMPORTANT] +> [BlobFuse2](blobfuse2-what-is.md) is the latest version of BlobFuse and has many significant improvements over the version discussed in this article, BlobFuse v1. To learn about the improvements made in BlobFuse2, see [the list of BlobFuse2 enhancements](blobfuse2-what-is.md#blobfuse2-enhancements). BlobFuse2 is currently in preview and might not be suitable for production workloads. >-> To learn about the improvements made in BlobFuse2, see [What is BlobFuse2?](blobfuse2-what-is.md). +> This article is about the original version of BlobFuse. It is simply referred to as "BlobFuse" in many cases, but is also referred to as "BlobFuse v1" in this and other articles to distinguish it from the next generation of BlobFuse, BlobFuse2. [BlobFuse](https://github.com/Azure/azure-storage-fuse) is a virtual file system driver for Azure Blob storage. BlobFuse allows you to access your existing block blob data in your storage account through the Linux file system. BlobFuse uses the virtual directory scheme with the forward-slash '/' as a delimiter. -This guide shows you how to use BlobFuse, and mount a Blob storage container on Linux and access data. To learn more about BlobFuse, see the [readme](https://github.com/Azure/azure-storage-fuse) and [wiki](https://github.com/Azure/azure-storage-fuse/wiki). +This guide shows you how to use BlobFuse v1, and mount a Blob storage container on Linux and access data. To learn more about BlobFuse, see the [readme](https://github.com/Azure/azure-storage-fuse) and [wiki](https://github.com/Azure/azure-storage-fuse/wiki). > [!WARNING] > BlobFuse doesn't guarantee 100% POSIX compliance as it simply translates requests into [Blob REST APIs](/rest/api/storageservices/blob-service-rest-api). For example, rename operations are atomic in POSIX, but not in BlobFuse. > For a full list of differences between a native file system and BlobFuse, visit [the BlobFuse source code repository](https://github.com/azure/azure-storage-fuse). -## Install BlobFuse on Linux +## Install BlobFuse v1 on Linux BlobFuse binaries are available on [the Microsoft software repositories for Linux](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software) for Ubuntu, Debian, SUSE, CentOS, Oracle Linux and RHEL distributions. To install BlobFuse on those distributions, configure one of the repositories from the list. 
You can also build the binaries from source code following the [Azure Storage installation steps](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation#option-2build-from-source) if there are no binaries available for your distribution. -BlobFuse is published in the Linux repo for Ubuntu versions: 16.04, 18.04, and 20.04, RHELversions: 7.5, 7.8, 7.9, 8.0, 8.1, 8.2, CentOS versions: 7.0, 8.0, Debian versions: 9.0, 10.0, SUSE version: 15, OracleLinux 8.1 . Run this command to make sure that you have one of those versions deployed: +BlobFuse is published in the Linux repo for Ubuntu versions: 16.04, 18.04, and 20.04, RHEL versions: 7.5, 7.8, 7.9, 8.0, 8.1, 8.2, CentOS versions: 7.0, 8.0, Debian versions: 9.0, 10.0, SUSE version: 15, Oracle Linux 8.1. Run this command to make sure that you have one of those versions deployed: ```bash lsb_release -a sudo apt-get update Similarly, change the URL to `.../ubuntu/16.04/...` or `.../ubuntu/18.04/...` to reference another Ubuntu version. -### Install BlobFuse +### Install BlobFuse v1 On an Ubuntu/Debian distribution: |
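A minimal sketch of that install step, assuming the Microsoft package repository for your release has already been configured as described above (the dependency package name can vary by release, for example `fuse` versus `fuse3`):

```bash
# Refresh the package index and install the BlobFuse v1 package
sudo apt-get update
sudo apt-get install blobfuse fuse -y
```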
storage | Storage Quickstart Blobs Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md | Title: 'Quickstart: Azure Blob Storage library v12 - Python' description: In this quickstart, you learn how to use the Azure Blob Storage client library version 12 for Python to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container. Previously updated : 01/28/2021 Last updated : 09/26/2022 ms.devlang: python -# Quickstart: Manage blobs with Python v12 SDK +# Quickstart: Azure Blob Storage client library for Python -In this quickstart, you learn to manage blobs by using Python. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and list blobs, and you'll create and delete containers. +Get started with the Azure Blob Storage client library for Python to manage blobs and containers. Follow steps to install the package and try out example code for basic tasks. -More resources: --- [API reference documentation](/python/api/azure-storage-blob)-- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob)-- [Package (Python Package Index)](https://pypi.org/project/azure-storage-blob/)-- [Samples](../common/storage-samples-python.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)+[API reference documentation](/python/api/azure-storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob) | [Package (PyPi)](https://pypi.org/project/azure-storage-blob/) | [Samples](../common/storage-samples-python.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) ## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An Azure Storage account. [Create a storage account](../common/storage-account-create.md).+- An Azure account with an active subscription - [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). +- An Azure Storage account - [create a storage account](../common/storage-account-create.md). - [Python](https://www.python.org/downloads/) 2.7 or 3.6+. ## Setting up This section walks you through preparing a project to work with the Azure Blob S Create a Python application named *blob-quickstart-v12*. -1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for the project. +1. In a console window (such as PowerShell, cmd, or bash), create a new directory for the project. ```console mkdir blob-quickstart-v12 Create a Python application named *blob-quickstart-v12*. ### Install the package -While still in the application directory, install the Azure Blob Storage client library for Python package by using the `pip install` command. +From the project directory, install the Azure Blob Storage client library for Python package by using the `pip install` command. ```console pip install azure-storage-blob ``` -This command installs the Azure Blob Storage client library for Python package and all the libraries on which it depends. In this case, that is just the Azure core library for Python. 
+This command installs the Azure Blob Storage for Python package and libraries on which it depends. In this case, the only dependency is the Azure core library for Python. ### Set up the app framework -From the project directory: +From the project directory, follow steps to create the basic structure of the app: 1. Open a new text file in your code editor-1. Add `import` statements -1. Create the structure for the program, including basic exception handling -- Here's the code: -- :::code language="python" source="~/azure-storage-snippets/blobs/quickstarts/python/V12/app_framework.py"::: -+1. Add `import` statements, create the structure for the program, and include basic exception handling, as shown below 1. Save the new file as *blob-quickstart-v12.py* in the *blob-quickstart-v12* directory. [!INCLUDE [storage-quickstart-credentials-include](../../../includes/storage-quickstart-credentials-include.md)] Azure Blob Storage is optimized for storing massive amounts of unstructured data - A container in the storage account - A blob in the container -The following diagram shows the relationship between these resources. +The following diagram shows the relationship between these resources: Use the following Python classes to interact with these resources: These example code snippets show you how to do the following tasks with the Azure Blob Storage client library for Python: -- [Get the connection string](#get-the-connection-string)+- [Get the connection string](#get-the-connection-string-for-authentication) - [Create a container](#create-a-container) - [Upload blobs to a container](#upload-blobs-to-a-container) - [List the blobs in a container](#list-the-blobs-in-a-container) - [Download blobs](#download-blobs) - [Delete a container](#delete-a-container) -### Get the connection string +### Get the connection string for authentication The code below retrieves the storage account connection string from the environment variable created in the [Configure your storage connection string](#configure-your-storage-connection-string) section. Add this code to the end of the `try` block: The following code cleans up the resources the app created by removing the entire container using the [delete_container](/python/api/azure-storage-blob/azure.storage.blob.containerclient#delete-containerkwargs-) method. You can also delete the local files, if you like. -The app pauses for user input by calling `input()` before it deletes the blob, container, and local files. Verify that the resources were created correctly, before they're deleted. +The app pauses for user input by calling `input()` before it deletes the blob, container, and local files. Verify that the resources were created correctly before they're deleted. Add this code to the end of the `try` block: Deleting the local source and downloaded files... Done ``` -Before you begin the cleanup process, check your *data* folder for the two files. You can open them and observe that they're identical. +Before you begin the cleanup process, check your *data* folder for the two files. You can compare them and observe that they're identical. ++## Clean up resources -After you've verified the files, press the **Enter** key to delete the test files and finish the demo. +After you've verified the files and finished testing, press the **Enter** key to delete the test files along with the container you created in the storage account. ## Next steps |
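The individual snippets referenced above come together roughly as follows. This is a condensed sketch rather than the quickstart's full sample; the environment variable name and the container and blob names are placeholders.

```python
import os, uuid
from azure.storage.blob import BlobServiceClient

# Authenticate with the connection string stored in an environment variable
connect_str = os.getenv("AZURE_STORAGE_CONNECTION_STRING")
blob_service_client = BlobServiceClient.from_connection_string(connect_str)

# Create a uniquely named container
container_name = "quickstart" + str(uuid.uuid4())
container_client = blob_service_client.create_container(container_name)

# Upload a small blob, then list the blobs in the container
blob_client = blob_service_client.get_blob_client(container=container_name, blob="hello.txt")
blob_client.upload_blob(b"Hello, Azure Blob Storage!")
for blob in container_client.list_blobs():
    print(blob.name)

# Download the blob contents, then clean up
data = blob_client.download_blob().readall()
container_client.delete_container()
```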
storage | Migrate Azure Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md | Storage account keys should be used with caution. Developers must be diligent to ## Migrating to passwordless connections -Many Azure services support passwordless connections through Azure AD and Role Based Access control (RBAC). These techniques provide robust security features and can be implemented using `DefaultAzureCredential` from the Azure Identity client libraries. +Many Azure services support passwordless connections through Azure AD and Role Based Access control (RBAC). These techniques provide robust security features and can be implemented using `DefaultAzureCredential` from the Azure Identity client libraries. > [!IMPORTANT] > Some languages must implement `DefaultAzureCredential` explicitly in their code, while others utilize `DefaultAzureCredential` internally through underlying plugins or drivers. - `DefaultAzureCredential` supports multiple authentication methods and automatically determines which should be used at runtime. This approach enables your app to use different authentication methods in different environments (local dev vs. production) without implementing environment-specific code. +`DefaultAzureCredential` supports multiple authentication methods and automatically determines which should be used at runtime. This approach enables your app to use different authentication methods in different environments (local dev vs. production) without implementing environment-specific code. -The order and locations in which `DefaultAzureCredential` searches for credentials can be found in the [Azure Identity library overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential) and varies between languages. For example, when working locally with .NET, `DefaultAzureCredential` will generally authenticate using the account the developer used to sign-in to Visual Studio. When the app is deployed to Azure, `DefaultAzureCredential` will automatically switch to use a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview). No code changes are required for this transition. +The order and locations in which `DefaultAzureCredential` searches for credentials can be found in the [Azure Identity library overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential) and varies between languages. For example, when working locally with .NET, `DefaultAzureCredential` will generally authenticate using the account the developer used to sign-in to Visual Studio. When the app is deployed to Azure, `DefaultAzureCredential` will automatically switch to use a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview). No code changes are required for this transition. > [!NOTE] > A managed identity provides a security identity to represent an app or service. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. You can read more about managed identities in the [overview](/azure/active-directory/managed-identities-azure-resources/overview) documentation. Next you will need to update your code to use passwordless connections. 1. To use `DefaultAzureCredential` in a .NET application, add the **Azure.Identity** NuGet package to your application. - ```dotnetcli - dotnet add package Azure.Identity - ``` + ```dotnetcli + dotnet add package Azure.Identity + ``` 1. 
At the top of your `Program.cs` file, add the following `using` statement: - ```csharp - using Azure.Identity; - ``` + ```csharp + using Azure.Identity; + ``` 1. Identify the locations in your code that currently create a `BlobServiceClient` to connect to Azure Storage. This task is often handled in `Program.cs`, potentially as part of your service registration with the .NET dependency injection container. Update your code to match the following example: - ```csharp - // TODO: Update <storage-account-name> placeholder to your account name - var blobServiceClient = new BlobServiceClient( - new Uri("https://<storage-account-name>.blob.core.windows.net"), - new DefaultAzureCredential()); - ``` + ```csharp + // TODO: Update <storage-account-name> placeholder to your account name + var blobServiceClient = new BlobServiceClient( + new Uri("https://<storage-account-name>.blob.core.windows.net"), + new DefaultAzureCredential()); + ``` -1. Make sure to update the storage account name in the URI of your `BlobServiceClient`. The storage account name can be found on the overview page of the Azure portal. +1. Make sure to update the storage account name in the URI of your `BlobServiceClient`. You can find the storage account name on the overview page of the Azure portal. - :::image type="content" source="../blobs/media/storage-quickstart-blobs-dotnet/storage-account-name.png" alt-text="A screenshot showing how to find the storage account name."::: + :::image type="content" source="../blobs/media/storage-quickstart-blobs-dotnet/storage-account-name.png" alt-text="Screenshot showing how to find the storage account name."::: #### Run the app locally After making these code changes, run your application locally. The new configura ### Configure the Azure hosting environment -Once your application is configured to use passwordless connections and runs locally, the same code can authenticate to Azure services after it is deployed to Azure. For example, an application deployed to an Azure App Service instance that has a managed identity enabled can connect to Azure Storage. +Once your application is configured to use passwordless connections and runs locally, the same code can authenticate to Azure services after it is deployed to Azure. For example, an application deployed to an Azure App Service instance that has a managed identity enabled can connect to Azure Storage. #### Create the managed identity using the Azure portal For this migration guide you will use App Service, but the steps are similar on > [!NOTE] > Azure Spring Apps currently only supports Service Connector using connection strings. -1. On the main overview page of your App Service, select **Service Connector** from the left navigation. +1. On the main overview page of your App Service, select **Service Connector** from the left navigation. 1. Select **+ Create** from the top menu and the **Create connection** panel will open. Enter the following values: - * **Service type**: Choose **Storage blob**. - * **Subscription**: Select the subscription you would like to use. - * **Connection Name**: Enter a name for your connection, such as *connector_appservice_blob*. - * **Client type**: Leave the default value selected or choose the specific client you'd like to use. - - Select **Next: Authentication**. + * **Service type**: Choose **Storage blob**. + * **Subscription**: Select the subscription you would like to use. + * **Connection Name**: Enter a name for your connection, such as *connector_appservice_blob*. 
+ * **Client type**: Leave the default value selected or choose the specific client you'd like to use. - :::image type="content" source="media/migration-create-identity-small.png" alt-text="A screenshot showing how to create a system assigned managed identity." lightbox="media/migration-create-identity.png"::: + Select **Next: Authentication**. ++ :::image type="content" source="media/migration-create-identity-small.png" alt-text="Screenshot showing how to create a system assigned managed identity." lightbox="media/migration-create-identity.png"::: 1. Make sure **System assigned managed identity (Recommended)** is selected, and then choose **Next: Networking**. 1. Leave the default values selected, and then choose **Next: Review + Create**. For this migration guide you will use App Service, but the steps are similar on The Service Connector will automatically create a system-assigned managed identity for the app service. The connector will also assign the managed identity a **Storage Blob Data Contributor** role for the storage account you selected. -### [App Service](#tab/app-service) +### [Azure App Service](#tab/app-service) -1. On the main overview page of your App Service, select **Identity** from the left navigation. +1. On the main overview page of your Azure App Service instance, select **Identity** from the left navigation. 1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code. - :::image type="content" source="media/migration-create-identity-small.png" alt-text="A screenshot showing how to create a system assigned managed identity." lightbox="media/migration-create-identity.png"::: + :::image type="content" source="media/migration-create-identity-small.png" alt-text="Screenshot showing how to create a system assigned managed identity." lightbox="media/migration-create-identity.png"::: -### [Spring Apps](#tab/spring-apps) +### [Azure Spring Apps](#tab/spring-apps) -1. On the main overview page of your Azure Spring App, select **Identity** from the left navigation. +1. On the main overview page of your Azure Spring Apps instance, select **Identity** from the left navigation. 1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code. - :::image type="content" source="media/storage-migrate-credentials/spring-apps-identity.png" alt-text="A screenshot showing how to enable managed identity for spring apps."::: + :::image type="content" source="media/storage-migrate-credentials/spring-apps-identity.png" alt-text="Screenshot showing how to enable managed identity for Azure Spring Apps."::: -### [Container Apps](#tab/container-apps) +### [Azure Container Apps](#tab/container-apps) -1. On the main overview page of your Azure Container App, select **Identity** from the left navigation. +1. On the main overview page of your Azure Container Apps instance, select **Identity** from the left navigation. 1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code. 
- :::image type="content" source="media/storage-migrate-credentials/container-apps-identity.png" alt-text="A screenshot showing how to enable managed identity for container apps."::: + :::image type="content" source="media/storage-migrate-credentials/container-apps-identity.png" alt-text="Screenshot showing how to enable managed identity for Azure Container Apps."::: -### [Virtual Machines](#tab/virtual-machines) +### [Azure virtual machines](#tab/virtual-machines) -1. On the main overview page of your Azure Spring App, select **Identity** from the left navigation. +1. On the main overview page of your virtual machine, select **Identity** from the left navigation. 1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code. - :::image type="content" source="media/storage-migrate-credentials/virtual-machine-identity.png" alt-text="A screenshot showing how to enable managed identity for virtual machines."::: + :::image type="content" source="media/storage-migrate-credentials/virtual-machine-identity.png" alt-text="Screenshot showing how to enable managed identity for virtual machines."::: You can also enable managed identity on an Azure hosting environment using the A ### [Service Connector](#tab/service-connector-identity) -You can create a Service Connection between an Azure compute hosting environment and a target service using the Azure CLI. The CLI automatically handles creating a managed identity and assigns the proper role, as explained in the [portal instructions](#create-the-managed-identity-using-the-azure-portal). +You can use Service Connector to create a connection between an Azure compute hosting environment and a target service using the Azure CLI. The CLI automatically handles creating a managed identity and assigns the proper role, as explained in the [portal instructions](#create-the-managed-identity-using-the-azure-portal). 
-If you are using an Azure App Service, use the `az webapp connection` command: +If you're using an Azure App Service, use the `az webapp connection` command: ```azurecli-az webapp connection create storage-blob --resource-group <resource-group-name> --name <app-service-name> --target-resource-group <target-resource-group-name> --account <target-storage-account-name> --system-identity +az webapp connection create storage-blob \ + --resource-group <resource-group-name> \ + --name <webapp-name> \ + --target-resource-group <target-resource-group-name> \ + --account <target-storage-account-name> \ + --system-identity ``` -If you are using Azure Spring Apps, use `the az spring-cloud connection` command: +If you're using Azure Spring Apps, use `the az spring-cloud connection` command: ```azurecli-az spring-cloud connection create storage-blob --resource-group <resource-group-name> --service <spring-cloud-service-name> --app <spring-app-name> --deployment <deployment-name> --target-resource-group <target-resource-group> --account <target-storage-account-name> --system-identity +az spring-cloud connection create storage-blob \ + --resource-group <resource-group-name> \ + --service <service-instance-name> \ + --app <app-name> \ + --deployment <deployment-name> \ + --target-resource-group <target-resource-group> \ + --account <target-storage-account-name> \ + --system-identity ``` -If you are using Azure Container Apps, use the `az containerapp connection` command: +If you're using Azure Container Apps, use the `az containerapp connection` command: ```azurecli-az containerapp connection create storage-blob --resource-group <resource-group-name> --name <containerapp-name> --target-resource-group <target-resource-group-name> --account <target-storage-account-name> --system-identity +az containerapp connection create storage-blob \ + --resource-group <resource-group-name> \ + --name <containerapp-name> \ + --target-resource-group <target-resource-group-name> \ + --account <target-storage-account-name> \ + --system-identity ``` -### [App Service](#tab/app-service-identity) +### [Azure App Service](#tab/app-service-identity) -You can assign a managed identity to an Azure App Service with the [az webapp identity assign](/cli/azure/webapp/identity) command. +You can assign a managed identity to an Azure App Service instance with the [az webapp identity assign](/cli/azure/webapp/identity) command. ```azurecli-az webapp identity assign --resource-group <resource-group-name> --name <app-service-name> +az webapp identity assign \ + --resource-group <resource-group-name> \ + --name <webapp-name> ``` -### [Spring Apps](#tab/spring-apps-identity) +### [Azure Spring Apps](#tab/spring-apps-identity) -You can assign a managed identity to an Azure Spring App with the [az spring app identity assign](/cli/azure/spring/app/identity) command. +You can assign a managed identity to an Azure Spring Apps instance with the [az spring app identity assign](/cli/azure/spring/app/identity) command. ```azurecli-az spring app identity assign --resource-group <resource-group-name> --name <app-service-name> --service <service-name> +az spring app identity assign \ + --resource-group <resource-group-name> \ + --name <app-name> \ + --service <service-name> ``` -### [Container Apps](#tab/container-apps-identity) +### [Azure Container Apps](#tab/container-apps-identity) -You can assign a managed identity to an Azure Container App with the [az containerapp identity assign](/cli/azure/containerapp/identity) command. 
+You can assign a managed identity to an Azure Container Apps instance with the [az containerapp identity assign](/cli/azure/containerapp/identity) command. ```azurecli-az containerapp identity assign --resource-group <resource-group-name> --name <app-service-name> +az containerapp identity assign \ + --resource-group <resource-group-name> \ + --name <app-name> ``` -### [Virtual Machines](#tab/virtual-machines-identity) +### [Azure virtual machines](#tab/virtual-machines-identity) -You can assign a managed identity to a Virtual Machine with the [az vm identity assign](/cli/azure/vm/identity) command. +You can assign a managed identity to a virtual machine with the [az vm identity assign](/cli/azure/vm/identity) command. ```azurecli-az vm identity assign --resource-group <resource-group-name> --name <app-service-name> +az vm identity assign \ + --resource-group <resource-group-name> \ + --name <virtual-machine-name> ``` -### [AKS](#tab/aks-identity) +### [Azure Kubernetes Service](#tab/aks-identity) -You can assign a managed identity to an Azure Kubernetes Service with the [az aks update](/cli/azure/aks) command. +You can assign a managed identity to an Azure Kubernetes Service (AKS) instance with the [az aks update](/cli/azure/aks) command. ```azurecli-az vm identity assign --resource-group <resource-group-name> --name <app-service-name> +az aks update \ + --resource-group <resource-group-name> \ + --name <cluster-name> \ + --enable-managed-identity ``` #### Assign roles to the managed identity -Next, you need to grant permissions to the managed identity you created to access your storage account. You can do this by assigning a role to the managed identity, just like you did with your local development user. +Next, you need to grant permissions to the managed identity you created to access your storage account. You can do this by assigning a role to the managed identity, just like you did with your local development user. ### [Service Connector](#tab/assign-role-service-connector) -If you connected your services using the Service Connector you do not need to complete this step. The necessary configurations were handled for you: +If you connected your services using the Service Connector you do not need to complete this step. The necessary configurations were handled for you: * If you selected a managed identity while creating the connection, a system-assigned managed identity was created for your app and assigned the **Storage Blob Data Contributor** role on the storage account. If you connected your services using the Service Connector you do not need to co 1. Choose **Add role assignment** - :::image type="content" source="media/migration-add-role-small.png" alt-text="A screenshot showing how to add a role to a managed identity." lightbox="media/migration-add-role.png"::: + :::image type="content" source="media/migration-add-role-small.png" alt-text="Screenshot showing how to add a role to a managed identity." lightbox="media/migration-add-role.png"::: 1. In the **Role** search box, search for *Storage Blob Data Contributor*, which is a common role used to manage data operations for blobs. You can assign whatever role is appropriate for your use case. Select the *Storage Blob Data Contributor* from the list and choose **Next**. If you connected your services using the Service Connector you do not need to co 1. In the flyout, search for the managed identity you created by entering the name of your app service. Select the system assigned identity, and then choose **Select** to close the flyout menu.
- :::image type="content" source="media/migration-select-identity-small.png" alt-text="A screenshot showing how to select the assigned managed identity." lightbox="media/migration-select-identity.png"::: + :::image type="content" source="media/migration-select-identity-small.png" alt-text="Screenshot showing how to select the assigned managed identity." lightbox="media/migration-select-identity.png"::: -1. Select **Next** a couple times until you're able to select **Review + assign** to finish the role assignment. +1. Select **Next** a couple times until you're able to select **Review + assign** to finish the role assignment. ### [Azure CLI](#tab/assign-role-azure-cli) To assign a role at the resource level using the Azure CLI, you first must retrieve the resource ID using the az storage account show command. You can filter the output properties using the --query parameter. ```azurecli-az storage account show --resource-group '<your-resource-group-name>' --name '<your-storage-account-name>' --query id +az storage account show \ + --resource-group '<your-resource-group-name>' \ + --name '<your-storage-account-name>' \ + --query id ``` Copy the output ID from the preceding command. You can then assign roles using the az role command of the Azure CLI. ```azurecli-az role assignment create --assignee "<your-username>" \ - --role "Storage Blob Data Contributor" \ - --scope "<your-resource-id>" +az role assignment create \ + --assignee "<your-username>" \ + --role "Storage Blob Data Contributor" \ + --scope "<your-resource-id>" ``` In this tutorial, you learned how to migrate an application to passwordless conn You can read the following resources to explore the concepts discussed in this article in more depth: -- For more information on authorizing access with managed identity, visit [Authorize access to blob data with managed identities for Azure resources](/azure/storage/blobs/authorize-managed-identity).--[Authorize with Azure roles](/azure/storage/blobs/authorize-access-azure-active-directory)-- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).-- To learn more about authorizing from a web application, visit [Authorize from a native or web application](/azure/storage/common/storage-auth-aad-app)+* For more information on authorizing access with managed identity, visit [Authorize access to blob data with managed identities for Azure resources](/azure/storage/blobs/authorize-managed-identity). +* [Authorize with Azure roles](/azure/storage/blobs/authorize-access-azure-active-directory) +* To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro). +* To learn more about authorizing from a web application, visit [Authorize from a native or web application](/azure/storage/common/storage-auth-aad-app) |
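If you prefer the Azure CLI over the portal for the role assignment covered earlier, a hedged sketch (resource names are placeholders) looks like this: look up the principal ID of the app's system-assigned managed identity, then grant it the **Storage Blob Data Contributor** role scoped to the storage account.

```azurecli
principalId=$(az webapp identity show \
    --resource-group <resource-group-name> \
    --name <webapp-name> \
    --query principalId --output tsv)

storageId=$(az storage account show \
    --resource-group <resource-group-name> \
    --name <storage-account-name> \
    --query id --output tsv)

az role assignment create \
    --assignee "$principalId" \
    --role "Storage Blob Data Contributor" \
    --scope "$storageId"
```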
storage | Storage Files Identity Ad Ds Configure Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md | Title: Control what a user can do at the file level - Azure file shares -description: Learn how to configure Windows ACLs permissions for on-premises AD DS authentication to Azure file shares. Allowing you to take advantage of granular access control. + Title: Control what a user can do at the directory and file level - Azure Files +description: Learn how to configure Windows ACLs for directory and file level permissions for AD DS authentication to Azure file shares, allowing you to take advantage of granular access control. Previously updated : 03/16/2022 Last updated : 09/27/2022 -# Part three: configure directory and file level permissions over SMB +# Part three: configure directory and file level permissions over SMB -Before you begin this article, make sure you completed the previous article, [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md) to ensure that your share-level permissions are in place. +Before you begin this article, make sure you've completed the previous article, [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md), to ensure that your share-level permissions are in place with Azure role-based access control (RBAC). -After you assign share-level permissions with Azure RBAC, you must configure proper Windows ACLs at the root, directory, or file level, to take advantage of granular access control. The Azure RBAC share-level permissions act as a high-level gatekeeper that determines whether a user can access the share. While the Windows ACLs operate at a more granular level to control what operations the user can do at the directory or file level. Both share-level and file/directory level permissions are enforced when a user attempts to access a file/directory, so if there is a difference between either of them, only the most restrictive one will be applied. For example, if a user has read/write access at the file-level, but only read at a share-level, then they can only read that file. The same would be true if it was reversed, and a user had read/write access at the share-level, but only read at the file-level, they can still only read the file. +After you assign share-level permissions, you must first connect to the Azure file share using the storage account key and then configure Windows access control lists (ACLs), also known as NTFS permissions, at the root, directory, or file level. While share-level permissions act as a high-level gatekeeper that determines whether a user can access the share, Windows ACLs operate at a more granular level to control what operations the user can do at the directory or file level. +Both share-level and file/directory level permissions are enforced when a user attempts to access a file/directory, so if there's a difference between either of them, only the most restrictive one will be applied. For example, if a user has read/write access at the file level, but only read at a share level, then they can only read that file. The same would be true if it was reversed: if a user had read/write access at the share-level, but only read at the file-level, they can still only read the file. 
## Applies to | File share type | SMB | NFS | After you assign share-level permissions with Azure RBAC, you must configure pro The following table contains the Azure RBAC permissions related to this configuration: - | Built-in role | NTFS permission | Resulting access | |||| |Storage File Data SMB Share Reader | Full control, Modify, Read, Write, Execute | Read & execute | The following table contains the Azure RBAC permissions related to this configur | | Read | Read | | | Write | Write | -- ## Supported permissions -Azure Files supports the full set of basic and advanced Windows ACLs. You can view and configure Windows ACLs on directories and files in an Azure file share by mounting the share and then using Windows File Explorer, running the Windows [icacls](/windows-server/administration/windows-commands/icacls) command, or the [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) command. +Azure Files supports the full set of basic and advanced Windows ACLs. You can view and configure Windows ACLs on directories and files in an Azure file share by connecting to the share and then using Windows File Explorer, running the Windows [icacls](/windows-server/administration/windows-commands/icacls) command, or the [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) command. To configure ACLs with superuser permissions, you must mount the share by using your storage account key from your domain-joined VM. Follow the instructions in the next section to mount an Azure file share from the command prompt and to configure Windows ACLs. The following permissions are included on the root directory of a file share: |`NT AUTHORITY\Authenticated Users`|All users in AD that can get a valid Kerberos token.| |`CREATOR OWNER`|Each object either directory or file has an owner for that object. If there are ACLs assigned to `CREATOR OWNER` on that object, then the user that is the owner of this object has the permissions to the object defined by the ACL.| +## Connect to the Azure file share -## Mount a file share from the command prompt --Use the Windows `net use` command to mount the Azure file share. Remember to replace the placeholder values in the following example with your own values. For more information about mounting file shares, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md). +Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to connect to the Azure file share using the storage account key and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md). > [!NOTE]-> You may see the *Full Control** ACL applied to a role already. This typically already offers the ability to assign permissions. However, because there are access checks at two levels (the share-level and the file-level), this is restricted. Only users who have the **SMB Elevated Contributor** role and create a new file or folder can assign permissions on those specific new files or folders without the use of the storage account key. All other permission assignment requires mounting the share with the storage account key, first. 
+> You might see the **Full Control** ACL applied to a role already. This typically already offers the ability to assign permissions. However, because there are access checks at two levels (the share level and the file/directory level), this is restricted. Only users who have the **SMB Elevated Contributor** role and create a new file or directory can assign permissions on those new files or directories without using the storage account key. All other file/directory permission assignment requires connecting to the share using the storage account key first. -``` +```powershell $connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445-if ($connectTestResult.TcpTestSucceeded) -{ - net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> /user:Azure\<storage-account-name> <storage-account-key> -} -else -{ - Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port." +if ($connectTestResult.TcpTestSucceeded) { + cmd.exe /C "cmdkey /add:`"<storage-account-name>.file.core.windows.net`" /user:`"localhost\<storage-account-name>`" /pass:`"<storage-account-key>`"" + New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\<file-share-name>" +} else { + Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port." }- ``` -If you experience issues in connecting to Azure Files, refer to [the troubleshooting tool we published for Azure Files mounting errors on Windows](https://azure.microsoft.com/blog/new-troubleshooting-diagnostics-for-azure-files-mounting-errors-on-windows/). +If you experience issues connecting to Azure Files on Windows, refer to [this troubleshooting tool](https://azure.microsoft.com/blog/new-troubleshooting-diagnostics-for-azure-files-mounting-errors-on-windows/). ## Configure Windows ACLs -Once your file share has been mounted with the storage account key, you must configure the Windows ACLs (also known as NTFS permissions). You can configure the Windows ACLs using either Windows File Explorer or icacls. +After you've connected to your Azure file share, you must configure the Windows ACLs. You can do this using either Windows File Explorer or [icacls](/windows-server/administration/windows-commands/icacls). If you have directories or files in on-premises file servers with Windows DACLs configured against the AD DS identities, you can copy it over to Azure Files persisting the ACLs with traditional file copy tools like Robocopy or [Azure AzCopy v 10.4+](https://github.com/Azure/azure-storage-azcopy/releases). If your directories and files are tiered to Azure Files through Azure File Sync, your ACLs are carried over and persisted in their native format. If you have directories or files in on-premises file servers with Windows DACLs Use the following Windows command to grant full permissions to all directories and files under the file share, including the root directory. Remember to replace the placeholder values in the example with your own values. 
```-icacls <mounted-drive-letter>: /grant <user-upn>:(f) +icacls <mapped-drive-letter>: /grant <user-upn>:(f) ``` For more information on how to use icacls to set Windows ACLs and on the different types of supported permissions, see [the command-line reference for icacls](/windows-server/administration/windows-commands/icacls). ### Configure Windows ACLs with Windows File Explorer -Use Windows File Explorer to grant full permission to all directories and files under the file share, including the root directory. If you are not able to load the AD domain information correctly in Windows File Explorer, this is likely due to trust configuration in your on-prem AD environment. The client machine was not able to reach the AD domain controller registered for Azure Files authentication. In this case, use icacls for configurating Windows ACLs. +Use Windows File Explorer to grant full permission to all directories and files under the file share, including the root directory. If you're not able to load the AD domain information correctly in Windows File Explorer, this is likely due to trust configuration in your on-premises AD environment. The client machine wasn't able to reach the AD domain controller registered for Azure Files authentication. In this case, [use icacls](#configure-windows-acls-with-icacls) for configuring Windows ACLs. 1. Open Windows File Explorer and right click on the file/directory and select **Properties**. 1. Select the **Security** tab. 1. Select **Edit..** to change permissions. 1. You can change the permissions of existing users or select **Add...** to grant permissions to new users. 1. In the prompt window for adding new users, enter the target username you want to grant permissions to in the **Enter the object names to select** box, and select **Check Names** to find the full UPN name of the target user.-1. Select **OK**. -1. In the **Security** tab, select all permissions you want to grant your new user. -1. Select **Apply**. -+1. Select **OK**. +1. In the **Security** tab, select all permissions you want to grant your new user. +1. Select **Apply**. ## Next steps -Now that the feature is enabled and configured, continue to the next article, where you mount your Azure file share from a domain-joined VM. +Now that the feature is enabled and configured, continue to the next article to learn how to mount your Azure file share from a domain-joined VM. [Part four: mount a file share from a domain-joined VM](storage-files-identity-ad-ds-mount-file-share.md) |
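Granting full control at the root, as the icacls example earlier in this article does, is often broader than you need. As a hedged illustration using the Z: drive mapped in the connection script (the directory names and UPN are placeholders), the first command grants Modify on one directory tree, the second grants Read & execute on another, and running `icacls` with only a path lists the resulting ACLs:

```
icacls Z:\team-share /grant <user-upn>:(OI)(CI)(M)
icacls Z:\read-only-share /grant <user-upn>:(OI)(CI)(RX)
icacls Z:\team-share
```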
storsimple | Storsimple Update52 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-update52-release-notes.md | Use the following steps to install Update 5.2: 1. [Connect to Windows PowerShell on the StorSimple 8000 series device](storsimple-8000-deployment-walkthrough-u2.md#use-putty-to-connect-to-the-device-serial-console), or connect directly to the appliance via serial cable. -1. Use [Start-HcsUpdate](/powershell/module/hcs/start-hcsupdate.md?view=winserver2012r2-ps&preserve-view=true) to update the device. For detailed steps, see [Install regular updates via Windows PowerShell](storsimple-update-device.md#to-install-regular-updates-via-windows-powershell-for-storsimple). This update is non-disruptive. +1. Use [Start-HcsUpdate](/powershell/module/hcs/start-hcsupdate?view=winserver2012r2-ps&preserve-view=true) to update the device. For detailed steps, see [Install regular updates via Windows PowerShell](storsimple-update-device.md#to-install-regular-updates-via-windows-powershell-for-storsimple). This update is non-disruptive. 1. If ```Start-HcsUpdate``` doesn't work because of firewall issues, contact Microsoft Support. |
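A minimal sketch of that PowerShell flow from the device console follows; cmdlet availability can vary by device software version, so treat the exact cmdlet names as assumptions to verify against the StorSimple update documentation.

```powershell
# Scan for available updates, start the update, then poll for progress
Get-HcsUpdateAvailability
Start-HcsUpdate
Get-HcsUpdateStatus
```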
stream-analytics | Azure Database Explorer Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-database-explorer-output.md | Azure Data Explorer supports several ingestion methods, including connectors to For more information about Azure Data Explorer, visit the [What is Azure Data Explorer documentation](/azure/data-explorer/data-explorer-overview/). -To learn more about how to create an Azure Data Explorer and cluster by using the Azure portal, visit: [Quickstart: Create an Azure Data Explorer cluster and database](/azure/data-explorer/create-cluster-database-portal/) +To learn more about how to create an Azure Data Explorer cluster by using the Azure portal, visit: [Quickstart: Create an Azure Data Explorer cluster and database](/azure/data-explorer/create-cluster-database-portal/) > [!NOTE] -> Azure Data Explorer from Azure Stream Analytics does not support output to Synapse Data Explorer clusters. +> Azure Data Explorer output from Azure Stream Analytics supports Synapse Data Explorer clusters. To write to a Synapse Data Explorer cluster, specify the URL of your cluster in the configuration blade for the Azure Data Explorer output in your Azure Stream Analytics job. ## Output configuration You can significantly grow the scope of real-time analytics by leveraging ASA an * Stream Analytics can perform aggregates, filters, enrich, and transform incoming data streams for use in Data Explorer -## Limitation -* The name of the columns & data type should match between Azure Stream Analytics SQL query and Azure Data Explorer table. +## Other scenarios and limitations +* The column names and data types should match between the Azure Stream Analytics SQL query and the Azure Data Explorer table. Note that the comparison is case sensitive. +* Columns that exist in the Azure Data Explorer table but aren't produced by the ASA query are ignored, while columns produced by the ASA query that are missing from the Azure Data Explorer table raise an error. +* The order of the columns in your ASA query doesn't matter. Order is determined by the schema of the ADX table. * Azure Data Explorer has an aggregation (batching) policy for data ingestion, designed to optimize the ingestion process. The policy is configured to 5 minutes, 1000 items or 1 GB of data by default, so you may experience a latency. See [batching policy](/azure/data-explorer/kusto/management/batchingpolicy) for aggregation options.-* Test connection to Azure Data Explorer is not supported in jobs running in Shared multi-tenant environment. ## Next steps |
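As a hedged illustration of the case-sensitive name matching described above (the table, input, and output names are invented for the example), suppose the target table is created in Azure Data Explorer with `.create table Telemetry (DeviceId: string, EventTime: datetime, Temperature: real)`. The Stream Analytics query then aliases its projected columns so they match those names exactly:

```sql
-- Aliases make the projected column names match the Azure Data Explorer table
SELECT
    deviceId AS DeviceId,
    EventEnqueuedUtcTime AS EventTime,
    temperature AS Temperature
INTO AdxOutput
FROM IoTHubInput
```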
stream-analytics | Blob Output Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-output-managed-identity.md | Below are the current limitations of this feature: 3. Multi-tenant access is not supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and cannot be used with a resource that resides in a different Azure Active Directory tenant. -4. [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) is not supported. This means the user is not able to enter their own service principal to be used by their Stream Analytics job. The service principal must be generated by Azure Stream Analytics. ## Next steps |
stream-analytics | Stream Analytics Window Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-window-functions.md | In time-streaming scenarios, performing operations on the data contained in temp There are five kinds of temporal windows to choose from: [**Tumbling**](/stream-analytics-query/tumbling-window-azure-stream-analytics), [**Hopping**](/stream-analytics-query/hopping-window-azure-stream-analytics), [**Sliding**](/stream-analytics-query/sliding-window-azure-stream-analytics), [**Session**](/stream-analytics-query/session-window-azure-stream-analytics), and [**Snapshot**](/stream-analytics-query/snapshot-window-azure-stream-analytics) windows. You use the window functions in the [**GROUP BY**](/stream-analytics-query/group-by-azure-stream-analytics) clause of the query syntax in your Stream Analytics jobs. You can also aggregate events over multiple windows using the [**Windows()** function](/stream-analytics-query/windows-azure-stream-analytics). -All the [windowing](/stream-analytics-query/windowing-azure-stream-analytics) operations output results at the **end** of the window. Note that when you start a stream analytics job, you can specify the *Job output start time* and the system will automatically fetch previous events in the incoming streams to output the first window at the specified time; for example when you start with the *Now* option, it will start to emit data immediately. -The output of the window will be a single event based on the aggregate function used. The output event will have the time stamp of the end of the window and all window functions are defined with a fixed length. +All the [windowing](/stream-analytics-query/windowing-azure-stream-analytics) operations output results at the **end** of the window. Note that when you start a stream analytics job, you can specify the *Job output start time* and the system will automatically fetch previous events in the incoming streams to output the first window at the specified time; for example when you start with the *Now* option, it will start to emit data immediately. +The output of the window will be a single event based on the aggregate function used. The output event will have the time stamp of the end of the window and all window functions are defined with a fixed length.  Will return: |2021-10-26T10:15:20|PST|2| |2021-10-26T10:15:30|PST|4| - ## Hopping window -[**Hopping**](/stream-analytics-query/hopping-window-azure-stream-analytics) window functions hop forward in time by a fixed period. It may be easy to think of them as Tumbling windows that can overlap and be emitted more often than the window size. Events can belong to more than one Hopping window result set. To make a Hopping window the same as a Tumbling window, specify the hop size to be the same as the window size. +[**Hopping**](/stream-analytics-query/hopping-window-azure-stream-analytics) window functions hop forward in time by a fixed period. It may be easy to think of them as Tumbling windows that can overlap and be emitted more often than the window size. Events can belong to more than one Hopping window result set. To make a Hopping window the same as a Tumbling window, specify the hop size to be the same as the window size.  
Will return: |2021-10-26T10:15:25|Streaming|4| |2021-10-26T10:15:30|Streaming|4| - ## Sliding window [**Sliding**](/stream-analytics-query/sliding-window-azure-stream-analytics) windows, unlike Tumbling or Hopping windows, output events only for points in time when the content of the window actually changes. In other words, when an event enters or exits the window. So, every window has at least one event. Similar to Hopping windows, events can belong to more than one sliding window. - + With the following input data (illustrated above): Will return: |2021-10-26T10:15:22|Streaming|2| ## Next steps+ * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md) * [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md) * [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md) |
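To make the hopping example above concrete, here's a minimal sketch of the corresponding query; the input name, timestamp column, and grouping column are placeholders. The window is 10 seconds long and hops every 5 seconds, so a single event can be counted in two windows, and each result is stamped with the window end time.

```sql
SELECT
    System.Timestamp() AS WindowEnd,
    Topic,
    COUNT(*) AS EventCount
INTO Output
FROM Input TIMESTAMP BY EventTime
GROUP BY Topic, HoppingWindow(second, 10, 5)
```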
synapse-analytics | How To Monitor Pipeline Runs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/how-to-monitor-pipeline-runs.md | Select **Pipeline runs** to view the list of pipeline runs. ## Filter your pipeline runs -You can filter the list of pipeline runs to the ones you're interested in. The filters at the top of the screen allow you to specify a field on which you'd like to filter. +You can filter the list of pipeline runs to the ones you're interested in. The filters at the top of the screen allow you to specify a field on which you'd like to filter. You can view pipeline run data for the last 45 days. If you want to store pipeline run data for more than 45 days, set up your own diagnostic logging with [Azure Monitor](../../data-factory/monitor-using-azure-monitor.md). For example, you can filter the view to see only the pipeline runs for the pipeline named "holiday": |
synapse-analytics | Apache Spark Secure Credentials With Tokenlibrary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md | print(accountKey) ::: zone-end -#### getSecret() +#### GetSecret() To retrieve a secret stored from Azure Key Vault, we recommend that you create a linked service to Azure Key Vault within the Synapse workspace. The Synapse workspace managed service identity will need to be granted **GET** Secrets permission to the Azure Key Vault. The linked service will use the managed service identity to connect to Azure Key Vault service to retrieve the secret. Otherwise, connecting directly to Azure Key Vault will use the user's Azure Active Directory (AAD) credential. In this case, the user will need to be granted the Get Secret permissions in Azure Key Vault. -`TokenLibrary.getSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>" [, <LINKED SERVICE NAME>])` +`TokenLibrary.GetSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>" [, <LINKED SERVICE NAME>])` -To retrieve a secret from Azure Key Vault, use the **TokenLibrary.getSecret()** function. +To retrieve a secret from Azure Key Vault, use the **TokenLibrary.GetSecret()** function. ::: zone pivot = "programming-language-scala" ```scala import com.microsoft.azure.synapse.tokenlibrary.TokenLibrary -val connectionString: String = TokenLibrary.getSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>") +val connectionString: String = TokenLibrary.GetSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>") println(connectionString) ``` from pyspark.sql import SparkSession sc = SparkSession.builder.getOrCreate() token_library = sc._jvm.com.microsoft.azure.synapse.tokenlibrary.TokenLibrary -connection_string = token_library.getSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>") +connection_string = token_library.GetSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>") print(connection_string) ``` |
synapse-analytics | Connect Synapse Link Sql Server 2022 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md | This article provides a step-by-step guide for getting started with Azure Synaps :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-new-empty-sql-script.png" alt-text="Screenshot of creating a new empty SQL script from Synapse Studio."::: -1. Paste the following script and select **Run** to create the master key for your target Synapse SQL database. You also need to create a schema if your expected schema is not available in target Synapse SQL database. +1. Paste the following script and select **Run** to create the master key for your target Synapse SQL database. ```sql CREATE MASTER KEY |
virtual-desktop | Multimedia Redirection Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection-intro.md | + + Title: Understanding multimedia redirection on Azure Virtual Desktop - Azure +description: An overview of multimedia redirection on Azure Virtual Desktop (preview). ++ Last updated : 09/27/2022++++# Understanding multimedia redirection for Azure Virtual Desktop ++> [!IMPORTANT] +> Multimedia redirection on Azure Virtual Desktop is currently in preview. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Multimedia redirection (MMR) gives you smooth video playback while watching videos in a browser in Azure Virtual Desktop. Multimedia redirection redirects the media content from Azure Virtual Desktop to your local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support this feature. ++> [!NOTE] +> Multimedia redirection isn't supported on Azure Virtual Desktop for Microsoft 365 Government (GCC), GCC-High environments, and Microsoft 365 DoD. +> +> Multimedia redirection on Azure Virtual Desktop is only available for the [Windows Desktop client, version 1.2.3573 or later](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew). on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices. +> +> The preview version of multimedia redirection for Azure Virtual Desktop has restricted playback to sites we've tested. ++## Websites that work with multimedia redirection ++The following list shows websites that are known to work with MMR. MMR works with these sites by default. ++ :::column span=""::: + - AnyClip + - AWS Training + - BBC + - Big Think + - Brightcove + - CNBC + - Coursera + - Daily Mail + - Facebook + - Fidelity + :::column-end::: + :::column span=""::: + - Flashtalking + - Fox Sports + - Fox Weather + - IMDB + - Infosec Institute + - LinkedIn Learning + - Microsoft Learn + - Microsoft Stream + - Pluralsight + - Reddit + - Reuters + :::column-end::: + :::column span=""::: + - Skillshare + - The Guardian + - Twitch + - Udemy + - UMU + - U.S. News + - Vidazoo + - Vimeo + - Yahoo + - Yammer + - YouTube (including sites with embedded YouTube videos). + :::column-end::: ++Microsoft Teams live events aren't media-optimized for Azure Virtual Desktop and Windows 365 when using the native Teams app. However, if you use Teams live events with a supported browser, MMR is a workaround that provides smoother Teams live events playback on Azure Virtual Desktop. MMR supports Enterprise Content Delivery Network (ECDN) for Teams live events. ++### The multimedia redirection status icon ++To quickly tell if multimedia redirection is active in your browser, we've added the following icon states: ++| Icon State | Definition | +|--|--| +| :::image type="content" source="./media/mmr-extension-unsupported.png" alt-text="The MMR extension icon greyed out, indicating that the website can't be redirected or the extension isn't loading."::: | A greyed out icon means that multimedia content on the website can't be redirected or the extension isn't loading. 
| +| :::image type="content" source="./media/mmr-extension-disconnect.png" alt-text="The MMR extension icon with a red square with an x that indicates the client can't connect to multimedia redirection."::: | The red square with an "X" inside of it means that the client can't connect to multimedia redirection. You may need to uninstall and reinstall the extension, then try again. | +| :::image type="content" source="./media/mmr-extension-supported.png" alt-text="The MMR extension icon with no status applied."::: | The default icon appearance with no status applied. This icon state means that multimedia content on the website can be redirected and is ready to use. | +| :::image type="content" source="./media/mmr-extension-playback.png" alt-text="The MMR extension icon with a green square with a play button icon inside of it, indicating that multimedia redirection is working."::: | The green square with a play button icon inside of it means that the extension is currently redirecting video playback. | +| :::image type="content" source="./media/mmr-extension-webrtc.png" alt-text="The MMR extension icon with a green square with telephone icon inside of it, indicating that multimedia redirection is working."::: | The green square with a phone icon inside of it means that the extension is currently redirecting a WebRTC call. | ++Selecting the icon in your browser will display a pop-up menu where it lists the features supported on the current page, you can select to enable or disable multimedia redirection on all websites, and collect logs. It also lists the version numbers for each component of the service. ++You can use the icon to check the status of the extension by following the directions in [Check the extension status](multimedia-redirection.md#check-the-extension-status). ++## Support during public preview ++If you run into issues while using the public preview version of multimedia redirection, we recommend contacting [Microsoft Azure support](https://azure.microsoft.com/support/plans/). ++## Next steps ++To learn how to use this feature, see [Multimedia redirection for Azure Virtual Desktop (preview)](multimedia-redirection.md). ++To troubleshoot issues or view known issues, see [our troubleshooting article](troubleshoot-multimedia-redirection.md). ++If you're interested in video streaming on other parts of Azure Virtual Desktop, check out [Teams for Azure Virtual Desktop](teams-on-avd.md). |
virtual-desktop | Multimedia Redirection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md | Title: Multimedia redirection on Azure Virtual Desktop - Azure -description: How to use multimedia redirection for Azure Virtual Desktop (preview). -+ Title: Use multimedia redirection on Azure Virtual Desktop - Azure +description: How to use multimedia redirection on Azure Virtual Desktop (preview). + Previously updated : 08/27/2022- Last updated : 09/27/2022+ -# Multimedia redirection for Azure Virtual Desktop (preview) +# Use multimedia redirection on Azure Virtual Desktop (preview) > [!IMPORTANT]-> Multimedia redirection for Azure Virtual Desktop is currently in preview. +> Multimedia redirection on Azure Virtual Desktop is currently in preview. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ->[!NOTE] ->Azure Virtual Desktop doesn't currently support multimedia redirection on Azure Virtual Desktop for Microsoft 365 Government (GCC), GCC-High environments, and Microsoft 365 DoD. +This article will show you how to use multimedia redirection (MMR) for Azure Virtual Desktop (preview) with Microsoft Edge or Google Chrome browsers. For more information about how multimedia redirection works, see [Understanding multimedia redirection for Azure Virtual Desktop](multimedia-redirection-intro.md). ++> [!NOTE] +> Multimedia redirection isn't supported on Azure Virtual Desktop for Microsoft 365 Government (GCC), GCC-High environments, and Microsoft 365 DoD. >->Multimedia redirection on Azure Virtual Desktop is only available for the Windows Desktop client on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices. Multimedia redirection requires the Windows Desktop client, version 1.2.2999 or later. +>Multimedia redirection on Azure Virtual Desktop is only available for the Windows Desktop client on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices. Multimedia redirection requires the [Windows Desktop client, version 1.2.3573 or later](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew). For more information, see [Prerequisites](#prerequisites). -Multimedia redirection (MMR) gives you smooth video playback while watching videos in your Azure Virtual Desktop browser. Multimedia redirection remotes the media content from the browser to the local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support the multimedia redirection feature. However, the public preview version of multimedia redirection for Azure Virtual Desktop has restricted playback on sites in the "Known Sites" list. To test sites on the list within your organization's deployment, you'll need to [enable an extension](#managing-group-policies-for-the-multimedia-redirection-browser-extension). +## Prerequisites -## Websites that work with MMR +Before you can use multimedia redirection on Azure Virtual Desktop, you'll need the following things: -The following list shows websites that are known to work with MMR. MMR is supposed to work on these sites by default, when you haven't selected the **Enable on all sites** check box. +- An Azure Virtual Desktop deployment. +- Microsoft Edge or Google Chrome installed on your session hosts. 
+- Microsoft Visual C++ Redistributable 2015-2022, version 14.32.31332.0 or later installed on your session hosts. You can download the latest version from [Microsoft Visual C++ Redistributable latest supported downloads](/cpp/windows/latest-supported-vc-redist). +- Windows Desktop client, version 1.2.3573 or later on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices. This includes the multimedia redirection plugin (`C:\Program Files\Remote Desktop\MsMmrDVCPlugin.dll`), which is required on the client device. Your device must meet the [hardware requirements for Teams on a Windows PC](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/). -- YouTube -- Facebook-- Fox Sports-- IMDB-- [Microsoft Learn training](/training)-- LinkedIn Learning-- Fox Weather-- Yammer-- The Guardian-- Fidelity-- Udemy-- BBC-- Pluralsight-- Sites with embedded YouTube videos, such as Medium, Udacity, Los Angeles Times, and so on.-- Teams Live Events (on web)- - Currently, Teams live events aren't media-optimized for Azure Virtual Desktop and Windows 365. MMR is a short-term workaround for a smoother Teams live events playback on Azure Virtual Desktop. - - MMR supports Enterprise Content Delivery Network (ECDN) for Teams live events. +## Install the multimedia redirection extension -### How to use MMR for Teams live events +For multimedia redirection to work, there are two parts to install on your session hosts: the host component and the browser extension for Edge or Chrome. You install the host component and browser extension from an MSI file, and you can also get and install the browser extension from Microsoft Edge Add-ons or the Chrome Web Store, depending on which browser you're using. -To use MMR for Teams live events: +### Install the host component -1. First, open the link to the Teams event in either a Microsoft Edge or Google Chrome browser. +To install the host component on your session hosts, you can install the MSI manually on each session host or use your enterprise deployment tool with `msiexec`. To install the MSI manually, you'll need to: -2. Make sure you can see a green check mark next to the [multimedia redirection status icon](#the-multimedia-redirection-status-icon). If the green check mark is there, MMR is enabled for Teams live events. +1. Sign in to a session host as a local administrator. -3. Select **Watch on the web instead**. The Teams live event should automatically start playing in your browser. Make sure you only select **Watch on the web instead**, as shown in the following screenshot. If you use the Teams app, MMR won't work. +1. Download the [MMR host MSI installer](https://aka.ms/avdmmr/msi). -The following screenshot highlights the areas described in the previous steps: +1. Open the file that you downloaded to run the setup wizard. +1. Follow the prompts. Once it's completed, select **Finish**. -## Requirements +### Install the browser extension -Before you can use Multimedia Redirection on Azure Virtual Desktop, you'll need -to do these things: +Next, you'll need to install the browser extension. This is installed on session hosts where you already have Edge or Chrome available. Installing the host component also installs the browser extension. Users will see a prompt that says **New Extension added**. In order to use the app, they'll need to enable the extension. A user can enable the extension by doing the following: -1. 
[Install the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop) on a Windows 11, Windows 10, or Windows 10 IoT Enterprise device that meets the [hardware requirements for Teams on a Windows PC](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/). Installing version 1.2.2999 or later of the client will also install the multimedia redirection plugin (MsMmrDVCPlugin.dll) on the client device. To learn more about updates and new versions, see [What's new in the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew). +1. Sign in to Azure Virtual Desktop and open Edge or Chrome. -2. [Create a host pool for your users](create-host-pools-azure-marketplace.md). +1. At the prompt to enable the extension, select **Turn on extension**. Users should also pin the extension so that they can see from the icon if multimedia redirection is connected. -3. Configure the client machine to let your users access the Insiders program. To configure the client for the Insider group, set the following registry information: + :::image type="content" source="./media/mmr-extension-enable.png" alt-text="A screenshot of the prompt to enable the extension."::: - - **Key**: HKLM\\Software\\Microsoft\\MSRDC\\Policies - - **Type**: REG_SZ - - **Name**: ReleaseRing - - **Data**: insider + >[!IMPORTANT] + >If the user selects **Remove extension**, it will be removed from the browser and they will need to add it from Microsoft Edge Add-ons or the Chrome Web Store. To install it again, see [Installing the browser extension manually](#install-the-browser-extension-manually). - To learn more about the Insiders program, see [Windows Desktop client for admins](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-admin#configure-user-groups). +You can also automate installing the browser extension from Microsoft Edge Add-ons or the Chrome Web Store for all users by [using Group Policy](#install-the-browser-extension-using-group-policy). -4. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE55eRq) to install both the host native component and the multimedia redirection extensions for your internet browser on your Azure VM. +Using Group Policy has the following benefits: -## Managing group policies for the multimedia redirection browser extension +- You can install the extension silently and without user interaction. +- You can restrict which websites use multimedia redirection. +- You can pin the extension icon in Google Chrome by default. -Using the multimedia redirection MSI will install the browser extensions. However, as this service is still in public preview, user experience may vary. For more information about known issues, see [Known issues](#known-issues-and-limitations). +#### Install the browser extension manually -Keep in mind that when the IT admin installs an extension with MSI, the users will see a prompt that says "New Extension added." In order to use the app, they'll need to confirm the prompt. If they select **Cancel**, then their browser will uninstall the extension. If you want the browser to force install the extension without any input from your users, we recommend you use the group policy in the following section. +If you need to install the browser extension separately, you can download it from Microsoft Edge Add-ons or the Chrome Web Store. 
-In some cases, you can change the group policy to manage the browser extensions and improve user experience. For example: +To install the multimedia redirection extension manually, follow these steps: -- You can install the extension without user interaction.-- You can restrict which websites use multimedia redirection.-- You can pin the extension icon in Google Chrome by default. The extension icon is already pinned by default in Microsoft Edge, so you'll only need to change this setting in Chrome.+1. Sign in to Azure Virtual Desktop. -### Configure Microsoft Edge group policies for multimedia redirection +1. In your browser, open one of the following links, depending on which browser you're using: -To configure the group policies, you'll need to edit the Microsoft Edge Administrative Template. You should see the extension configuration options under **Administrative Templates Microsoft Edge Extensions** > **Configure extension management settings**. + - For **Microsoft Edge**: [Microsoft Multimedia Redirection Extension](https://microsoftedge.microsoft.com/addons/detail/wvd-multimedia-redirectio/joeclbldhdmoijbaagobkhlpfjglcihd) -The following code is an example of a Microsoft Edge group policy that doesn't restrict site access: - -```cmd -{ "joeclbldhdmoijbaagobkhlpfjglcihd": { "installation_mode": "force_installed", "update_url": "https://edge.microsoft.com/extensionwebstorebase/v1/crx" } } -``` + - For **Google Chrome**: [Microsoft Multimedia Redirection Extension](https://chrome.google.com/webstore/detail/wvd-multimedia-redirectio/lfmemoeeciijgkjkgbgikoonlkabmlno) -This next example group policy makes the browser install the multimedia redirection extension, but only lets multimedia redirection load on YouTube: +1. Install the extension by selecting **Get** (for Microsoft Edge) or **Add to Chrome** (for Google Chrome), then at the additional prompt, select **Add extension**. Once the installation is finished, you'll see a confirmation message saying that you've successfully added the extension. -```cmd -{ "joeclbldhdmoijbaagobkhlpfjglcihd": { "installation_mode": "force_installed", "runtime_allowed_hosts": [ "*://*.youtube.com" ], "runtime_blocked_hosts": [ "*://*" ], "update_url": "https://edge.microsoft.com/extensionwebstorebase/v1/crx" } } -``` +#### Install the browser extension using Group Policy -To learn more about group policy configuration, see [Microsoft Edge group policy](/DeployEdge/configure-microsoft-edge). +You can install the multimedia redirection extension using Group Policy, either centrally from your domain for session hosts that are joined to an Active Directory (AD) domain, or using the Local Group Policy Editor for each session host. This process will change depending on which browser you're using. -### Configure Google Chrome group policies for multimedia redirection +# [Edge](#tab/edge) -To configure the Google Chrome group policies, you'll need to edit the Google Chrome Administrative Template. You should see the extension configuration options under **Administrative Templates** > **Google** > **Google Chrome Extensions** > **Extension management settings**. +1. 
Download and install the Microsoft Edge administrative template by following the directions in [Configure Microsoft Edge policy settings on Windows devices](/deployedge/configure-microsoft-edge.md#1-download-and-install-the-microsoft-edge-administrative-template) -The following example is much like the code example in [Configure Microsoft Edge group policies for multimedia redirection](#configure-microsoft-edge-group-policies-for-multimedia-redirection). This policy will force the multimedia redirection extension to install with the icon pinned in the top-right menu, and will only allow multimedia redirection to load on YouTube. +1. Next, decide whether you want to configure Group Policy centrally from your domain or locally for each session host: + + - To configure it from an AD Domain, open the **Group Policy Management Console** (GPMC) and create or edit a policy that targets your session hosts. + + - To configure it locally, open the **Local Group Policy Editor** on the session host. -```cmd -{ "lfmemoeeciijgkjkgbgikoonlkabmlno": { "installation_mode": "force_installed", "runtime_allowed_hosts": [ "*://*.youtube.com" ], "runtime_blocked_hosts": [ "*://*" ], "toolbar_pin": "force_pinned", "update_url": "https://clients2.google.com/service/update2/crx" } } -``` +1. Go to **Computer Configuration** > **Administrative Templates** > **Microsoft Edge** > **Extensions**. -Additional information on configuring [Google Chrome group policy](https://support.google.com/chrome/a/answer/187202#zippy=%2Cwindows). +1. Open the policy setting **Configure extension management settings** and set it to **Enabled**. -## Run the multimedia redirection extension manually on a browser +1. In the field for **Configure extension management settings**, enter the following: -MMR uses remote apps and the session desktop for Microsoft Edge and Google Chrome browsers. Once you've fulfilled [the requirements](#requirements), open your supported browser. If you didn't install the browsers or extension with a group policy, users will need to manually run the extension. This section will tell you how to manually run the extension in one of the currently supported browsers. + ```json + { "joeclbldhdmoijbaagobkhlpfjglcihd": { "installation_mode": "force_installed", "update_url": "https://edge.microsoft.com/extensionwebstorebase/v1/crx" } } + ``` -### Microsoft Edge + You can specify additional parameters to allow or block specific domains. For example, to only allow *youtube.com*, enter the following: -To run the extension on Microsoft Edge manually, look for the yellow exclamation mark on the overflow menu. You should see a prompt to enable the Azure Virtual Desktop Multimedia Redirection extension. Select **Enable extension**. + ```json + { "joeclbldhdmoijbaagobkhlpfjglcihd": { "installation_mode": "force_installed", "runtime_allowed_hosts": [ "*://*.youtube.com" ], "runtime_blocked_hosts": [ "*://*" ], "update_url": "https://edge.microsoft.com/extensionwebstorebase/v1/crx" } } + ``` -### Google Chrome +1. Apply the changes by running the following command in Command Prompt or PowerShell on each session host: -To run the extension on Google Chrome manually, look for the notification message that says the new extension was installed, as shown in the following screenshot. + ```cmd + gpupdate /force + ``` - +# [Google Chrome](#tab/google-chrome) -Select the notification to allow your users to enable the extension. Users should also pin the extension so that they can see from the icon if multimedia redirection is connected. 
+1. Download and install the Google Chrome administrative template by following the instructions in [Set Chrome Browser policies on managed PCs](https://support.google.com/chrome/a/answer/187202#zippy=%2Cwindows) -### The multimedia redirection status icon +1. Next, decide whether you want to configure Group Policy centrally from your domain or locally for each session host: + + - To configure it from an AD Domain, open the **Group Policy Management Console** (GPMC) and create or edit a policy that targets your session hosts. + + - To configure it locally, open the **Local Group Policy Editor** on the session host. -To quickly tell if multimedia redirection is active in your browser, we've added the following icon states: +1. Go to **Computer Configuration** > **Administrative Templates** > **Microsoft Edge** > **Extensions**. -| Icon State | Definition | -|--|--| -|  | The default icon appearance with no status applied. | -|  | The red square with an "X" inside of it means that the client couldn't connect to multimedia redirection. | -|  | The green square with a check mark inside of it means that the client successfully connected to multimedia redirection. | +1. Open the policy setting **Configure extension management settings** and set it to **Enabled**. -Selecting the icon will display a pop-up menu that has a checkbox you can select to enable or disable multimedia redirection on all websites. It also lists the version numbers for each component of the service. +1. In the field for **Configure extension management settings**, enter the following: -## Support during public preview + ```json + { "lfmemoeeciijgkjkgbgikoonlkabmlno": { "installation_mode": "force_installed", "update_url": "https://clients2.google.com/service/update2/crx" } } + ``` -If you run into issues while using the public preview version of multimedia redirection, we recommend contacting Microsoft Support. + You can specify additional parameters to allow or block specific domains. For example, to only allow *youtube.com* and pin the extension to the toolbar, enter the following: -### Known issues and limitations + ```json + { "lfmemoeeciijgkjkgbgikoonlkabmlno": { "installation_mode": "force_installed", "runtime_allowed_hosts": [ "*://*.youtube.com" ], "runtime_blocked_hosts": [ "*://*" ], "toolbar_pin": "force_pinned", "update_url": "https://clients2.google.com/service/update2/crx" } } + ``` -The following issues are ones we're already aware of, so you won't need to report them: +1. Apply the changes by running the following command in Command Prompt or PowerShell on each session host: -- Multimedia redirection only works on the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client), not the web client.+ ```cmd + gpupdate /force + ``` -- Multimedia redirection doesn't currently support protected content, so videos from Pluralsight and Netflix won't work.+ -- During public preview, multimedia redirection will be disabled on all sites except for the sites listed in [Websites that work with MMR](#websites-that-work-with-mmr). However, if you have the extension, you can enable multimedia redirection for all websites. We added the extension so organizations can test the feature on their company websites.+## Configure the Remote Desktop client -- There's a small chance that the MSI installer won't be able to install the extension during internal testing. 
If you run into this issue, you'll need to install the multimedia redirection extension from the Microsoft Edge Store or Google Chrome Store.+During the preview, you'll need to configure the Remote Desktop client to use Insider features. To learn more about the Insiders program, see [Windows Desktop client for admins](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-admin#configure-user-groups). ++To enable Insider features: ++1. Add the following registry key and value: ++ - **Key**: HKLM\\Software\\Microsoft\\MSRDC\\Policies + - **Type**: REG_SZ + - **Name**: ReleaseRing + - **Data**: insider - - [Multimedia redirection browser extension (Microsoft Edge)](https://microsoftedge.microsoft.com/addons/detail/wvd-multimedia-redirectio/joeclbldhdmoijbaagobkhlpfjglcihd) - - [Multimedia browser extension (Google Chrome)](https://chrome.google.com/webstore/detail/wvd-multimedia-redirectio/lfmemoeeciijgkjkgbgikoonlkabmlno) + You can do this with PowerShell. On your local device, open an elevated PowerShell prompt and run the following commands: -- Installing the extension on host machines with the MSI installer will either prompt users to accept the extension the first time they open the browser or display a warning or error message. If users deny this prompt, it can cause the extension to not load. To avoid this issue, install the extensions by [editing the group policy](#managing-group-policies-for-the-multimedia-redirection-browser-extension).+ ```powershell + New-Item -Path "HKLM:\SOFTWARE\Microsoft\MSRDC\Policies" -Force + New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSRDC\Policies" -Name ReleaseRing -PropertyType String -Value insider -Force + ``` -- When you resize the video window, the window's size will adjust faster than the video itself. You'll also see this issue when minimizing and maximizing the window.+1. Restart your local device. -- When the display scale factor of the screen isn't at 100% and you've set the video window to a certain size, you might see a gray patch on the screen. In most cases, you can get rid of the gray patch by resizing the window.+1. Open the Remote Desktop client. The title in the top left-hand corner should be **Remote Desktop (Insider)**: ++ :::image type="content" source="./media/remote-desktop-client-windows-insider.png" alt-text="A screenshot of the Remote Desktop client with Insider features enabled. The title is highlighted in a red box."::: ++## Check the extension status ++You can check the extension status by visiting a website with media content, such as one from the list at [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection), and hovering your mouse cursor over [the multimedia redirection extension icon](multimedia-redirection-intro.md#the-multimedia-redirection-status-icon) in the extension bar on the top-right corner of your browser. A message will appear and tell you about the current status, as shown in the following screenshot. +++Another way you can check the extension status is by selecting the extension icon, then selecting **Features supported on this website** from the drop-down menu to see whether the website supports the redirection extension. ++## Teams live events ++To use multimedia redirection with Teams live events: ++1. Sign in to Azure Virtual Desktop. ++1. Open the link to the Teams live event in either the Edge or Chrome browser. ++1. 
Make sure you can see a green check mark next to the [multimedia redirection status icon](multimedia-redirection-intro.md#the-multimedia-redirection-status-icon). If the green check mark is there, MMR is enabled for Teams live events. ++1. Select **Watch on the web instead**. The Teams live event should automatically start playing in your browser. Make sure you only select **Watch on the web instead**, as shown in the following screenshot. If you use the native Teams app, MMR won't work. ++ :::image type="content" source="./media/teams-live-events.png" alt-text="A screenshot of the 'Watch the live event in Microsoft Teams' page. The status icon and 'watch on the web instead' options are highlighted in red."::: ## Next steps -If you're interested in video streaming on other parts of Azure Virtual Desktop, check out [Teams for Azure Virtual Desktop](teams-on-avd.md). +For more information about multimedia redirection and how it works, see [What is multimedia redirection for Azure Virtual Desktop? (preview)](multimedia-redirection-intro.md). ++To troubleshoot issues or view known issues, see [our troubleshooting article](troubleshoot-multimedia-redirection.md). ++If you're interested in learning more about using Teams for Azure Virtual Desktop, check out [Teams for Azure Virtual Desktop](teams-on-avd.md). |
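As a supplement to the **Install the host component** steps in this entry, a silent install of the downloaded MSI can be scripted for enterprise deployment tools. This is only a sketch: the local file name and log path are assumptions, and you may need different `msiexec` options for your environment.

```powershell
# Sketch: silently install the MMR host component on a session host (run elevated).
# The MSI file name and log path below are placeholders.
$msiPath = "C:\Temp\MsMmrHostInstaller.msi"
Start-Process -FilePath "msiexec.exe" `
    -ArgumentList "/i `"$msiPath`" /qn /norestart /l*v C:\Temp\mmr-install.log" `
    -Wait
```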
virtual-desktop | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/security.md | Azure Virtual Desktop has features like [Reverse Connect](../network-connectivit ## Defense in depth -Today's threat landscape requires designs with security approaches in mind. Ideally, you'll want to build a series of security mechanisms and controls layered throughout your computer network to protect your data and network from being compromised or attacked. This type of security design is what the United States Cybersecurity and Infrastructure Security Agency (CISA) calls "defense in depth." To learn more about defense in depth, go to [the CISA website](https://us-cert.cisa.gov/bsi/articles/knowledge/principles/defense-in-depth). +Today's threat landscape requires designs with security approaches in mind. Ideally, you'll want to build a series of security mechanisms and controls layered throughout your computer network to protect your data and network from being compromised or attacked. This type of security design is what the United States Cybersecurity and Infrastructure Security Agency (CISA) calls "defense in depth". ## Security boundaries |
virtual-desktop | Sandbox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/sandbox.md | Before you get started, here's what you need to configure Windows Sandbox in Azur - A working Azure profile that can access the Azure portal. - A functioning Azure Virtual Desktop deployment. To learn how to deploy Azure Virtual Desktop (classic), see [Create a tenant in Azure Virtual Desktop](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md). To learn how to deploy Azure Virtual Desktop with Azure Resource Manager integration, see [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md).+- Azure Virtual Desktop session hosts that support the nested virtualization capability. To check if a specific VM size supports nested virtualization, navigate to the description page matching your VM size from [Sizes for virtual machines in Azure](/azure/virtual-machines/sizes-general.md). ## Prepare the VHD image for Azure |
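To script the size check mentioned above, the following sketch lists the VM sizes available in a region so you can match your session host size against the sizes documentation. It assumes the Az PowerShell module is installed and uses an illustrative region name.

```powershell
# Sketch: list VM sizes available in a region, then compare the size name against
# the "Sizes for virtual machines in Azure" documentation to confirm nested
# virtualization support. The region name is a placeholder.
Connect-AzAccount
Get-AzVMSize -Location "eastus" |
    Sort-Object Name |
    Format-Table Name, NumberOfCores, MemoryInMB
```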
virtual-desktop | Security Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-guide.md | When choosing a deployment model, you can either provide remote users access to Monitor your Azure Virtual Desktop service's usage and availability with [Azure Monitor](https://azure.microsoft.com/services/monitor/). Consider creating [service health alerts](../service-health/alerts-activity-log-service-notifications-portal.md) for the Azure Virtual Desktop service to receive notifications whenever there's a service impacting event. +### Encrypt your VM ++Encrypt your VM with [managed disk encryption options](../virtual-machines/disk-encryption-overview.md) to protect stored data from unauthorized access. + ## Session host security best practices Session hosts are virtual machines that run inside an Azure subscription and virtual network. Your Azure Virtual Desktop deployment's overall security depends on the security controls you put on your session hosts. This section describes best practices for keeping your session hosts secure. Remote attestation is a great way to check the health of your VMs. Remote attest A vTPM is a virtualized version of a hardware Trusted Platform Module (TPM), with a virtual instance of a TPM per VM. vTPM enables remote attestation by performing integrity measurement of the entire boot chain of the VM (UEFI, OS, system, and drivers). -We recommend enabling vTPM to use remote attestation on your VMs. With vTPM enabled, you can also enable BitLocker functionality, which provides full-volume encryption to protect data at rest. Any features using vTPM will result in secrets bound to the specific VM. When users connect to the Azure Virtual Desktop service in a pooled scenario, users can be redirected to any VM in the host pool. Depending on how the feature is designed this may have an impact. +We recommend enabling vTPM to use remote attestation on your VMs. With vTPM enabled, you can also enable BitLocker functionality with Azure Disk Encryption, which provides full-volume encryption to protect data at rest. Any features using vTPM will result in secrets bound to the specific VM. When users connect to the Azure Virtual Desktop service in a pooled scenario, users can be redirected to any VM in the host pool. Depending on how the feature is designed this may have an impact. >[!NOTE] >BitLocker should not be used to encrypt the specific disk where you're storing your FSLogix profile data. The following operating systems support running nested virtualization on Azure V - Windows Server 2022 - Windows 10 Enterprise - Windows 10 Enterprise multi-session-- Windows 11+- Windows 11 Enterprise +- Windows 11 Enterprise multi-session ## Windows Defender Application Control The following operating systems support using Windows Defender Application Contr - Windows Server 2022 - Windows 10 Enterprise - Windows 10 Enterprise multi-session-- Windows 11+- Windows 11 Enterprise +- Windows 11 Enterprise multi-session >[!NOTE] >When using Windows Defender Access Control, we recommend only targeting policies at the device level. Although it's possible to target policies to individual users, once the policy is applied, it affects all users on the device equally. |
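As an illustration of the vTPM recommendation in this entry, the following sketch reads a session host's security profile with the Az PowerShell module. The resource group and VM names are placeholders, and the properties are only populated on VMs deployed with Trusted Launch.

```powershell
# Sketch: check whether a session host has Trusted Launch with Secure Boot and vTPM enabled.
# Resource group and VM names are placeholders; SecurityProfile is empty on VMs
# that weren't deployed with Trusted Launch.
$vm = Get-AzVM -ResourceGroupName "rg-avd" -Name "avd-sh-0"
$vm.SecurityProfile.SecurityType
$vm.SecurityProfile.UefiSettings
```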
virtual-desktop | Set Up Scaling Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-scaling-script.md | In this article, you'll learn about the scaling tool that uses an Azure Automati Before you start setting up the scaling tool, make sure you have the following things ready: -- An [Azure Virtual Desktop host pool](create-host-pools-azure-marketplace.md)-- Session host pool VMs configured and registered with the Azure Virtual Desktop service-- A user with the [Contributor role](../role-based-access-control/role-assignments-portal.md) assigned on the Azure subscription+- An [Azure Virtual Desktop host pool](create-host-pools-azure-marketplace.md). +- Session host pool VMs configured and registered with the Azure Virtual Desktop service. +- A user with the [Contributor role](../role-based-access-control/role-assignments-portal.md) assigned on the Azure subscription. +- A Log Analytics workspace (optional). The machine you use to deploy the tool must have: |
virtual-desktop | Teams On Avd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md | This section will show you how to install the Teams desktop app on your Windows To enable media optimization for Teams, set the following registry key on the host VM: -1. From the start menu, run **RegEdit** as an administrator. Navigate to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams**. Create the Teams key if it doesn't already exist. +1. From the start menu, run **Registry Editor** as an administrator. Navigate to `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams`. Create the Teams key if it doesn't already exist. 2. Create the following value for the Teams key: -| Name | Type | Data/Value | -||--|-| -| IsWVDEnvironment | DWORD | 1 | + | Name | Type | Data/Value | + ||--|-| + | IsWVDEnvironment | DWORD | 1 | ++Alternatively, you can create the registry entry by running the following commands from an elevated PowerShell session: ++```powershell +New-Item -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Force +New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Name IsWVDEnvironment -PropertyType DWORD -Value 1 -Force +``` ### Install the Teams WebSocket Service Using Teams in a virtualized environment is different from using Teams in a non- ### Calls and meetings -- The Teams desktop client in Azure Virtual Desktop environments doesn't support creating live events, but you can join live events. For now, we recommend you create live events from the [Teams web client](https://teams.microsoft.com) in your remote session instead. When watching a live event in the browser, [enable multimedia redirection (MMR) for Teams live events](multimedia-redirection.md#how-to-use-mmr-for-teams-live-events) for smoother playback.+- The Teams desktop client in Azure Virtual Desktop environments doesn't support creating live events, but you can join live events. For now, we recommend you create live events from the [Teams web client](https://teams.microsoft.com) in your remote session instead. When watching a live event in the browser, [enable multimedia redirection (MMR) for Teams live events](multimedia-redirection.md#teams-live-events) for smoother playback. - Calls or meetings don't currently support application sharing. Desktop sessions support desktop sharing. - Due to WebRTC limitations, incoming and outgoing video stream resolution is limited to 720p. - The Teams app doesn't support HID buttons or LED controls with other devices. |
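To confirm that the media-optimization registry value described in this entry was created, a quick check from an elevated PowerShell session on the host VM might look like the following sketch:

```powershell
# Sketch: verify the Teams media-optimization value exists on the session host.
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Name IsWVDEnvironment |
    Select-Object IsWVDEnvironment
```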
virtual-desktop | Troubleshoot Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client.md | If that doesn't work, make sure your app group is associated with a workspace. In this section you'll find troubleshooting guidance for the Remote Desktop client for Windows. +### Access client logs ++You might need the client logs when investigating an issue. ++To retrieve the client logs: ++1. Ensure no sessions are active and the client process isn't running in the background by right-clicking on the **Remote Desktop** icon in the system tray and selecting **Disconnect all sessions**. +1. Open **File Explorer**. +1. Navigate to the **%temp%\DiagOutputDir\RdClientAutoTrace** folder. ++You can read the client logs using any of the following methods. ++#### Event Viewer ++1. From the Start menu, go to **Control Panel** > **System and Security**, then select **View event logs** under **Windows Tools**. +1. Once the **Event Viewer** is op |
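As a quick supplement to the log-collection steps in this entry, the following sketch lists the most recently written client traces. The folder path comes from the steps above; file names will vary by session.

```powershell
# Sketch: list the five most recently written Remote Desktop client trace files.
Get-ChildItem "$env:TEMP\DiagOutputDir\RdClientAutoTrace" -File |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 5 Name, LastWriteTime
```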