Updates from: 05/17/2022 01:09:08
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 05/03/2022 Last updated : 05/16/2022
To create the registry key that overrides push notifications:
### Policy schema changes
+>[!NOTE]
+>In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+ Identify your single target group for the schema configuration. Then use the following API endpoint to change the numberMatchingRequiredState property to **enabled**: https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
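Outside Graph Explorer, the same change can be scripted. The sketch below (Python, using the `requests` library) issues a PATCH against that endpoint; the bearer token, the target group object ID, and the nested `featureSettings`/`includeTarget` body shape are assumptions to verify against the current beta `microsoftAuthenticatorAuthenticationMethodConfiguration` schema, not values taken from this article.

```python
import requests

GRAPH_ENDPOINT = (
    "https://graph.microsoft.com/beta/authenticationMethodsPolicy/"
    "authenticationMethodConfigurations/MicrosoftAuthenticator"
)

# Placeholders: supply a token that carries the Policy.ReadWrite.AuthenticationMethod
# permission and the object ID of your single target group.
ACCESS_TOKEN = "<access-token>"
TARGET_GROUP_ID = "<object-id-of-target-group>"

# Assumed request body shape for the beta API; confirm the exact property names
# against the current authentication methods policy documentation.
body = {
    "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
    "featureSettings": {
        "numberMatchingRequiredState": {
            "state": "enabled",
            "includeTarget": {"targetType": "group", "id": TARGET_GROUP_ID},
        }
    },
}

response = requests.patch(
    GRAPH_ENDPOINT,
    json=body,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
print(response.status_code)  # a 2xx status indicates the policy update was accepted
```

A follow-up GET on the same endpoint can be used to confirm that `numberMatchingRequiredState` now reports `enabled` for the chosen group.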
active-directory Cloudknox All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-all-reports.md
Title: View a list and description of all system reports available in CloudKnox Permissions Management reports description: View a list and description of all system reports available in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View a list and description of system reports
active-directory Cloudknox Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-faqs.md
Title: Frequently asked questions (FAQs) about CloudKnox Permissions Management description: Frequently asked questions (FAQs) about CloudKnox Permissions Management. -+ Last updated 04/20/2022-+ # Frequently asked questions (FAQs)
active-directory Cloudknox Howto Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-add-remove-role-task.md
Title: Add and remove roles and tasks for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management description: How to attach and detach permissions for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities
active-directory Cloudknox Howto Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-attach-detach-permissions.md
Title: Attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in CloudKnox Permissions Management description: How to attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Attach and detach policies for Amazon Web Services (AWS) identities
active-directory Cloudknox Howto Audit Trail Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-audit-trail-results.md
Title: Generate an on-demand report from a query in the Audit dashboard in CloudKnox Permissions Management description: How to generate an on-demand report from a query in the **Audit** dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Generate an on-demand report from a query
active-directory Cloudknox Howto Clone Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-clone-role-policy.md
Title: Clone a role/policy in the Remediation dashboard in CloudKnox Permissions Management description: How to clone a role/policy in the Just Enough Permissions (JEP) Controller. -+ Last updated 02/23/2022-+ # Clone a role/policy in the Remediation dashboard
active-directory Cloudknox Howto Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-alert-trigger.md
Title: Create and view activity alerts and alert triggers in CloudKnox Permissions Management description: How to create and view activity alerts and alert triggers in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Create and view activity alerts and alert triggers
active-directory Cloudknox Howto Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-approve-privilege-request.md
Title: Create or approve a request for permissions in the Remediation dashboard in CloudKnox Permissions Management description: How to create or approve a request for permissions in the Remediation dashboard. -+ Last updated 02/23/2022-+ # Create or approve a request for permissions
active-directory Cloudknox Howto Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-custom-queries.md
Title: Create a custom query in CloudKnox Permissions Management description: How to create a custom query in the Audit dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Create a custom query
active-directory Cloudknox Howto Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-group-based-permissions.md
Title: Select group-based permissions settings in CloudKnox Permissions Management with the User management dashboard description: How to select group-based permissions settings in CloudKnox Permissions Management with the User management dashboard. -+ Last updated 02/23/2022-+ # Select group-based permissions settings
active-directory Cloudknox Howto Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-role-policy.md
Title: Create a role/policy in the Remediation dashboard in CloudKnox Permissions Management description: How to create a role/policy in the Remediation dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Create a role/policy in the Remediation dashboard
active-directory Cloudknox Howto Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-rule.md
Title: Create a rule in the Autopilot dashboard in CloudKnox Permissions Management description: How to create a rule in the Autopilot dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Create a rule in the Autopilot dashboard
active-directory Cloudknox Howto Delete Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-delete-role-policy.md
Title: Delete a role/policy in the Remediation dashboard in CloudKnox Permissions Management description: How to delete a role/policy in the Just Enough Permissions (JEP) Controller. -+ Last updated 02/23/2022-+ # Delete a role/policy in the Remediation dashboard
active-directory Cloudknox Howto Modify Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-modify-role-policy.md
Title: Modify a role/policy in the Remediation dashboard in CloudKnox Permissions Management description: How to modify a role/policy in the Remediation dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Modify a role/policy in the Remediation dashboard
active-directory Cloudknox Howto Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-notifications-rule.md
Title: View notification settings for a rule in the Autopilot dashboard in CloudKnox Permissions Management description: How to view notification settings for a rule in the Autopilot dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View notification settings for a rule in the Autopilot dashboard
active-directory Cloudknox Howto Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-recommendations-rule.md
Title: Generate, view, and apply rule recommendations in the Autopilot dashboard in CloudKnox Permissions Management description: How to generate, view, and apply rule recommendations in the Autopilot dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Generate, view, and apply rule recommendations in the Autopilot dashboard
active-directory Cloudknox Howto Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-revoke-task-readonly-status.md
Title: Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management description: How to revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities
active-directory Cloudknox Howto View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-view-role-policy.md
Title: View information about roles/policies in the Remediation dashboard in CloudKnox Permissions Management description: How to view and filter information about roles/policies in the Remediation dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View information about roles/policies in the Remediation dashboard
active-directory Cloudknox Integration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-integration-api.md
Title: Set and view configuration settings in CloudKnox Permissions Management description: How to view the CloudKnox Permissions Management API integration settings and create service accounts and roles. -+ Last updated 02/23/2022-+ # Set and view configuration settings
active-directory Cloudknox Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-multi-cloud-glossary.md
Title: CloudKnox Permissions Management - The CloudKnox glossary description: CloudKnox Permissions Management glossary -+ Last updated 02/23/2022-+ # The CloudKnox glossary
active-directory Cloudknox Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-add-account-after-onboarding.md
Title: Add an account/subscription/project to Microsoft CloudKnox Permissions Management after onboarding is complete description: How to add an account/subscription/project to Microsoft CloudKnox Permissions Management after onboarding is complete. -+ Last updated 02/23/2022-+ # Add an account/subscription/project after onboarding is complete
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
Title: Onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management description: How to onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management. -+ Last updated 04/20/2022-+ # Onboard an Amazon Web Services (AWS) account
active-directory Cloudknox Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-azure.md
Title: Onboard a Microsoft Azure subscription in CloudKnox Permissions Management description: How to onboard a Microsoft Azure subscription on CloudKnox Permissions Management. -+ Last updated 04/20/2022-+ # Onboard a Microsoft Azure subscription
active-directory Cloudknox Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-controller-after-onboarding.md
Title: Enable or disable the controller in Microsoft CloudKnox Permissions Management after onboarding is complete description: How to enable or disable the controller in Microsoft CloudKnox Permissions Management after onboarding is complete. -+ Last updated 02/23/2022-+ # Enable or disable the controller after onboarding is complete
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
Title: Enable CloudKnox Permissions Management in your organization description: How to enable CloudKnox Permissions Management in your organization. -+ Last updated 04/20/2022-+ # Enable CloudKnox in your organization
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
Title: Onboard a Google Cloud Platform (GCP) project in CloudKnox Permissions Management description: How to onboard a Google Cloud Platform (GCP) project on CloudKnox Permissions Management. -+ Last updated 04/20/2022-+ # Onboard a Google Cloud Platform (GCP) project
active-directory Cloudknox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-overview.md
Title: What's CloudKnox Permissions Management? description: An introduction to CloudKnox Permissions Management. -+ Last updated 04/20/2022-+ # What's CloudKnox Permissions Management?
active-directory Cloudknox Product Account Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-account-explorer.md
-+ Last updated 02/23/2022-+ # View roles and identities that can access account information from an external account
active-directory Cloudknox Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-account-settings.md
-+ Last updated 02/23/2022-+ # View personal and organization information
active-directory Cloudknox Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-audit-trail.md
Title: Filter and query user activity in CloudKnox Permissions Management description: How to filter and query user activity in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Filter and query user activity
active-directory Cloudknox Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-dashboard.md
Title: View data about the activity in your authorization system in CloudKnox Permissions Management description: How to view data about the activity in your authorization system in the CloudKnox Dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+
active-directory Cloudknox Product Data Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-inventory.md
Title: CloudKnox Permissions Management - Display an inventory of created resources and licenses for your authorization system description: How to display an inventory of created resources and licenses for your authorization system in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Display an inventory of created resources and licenses for your authorization system
active-directory Cloudknox Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-sources.md
Title: View and configure settings for data collection from your authorization system in CloudKnox Permissions Management description: How to view and configure settings for collecting data from your authorization system in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View and configure settings for data collection
active-directory Cloudknox Product Define Permission Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-define-permission-levels.md
Title: Define and manage users, roles, and access levels in CloudKnox Permissions Management description: How to define and manage users, roles, and access levels in CloudKnox Permissions Management User management dashboard. -+ Last updated 02/23/2022-+ # Define and manage users, roles, and access levels
active-directory Cloudknox Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-integrations.md
Title: View integration information about an authorization system in CloudKnox Permissions Management description: View integration information about an authorization system in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View integration information about an authorization system
active-directory Cloudknox Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-permission-analytics.md
Title: Create and view permission analytics triggers in CloudKnox Permissions Management description: How to create and view permission analytics triggers in the Permission analytics tab in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Create and view permission analytics triggers
active-directory Cloudknox Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-permissions-analytics-reports.md
Title: Generate and download the Permissions analytics report in CloudKnox Permissions Management description: How to generate and download the Permissions analytics report in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Generate and download the Permissions analytics report
active-directory Cloudknox Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-reports.md
Title: View system reports in the Reports dashboard in CloudKnox Permissions Management description: How to view system reports in the Reports dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View system reports in the Reports dashboard
active-directory Cloudknox Product Rule Based Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-rule-based-anomalies.md
Title: Create and view rule-based anomalies and anomaly triggers in CloudKnox Permissions Management description: How to create and view rule-based anomalies and anomaly triggers in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Create and view rule-based anomaly alerts and anomaly triggers
active-directory Cloudknox Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-statistical-anomalies.md
Title: Create and view statistical anomalies and anomaly triggers in CloudKnox Permissions Management description: How to create and view statistical anomalies and anomaly triggers in the Statistical Anomaly tab in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Create and view statistical anomalies and anomaly triggers
active-directory Cloudknox Report Create Custom Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-report-create-custom-report.md
Title: Create, view, and share a custom report in CloudKnox Permissions Management description: How to create, view, and share a custom report in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Create, view, and share a custom report
active-directory Cloudknox Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-report-view-system-report.md
Title: Generate and view a system report in CloudKnox Permissions Management description: How to generate and view a system report in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Generate and view a system report
active-directory Cloudknox Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-training-videos.md
Title: CloudKnox Permissions Management training videos description: CloudKnox Permissions Management training videos. -+ Last updated 04/20/2022-+ # CloudKnox Permissions Management training videos
active-directory Cloudknox Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-troubleshoot.md
Title: Troubleshoot issues with CloudKnox Permissions Management description: Troubleshoot issues with CloudKnox Permissions Management -+ Last updated 02/23/2022-+ # Troubleshoot issues with CloudKnox Permissions Management
active-directory Cloudknox Ui Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-audit-trail.md
Title: Use queries to see how users access information in an authorization system in CloudKnox Permissions Management description: How to use queries to see how users access information in an authorization system in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Use queries to see how users access information
active-directory Cloudknox Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-autopilot.md
Title: View rules in the Autopilot dashboard in CloudKnox Permissions Management description: How to view rules in the Autopilot dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View rules in the Autopilot dashboard
active-directory Cloudknox Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-dashboard.md
Title: View key statistics and data about your authorization system in CloudKnox Permissions Management description: How to view statistics and data about your authorization system in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+
active-directory Cloudknox Ui Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-remediation.md
Title: View existing roles/policies and requests for permission in the Remediation dashboard in CloudKnox Permissions Management description: How to view existing roles/policies and requests for permission in the Remediation dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View roles/policies and requests for permission in the Remediation dashboard
active-directory Cloudknox Ui Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-tasks.md
Title: View information about active and completed tasks in CloudKnox Permissions Management description: How to view information about active and completed tasks in the Activities pane in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View information about active and completed tasks
active-directory Cloudknox Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-triggers.md
Title: View information about activity triggers in CloudKnox Permissions Management description: How to view information about activity triggers in the Activity triggers dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View information about activity triggers
active-directory Cloudknox Ui User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-user-management.md
Title: Manage users and groups with the User management dashboard in CloudKnox Permissions Management description: How to manage users and groups in the User management dashboard in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # Manage users and groups with the User management dashboard
active-directory Cloudknox Usage Analytics Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-access-keys.md
Title: View analytic information about access keys in CloudKnox Permissions Management description: How to view analytic information about access keys in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View analytic information about access keys
active-directory Cloudknox Usage Analytics Active Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-active-resources.md
Title: View analytic information about active resources in CloudKnox Permissions Management description: How to view usage analytics about active resources in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View analytic information about active resources
active-directory Cloudknox Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-active-tasks.md
Title: View analytic information about active tasks in CloudKnox Permissions Management description: How to view analytic information about active tasks in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View analytic information about active tasks
active-directory Cloudknox Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-groups.md
Title: View analytic information about groups in CloudKnox Permissions Management description: How to view analytic information about groups in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View analytic information about groups
active-directory Cloudknox Usage Analytics Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-home.md
Title: View analytic information with the Analytics dashboard in CloudKnox Permissions Management description: How to use the Analytics dashboard in CloudKnox Permissions Management to view details about users, groups, active resources, active tasks, access keys, and serverless functions. -+ Last updated 02/23/2022-+ # View analytic information with the Analytics dashboard
active-directory Cloudknox Usage Analytics Serverless Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-serverless-functions.md
Title: View analytic information about serverless functions in CloudKnox Permissions Management description: How to view analytic information about serverless functions in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View analytic information about serverless functions
active-directory Cloudknox Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-users.md
Title: View analytic information about users in CloudKnox Permissions Management description: How to view analytic information about users in CloudKnox Permissions Management. -+ Last updated 02/23/2022-+ # View analytic information about users
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Publisher verification provides the following benefits:
> [!NOTE] > - Starting in November 2020, end users will no longer be able to grant consent to most newly registered multi-tenant apps without verified publishers if [risk-based step-up consent](../manage-apps/configure-risk-based-step-up-consent.md) is enabled. This will apply to apps that are registered after November 8, 2020, use OAuth2.0 to request permissions beyond basic sign-in and read user profile, and request consent from users in different tenants than the one the app is registered in. A warning will be displayed on the consent screen informing users that these apps are risky and are from unverified publishers.
-> - Publisher verification is not supported in national clouds. Applications registered in national cloud tenants can't be publisher-verified at this time.
## Requirements There are a few pre-requisites for publisher verification, some of which will have already been completed by many Microsoft partners. They are: - An MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. This MPN account must be the [Partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for your organization.
+- The application to be publisher verified must be registered using an Azure AD account. Applications registered using a Microsoft personal account aren't supported for publisher verification.
+ - The Azure AD tenant where the app is registered must be associated with the Partner Global account. If it's not the primary tenant associated with the PGA, follow the steps to [set up the MPN partner global account as a multi-tenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account). - An app registered in an Azure AD tenant, with a [Publisher Domain](howto-configure-publisher-domain.md) configured.
There are a few pre-requisites for publisher verification, some of which will ha
Developers who have already met these pre-requisites can get verified in a matter of minutes. If the requirements have not been met, getting set up is free.
+## National Clouds and Publisher Verification
+Publisher verification is currently not supported in national clouds. Applications registered in national cloud tenants can't be publisher-verified at this time.
+ ## Frequently asked questions Below are some frequently asked questions regarding the publisher verification program. For FAQs related to the requirements and the process, see [mark an app as publisher verified](mark-app-as-publisher-verified.md).
active-directory Azure Active Directory Parallel Identity Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-active-directory-parallel-identity-options.md
In this approach, Contoso would configure a [direct federation](../external-iden
- [Setup Inbound provisioning for Azure AD](../app-provisioning/plan-cloud-hr-provision.md) - [Setup B2B direct federation](../external-identities/direct-federation.md) - [Multi-tenant user management options](multi-tenant-user-management-introduction.md)
+- [What is application provisioning?](../app-provisioning/user-provisioning.md)
active-directory Balsamiq Wireframes Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/balsamiq-wireframes-tutorial.md
Previously updated : 01/20/2022 Last updated : 05/13/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Balsamiq Wireframes single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This feature is only available for users on the 200-projects Space plan.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
active-directory Facebook Work Accounts Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/facebook-work-accounts-provisioning-tutorial.md
Title: 'Tutorial: Configure Facebook Work Accounts for automatic user provisioni
description: Learn how to automatically provision and de-provision user accounts from Azure AD to Facebook Work Accounts. documentationcenter: ''-+ writer: Zhchia
na Last updated 10/27/2021-+ # Tutorial: Configure Facebook Work Accounts for automatic user provisioning
active-directory Factset Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/factset-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with FactSet | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with FactSet'
description: Learn how to configure single sign-on between Azure Active Directory and FactSet.
Previously updated : 05/17/2021 Last updated : 05/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with FactSet
+# Tutorial: Azure AD SSO integration with FactSet
In this tutorial, you'll learn how to integrate FactSet with Azure Active Directory (Azure AD). When you integrate FactSet with Azure AD, you can:
-* Control in Azure AD who has access to FactSet.
+* Control in Azure AD who has access to FactSet URLs via the Federation.
* Enable your users to be automatically signed-in to FactSet with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* FactSet supports **IDP** initiated SSO.
+* FactSet supports **SP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://auth.factset.com` b. In the **Reply URL** text box, type the URL:
- `https://auth.factset.com/sp/ACS.saml2`
+ `https://login.factset.com/services/saml2/`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.factset.com/services/saml2/`
+
+ > [!NOTE]
+ > The Sign-on URL value is not real. Update the value with the actual Sign-on URL. Contact the [FactSet Support Team](https://www.factset.com/contact-us) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
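For quick reference, here is a minimal sketch (plain Python data, not an API call) of the Basic SAML Configuration values listed above; the `<SUBDOMAIN>` placeholder is not a real value and must be obtained from the FactSet Support Team.

```python
# Sketch of the FactSet Basic SAML Configuration values described above.
# <SUBDOMAIN> is a placeholder -- request the actual Sign-on URL from FactSet support.
factset_basic_saml_configuration = {
    "identifier": "https://auth.factset.com",
    "reply_url": "https://login.factset.com/services/saml2/",
    "sign_on_url": "https://<SUBDOMAIN>.factset.com/services/saml2/",
}
```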
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the metadata file and save it on your computer.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure FactSet you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure FactSet you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Google Cloud (G Suite) Connector | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Google Cloud (G Suite) Connector.
+ Title: 'Tutorial: Azure AD SSO integration with Google Cloud / G Suite Connector by Microsoft'
+description: Learn how to configure single sign-on between Azure Active Directory and Google Cloud / G Suite Connector by Microsoft.
Last updated 12/27/2021
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Google Cloud (G Suite) Connector
+# Tutorial: Azure AD SSO integration with Google Cloud / G Suite Connector by Microsoft
-In this tutorial, you'll learn how to integrate Google Cloud (G Suite) Connector with Azure Active Directory (Azure AD). When you integrate Google Cloud (G Suite) Connector with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Google Cloud / G Suite Connector by Microsoft with Azure Active Directory (Azure AD). When you integrate Google Cloud / G Suite Connector by Microsoft with Azure AD, you can:
-* Control in Azure AD who has access to Google Cloud (G Suite) Connector.
-* Enable your users to be automatically signed-in to Google Cloud (G Suite) Connector with their Azure AD accounts.
+* Control in Azure AD who has access to Google Cloud / G Suite Connector by Microsoft.
+* Enable your users to be automatically signed-in to Google Cloud / G Suite Connector by Microsoft with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Google Cloud (G Suite) Connector
To get started, you need the following items: * An Azure AD subscription.
-* Google Cloud (G Suite) Connector single sign-on (SSO) enabled subscription.
+* Google Cloud / G Suite Connector by Microsoft single sign-on (SSO) enabled subscription.
* A Google Apps subscription or Google Cloud Platform subscription. > [!NOTE]
-> To test the steps in this tutorial, we do not recommend using a production environment. This document was created using the new user Single-Sign-on experience. If you are still using the old one, the setup will look different. You can enable the new experience in the Single Sign-on settings of G-Suite application. Go to **Azure AD, Enterprise applications**, select **Google Cloud (G Suite) Connector**, select **Single Sign-on** and then click on **Try out our new experience**.
+> To test the steps in this tutorial, we do not recommend using a production environment. This document was created using the new user Single-Sign-on experience. If you are still using the old one, the setup will look different. You can enable the new experience in the Single Sign-on settings of G-Suite application. Go to **Azure AD, Enterprise applications**, select **Google Cloud / G Suite Connector by Microsoft**, select **Single Sign-on** and then click on **Try out our new experience**.
To test the steps in this tutorial, you should follow these recommendations:
To test the steps in this tutorial, you should follow these recommendations:
2. **Q: Are Chromebooks and other Chrome devices compatible with Azure AD single sign-on?**
- A: Yes, users are able to sign into their Chromebook devices using their Azure AD credentials. See this [Google Cloud (G Suite) Connector support article](https://support.google.com/chrome/a/answer/6060880) for information on why users may get prompted for credentials twice.
+ A: Yes, users are able to sign into their Chromebook devices using their Azure AD credentials. See this [Google Cloud / G Suite Connector by Microsoft support article](https://support.google.com/chrome/a/answer/6060880) for information on why users may get prompted for credentials twice.
3. **Q: If I enable single sign-on, will users be able to use their Azure AD credentials to sign into any Google product, such as Google Classroom, GMail, Google Drive, YouTube, and so on?**
- A: Yes, depending on [which Google Cloud (G Suite) Connector](https://support.google.com/a/answer/182442?hl=en&ref_topic=1227583) you choose to enable or disable for your organization.
+ A: Yes, depending on [which Google Cloud / G Suite Connector by Microsoft](https://support.google.com/a/answer/182442?hl=en&ref_topic=1227583) you choose to enable or disable for your organization.
-4. **Q: Can I enable single sign-on for only a subset of my Google Cloud (G Suite) Connector users?**
+4. **Q: Can I enable single sign-on for only a subset of my Google Cloud / G Suite Connector by Microsoft users?**
- A: No, turning on single sign-on immediately requires all your Google Cloud (G Suite) Connector users to authenticate with their Azure AD credentials. Because Google Cloud (G Suite) Connector doesn't support having multiple identity providers, the identity provider for your Google Cloud (G Suite) Connector environment can either be Azure AD or Google -- but not both at the same time.
+ A: No, turning on single sign-on immediately requires all your Google Cloud / G Suite Connector by Microsoft users to authenticate with their Azure AD credentials. Because Google Cloud / G Suite Connector by Microsoft doesn't support having multiple identity providers, the identity provider for your Google Cloud / G Suite Connector by Microsoft environment can either be Azure AD or Google -- but not both at the same time.
-5. **Q: If a user is signed in through Windows, are they automatically authenticate to Google Cloud (G Suite) Connector without getting prompted for a password?**
+5. **Q: If a user is signed in through Windows, are they automatically authenticated to Google Cloud / G Suite Connector by Microsoft without getting prompted for a password?**
- A: There are two options for enabling this scenario. First, users could sign into Windows 10 devices via [Azure Active Directory Join](../devices/overview.md). Alternatively, users could sign into Windows devices that are domain-joined to an on-premises Active Directory that has been enabled for single sign-on to Azure AD via an [Active Directory Federation Services (AD FS)](../hybrid/plan-connect-user-signin.md) deployment. Both options require you to perform the steps in the following tutorial to enable single sign-on between Azure AD and Google Cloud (G Suite) Connector.
+ A: There are two options for enabling this scenario. First, users could sign into Windows 10 devices via [Azure Active Directory Join](../devices/overview.md). Alternatively, users could sign into Windows devices that are domain-joined to an on-premises Active Directory that has been enabled for single sign-on to Azure AD via an [Active Directory Federation Services (AD FS)](../hybrid/plan-connect-user-signin.md) deployment. Both options require you to perform the steps in the following tutorial to enable single sign-on between Azure AD and Google Cloud / G Suite Connector by Microsoft.
6. **Q: What should I do when I get an "invalid email" error message?**
To test the steps in this tutorial, you should follow these recommendations:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Google Cloud (G Suite) Connector supports **SP** initiated SSO.
+* Google Cloud / G Suite Connector by Microsoft supports **SP** initiated SSO.
-* Google Cloud (G Suite) Connector supports [**Automated** user provisioning](./g-suite-provisioning-tutorial.md).
+* Google Cloud / G Suite Connector by Microsoft supports [**Automated** user provisioning](./g-suite-provisioning-tutorial.md).
-## Adding Google Cloud (G Suite) Connector from the gallery
+## Adding Google Cloud / G Suite Connector by Microsoft from the gallery
-To configure the integration of Google Cloud (G Suite) Connector into Azure AD, you need to add Google Cloud (G Suite) Connector from the gallery to your list of managed SaaS apps.
+To configure the integration of Google Cloud / G Suite Connector by Microsoft into Azure AD, you need to add Google Cloud / G Suite Connector by Microsoft from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **Google Cloud (G Suite) Connector** in the search box.
-1. Select **Google Cloud (G Suite) Connector** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Google Cloud / G Suite Connector by Microsoft** in the search box.
+1. Select **Google Cloud / G Suite Connector by Microsoft** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Google Cloud (G Suite) Connector
+## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft
-Configure and test Azure AD SSO with Google Cloud (G Suite) Connector using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Google Cloud (G Suite) Connector.
+Configure and test Azure AD SSO with Google Cloud / G Suite Connector by Microsoft using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Google Cloud / G Suite Connector by Microsoft.
-To configure and test Azure AD SSO with Google Cloud (G Suite) Connector, perform the following steps:
+To configure and test Azure AD SSO with Google Cloud / G Suite Connector by Microsoft, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Google Cloud (G Suite) Connector SSO](#configure-google-cloud-g-suite-connector-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Google Cloud (G Suite) Connector test user](#create-google-cloud-g-suite-connector-test-user)** - to have a counterpart of B.Simon in Google Cloud (G Suite) Connector that is linked to the Azure AD representation of user.
+1. **[Configure Google Cloud/G Suite Connector by Microsoft SSO](#configure-google-cloudg-suite-connector-by-microsoft-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Google Cloud/G Suite Connector by Microsoft test user](#create-google-cloudg-suite-connector-by-microsoft-test-user)** - to have a counterpart of B.Simon in Google Cloud / G Suite Connector by Microsoft that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Google Cloud (G Suite) Connector** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Google Cloud / G Suite Connector by Microsoft** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://www.google.com/a/<yourdomain.com>/ServiceLogin?continue=https://console.cloud.google.com` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier,Reply URL and Sign on URL. Google Cloud (G Suite) Connector doesn't provide Entity ID/Identifier value on Single Sign On configuration so when you uncheck the **domain specific issuer** option the Identifier value will be `google.com`. If you check the **domain specific issuer** option it will be `google.com/a/<yourdomainname.com>`. To check/uncheck the **domain specific issuer** option you need to go to the **Configure Google Cloud (G Suite) Connector SSO** section which is explained later in the tutorial. For more information contact [Google Cloud (G Suite) Connector Client support team](https://www.google.com/contact/).
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Google Cloud / G Suite Connector by Microsoft doesn't provide Entity ID/Identifier value on Single Sign On configuration so when you uncheck the **domain specific issuer** option the Identifier value will be `google.com`. If you check the **domain specific issuer** option it will be `google.com/a/<yourdomainname.com>`. To check/uncheck the **domain specific issuer** option you need to go to the **Configure Google Cloud / G Suite Connector by Microsoft SSO** section which is explained later in the tutorial. For more information contact [Google Cloud / G Suite Connector by Microsoft Client support team](https://www.google.com/contact/).
-1. Your Google Cloud (G Suite) Connector application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Google Cloud (G Suite) Connector expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
+1. Your Google Cloud / G Suite Connector by Microsoft application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Google Cloud / G Suite Connector by Microsoft expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
![image](common/default-attributes.png)
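As a compact summary of the mapping change described above (a sketch, not a full claims configuration), the only adjustment is the source attribute behind the Unique User Identifier claim:

```python
# The default NameID (Unique User Identifier) source is user.userprincipalname;
# Google expects the user's email address, so user.mail (or an equivalent
# attribute in your tenant) is used instead.
unique_user_identifier_mapping = {
    "claim": "Unique User Identifier (NameID)",
    "default_source": "user.userprincipalname",
    "source_expected_by_google": "user.mail",
}
```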
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Google Cloud (G Suite) Connector** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Google Cloud / G Suite Connector by Microsoft** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Google Cloud (G Suite) Connector.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Google Cloud / G Suite Connector by Microsoft.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Google Cloud (G Suite) Connector**.
+1. In the applications list, select **Google Cloud / G Suite Connector by Microsoft**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Google Cloud (G Suite) Connector SSO
+## Configure Google Cloud/G Suite Connector by Microsoft SSO
-1. Open a new tab in your browser, and sign into the [Google Cloud (G Suite) Connector Admin Console](https://admin.google.com/) using your administrator account.
+1. Open a new tab in your browser, and sign into the [Google Cloud / G Suite Connector by Microsoft Admin Console](https://admin.google.com/) using your administrator account.
1. Go to the **Menu -> Security -> Authentication -> SSO with third party IDP**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
a. Turn ON the **SSO profile for your organization**.
- b. In the **Sign-in page URL** field in Google Cloud (G Suite) Connector, paste the value of **Login URL** which you have copied from Azure portal.
+ b. In the **Sign-in page URL** field in Google Cloud / G Suite Connector by Microsoft, paste the value of **Login URL** which you have copied from Azure portal.
- c. In the **Sign-out page URL** field in Google Cloud (G Suite) Connector, paste the value of **Logout URL** which you have copied from Azure portal.
+ c. In the **Sign-out page URL** field in Google Cloud / G Suite Connector by Microsoft, paste the value of **Logout URL** which you have copied from Azure portal.
- d. In Google Cloud (G Suite) Connector, for the **Verification certificate**, upload the certificate that you have downloaded from Azure portal.
+ d. In Google Cloud / G Suite Connector by Microsoft, for the **Verification certificate**, upload the certificate that you have downloaded from Azure portal.
e. Check/Uncheck the **Use a domain specific issuer** option as per the note mentioned in the above **Basic SAML Configuration** section in the Azure AD.
- f. In the **Change password URL** field in Google Cloud (G Suite) Connector, enter the value as `https://account.activedirectory.windowsazure.com/changepassword.aspx`
+ f. In the **Change password URL** field in Google Cloud / G Suite Connector by Microsoft, enter the value as `https://account.activedirectory.windowsazure.com/changepassword.aspx`
g. Click **Save**.
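The settings entered in steps a through g above can be summarized with the following sketch (plain Python data; the Login URL, Logout URL, and certificate file are placeholders for the values copied or downloaded from the Azure portal):

```python
# Sketch of the Google Admin Console "SSO with third party IdP" profile values
# described in steps a-g above. Placeholder values come from the Azure portal.
google_sso_profile = {
    "sso_profile_enabled": True,
    "sign_in_page_url": "<Login URL copied from the Azure portal>",
    "sign_out_page_url": "<Logout URL copied from the Azure portal>",
    "verification_certificate": "<certificate downloaded from the Azure portal>",
    "use_domain_specific_issuer": None,  # check or uncheck per the Basic SAML Configuration note
    "change_password_url": "https://account.activedirectory.windowsazure.com/changepassword.aspx",
}
```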
-### Create Google Cloud (G Suite) Connector test user
+### Create Google Cloud/G Suite Connector by Microsoft test user
-The objective of this section is to [create a user in Google Cloud (G Suite) Connector](https://support.google.com/a/answer/33310?hl=en) called B.Simon. After the user has manually been created in Google Cloud (G Suite) Connector, the user will now be able to sign in using their Microsoft 365 login credentials.
+The objective of this section is to [create a user in Google Cloud / G Suite Connector by Microsoft](https://support.google.com/a/answer/33310?hl=en) called B.Simon. After the user has manually been created in Google Cloud / G Suite Connector by Microsoft, the user will now be able to sign in using their Microsoft 365 login credentials.
-Google Cloud (G Suite) Connector also supports automatic user provisioning. To configure automatic user provisioning, you must first [configure Google Cloud (G Suite) Connector for automatic user provisioning](./g-suite-provisioning-tutorial.md).
+Google Cloud / G Suite Connector by Microsoft also supports automatic user provisioning. To configure automatic user provisioning, you must first [configure Google Cloud / G Suite Connector by Microsoft for automatic user provisioning](./g-suite-provisioning-tutorial.md).
> [!NOTE]
-> Make sure that your user already exists in Google Cloud (G Suite) Connector if provisioning in Azure AD has not been turned on before testing Single Sign-on.
+> Make sure that your user already exists in Google Cloud / G Suite Connector by Microsoft if provisioning in Azure AD has not been turned on before testing Single Sign-on.
> [!NOTE] > If you need to create a user manually, contact the [Google support team](https://www.google.com/contact/).
Google Cloud (G Suite) Connector also supports automatic user provisioning. To c
In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to Google Cloud (G Suite) Connector Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect to the Google Cloud / G Suite Connector by Microsoft Sign-on URL, where you can initiate the login flow.
-* Go to Google Cloud (G Suite) Connector Sign-on URL directly and initiate the login flow from there.
+* Go to the Google Cloud / G Suite Connector by Microsoft Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Google Cloud (G Suite) Connector tile in the My Apps, this will redirect to Google Cloud (G Suite) Connector Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Google Cloud / G Suite Connector by Microsoft tile in My Apps, you're redirected to the Google Cloud / G Suite Connector by Microsoft Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Google Cloud (G Suite) Connector you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Google Cloud / G Suite Connector by Microsoft, you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Netpresenter Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netpresenter-provisioning-tutorial.md
Title: 'Tutorial: Configure Netpresenter Next for automatic user provisioning wi
description: Learn how to automatically provision and de-provision user accounts from Azure AD to Netpresenter Next. documentationcenter: ''-+ writer: Zhchia
na Last updated 10/04/2021-+ # Tutorial: Configure Netpresenter Next for automatic user provisioning
active-directory Uniflow Online Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uniflow-online-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with uniFLOW Online | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with uniFLOW Online'
description: Learn how to configure single sign-on between Azure Active Directory and uniFLOW Online.
Previously updated : 08/26/2021 Last updated : 05/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with uniFLOW Online
+# Tutorial: Azure AD SSO integration with uniFLOW Online
In this tutorial, you'll learn how to integrate uniFLOW Online with Azure Active Directory (Azure AD). When you integrate uniFLOW Online with Azure AD, you can:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to uniFLOW Online website as an administrator.
-1. From the left navigation panel, select **User** tab.
+1. From the left navigation panel, select **Extensions** tab.
- ![Screenshot shows User selected from the uniflow Online site.](./media/uniflow-online-tutorial/user.png)
+ ![Screenshot shows Extension selected from the uniFLOW Online site.](./media/uniflow-online-tutorial/extensions.png)
-1. Click **Identity providers**.
- ![Screenshot shows Identity Providers selected.](./media/uniflow-online-tutorial/profile.png)
+1. Click **Identity Providers**.
+
+ ![Screenshot shows Identity Providers selected.](./media/uniflow-online-tutorial/identity-providers.png)
+
+1. Click **Configure identity providers**.
+
+ ![Screenshot shows box to configure identity providers](./media/uniflow-online-tutorial/configure-identity-providers.png)
1. Click on **Add identity provider**.
- ![Screenshot shows Add identity provider selected.](./media/uniflow-online-tutorial/add-profile.png)
+ ![Screenshot shows Add identity provider selected.](./media/uniflow-online-tutorial/add-identity-providers.png)
+ 1. On the **ADD IDENTITY PROVIDER** section, perform the following steps:
- ![Screenshot shows the ADD IDENTITY PROVIDER section where you can enter the values described.](./media/uniflow-online-tutorial/configuration.png)
+ ![Screenshot shows the ADD IDENTITY PROVIDER section where you can enter the values described.](./media/uniflow-online-tutorial/display-name.png)
+ a. Enter the display name, for example **AzureAD SSO**.
- b. For **Provider type**, select **WS-Fed** option from the dropdown.
+ b. For **Provider type**, select **WS-Federation** option from the dropdown.
- c. For **WS-Fed type**, select **Azure Active Directory** option from the dropdown.
+ c. For **WS-Federation type**, select **Azure Active Directory** option from the dropdown.
d. Click **Save**. 1. On the **General** tab, perform the following steps:
- ![Screenshot shows the General tab where you can enter the values described.](./media/uniflow-online-tutorial/general-tab.png)
+ ![Screenshot shows the General tab where you can enter the values described.](./media/uniflow-online-tutorial/configuration.png)
- a. Enter the Display name Ex: **AzureAD SSO**.
- b. Select the **From URL** option for the **ADFS Federation Metadata**.
+ a. Enter the display name, for example **AzureAD SSO**.
+
+ b. Select **Identity provider** as **Enable AzureAD SSO**.
- c. In the **Federation Metadata URL** textbox, paste the **App Federation Metadata Url** value, which you have copied from the Azure portal.
+ c. Select the **From URL** option for the **ADFS Federation Metadata**.
- d. Select **Identity provider** as **Enabled**.
+ d. In the **Federation Metadata URL** textbox, paste the **App Federation Metadata URL** value, which you have copied from the Azure portal.
e. Select **Automatic user registration** as **Activated**. f. Click **Save**.
+
+> [!NOTE]
+> **Reply URL** is automatically pre-filled and cannot be changed.
### Sign in to uniFLOW Online using the created test user
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure uniFLOW Online you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure uniFLOW Online, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
analysis-services Analysis Services Connect Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-excel.md
description: Learn how to connect to an Azure Analysis Services server by using
Previously updated : 02/02/2022 Last updated : 05/16/2022 # Connect with Excel
-Once you've created a server, and deployed a tabular model to it, clients can connect and begin exploring data.
+Once you've created a server and deployed a tabular model to it, clients can connect and begin exploring data. This article describes connecting to an Azure Analysis Services resource by using the Excel desktop app. Connecting to an Azure Analysis Services resource is not supported in Excel for the web.
## Before you begin
app-service Tutorial Networking Isolate Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-networking-isolate-vnet.md
Because your Key Vault and Cognitive Services resources will sit behind [private
## Create private endpoints
-1. In the private endpoint subnet of your virtual network, create a private endpoint for your key vault.
+1. In the private endpoint subnet of your virtual network, create a private endpoint for your Cognitive Services resource.
```azurecli-interactive # Get Cognitive Services resource ID
This command may take a minute to run.
## Next steps - [Integrate your app with an Azure virtual network](overview-vnet-integration.md)-- [App Service networking features](networking-features.md)
+- [App Service networking features](networking-features.md)
applied-ai-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/configure-metrics.md
Previously updated : 09/10/2020 Last updated : 05/12/2022
-# How to: Configure metrics and fine tune detection configuration
+# Configure metrics and fine tune detection configuration
-Use this article to start configuring your Metrics Advisor instance using the web portal. To browse the metrics for a specific data feed, go to the **Data feeds** page and select one of the feeds. This will display a list of metrics associated with it.
+Use this article to start configuring your Metrics Advisor instance using the web portal and fine-tune the anomaly detection results.
+
+## Metrics
+
+To browse the metrics for a specific data feed, go to the **Data feeds** page and select one of the feeds. This will display a list of metrics associated with it.
:::image type="content" source="../media/metrics/select-metric.png" alt-text="Select a metric" lightbox="../media/metrics/select-metric.png":::
-Select one of the metric names to see its details. In this detailed view, you can switch to another metric in the same data feed using the drop down list in the top right corner of the screen.
+Select one of the metric names to see its details. In this view, you can switch to another metric in the same data feed using the drop-down list in the top right corner of the screen.
-When you first view a metric's details, you can load a time series by letting Metrics Advisor choose one for you, or by specifying values to be included for each dimension.
+When you first view a metric's details, you can load a time series by letting Metrics Advisor choose one for you, or by specifying values to be included for each dimension.
You can also select time ranges, and change the layout of the page. > [!NOTE] > - The start time is inclusive.
-> - The end time is exclusive.
+> - The end time is exclusive.
-You can click the **Incidents** tab to view anomalies, and find a link to the [Incident hub](diagnose-an-incident.md).
+You can select the **Incidents** tab to view anomalies, and find a link to the [Incident hub](diagnose-an-incident.md).
## Tune the detection configuration
-A metric can apply one or more detection configurations. There is a default configuration for each metric, which you can edit or add to, according to your monitoring needs.
+A metric can apply one or more detection configurations. There's a default configuration for each metric, which you can edit or add to, according to your monitoring needs.
+
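If you prefer to manage detection configurations programmatically rather than through the web portal, the same settings are exposed through the Metrics Advisor client libraries. The following C# snippet is a rough, hedged sketch that assumes the `Azure.AI.MetricsAdvisor` .NET package; the endpoint, keys, and configuration ID are placeholders, and the method names reflect the 1.0.0-era API, so verify them against the SDK version you install.

```csharp
using System;
using System.Threading.Tasks;
using Azure.AI.MetricsAdvisor;
using Azure.AI.MetricsAdvisor.Administration;
using Azure.AI.MetricsAdvisor.Models;

public static class DetectionConfigurationSketch
{
    public static async Task RaiseSensitivityAsync()
    {
        // Placeholder endpoint, keys, and configuration ID.
        var adminClient = new MetricsAdvisorAdministrationClient(
            new Uri("https://<resource-name>.cognitiveservices.azure.com/"),
            new MetricsAdvisorKeyCredential("<subscription-key>", "<api-key>"));

        // Retrieve an existing detection configuration by its ID.
        AnomalyDetectionConfiguration config =
            await adminClient.GetDetectionConfigurationAsync("<detection-configuration-id>");

        // Tighten smart detection for every series in the metric:
        // sensitivity 75, detect in both directions, suppress when fewer than
        // 1 point (100% of the latest points) is anomalous.
        config.WholeSeriesDetectionConditions.SmartDetectionCondition =
            new SmartDetectionCondition(75, AnomalyDetectorDirection.Both, new SuppressCondition(1, 100));

        await adminClient.UpdateDetectionConfigurationAsync(config);
    }
}
```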
+### Detection configuration auto-tuning based on anomaly preference
+
+Detection configuration auto-tuning is a new feature released in Metrics Advisor, to help address the following scenarios:
+
+- Depending on the use case, certain types of anomalies may be of greater interest than others. Sometimes you may be interested in sudden spikes or dips, but in other cases, spikes/dips or transient anomalies aren't critical. Previously it was hard to distinguish the configuration for different anomaly types. The new auto-tuning feature makes this distinction between anomaly types possible. As of now, there are five supported anomaly patterns:
+ * Spike
+ * Dip
+ * Increase
+ * Decrease
+ * Steady
+
+- Sometimes, there may be many dimensions within one metric, which will split the metric into hundreds, thousands, or even more time series to be monitored. However, these dimensions often aren't equally important. Take revenue as an example: the numbers for small regions or a niche product category might be quite small, and therefore not very stable, but at the same time not necessarily critical. The new auto-tuning feature makes it possible to fine-tune the configuration based on the series value range.
+
+This reduces the effort of repeatedly fine-tuning your configuration, and also reduces alert noise.
+
+> [!NOTE]
+> The auto-tuning feature is only applied on the 'Smart detection' method.
+
+#### Prerequisite for triggering auto-tuning
+
+After the metrics are onboarded to Metrics Advisor, the system will analyze the metrics to categorize **anomaly pattern** types and the **series value** distribution. With this functionality, you can further fine-tune the configuration based on your specific preferences. At the beginning, it will show a status of **Initializing**.
++
+#### Choose to enable auto-tuning on anomaly pattern and series value
+
+The feature enables you to tune the detection configuration from two perspectives: **anomaly pattern** and **series value**. Based on your specific use case, you can enable either one or both.
+
+- For the **anomaly pattern** option, the system will list out the different anomaly patterns that were observed for the metric. Select the patterns you're interested in; the unselected patterns will have their sensitivity **reduced** by default.
+
+- For the **series value** option, your selection will depend on your specific use case. You'll have to decide whether you want to use a higher sensitivity for series with higher values and decrease the sensitivity on low-value ones, or vice versa. Then select the checkbox.
++
+#### Tune the configuration for selected anomaly patterns
+
+If specific anomaly patterns are chosen, the next step is to fine tune the configuration for each. There's a global **sensitivity** that is applied for all series. For each anomaly pattern, you can tune the **adjustment**, which is based on the global **sensitivity**.
+
+You must tune each chosen anomaly pattern individually.
++
+#### Tune the configuration for each series value group
+
+After the system generates statistics on all time series within the metric, several series value groups are created automatically. As described above, you can fine tune the **adjustment** for each series value group according to your specific business needs.
+
+There will be a default adjustment configured to get the best detection results, but it can be further tuned.
++
+#### Set up alert rules
+
+Even after the detection configuration is tuned to capture valid anomalies, it's still important to set up **alert rules** to make sure the final alerts meet your business needs. There are a number of rules that can be set, like **filter rules** or **snooze continuous alert rules**.
++
+After configuring all the settings described in the sections above, the system will orchestrate them together and automatically detect anomalies based on the preferences you've provided. The goal is to get the best configuration that works for each metric, which is much easier to achieve with the new **auto-tuning** capability.
### Tune the configuration for all series in current metric
There are additional parameters like **Direction**, and **Valid anomaly** that c
### Tune the configuration for a specific series or group
-Click **Advanced configuration** below the metric level configuration options to see the group level configuration.You can add a configuration for an individual series, or group of series by clicking the **+** icon in this window. The parameters are similar to the metric-level configuration parameters, but you may need to specify at least one dimension value for a group-level configuration to identify a group of series. And specify all dimension values for series-level configuration to identify a specific series.
+Select **Advanced configuration** below the metric level configuration options to see the group level configuration. You can add a configuration for an individual series, or group of series, by selecting the **+** icon in this window. The parameters are similar to the metric-level configuration parameters, but you may need to specify at least one dimension value for a group-level configuration to identify a group of series. Specify all dimension values for a series-level configuration to identify a specific series.
This configuration will be applied to the group of series or specific series instead of the metric level configuration. After setting the conditions for this group, save it.
When the sensitivity is turned down, the expected value range will be wider, and
Change threshold is normally used when metric data generally stays around a certain range. The threshold is set according to **Change percentage**. The **Change threshold** mode is able to detect anomalies in the scenarios: * Your data is normally stable and smooth. You want to be notified when there are fluctuations.
-* Your data is normally quite unstable and fluctuates a lot. You want to be notified when it becomes too stable or flat.
+* Your data is normally unstable and fluctuates a lot. You want to be notified when it becomes too stable or flat.
Use the following steps to use this mode:
Sometimes, expected events and occurrences (such as holidays) can generate anoma
> [!Note] > Preset event configuration will take holidays into consideration during anomaly detection, and may change your results. It will be applied to the data points ingested after you save the configuration.
-Click the **Configure Preset Event** button next to the metrics drop-down list on each metric details page.
-
+Select the **Configure Preset Event** button next to the metrics drop-down list on each metric details page.
+ :::image type="content" source="../media/metrics/preset-event-button.png" alt-text="preset event button"::: In the window that appears, configure the options according to your usage. Make sure **Enable holiday event** is selected to use the configuration.
There are several other values you can configure:
|Option |Description | |||
-|**Choose one dimension as country** | Choose a dimension that contains country information. For example a country code. |
+|**Choose one dimension as country** | Choose a dimension that contains country information. For example, a country code. |
|**Country code mapping** | The mapping between a standard [country code](https://wikipedia.org/wiki/ISO_3166-1_alpha-2), and chosen dimension's country data. | |**Holiday options** | Whether to take into account all holidays, only PTO (Paid Time Off) holidays, or only Non-PTO holidays. | |**Days to expand** | The impacted days before and after a holiday. |
+The **Cycle event** section can be used in some scenarios to help reduce unnecessary alerts by using cyclic patterns in the data. For example:
-The **Cycle event** section can be used in some scenarios to help reduce unnecessary alerts by using cyclic patterns in the data. For example:
+- Metrics that have multiple patterns or cycles, such as both a weekly and monthly pattern.
+- Metrics that don't have a clear pattern, but the data is comparable Year over Year (YoY), Month over Month (MoM), Week Over Week (WoW), or Day Over Day (DoD).
-- Metrics that have multiple patterns or cycles, such as both a weekly and monthly pattern. -- Metrics that do not have a clear pattern, but the data is comparable Year over Year (YoY), Month over Month (MoM), Week Over Week (WoW), or Day Over Day (DoD).
-
Not all options are selectable for every granularity. The available options per granularity are below (✔ for available, X for unavailable): | Granularity | YoY | MoM | WoW | DoD |
Not all options are selectable for every granularity. The available options per
| Secondly | X | X | X | X | | Custom* | ✔ | ✔ | ✔ | ✔ |
-
When using a custom granularity in seconds, cycle events are only available if the granularity is longer than one hour and less than one day. Cycle event is used to reduce anomalies if they follow a cyclic pattern, but it will report an anomaly if multiple data points don't follow the pattern. **Strict mode** is used to enable anomaly reporting if even one data point doesn't follow the pattern.
Cycle event is used to reduce anomalies if they follow a cyclic pattern, but it
## View recent incidents
-Metrics Advisor detects anomalies on all your time series data as they're ingested. However, not all anomalies need to be escalated, because they might not have a big impact. Aggregation will be performed on anomalies to group related ones into incidents. You can view these incidents from the **Incident** tab in metrics details page.
+Metrics Advisor detects anomalies on all your time series data as they're ingested. However, not all anomalies need to be escalated, because they might not have a significant impact. Aggregation will be performed on anomalies to group related ones into incidents. You can view these incidents from the **Incident** tab in metrics details page.
-Click on an incident to go to the **Incidents analysis** page where you can see more details about it. Click on **Manage incidents in new Incident hub**, to find the [Incident hub](diagnose-an-incident.md) page where you can find all incidents under the specific metric.
+Select an incident to go to the **Incidents analysis** page where you can see more details about it. Select **Manage incidents in new Incident hub**, to find the [Incident hub](diagnose-an-incident.md) page where you can find all incidents under the specific metric.
## Subscribe anomalies for notification
-If you'd like to get notified whenever an anomaly is detected, you can subscribe to alerts for the metric, using a hook. See [Configure alerts and get notifications using a hook](alerts.md) for more information.
-
+If you'd like to get notified whenever an anomaly is detected, you can subscribe to alerts for the metric, using a hook. For more information, see [Configure alerts and get notifications using a hook](alerts.md).
## Next steps - [Configure alerts and get notifications using a hook](alerts.md)
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 05/11/2022- Last updated : 05/16/2022+ ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* A basic understanding of [Kubernetes core concepts](../../aks/concepts-clusters-workloads.md).
-* [Azure PowerShell version 5.9.0 or later](/powershell/azure/install-az-ps)
+* [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-az-ps)
* Install the **Az.ConnectedKubernetes** PowerShell module:
az connectedk8s connect --name <cluster-name> --resource-group <resource-group>
### [Azure PowerShell](#tab/azure-powershell)
-The ability to pass in the proxy certificate only without the proxy server endpoint details is not yet supported via PowerShell.
+The ability to pass in the proxy certificate only without the proxy server endpoint details is not yet supported via PowerShell.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
In the following example:
* The namespace for configuration installation is `cluster-config`. * The URL for the public Git repository is `https://github.com/Azure/gitops-flux2-kustomize-helm-mt`. * The Git repository branch is `main`.
-* The scope of the configuration is `cluster`. It gives the operators permissions to make changes throughout cluster.
+* The scope of the configuration is `cluster`. This gives the operators permissions to make changes throughout cluster. To use `namespace` scope with this tutorial, [see the changes needed](#multi-tenancy).
* Two kustomizations are specified with names `infra` and `apps`. Each is associated with a path in the repository. * The `apps` kustomization depends on the `infra` kustomization. (The `infra` kustomization must finish before the `apps` kustomization runs.) * Set `prune=true` on both kustomizations. This setting assures that the objects that Flux deployed to the cluster will be cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted.
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
SSH access to Arc-enabled servers provides the following key benefits:
## Prerequisites To leverage this functionality, please ensure the following: - Ensure the Arc-enabled server has a hybrid agent version of "1.13.21320.014" or higher.
- - Run: ```azcmagent show``` on your Arc-enabled Server.
+ - Run: ```azcmagent show``` on your Arc-enabled Server.
+ - [Ensure the Arc-enabled server has the "sshd" service enabled](/windows-server/administration/openssh/openssh_install_firstuse).
- Ensure you have the Virtual Machine Local User Login role assigned (role ID: 602da2baa5c241dab01d5360126ab525) ### Availability
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
description: Learn how to use a .NET isolated process to run your C# functions i
Previously updated : 06/01/2021 Last updated : 05/12/2022 recommendations: false #Customer intent: As a developer, I need to know how to create functions that run in an isolated process so that I can run my function code on current (not LTS) releases of .NET.
recommendations: false
# Guide for running C# Azure Functions in an isolated process
-This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in Azure Functions. Running out-of-process lets you decouple your function code from the Azure Functions runtime. Isolated process C# functions run on both .NET 5.0 and .NET 6.0. [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 5.0.
+This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in Azure Functions. Running out-of-process lets you decouple your function code from the Azure Functions runtime. Isolated process C# functions run on .NET 5.0, .NET 6.0, and .NET Framework 4.8 (preview support). [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 5.0.
| Getting started | Concepts| Samples | |--|--|--|
Because these functions run in a separate process, there are some [feature and f
### Benefits of running out-of-process
-When running out-of-process, your .NET functions can take advantage of the following benefits:
+When your .NET functions run out-of-process, you can take advantage of the following benefits:
+ Fewer conflicts: because the functions run in a separate process, assemblies used in your app won't conflict with a different version of the same assemblies used by the host process. + Full control of the process: you control the start-up of the app and can control the configurations used and the middleware started.
A .NET isolated function project is basically a .NET console app project that ta
+ [local.settings.json](functions-develop-local.md#local-settings-file) file. + C# project file (.csproj) that defines the project and dependencies. + Program.cs file that's the entry point for the app.++ Any code files [defining your functions](#bindings).
+For complete examples, see the [.NET 6 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/FunctionApp) and the [.NET Framework 4.8 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/NetFxWorker).
+ > [!NOTE] > To be able to publish your isolated function project to either a Windows or a Linux function app in Azure, you must set a value of `dotnet-isolated` in the remote [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) application setting. To support [zip deployment](deployment-zip-push.md) and [running from the deployment package](run-functions-from-deployment-package.md) on Linux, you also need to update the `linuxFxVersion` site config setting to `DOTNET-ISOLATED|6.0`. To learn more, see [Manual version updates on Linux](set-runtime-version.md#manual-version-updates-on-linux). ## Package references
-When running out-of-process, your .NET project uses a unique set of packages, which implement both core functionality and binding extensions.
+When your functions run out-of-process, your .NET project uses a unique set of packages, which implement both core functionality and binding extensions.
### Core packages
You'll find these extension packages under [Microsoft.Azure.Functions.Worker.Ext
## Start-up and configuration
-When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. When running out-of-process, you can much more easily add configurations, inject dependencies, and run your own middleware.
+When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. When you run your functions out-of-process, you can much more easily add configurations, inject dependencies, and run your own middleware.
The following code shows an example of a [HostBuilder] pipeline:
A [HostBuilder] is used to build and return a fully initialized [IHost] instance
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/FunctionApp/Program.cs" id="docsnippet_host_run":::
+> [!IMPORTANT]
+> If your project targets .NET Framework 4.8, you also need to add `FunctionsDebugger.Enable();` before creating the HostBuilder. It should be the first line of your `Main()` method. See [Debugging when targeting .NET Framework](#debugging-when-targeting-net-framework) for more information.
+ ### Configuration The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run out-of-process, which includes the following functionality:
The following is an example of a middleware implementation which reads the `Http
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/CustomMiddleware/StampHttpHeaderMiddleware.cs" id="docsnippet_middleware_example_stampheader" :::
-For a more complete example of using custom middlewares in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
+For a more complete example of using custom middleware in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
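As a rough sketch of what such a middleware can look like (the class and logger category names here are hypothetical, not taken from the reference sample), a custom middleware implements `IFunctionsWorkerMiddleware`:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;
using Microsoft.Extensions.Logging;

public class RequestLoggingMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        var logger = context.GetLogger(nameof(RequestLoggingMiddleware));

        // Runs before the function executes.
        logger.LogInformation("Invoking {FunctionName}", context.FunctionDefinition.Name);

        await next(context);

        // Runs after the function executes.
        logger.LogInformation("{FunctionName} completed", context.FunctionDefinition.Name);
    }
}
```

The middleware is then registered in Program.cs, for example with `.ConfigureFunctionsWorkerDefaults(builder => builder.UseMiddleware<RequestLoggingMiddleware>())`.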
## Execution context
Use various methods of [ILogger] to write various log levels, such as `LogWarnin
An [ILogger] is also provided when using [dependency injection](#dependency-injection).
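For instance, a minimal isolated-process HTTP function (the function and category names below are illustrative) obtains its logger from the `FunctionContext` like this:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

public static class HttpExample
{
    [Function("HttpExample")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req,
        FunctionContext context)
    {
        // Obtain an ILogger from the function context and write at different levels.
        ILogger logger = context.GetLogger("HttpExample");
        logger.LogInformation("Request received.");
        logger.LogWarning("This entry is written at the warning level.");

        HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Logged.");
        return response;
    }
}
```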
+## Debugging when targeting .NET Framework
+
+If your isolated project targets .NET Framework 4.8, the current preview scope requires manual steps to enable debugging. These steps are not required if using another target framework.
+
+Your app should start with a call to `FunctionsDebugger.Enable();` as its first operation. This occurs in the `Main()` method before initializing a HostBuilder. Your `Program.cs` file should look similar to the following:
+
+```csharp
+using System;
+using System.Diagnostics;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Azure.Functions.Worker;
+using NetFxWorker;
+
+namespace MyDotnetFrameworkProject
+{
+ internal class Program
+ {
+ static void Main(string[] args)
+ {
+ FunctionsDebugger.Enable();
+
+ var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults()
+ .Build();
+
+ host.Run();
+ }
+ }
+}
+```
+
+Next, you need to manually attach to the process using a .NET Framework debugger. Visual Studio doesn't do this automatically for isolated process .NET Framework apps yet, and the "Start Debugging" operation should be avoided.
+
+In your project directory (or its build output directory), run:
+
+```azurecli
+func host start --dotnet-isolated-debug
+```
+
+This will start your worker, and the process will stop with the following message:
+
+```azurecli
+Azure Functions .NET Worker (PID: <process id>) initialized in debug mode. Waiting for debugger to attach...
+```
+
+Where `<process id>` is the ID for your worker process. You can now use Visual Studio to manually attach to the process. For instructions on this operation, see [How to attach to a running process](/visualstudio/debugger/attach-to-running-processes-with-the-visual-studio-debugger#BKMK_Attach_to_a_running_process).
+
+Once the debugger is attached, the process execution will resume and you will be able to debug.
+ ## Differences with .NET class library functions This section describes the current state of the functional and behavioral differences running on out-of-process compared to .NET class library functions running in-process: | Feature/behavior | In-process | Out-of-process | | - | - | - |
-| .NET versions | .NET Core 3.1<br/>.NET 6.0 | .NET 5.0<br/>.NET 6.0 |
+| .NET versions | .NET Core 3.1<br/>.NET 6.0 | .NET 5.0<br/>.NET 6.0<br/>.NET Framework 4.8 (Preview) |
| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | | Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | Under [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | | Logging | [ILogger] passed to the function | [ILogger] obtained from [FunctionContext] |
azure-functions Functions Bindings Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-output.md
+
+ Title: Apache Kafka output binding for Azure Functions
+description: Use Azure Functions to write messages to an Apache Kafka stream.
++ Last updated : 05/14/2022+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Apache Kafka output binding for Azure Functions
+
+The output binding allows an Azure Functions app to write messages to a Kafka topic.
+
+> [!IMPORTANT]
+> Kafka bindings are only available for Functions on the [Elastic Premium Plan](functions-premium-plan.md) and [Dedicated (App Service) plan](dedicated-plan.md). They are only supported on version 3.x and later versions of the Functions runtime.
+
+## Example
+
+The usage of the binding depends on the C# modality used in your function app, which can be one of the following:
+
+# [In-process](#tab/in-process)
+
+An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
+
+# [Isolated process](#tab/isolated-process)
+
+An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process that's isolated from the runtime. An isolated process is required to support C# functions running on .NET 5.0.
+++
+The attributes you use depend on the specific event provider.
+
+# [Confluent](#tab/confluent/in-process)
+
+The following example shows a C# function that sends a single message to a Kafka topic, using data provided in an HTTP GET request.
++
+To send events in a batch, use an array of `KafkaEventData` objects, as shown in the following example:
++
+The following function adds headers to the Kafka output data:
++
+For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet/Confluent/).
+
+# [Event Hubs](#tab/event-hubs/in-process)
+
+The following example shows a C# function that sends a single message to a Kafka topic, using data provided in an HTTP GET request.
++
+To send events in a batch, use an array of `KafkaEventData` objects, as shown in the following example:
++
+The following function adds headers to the Kafka output data:
++
+For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet/EventHub).
+
+# [Confluent](#tab/confluent/isolated-process)
+
+The following example has a custom return type that is `MultipleOutputType`, which consists of an HTTP response and a Kafka output.
++
+In the class `MultipleOutputType`, `Kevent` is the output binding variable for the Kafka binding.
++
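A hedged sketch of what that class and its function might look like is shown here. It assumes Confluent-style SASL settings stored in the app settings named below and a hypothetical `message` query parameter; treat it as illustrative rather than the exact reference sample.

```csharp
using System.Net;
using System.Web;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class MultipleOutputType
{
    // The string assigned to Kevent is written to the configured Kafka topic.
    [KafkaOutput("BrokerList",
                 "topic",
                 Username = "ConfluentCloudUserName",
                 Password = "ConfluentCloudPassword",
                 Protocol = BrokerProtocol.SaslSsl,
                 AuthenticationMode = BrokerAuthenticationMode.Plain)]
    public string Kevent { get; set; }

    public HttpResponseData HttpResponse { get; set; }
}

public static class KafkaOutputFunction
{
    [Function("KafkaOutput")]
    public static MultipleOutputType Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req,
        FunctionContext context)
    {
        // Hypothetical input shape: a "message" query parameter on the GET request.
        string message = HttpUtility.ParseQueryString(req.Url.Query)["message"] ?? "sample message";

        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Message sent to Kafka.");

        return new MultipleOutputType
        {
            Kevent = message,
            HttpResponse = response
        };
    }
}
```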
+To send a batch of events, pass a string array to the output type, as shown in the following example:
++
+The string array is defined as `Kevents` property on the class, on which the output binding is defined:
++
+The following function adds headers to the Kafka output data:
++
+For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet-isolated/confluent).
+
+# [Event Hubs](#tab/event-hubs/isolated-process)
++
+The following example has a custom return type that is `MultipleOutputType`, which consists of an HTTP response and a Kafka output.
++
+In the class `MultipleOutputType`, `Kevent` is the output binding variable for the Kafka binding.
++
+To send a batch of events, pass a string array to the output type, as shown in the following example:
++
+The string array is defined as `Kevents` property on the class, on which the output binding is defined:
++
+The following function adds headers to the Kafka output data:
++
+For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet-isolated/eventhub).
++++
+> [!NOTE]
+> For an equivalent set of TypeScript examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/tree/dev/samples/typescript)
+
+The specific properties of the function.json file depend on your event provider, which in these examples are either Confluent or Azure Event Hubs. The following examples show a Kafka output binding for a function that is triggered by an HTTP request and sends data from the request to the Kafka topic.
+
+The following function.json defines the trigger for the specific provider in these examples:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then sends a message to the topic:
++
+The following code sends multiple messages as an array to the same topic:
++
+The following example shows how to send an event message with headers to the same Kafka topic:
++
+For a complete set of working JavaScript examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/javascript/).
++
+The specific properties of the function.json file depend on your event provider, which in these examples are either Confluent or Azure Event Hubs. The following examples show a Kafka output binding for a function that is triggered by an HTTP request and sends data from the request to the Kafka topic.
+
+The following function.json defines the trigger for the specific provider in these examples:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then sends a message to the topic:
++
+The following code sends multiple messages as an array to the same topic:
++
+The following example shows how to send an event message with headers to the same Kafka topic:
++
+For a complete set of working PowerShell examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/javascript/).
++
+The specific properties of the function.json file depend on your event provider, which in these examples are either Confluent or Azure Event Hubs. The following examples show a Kafka output binding for a function that is triggered by an HTTP request and sends data from the request to the Kafka topic.
+
+The following function.json defines the trigger for the specific provider in these examples:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then sends a message to the topic:
++
+The following code sends multiple messages as an array to the same topic:
++
+The following example shows how to send an event message with headers to the same Kafka topic:
++
+For a complete set of working Python examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/python/).
+++
+The annotations you use to configure the output binding depend on the specific event provider.
+
+# [Confluent](#tab/confluent)
+
+The following function sends a message to the Kafka topic.
++
+The following example shows how to send multiple messages to a Kafka topic.
++
+In this example, the output binding parameter is changed to a string array.
+
+The last example uses these `KafkaEntity` and `KafkaHeader` classes:
+++
+The following example function sends a message with headers to a Kafka topic.
++
+For a complete set of working Java examples for Confluent, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/tree/dev/samples/java/confluent/src/main/java/com/contoso/kafka).
+
+# [Event Hubs](#tab/event-hubs)
+
+The following function sends a message to the Kafka topic.
++
+The following example shows how to send multiple messages to a Kafka topic.
++
+In this example, the output binding parameter is changed to a string array.
+
+The last example uses these `KafkaEntity` and `KafkaHeader` classes:
+++
+The following example function sends a message with headers to a Kafka topic.
++
+For a complete set of working Java examples for Event Hubs, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/tree/dev/samples/java/eventhub/src/main/java/com/contoso/kafka).
+++
+## Attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `Kafka` attribute to define the output binding.
+
+The following table explains the properties you can set using this attribute:
+
+| Parameter |Description|
+| | |
+| **BrokerList** | (Required) The list of Kafka brokers to which the output is sent. See [Connections](#connections) for more information. |
+| **Topic** | (Required) The topic to which the output is sent. |
+| **AvroSchema** | (Optional) Schema of a generic record when using the Avro protocol. |
+| **MaxMessageBytes** | (Optional) The maximum size of the output message being sent (in MB), with a default value of `1`. |
+| **BatchSize** | (Optional) Maximum number of messages batched in a single message set, with a default value of `10000`. |
+| **EnableIdempotence** | (Optional) When set to `true`, guarantees that messages are successfully produced exactly once and in the original produce order, with a default value of `false`|
+| **MessageTimeoutMs** | (Optional) The local message timeout, in milliseconds. This value is only enforced locally and limits the time a produced message waits for successful delivery, with a default `300000`. A time of `0` is infinite. This value is the maximum time used to deliver a message (including retries). Delivery error occurs when either the retry count or the message timeout are exceeded. |
+| **RequestTimeoutMs** | (Optional) The acknowledgment timeout of the output request, in milliseconds, with a default of `5000`. |
+| **MaxRetries** | (Optional) The number of times to retry sending a failing Message, with a default of `2`. Retrying may cause reordering, unless `EnableIdempotence` is set to `true`.|
+| **AuthenticationMode** | (Optional) The authentication mode when using Simple Authentication and Security Layer (SASL) authentication. The supported values are `Gssapi`, `Plain` (default), `ScramSha256`, `ScramSha512`. |
+| **Username** | (Optional) The username for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **Password** | (Optional) The password for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **Protocol** | (Optional) The security protocol used when communicating with brokers. The supported values are `plaintext` (default), `ssl`, `sasl_plaintext`, `sasl_ssl`. |
+| **SslCaLocation** | (Optional) Path to CA certificate file for verifying the broker's certificate. |
+| **SslCertificateLocation** | (Optional) Path to the client's certificate. |
+| **SslKeyLocation** | (Optional) Path to client's private key (PEM) used for authentication. |
+| **SslKeyPassword** | (Optional) Password for client's certificate. |
++
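As a hedged illustration of how a few of these properties come together in an in-process C# function (the app setting names such as `BrokerList`, `ConfluentCloudUserName`, and `ConfluentCloudPassword` are assumptions, not requirements), an HTTP-triggered function could declare the output binding like this:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.Kafka;
using Microsoft.Extensions.Logging;

public static class KafkaOutputExample
{
    [FunctionName("KafkaOutputExample")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
        [Kafka("BrokerList",                        // app setting that holds the broker list
               "topic",                             // Kafka topic to write to
               Username = "ConfluentCloudUserName", // app setting that holds the SASL username
               Password = "ConfluentCloudPassword", // app setting that holds the SASL password
               Protocol = BrokerProtocol.SaslSsl,
               AuthenticationMode = BrokerAuthenticationMode.Plain)] out string eventData,
        ILogger log)
    {
        // Forward the "message" query parameter to the Kafka topic.
        string message = req.Query["message"];
        log.LogInformation("Sending message to Kafka: {Message}", message);
        eventData = message;

        return new OkObjectResult("Message queued for Kafka");
    }
}
```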
+## Annotations
+
+The `KafkaOutput` annotation allows you to create a function that writes to a specific topic. Supported options include the following elements:
+
+|Element | Description|
+||-|
+|**name** | The name of the variable that represents the brokered data in function code. |
+| **brokerList** | (Required) The list of Kafka brokers to which the output is sent. See [Connections](#connections) for more information. |
+| **topic** | (Required) The topic to which the output is sent. |
+| **dataType** | Defines how Functions handles the parameter value. By default, the value is obtained as a string and Functions tries to deserialize the string to actual plain-old Java object (POJO). When `string`, the input is treated as just a string. When `binary`, the message is received as binary data, and Functions tries to deserialize it to an actual parameter type byte[]. |
+| **avroSchema** | (Optional) Schema of a generic record when using the Avro protocol. |
+| **maxMessageBytes** | (Optional) The maximum size of the output message being sent (in MB), with a default value of `1`. |
+| **batchSize** | (Optional) Maximum number of messages batched in a single message set, with a default value of `10000`. |
+| **enableIdempotence** | (Optional) When set to `true`, guarantees that messages are successfully produced exactly once and in the original produce order, with a default value of `false`|
+| **messageTimeoutMs** | (Optional) The local message timeout, in milliseconds. This value is only enforced locally and limits the time a produced message waits for successful delivery, with a default `300000`. A time of `0` is infinite. This is the maximum time used to deliver a message (including retries). Delivery error occurs when either the retry count or the message timeout are exceeded. |
+| **requestTimeoutMs** | (Optional) The acknowledgment timeout of the output request, in milliseconds, with a default of `5000`. |
+| **maxRetries** | (Optional) The number of times to retry sending a failing Message, with a default of `2`. Retrying may cause reordering, unless `EnableIdempotence` is set to `true`.|
+| **authenticationMode** | (Optional) The authentication mode when using Simple Authentication and Security Layer (SASL) authentication. The supported values are `Gssapi`, `Plain` (default), `ScramSha256`, `ScramSha512`. |
+| **username** | (Optional) The username for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **password** | (Optional) The password for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **protocol** | (Optional) The security protocol used when communicating with brokers. The supported values are `plaintext` (default), `ssl`, `sasl_plaintext`, `sasl_ssl`. |
+| **sslCaLocation** | (Optional) Path to CA certificate file for verifying the broker's certificate. |
+| **sslCertificateLocation** | (Optional) Path to the client's certificate. |
+| **sslKeyLocation** | (Optional) Path to client's private key (PEM) used for authentication. |
+| **sslKeyPassword** | (Optional) Password for client's certificate. |
++
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+| _function.json_ property |Description|
+| | |
+|**type** | Must be set to `kafka`. |
+|**direction** | Must be set to `out`. |
+|**name** | The name of the variable that represents the brokered data in function code. |
+| **brokerList** | (Required) The list of Kafka brokers to which the output is sent. See [Connections](#connections) for more information. |
+| **topic** | (Required) The topic to which the output is sent. |
+| **avroSchema** | (Optional) Schema of a generic record when using the Avro protocol. |
+| **maxMessageBytes** | (Optional) The maximum size of the output message being sent (in MB), with a default value of `1`. |
+| **batchSize** | (Optional) Maximum number of messages batched in a single message set, with a default value of `10000`. |
+| **enableIdempotence** | (Optional) When set to `true`, guarantees that messages are successfully produced exactly once and in the original produce order, with a default value of `false`|
+| **messageTimeoutMs** | (Optional) The local message timeout, in milliseconds. This value is only enforced locally and limits the time a produced message waits for successful delivery, with a default `300000`. A time of `0` is infinite. This is the maximum time used to deliver a message (including retries). Delivery error occurs when either the retry count or the message timeout are exceeded. |
+| **requestTimeoutMs** | (Optional) The acknowledgment timeout of the output request, in milliseconds, with a default of `5000`. |
+| **maxRetries** | (Optional) The number of times to retry sending a failing Message, with a default of `2`. Retrying may cause reordering, unless `EnableIdempotence` is set to `true`.|
+| **authenticationMode** | (Optional) The authentication mode when using Simple Authentication and Security Layer (SASL) authentication. The supported values are `Gssapi`, `Plain` (default), `ScramSha256`, `ScramSha512`. |
+| **username** | (Optional) The username for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **password** | (Optional) The password for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **protocol** | (Optional) The security protocol used when communicating with brokers. The supported values are `plaintext` (default), `ssl`, `sasl_plaintext`, `sasl_ssl`. |
+| **sslCaLocation** | (Optional) Path to CA certificate file for verifying the broker's certificate. |
+| **sslCertificateLocation** | (Optional) Path to the client's certificate. |
+| **sslKeyLocation** | (Optional) Path to client's private key (PEM) used for authentication. |
+| **sslKeyPassword** | (Optional) Password for client's certificate. |
++
+## Usage
+
+Both key and value types are supported with built-in [Avro](http://avro.apache.org/docs/current/) and [Protobuf](https://developers.google.com/protocol-buffers/) serialization.
+
+The offset, partition, and timestamp for the event are generated at runtime. Only the value and headers can be set inside the function. The topic is set in the function.json file.
+
+Make sure you have access to the Kafka topic that you're writing to. You configure the binding with access and connection credentials for the Kafka topic.
+
+In a Premium plan, you must enable runtime scale monitoring for the Kafka output to be able to scale out to multiple instances. To learn more, see [Enable runtime scaling](functions-bindings-kafka.md#enable-runtime-scaling).
+
+For a complete set of supported host.json settings for the Kafka trigger, see [host.json settings](functions-bindings-kafka.md#hostjson-settings).
++
+## Next steps
+
+- [Run a function from an Apache Kafka event stream](./functions-bindings-kafka-trigger.md)
azure-functions Functions Bindings Kafka Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md
+
+ Title: Apache Kafka trigger for Azure Functions
+description: Use Azure Functions to run your code based on events from an Apache Kafka stream.
++ Last updated : 05/14/2022+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Apache Kafka trigger for Azure Functions
+
+You can use the Apache Kafka trigger in Azure Functions to run your function code in response to messages in Kafka topics. You can also use a [Kafka output binding](functions-bindings-kafka-output.md) to write from your function to a topic. For information on setup and configuration details, see [Apache Kafka bindings for Azure Functions overview](functions-bindings-kafka.md).
+
+> [!IMPORTANT]
+> Kafka bindings are only available for Functions on the [Elastic Premium Plan](functions-premium-plan.md) and [Dedicated (App Service) plan](dedicated-plan.md). They are only supported on version 3.x and later versions of the Functions runtime.
+
+## Example
+
+The usage of the trigger depends on the C# modality used in your function app, which can be one of the following modes:
+
+# [In-process](#tab/in-process)
+
+An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
+
+# [Isolated process](#tab/isolated-process)
+
+An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process that's isolated from the runtime. An isolated process is required to support C# functions running on .NET 5.0.
+++
+The attributes you use depend on the specific event provider.
+
+# [Confluent](#tab/confluent/in-process)
+
+The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
++
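A hedged sketch of such a function follows; the app setting names and consumer group are assumptions used for illustration, so see the linked samples for the authoritative versions.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Kafka;
using Microsoft.Extensions.Logging;

public static class KafkaTriggerExample
{
    [FunctionName("KafkaTriggerExample")]
    public static void Run(
        [KafkaTrigger("BrokerList",                        // app setting that holds the broker list
                      "topic",                             // Kafka topic to read from
                      Username = "ConfluentCloudUserName", // app setting that holds the SASL username
                      Password = "ConfluentCloudPassword", // app setting that holds the SASL password
                      Protocol = BrokerProtocol.SaslSsl,
                      AuthenticationMode = BrokerAuthenticationMode.Plain,
                      ConsumerGroup = "$Default")] KafkaEventData<string> kafkaEvent,
        ILogger log)
    {
        // Log the message payload of the received Kafka event.
        log.LogInformation("Kafka message received: {Value}", kafkaEvent.Value);
    }
}
```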
+To receive events in a batch, use an input string or `KafkaEventData` as an array, as shown in the following example:
++
+The following function logs the message and headers for the Kafka Event:
++
+You can define a generic [Avro schema] for the event passed to the trigger. The following string value defines the generic Avro schema:
++
+In the following function, an instance of `GenericRecord` is available in the `KafkaEvent.Value` property:
++
+You can define a specific [Avro schema] for the event passed to the trigger. The following defines the `UserRecord` class:
++
+In the following function, an instance of `UserRecord` is available in the `KafkaEvent.Value` property:
++
+For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet/).
+
+# [Event Hubs](#tab/event-hubs/in-process)
+
+The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
++
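
When the Kafka endpoint is an Azure Event Hubs namespace, the trigger typically points at the namespace's Kafka endpoint (port 9093) and authenticates with the connection string. The following sketch assumes app settings named `BrokerList` and `EventHubsConnectionString`, both placeholders; `$ConnectionString` is the literal SASL username used with a connection string:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Kafka;
using Microsoft.Extensions.Logging;

public static class KafkaEventHubsTriggerSketch
{
    // Sketch: the event hub name acts as the Kafka topic; BrokerList points to
    // <your-namespace>.servicebus.windows.net:9093 (assumed app setting value).
    [FunctionName("KafkaEventHubsTriggerSketch")]
    public static void Run(
        [KafkaTrigger("BrokerList",
                      "topic",
                      Username = "$ConnectionString",
                      Password = "EventHubsConnectionString",   // placeholder app setting name
                      Protocol = BrokerProtocol.SaslSsl,
                      AuthenticationMode = BrokerAuthenticationMode.Plain,
                      ConsumerGroup = "$Default")] KafkaEventData<string> kevent,
        ILogger log)
    {
        log.LogInformation($"Kafka (Event Hubs) message: {kevent.Value}");
    }
}
```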
+To receive events in a batch, use a string array or `KafkaEventData` array as input, as shown in the following example:
++
+The following function logs the message and headers for the Kafka Event:
++
+You can define a generic [Avro schema] for the event passed to the trigger. The following string value defines the generic Avro schema:
++
+In the following function, an instance of `GenericRecord` is available in the `KafkaEvent.Value` property:
++
+You can define a specific [Avro schema] for the event passed to the trigger. The following defines the `UserRecord` class:
++
+In the following function, an instance of `UserRecord` is available in the `KafkaEvent.Value` property:
++
+For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet/).
+
+# [Confluent](#tab/confluent/isolated-process)
+
+The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
++
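
A minimal isolated-process sketch, assuming the worker extension exposes the same attribute properties and enum names as the in-process extension; the setting names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class KafkaTriggerIsolatedSketch
{
    // Sketch: in the isolated model the event arrives as a string payload.
    [Function("KafkaTriggerIsolatedSketch")]
    public static void Run(
        [KafkaTrigger("BrokerList",
                      "topic",
                      Username = "ConfluentCloudUserName",
                      Password = "ConfluentCloudPassword",
                      Protocol = BrokerProtocol.SaslSsl,
                      AuthenticationMode = BrokerAuthenticationMode.Plain,
                      ConsumerGroup = "$Default")] string eventData,
        FunctionContext context)
    {
        var logger = context.GetLogger("KafkaTriggerIsolatedSketch");
        logger.LogInformation($"Kafka event payload: {eventData}");
    }
}
```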
+To receive events in a batch, use a string array as input, as shown in the following example:
++
+The following function logs the message and headers for the Kafka Event:
++
+For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet-isolated/).
+
+# [Event Hubs](#tab/event-hubs/isolated-process)
+
+The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
++
+To receive events in a batch, use a string array as input, as shown in the following example:
++
+The following function logs the message and headers for the Kafka Event:
++
+For a complete set of working .NET examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/dotnet-isolated/).
++++
+> [!NOTE]
+> For an equivalent set of TypeScript examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/tree/dev/samples/typescript)
+
+The specific properties of the function.json file depend on your event provider, which in these examples is either Confluent or Azure Event Hubs. The following examples show a Kafka trigger for a function that reads and logs a Kafka message.
+
+The following function.json defines the trigger for the specific provider:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then runs when the function is triggered:
++
+To receive events in a batch, set the `cardinality` value to `many` in the function.json file, as shown in the following examples:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then parses the array of events and logs the event data:
++
+The following code also logs the header data:
++
+You can define a generic [Avro schema] for the event passed to the trigger. The following function.json defines the trigger for the specific provider with a generic Avro schema:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then runs when the function is triggered:
++
+For a complete set of working JavaScript examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/javascript/).
++
+The specific properties of the function.json file depend on your event provider, which in these examples is either Confluent or Azure Event Hubs. The following examples show a Kafka trigger for a function that reads and logs a Kafka message.
+
+The following function.json defines the trigger for the specific provider:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then runs when the function is triggered:
++
+To receive events in a batch, set the `cardinality` value to `many` in the function.json file, as shown in the following examples:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then parses the array of events and logs the event data:
++
+The following code also logs the header data:
++
+You can define a generic [Avro schema] for the event passed to the trigger. The following function.json defines the trigger for the specific provider with a generic Avro schema:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then runs when the function is triggered:
++
+For a complete set of working PowerShell examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/powershell/).
++
+The specific properties of the function.json file depend on your event provider, which in these examples is either Confluent or Azure Event Hubs. The following examples show a Kafka trigger for a function that reads and logs a Kafka message.
+
+The following function.json defines the trigger for the specific provider:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then runs when the function is triggered:
++
+To receive events in a batch, set the `cardinality` value to `many` in the function.json file, as shown in the following examples:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then parses the array of events and logs the event data:
++
+The following code also logs the header data:
++
+You can define a generic [Avro schema] for the event passed to the trigger. The following function.json defines the trigger for the specific provider with a generic Avro schema:
+
+# [Confluent](#tab/confluent)
++
+# [Event Hubs](#tab/event-hubs)
++++
+The following code then runs when the function is triggered:
++
+For a complete set of working Python examples, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/blob/dev/samples/python/).
++
+The annotations you use to configure your trigger depend on the specific event provider.
+
+# [Confluent](#tab/confluent)
+
+The following example shows a Java function that reads and logs the content of the Kafka event:
++
+To receive events in a batch, use an input string array, as shown in the following example:
++
+The following function logs the message and headers for the Kafka Event:
++
+You can define a generic [Avro schema] for the event passed to the trigger. The following function defines a trigger for the specific provider with a generic Avro schema:
++
+For a complete set of working Java examples for Confluent, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/tree/dev/samples/java/confluent/src/main/java/com/contoso/kafka).
+
+# [Event Hubs](#tab/event-hubs)
+
+The following example shows a Java function that reads and logs the content of the Kafka event:
++
+To receive events in a batch, use an input string array, as shown in the following example:
++
+The following function logs the message and headers for the Kafka Event:
++
+You can define a generic [Avro schema] for the event passed to the trigger. The following function defines a trigger for the specific provider with a generic Avro schema:
++
+For a complete set of working Java examples for Event Hubs, see the [Kafka extension repository](https://github.com/Azure/azure-functions-kafka-extension/tree/dev/samples/java/confluent/src/main/java/com/contoso/kafka).
+++
+## Attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `KafkaTriggerAttribute` to define the function trigger.
+
+The following table explains the properties you can set using this trigger attribute:
+
+| Parameter |Description|
+| | |
+| **BrokerList** | (Required) The list of Kafka brokers monitored by the trigger. See [Connections](#connections) for more information. |
+| **Topic** | (Required) The topic monitored by the trigger. |
+| **ConsumerGroup** | (Optional) Kafka consumer group used by the trigger. |
+| **AvroSchema** | (Optional) Schema of a generic record when using the Avro protocol. |
+| **AuthenticationMode** | (Optional) The authentication mode when using Simple Authentication and Security Layer (SASL) authentication. The supported values are `Gssapi`, `Plain` (default), `ScramSha256`, `ScramSha512`. |
+| **Username** | (Optional) The username for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **Password** | (Optional) The password for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **Protocol** | (Optional) The security protocol used when communicating with brokers. The supported values are `plaintext` (default), `ssl`, `sasl_plaintext`, `sasl_ssl`. |
+| **SslCaLocation** | (Optional) Path to CA certificate file for verifying the broker's certificate. |
+| **SslCertificateLocation** | (Optional) Path to the client's certificate. |
+| **SslKeyLocation** | (Optional) Path to client's private key (PEM) used for authentication. |
+| **SslKeyPassword** | (Optional) Password for client's certificate. |
+++
+## Annotations
+
+The `KafkaTrigger` annotation allows you to create a function that runs when a message is received in a topic. Supported options include the following elements:
+
+|Element | Description|
+||-|
+|**name** | (Required) The name of the variable that represents the queue or topic message in function code. |
+| **brokerList** | (Required) The list of Kafka brokers monitored by the trigger. See [Connections](#connections) for more information. |
+| **topic** | (Required) The topic monitored by the trigger. |
+| **cardinality** | (Optional) Indicates the cardinality of the trigger input. The supported values are `ONE` (default) and `MANY`. Use `ONE` when the input is a single message and `MANY` when the input is an array of messages. When you use `MANY`, you must also set a `dataType`. |
+| **dataType** | Defines how Functions handles the parameter value. By default, the value is obtained as a string and Functions tries to deserialize the string to an actual plain-old Java object (POJO). When set to `string`, the input is treated as just a string. When set to `binary`, the message is received as binary data, and Functions tries to deserialize it to the actual parameter type `byte[]`. |
+| **consumerGroup** | (Optional) Kafka consumer group used by the trigger. |
+| **avroSchema** | (Optional) Schema of a generic record when using the Avro protocol. |
+| **authenticationMode** | (Optional) The authentication mode when using Simple Authentication and Security Layer (SASL) authentication. The supported values are `Gssapi`, `Plain` (default), `ScramSha256`, `ScramSha512`. |
+| **username** | (Optional) The username for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **password** | (Optional) The password for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **protocol** | (Optional) The security protocol used when communicating with brokers. The supported values are `plaintext` (default), `ssl`, `sasl_plaintext`, `sasl_ssl`. |
+| **sslCaLocation** | (Optional) Path to CA certificate file for verifying the broker's certificate. |
+| **sslCertificateLocation** | (Optional) Path to the client's certificate. |
+| **sslKeyLocation** | (Optional) Path to client's private key (PEM) used for authentication. |
+| **sslKeyPassword** | (Optional) Password for client's certificate. |
+++
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+| _function.json_ property |Description|
+| | |
+|**type** | (Required) Must be set to `kafkaTrigger`. |
+|**direction** | (Required) Must be set to `in`. |
+|**name** | (Required) The name of the variable that represents the brokered data in function code. |
+| **brokerList** | (Required) The list of Kafka brokers monitored by the trigger. See [Connections](#connections) for more information.|
+| **topic** | (Required) The topic monitored by the trigger. |
+| **cardinality** | (Optional) Indicates the cardinality of the trigger input. The supported values are `ONE` (default) and `MANY`. Use `ONE` when the input is a single message and `MANY` when the input is an array of messages. When you use `MANY`, you must also set a `dataType`. |
+| **dataType** | Defines how Functions handles the parameter value. By default, the value is obtained as a string and Functions tries to deserialize the string to an actual plain-old Java object (POJO). When set to `string`, the input is treated as just a string. When set to `binary`, the message is received as binary data, and Functions tries to deserialize it to the actual parameter type `byte[]`. |
+| **consumerGroup** | (Optional) Kafka consumer group used by the trigger. |
+| **avroSchema** | (Optional) Schema of a generic record when using the Avro protocol. |
+| **authenticationMode** | (Optional) The authentication mode when using Simple Authentication and Security Layer (SASL) authentication. The supported values are `Gssapi`, `Plain` (default), `ScramSha256`, `ScramSha512`. |
+| **username** | (Optional) The username for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information. |
+| **password** | (Optional) The password for SASL authentication. Not supported when `AuthenticationMode` is `Gssapi`. See [Connections](#connections) for more information.|
+| **protocol** | (Optional) The security protocol used when communicating with brokers. The supported values are `plaintext` (default), `ssl`, `sasl_plaintext`, `sasl_ssl`. |
+| **sslCaLocation** | (Optional) Path to CA certificate file for verifying the broker's certificate. |
+| **sslCertificateLocation** | (Optional) Path to the client's certificate. |
+| **sslKeyLocation** | (Optional) Path to client's private key (PEM) used for authentication. |
+| **sslKeyPassword** | (Optional) Password for client's certificate. |
++
+## Usage
++
+# [In-process](#tab/in-process)
+
+Kafka events are passed to the function as `KafkaEventData<string>` objects or arrays. Strings and string arrays that are JSON payloads are also supported.
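
For example, a sketch of reading event metadata from a received `KafkaEventData<string>` instance; the property names are assumed from the extension's public type:

```csharp
using Microsoft.Azure.WebJobs.Extensions.Kafka;
using Microsoft.Extensions.Logging;

public static class KafkaEventMetadataSketch
{
    // Sketch: logs metadata for an event received by a Kafka-triggered function.
    public static void LogEventMetadata(KafkaEventData<string> kevent, ILogger log)
    {
        log.LogInformation(
            $"Topic: {kevent.Topic}, Partition: {kevent.Partition}, " +
            $"Offset: {kevent.Offset}, Timestamp: {kevent.Timestamp}, Value: {kevent.Value}");
    }
}
```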
+
+# [Isolated process](#tab/isolated-process)
+
+Kafka events are currently supported as strings and string arrays that are JSON payloads.
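
Because the event arrives as a JSON string, you typically parse it yourself. The following sketch uses `System.Text.Json` and assumes the serialized event exposes a `Value` property, as in the extension's samples:

```csharp
using System.Text.Json;

public static class KafkaPayloadSketch
{
    // Sketch: extracts the message body from the JSON payload handed to an
    // isolated-process Kafka-triggered function ("Value" is an assumed property name).
    public static string GetMessageValue(string eventData)
    {
        using JsonDocument doc = JsonDocument.Parse(eventData);
        return doc.RootElement.GetProperty("Value").GetString();
    }
}
```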
++++
+Kafka messages are passed to the function as strings and string arrays that are JSON payloads.
++
+In a Premium plan, you must enable runtime scale monitoring for the Kafka trigger to be able to scale out to multiple instances. To learn more, see [Enable runtime scaling](functions-bindings-kafka.md#enable-runtime-scaling).
+
+For a complete set of supported host.json settings for the Kafka trigger, see [host.json settings](functions-bindings-kafka.md#hostjson-settings).
++
+## Next steps
+
+- [Write to an Apache Kafka stream from a function](./functions-bindings-kafka-output.md)
+
+[Avro schema]: http://avro.apache.org/docs/current/
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
+
+ Title: Apache Kafka bindings for Azure Functions
+description: Learn to integrate Azure Functions with an Apache Kafka stream.
++ Last updated : 05/14/2022+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Apache Kafka bindings for Azure Functions overview
+
+The Kafka extension for Azure Functions lets you write values out to [Apache Kafka](https://kafka.apache.org/) topics by using an output binding. You can also use a trigger to invoke your functions in response to messages in Kafka topics.
+
+> [!IMPORTANT]
+> Kafka bindings are only available for Functions on the [Elastic Premium Plan](functions-premium-plan.md) and [Dedicated (App Service) plan](dedicated-plan.md). They're only supported on version 3.x and later of the Functions runtime.
+
+| Action | Type |
+|||
+| Run a function based on a new Kafka event. | [Trigger](./functions-bindings-kafka-trigger.md) |
+| Write to the Kafka event stream. |[Output binding](./functions-bindings-kafka-output.md) |
++
+## Install extension
+
+The extension NuGet package you install depends on the C# mode you're using in your function app:
+
+# [In-process](#tab/in-process)
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kafka).
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Kafka).
+
+<!--
+# [C# script](#tab/csharp-script)
+
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+
+The Kafka extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 2.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
+-->
+++++
+## Install bundle
+
+The Kafka extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets Functions version 3.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
++
+## Enable runtime scaling
+
+To allow your functions to scale properly on the Premium plan when using Kafka triggers and bindings, you need to enable runtime scale monitoring.
+
+# [Azure portal](#tab/portal)
+
+In the Azure portal, go to your function app and select **Configuration**. On the **Function runtime settings** tab, set **Runtime scale monitoring** to **On**.
++
+# [Azure CLI](#tab/azure-cli)
+
+Use the following Azure CLI command to enable runtime scale monitoring:
+
+```azurecli-interactive
+az resource update -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites
+```
+++
+## host.json settings
+
+This section describes the configuration settings available for this binding in versions 3.x and higher. Settings in the host.json file apply to all functions in a function app instance. For more information about function app configuration settings in versions 3.x and later versions, see the [host.json reference for Azure Functions](functions-host-json.md).
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "kafka": {
+ "maxBatchSize": 64,
+ "SubscriberIntervalInSeconds": 1,
+ "ExecutorChannelCapacity": 1,
+ "ChannelFullRetryIntervalInMs": 50
+ }
+ }
+}
+
+```
+
+|Property |Default | Type | Description |
+|||| - |
+| ChannelFullRetryIntervalInMs | 50 | Trigger | Defines the subscriber retry interval, in milliseconds, used when attempting to add items to an at-capacity channel. |
+| ExecutorChannelCapacity | 1| Both| Defines the channel message capacity. Once capacity is reached, the Kafka subscriber pauses until the function catches up. |
+| MaxBatchSize | 64 | Trigger | Maximum batch size when calling a Kafka triggered function. |
+| SubscriberIntervalInSeconds | 1 | Trigger | Defines the minimum frequency, in seconds, at which incoming messages are executed, per function. Applies only when the message volume is less than `MaxBatchSize` / `SubscriberIntervalInSeconds`.|
+
+The following properties, which are inherited from the [Apache Kafka C/C++ client library](https://github.com/edenhill/librdkafka), are also supported in the `kafka` section of host.json. They apply to triggers, to output bindings, or to both, as shown in the following table:
+
+|Property | Applies to | librdkafka equivalent |
+||||
+| AutoCommitIntervalMs | Trigger | `auto.commit.interval.ms` |
+| FetchMaxBytes | Trigger | `fetch.max.bytes` |
+| LibkafkaDebug | Both | `debug` |
+| MaxPartitionFetchBytes | Trigger | `max.partition.fetch.bytes` |
+| MaxPollIntervalMs | Trigger | `max.poll.interval.ms` |
+| MetadataMaxAgeMs | Both | `metadata.max.age.ms` |
+| QueuedMinMessages | Trigger | `queued.min.messages` |
+| QueuedMaxMessagesKbytes | Trigger | `queued.max.messages.kbytes` |
+| ReconnectBackoffMs | Trigger | `reconnect.backoff.ms` |
+| ReconnectBackoffMaxMs | Trigger | `reconnect.backoff.max.ms` |
+| SessionTimeoutMs | Trigger | `session.timeout.ms` |
+| SocketKeepaliveEnable | Both | `socket.keepalive.enable` |
+| StatisticsIntervalMs | Trigger | `statistics.interval.ms` |
++
+## Next steps
+
+- [Run a function from an Apache Kafka event stream](./functions-bindings-kafka-trigger.md)
+
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
The usage of the binding depends on the extension package version and the C# mod
# [In-process](#tab/in-process)
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
+An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
# [Isolated process](#tab/isolated-process)
-An isolated process class library compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
+An [isolated process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
# [C# script](#tab/csharp-script)
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
Expressed as a string, the `TimeSpan` format is `hh:mm:ss` when `hh` is less tha
|--|-| | "01:00:00" | every hour | | "00:01:00" | every minute |
-| "25:00:00" | every 25 days |
+| "25:00:00:00"| every 25 days |
| "1.00:00:00" | every day | ### Scale-out
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
Some configuration settings are slot-specific. The following lists detail which
* Custom domain names * Non-public certificates and TLS/SSL settings * Scale settings
-* WebJobs schedulers
* IP restrictions * Always On * Diagnostic settings
Some configuration settings are slot-specific. The following lists detail which
* Connection strings (can be configured to stick to a slot) * Handler mappings * Public certificates
-* WebJobs content
* Hybrid connections * * Virtual network integration * * Service endpoints *
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Title: Develop C# class library functions using Azure Functions
-description: Understand how to use C# to develop and publish code as class libraries that runs in-process with the Azure Functions runtime.
+description: Understand how to use C# to develop and publish code as class libraries that run in-process with the Azure Functions runtime.
ms.devlang: csharp Previously updated : 02/08/2022 Last updated : 05/12/2022 # Develop C# class library functions using Azure Functions
Last updated 02/08/2022
This article is an introduction to developing Azure Functions by using C# in .NET class libraries. >[!IMPORTANT]
->This article supports .NET class library functions that run in-process with the runtime. Functions also supports .NET 5.x by running your C# functions out-of-process and isolated from the runtime. To learn more, see [.NET isolated process functions](dotnet-isolated-process-guide.md).
+>This article supports .NET class library functions that run in-process with the runtime. Your C# functions can also run out-of-process and isolated from the Functions runtime. The isolated model is the only way to run .NET 5.x and the preview of .NET Framework 4.8 using recent versions of the Functions runtime. To learn more, see [.NET isolated process functions](dotnet-isolated-process-guide.md).
As a C# developer, you may also be interested in one of the following articles:
The generated *function.json* file includes a `configurationSource` property tha
The *function.json* file generation is performed by the NuGet package [Microsoft\.NET\.Sdk\.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions).
-The same package is used for both version 1.x and 2.x of the Functions runtime. The target framework is what differentiates a 1.x project from a 2.x project. Here are the relevant parts of *.csproj* files, showing different target frameworks with the same `Sdk` package:
+The same package is used for both version 1.x and 2.x of the Functions runtime. The target framework is what differentiates a 1.x project from a 2.x project. Here are the relevant parts of the `.csproj` files, showing different target frameworks with the same `Sdk` package:
# [v2.x+](#tab/v2)
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
zone_pivot_groups: programming-languages-set-functions
| Version | Support level | Description | | | | |
-| 4.x | GA | _Recommended runtime version for functions in all languages._ Use this version to [run C# functions on .NET 6.0](functions-dotnet-class-library.md#supported-versions). |
+| 4.x | GA | **_Recommended runtime version for functions in all languages._** Use this version to [run C# functions on .NET 6.0 and .NET Framework 4.8](functions-dotnet-class-library.md#supported-versions). |
| 3.x | GA | Supports all languages. Use this version to [run C# functions on .NET Core 3.1 and .NET 5.0](functions-dotnet-class-library.md#supported-versions).| | 2.x | GA | Supported for [legacy version 2.x apps](#pinning-to-version-20). This version is in maintenance mode, with enhancements provided only in later versions.| | 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. |
To learn more, see [How to target Azure Functions runtime versions](set-runtime-
### Pinning to a specific minor version
-To resolve issues your function app may have when running on the latest major version, you have to temporatily pin your app to a specific minor version. This gives you time to get your app running correctly on the latest major version. The way that you pin to a minor version differs between Windows and Linux. To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
+To resolve issues your function app may have when running on the latest major version, you have to temporarily pin your app to a specific minor version. This gives you time to get your app running correctly on the latest major version. The way that you pin to a minor version differs between Windows and Linux. To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
Older minor versions are periodically removed from Functions. For the latest news about Azure Functions releases, including the removal of specific older minor versions, monitor [Azure App Service announcements](https://github.com/Azure/app-service-announcements/issues).
The following are some changes to be aware of before upgrading a 3.x app to 4.x.
- Default and maximum timeouts are now enforced in 4.x for function app running on Linux in a Consumption plan. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915)) -- Azure Functions 4.x uses Azure.Identity and Azure.Security.KeyVault.Secrets for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. For more information about how to configure function app settings, see the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories). ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
+- Azure Functions 4.x uses `Azure.Identity` and `Azure.Security.KeyVault.Secrets` for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. For more information about how to configure function app settings, see the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories). ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
- Function apps that share storage accounts now fail to start when their host IDs are the same. For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations). ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))
The following are some changes to be aware of before upgrading a 3.x app to 4.x.
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-typescript" -- Node.js 10 and 12 are not supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
+- Node.js versions 10 and 12 are not supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
- Output serialization in Node.js apps was updated to address previous inconsistencies. ([#2007](https://github.com/Azure/Azure-Functions/issues/2007)) ::: zone-end
The main differences between versions when running .NET class library functions
* Timer trigger object is camelCase instead of PascalCase
-* Event Hub triggered functions with `dataType` binary will receive an array of `binary` instead of `string`.
+* Event hub triggered functions with `dataType` binary will receive an array of `binary` instead of `string`.
* The HTTP request payload can no longer be accessed via `context.bindingData.req`. It can still be accessed as an input parameter, `context.req`, and in `context.bindings`.
While it's possible to do an "in-place" upgrade by manually updating the app con
Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exception for this HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](./functions-bindings-register.md).
-There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hub `path` property is now `eventHubName`. See the [existing binding table](#bindings) for links to documentation for each binding.
+There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hubs `path` property is now `eventHubName`. See the [existing binding table](#bindings) for links to documentation for each binding.
### Changes in features and functionality after version 1.x
A few features were removed, updated, or replaced after version 1.x. This sectio
In version 2.x, the following changes were made:
-* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When upgrading an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
+* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When you upgrade an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
In version 2.x, the following changes were made:
* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
-* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (.fsx) functions has been removed. Compiled F# functions (.fs) are still supported.
+* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (`.fsx` files) functions has been removed. Compiled F# functions (.fs) are still supported.
-* The URL format of Event Grid trigger webhooks has been changed to `https://{app}/runtime/webhooks/{triggerName}`.
+* The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`.
### Locally developed application versions
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v4</AzureFunctionsVersion> ```
+You can also choose `net6.0` or `net48` as the target framework if you are using [.NET isolated process functions](dotnet-isolated-process-guide.md). Support for `net48` is currently in preview.
+ > [!NOTE] > Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension be at least `4.0.0`.
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v3</AzureFunctionsVersion> ```
+You can also choose `net5.0` as the target framework if you are using [.NET isolated process functions](dotnet-isolated-process-guide.md).
+ > [!NOTE] > Azure Functions 3.x and .NET requires the `Microsoft.NET.Sdk.Functions` extension be at least `3.0.0`.
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
You can also [create log alert rules using Azure Resource Manager templates](../
1. Write a query that will find the log events for which you want to create an alert. You can use the [alert query examples article](../logs/queries.md) to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md). Also, [learn how to create optimized alert queries](alerts-log-query.md). 1. From the top command bar, Select **+ New Alert rule**.
- :::image type="content" source="media/alerts-log/alerts-create-new-alert-rule.png" alt-text="Create new alert rule.":::
+ :::image type="content" source="media/alerts-log/alerts-create-new-alert-rule.png" alt-text="Create new alert rule." lightbox="media/alerts-log/alerts-create-new-alert-rule-expanded.png":::
1. The **Condition** tab opens, populated with your log query.
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
The following sections identify common symptoms, possible causes, and resolution
* [Sync the connector](itsmc-resync-servicenow.md). * Check the [dashboard](itsmc-dashboard.md) and review the errors in the section for connector status. Then review the [common errors and their resolutions](itsmc-dashboard-errors.md)
-### Configuration Item is blank in incidents received from ServiceNow
+### In the incidents received from ServiceNow, the configuration item is blank
**Cause**: There can be several reasons for this:
-* Only Log alerts supports the configuration item but the alert is another type of alert
-* To contain the configuration item, the search results must include the **Computer** or **Resource** column
-* The values in the configuration item field do not match an entry in the CMDB
+* The alert is not a log alert. Configuration items are only supported by log alerts.
+* The search results do not include the **Computer** or **Resource** column.
+* The values in the configuration item field do not match an entry in the CMDB.
**Resolution**:
-* Check whether it is log alert - if not configuration item not supported
-* Check whether search results have column Computer or Resource -if not it should be added to the query
-* Check whether values in the columns Computer/Resource are identical to the values in CMDB- if not a new entry should be added to the CMDB
+* Check if the alert is a log alert. If it isn't a log alert, configuration items are not supported.
+* If the search results do not have a Computer or Resource column, add them to the query.
+* Check that the values in the Computer and Resource columns are identical to the values in the CMDB. If they are not, add a new entry to the CMDB with the matching values.
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-overview.md
ms.contributor: charles.weininger Previously updated : 04/26/2022 Last updated : 05/11/2022
Profiler works with .NET applications deployed on the following Azure services.
| [Azure Container Instances for Windows](profiler-containers.md) | No | Yes | No | | [Azure Container Instances for Linux](profiler-containers.md) | No | Yes | No | | Kubernetes | No | Yes | No |
-| Azure Functions | No | No | No |
+| Azure Functions | Yes | Yes | No |
| Azure Spring Cloud | N/A | No | No | | [Azure Service Fabric](profiler-servicefabric.md) | Yes | Yes | No |
azure-monitor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices.md
This scenario provides recommended guidance for configuring features of Azure Mo
Azure Monitor is available the moment you create an Azure subscription. The Activity log immediately starts collecting events about activity in the subscription, and platform metrics are collected for any Azure resources you created. Features such as metrics explorer are available to analyze data. Other features require configuration. This scenario identifies the configuration steps required to take advantage of all Azure Monitor features. It also makes recommendations for which features you should leverage and how to determine configuration options based on your particular requirements.
-Enabling Azure Monitor to monitor of all your Azure resources is a combination of configuring Azure Monitor components and configuring Azure resources to generate monitoring data for Azure Monitor to collect. The goal of a complete implementation is to collect all useful data from all of your cloud resources and applications and enable the entire set of Azure Monitor features based on that data.
+Enabling Azure Monitor to monitor all of your Azure resources is a combination of configuring Azure Monitor components and configuring Azure resources to generate monitoring data for Azure Monitor to collect. The goal of a complete implementation is to collect all useful data from all of your cloud resources and applications and enable the entire set of Azure Monitor features based on that data.
> [!IMPORTANT]
azure-netapp-files Cross Region Replication Display Health Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-display-health-status.md
na Previously updated : 01/28/2022 Last updated : 05/16/2022 # Display health and monitor status of replication relationship
Follow the following steps to create [alert rules in Azure Monitor](../azure-mon
5. From the Condition tab, select **Add condition**. From there, find a signal called ΓÇ£**is volume replication healthy**ΓÇ¥. 6. There you'll see **Condition of the relationship, 1 or 0** and the **Configure Signal Logic** window is displayed. 7. To check if the replication is _unhealthy_:
- * Set **Operator** to `Less than or equal to`.
+ * Set **Operator** to `Less than`.
* Set **Aggregation type** to `Average`.
- * Set **Threshold** value to `0`.
+ * Set **Threshold** value to `1`.
* Set **Unit** to `Count`. 8. To check if the replication is _healthy_: * Set **Operator** to `Greater than or equal to`.
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
description: Describes the functions to use in a Bicep file to retrieve values a
Previously updated : 04/28/2022 Last updated : 05/16/2022 # Resource functions for Bicep
You can use the response from `pickZones` to determine whether to provide null f
## providers
-**The providers function has been deprecated.** We no longer recommend using it. If you used this function to get an API version for the resource provider, we recommend that you provide a specific API version in your template. Using a dynamically returned API version can break your template if the properties change between versions.
+**The providers function has been deprecated in Bicep.** We no longer recommend using it. If you used this function to get an API version for the resource provider, we recommend that you provide a specific API version in your Bicep file. Using a dynamically returned API version can break your template if the properties change between versions.
+
+The [providers operation](/rest/api/resources/providers) is still available through the REST API. It can be used outside of a Bicep file to get information about a resource provider.
Namespace: [az](bicep-functions.md#namespaces-for-functions).
You use this function to get the resource ID for resources that are [deployed to
### managementGroupResourceID example
-The following template creates a policy definition, and assign the policy defintion. It uses the `managementGroupResourceId` function to get the resource ID for policy definition.
+The following template creates and assigns a policy definition. It uses the `managementGroupResourceId` function to get the resource ID for the policy definition.
```bicep targetScope = 'managementGroup'
azure-resource-manager Scenarios Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-rbac.md
description: Describes how to create role assignments and role definitions by us
Previously updated : 12/20/2021 Last updated : 05/15/2022 # Create Azure RBAC resources by using Bicep
-Azure has a powerful role-based access control (RBAC) system. By using Bicep, you can programmatically define your RBAC role assignments and role definitions.
+Azure has a powerful role-based access control (RBAC) system. For more information on Azure RBAC, see [What is Azure Role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) By using Bicep, you can programmatically define your RBAC role assignments and role definitions.
## Role assignments
+Role assignments enable you to grant a principal (such as a user, a group, or a service principal) access to a specific Azure resource.
+ To define a role assignment, create a resource with type [`Microsoft.Authorization/roleAssignments`](/azure/templates/microsoft.authorization/roleassignments?tabs=bicep). A role definition has multiple properties, including a scope, a name, a role definition ID, a principal ID, and a principal type. ### Scope
+Role assignments apply at a specific *scope*, which defines the resource or set of resources that you're granting access to. For more information, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
+ Role assignments are [extension resources](scope-extension-resources.md), which means they apply to another resource. The following example shows how to create a storage account and a role assignment scoped to that storage account: ::: code language="bicep" source="~/azure-docs-bicep-samples/samples/scenarios-rbac/scope.bicep" highlight="17" :::
-If you don't explicitly specify the scope, Bicep uses the file's `targetScope`. In the following example, no `scope` property is specified, so the role assignment applies to the subscription:
+If you don't explicitly specify the scope, Bicep uses the file's `targetScope`. In the following example, no `scope` property is specified, so the role assignment is scoped to the subscription:
::: code language="bicep" source="~/azure-docs-bicep-samples/samples/scenarios-rbac/scope-default.bicep" highlight="4" :::
If you don't explicitly specify the scope, Bicep uses the file's `targetScope`.
### Name
-A role assignment's resource name must be a globally unique identifier (GUID). It's a good practice to create a GUID that uses the scope, principal ID, and role ID together. Role assignment resource names must be unique within the Azure Active Directory tenant, even if the scope is narrower.
+A role assignment's resource name must be a globally unique identifier (GUID).
-> [!TIP]
-> Use the `guid()` function to help you to create a deterministic GUID for your role assignment names, like in this example:
->
-> ```bicep
-> name: guid(subscription().id, principalId, roleDefinitionResourceId)
-> ```
+Role assignment resource names must be unique within the Azure Active Directory tenant, even if the scope is narrower.
+
+For your Bicep deployment to be repeatable, it's important for the name to be deterministic - in other words, to use the same name every time you deploy. It's a good practice to use the `guid()` function to create a deterministic GUID from the scope, principal ID, and role ID together, like in this example:
+
+```bicep
+name: guid(subscription().id, principalId, roleDefinitionResourceId)
+```
### Role definition ID
The following example shows how to create a user-assigned managed identity and a
::: code language="bicep" source="~/azure-docs-bicep-samples/samples/scenarios-rbac/managed-identity.bicep" highlight="15-16" :::
+### Resource deletion behavior
+
+When you delete a user, group, service principal, or managed identity from Azure AD, it's a good practice to delete any role assignments. They aren't deleted automatically.
+
+Any role assignments that refer to a deleted principal ID become invalid. If you try to reuse a role assignment's name for another role assignment, the deployment will fail.
+ ## Custom role definitions
+Custom role definitions enable you to define a set of permissions that can then be assigned to a principal by using a role assignment. For more information on role definitions, see [Understand Azure role definitions](../../role-based-access-control/role-definitions.md).
+ To create a custom role definition, define a resource of type `Microsoft.Authorization/roleDefinitions`. See the [Create a new role def via a subscription level deployment](https://azure.microsoft.com/resources/templates/create-role-def/) quickstart for an example. Role definition resource names must be unique within the Azure Active Directory tenant, even if the assignable scopes are narrower.
Role definition resource names must be unique within the Azure Active Directory
- [Create a new role def via a subscription level deployment](https://azure.microsoft.com/resources/templates/create-role-def/) - [Assign a role at subscription scope](https://azure.microsoft.com/resources/templates/subscription-role-assignment/) - [Assign a role at tenant scope](https://azure.microsoft.com/resources/templates/tenant-role-assignment/)
- - [Create a resourceGroup, apply a lock and RBAC](https://azure.microsoft.com/resources/templates/create-rg-lock-role-assignment/)
+ - [Create a resourceGroup, apply a lock and RBAC](https://azure.microsoft.com/resources/templates/create-rg-lock-role-assignment/)
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> - [Microsoft.ClassicSubscription](#microsoftclassicsubscription) > - [Microsoft.CognitiveServices](#microsoftcognitiveservices) > - [Microsoft.Commerce](#microsoftcommerce)
+> - [Microsoft.Communication](#microsoftcommunication)
> - [Microsoft.Compute](#microsoftcompute) > - [Microsoft.Confluent](#microsoftconfluent) > - [Microsoft.Consumption](#microsoftconsumption)
Jump to a resource provider namespace:
> | ratecard | No | No | No | > | usageaggregates | No | No | No |
+## Microsoft.Communication
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Resource group | Subscription | Region move |
+> | - | -- | - | -- |
+> | communicationservices | Yes | Yes | No |
+ ## Microsoft.Compute > [!IMPORTANT]
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
description: Shows the rules and restrictions for naming Azure resources.
Previously updated : 05/06/2022 Last updated : 05/16/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | | | | | > | communicationServices | global | 1-63 | Alphanumerics and hyphens.<br><br>Can't use underscores. |
+## Microsoft.ConfidentialLedger
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | ledgers | Resource group | 3-32 | Alphanumerics and hyphens.<br><br>Can't start or end with hyphen. |
+ ## Microsoft.Consumption > [!div class="mx-tableFixed"]
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 03/31/2022 Last updated : 05/16/2022
Cosmos DB isn't a zonal resource but you can use the `pickZones` function to det
## providers
-**The providers function has been deprecated.** We no longer recommend using it. If you used this function to get an API version for the resource provider, we recommend that you provide a specific API version in your template. Using a dynamically returned API version can break your template if the properties change between versions.
+**The providers function has been deprecated in ARM templates.** We no longer recommend using it. If you used this function to get an API version for the resource provider, we recommend that you provide a specific API version in your template. Using a dynamically returned API version can break your template if the properties change between versions.
In Bicep, the [providers](../bicep/bicep-functions-resource.md#providers) function is deprecated.
+The [providers operation](/rest/api/resources/providers) is still available through the REST API. It can be used outside of an ARM template to get information about a resource provider.
+ ## reference `reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])`
You use this function to get the resource ID for resources that are [deployed to
### managementGroupResourceID example
-The following template creates a policy definition, and assign the policy defintion. It uses the `managementGroupResourceId` function to get the resource ID for policy definition.
+The following template creates and assigns a policy definition. It uses the `managementGroupResourceId` function to get the resource ID for the policy definition.
```json {
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
description: Describes common deployment errors for Azure resources that are deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 03/17/2022 Last updated : 05/16/2022
If your error code isn't listed, submit a GitHub issue. On the right side of the
| DeploymentJobSizeExceeded | Simplify your template to reduce size. | [Resolve template size errors](error-job-size-exceeded.md) | | DnsRecordInUse | The DNS record name must be unique. Enter a different name. | | | ImageNotFound | Check VM image settings. | |
+| InaccessibleImage | Azure Container Instance deployment fails. You might need to include the image's tag with the syntax `registry/image:tag` to deploy the container. For a private registry, verify your credentials are correct. | [Find error code](find-error-code.md) |
| InternalServerError | Caused by a temporary problem. Retry the deployment. | | | InUseSubnetCannotBeDeleted | This error can occur when you try to update a resource, if the request process deletes and creates the resource. Make sure to specify all unchanged values. | [Update resource](/azure/architecture/guide/azure-resource-manager/advanced-templates/update-resource) | | InvalidAuthenticationTokenTenant | Get access token for the appropriate tenant. You can only get the token from the tenant that your account belongs to. | |
If your error code isn't listed, submit a GitHub issue. On the right side of the
| SubnetsNotInSameVnet | A virtual machine can only have one virtual network. When deploying several NICs, make sure they belong to the same virtual network. | [Windows VM multiple NICs](../../virtual-machines/windows/multiple-nics.md) <br><br> [Linux VM multiple NICs](../../virtual-machines/linux/multiple-nics.md) | | SubnetIsFull | There aren't enough available addresses in the subnet to deploy resources. You can release addresses from the subnet, use a different subnet, or create a new subnet. | [Manage subnets](../../virtual-network/virtual-network-manage-subnet.md) and [Virtual network FAQ](../../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) <br><br> [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) | | SubscriptionNotFound | A specified subscription for deployment can't be accessed. It could be the subscription ID is wrong, the user deploying the template doesn't have adequate permissions to deploy to the subscription, or the subscription ID is in the wrong format. When using ARM template nested deployments to deploy across scopes, provide the subscription's GUID. | [ARM template deploy across scopes](../templates/deploy-to-resource-group.md) <br><br> [Bicep file deploy across scopes](../bicep/deploy-to-resource-group.md) |
-| SubscriptionNotRegistered | When deploying a resource, the resource provider must be registered for your subscription. When you use an Azure Resource Manager template for deployment, the resource provider is automatically registered in the subscription. Sometimes, the automatic registration doesn't complete in time. To avoid this intermittent error, register the resource provider before deployment. | [Resolve registration](error-register-resource-provider.md) |
+| SubscriptionNotRegistered | When a resource is deployed, the resource provider must be registered for your subscription. When you use an Azure Resource Manager template for deployment, the resource provider is automatically registered in the subscription. Sometimes, the automatic registration doesn't complete in time. To avoid this intermittent error, register the resource provider before deployment. | [Resolve registration](error-register-resource-provider.md) |
| TemplateResourceCircularDependency | Remove unnecessary dependencies. | [Resolve circular dependencies](error-invalid-template.md#circular-dependency) | | TooManyTargetResourceGroups | Reduce number of resource groups for a single deployment. | [ARM template deploy across scopes](../templates/deploy-to-resource-group.md) <br><br> [Bicep file deploy across scopes](../bicep/deploy-to-resource-group.md) |
azure-resource-manager Find Error Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/find-error-code.md
Title: Find error codes
description: Describes how to find error codes to troubleshoot Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 11/04/2021 Last updated : 05/16/2022 # Find error codes
-When an Azure resource deployment fails using Azure Resource Manager templates (ARM templates) or Bicep files, and error code is received. This article describes how to find error codes so you can troubleshoot the problem. For more information about error codes, see [common deployment errors](common-deployment-errors.md).
+When an Azure resource deployment fails using Azure Resource Manager templates (ARM templates) or Bicep files, an error code is received. This article describes how to find error codes so you can troubleshoot the problem. For more information about error codes, see [common deployment errors](common-deployment-errors.md).
## Error types
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer (formerly Azure Video Analyzer for Media) release not
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer (formerly Azure Video Analyzer for Media). Previously updated : 05/12/2022 Last updated : 05/16/2022
Azure Video Indexer is now part of [Network Service Tags](network-security.md).
You can now enable or disable the celebrity recognition model at the account level (classic accounts only). To turn the model on or off, go to the account settings and toggle the model on or off. Once you disable the model, Video Indexer insights will not include the output of the celebrity model and will not run the celebrity model pipeline.
-### Azure Video Indexer repository Name
+### Azure Video Indexer repository name
As of May 1st, the Azure Video Indexer widget repository was renamed. Use https://www.npmjs.com/package/@azure/video-indexer-widgets instead.
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-build-chat.md
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
dotnet add package Microsoft.Extensions.Azure ```
-2. Add a `SampleChatHub` class to handle hub events. Add DI for the service middleware and service client. Don't forget to replace `<connection_string>` with the one of your services.
+2. Add a `Sample_ChatApp` class to handle hub events. Add DI for the service middleware and service client. Don't forget to replace `<connection_string>` with your service's connection string.
```csharp using Microsoft.Azure.WebPubSub.AspNetCore;
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
builder.Services.AddWebPubSub( o => o.ServiceEndpoint = new ServiceEndpoint("<connection_string>"))
- .AddWebPubSubServiceClient<SampleChatHub>();
+ .AddWebPubSubServiceClient<Sample_ChatApp>();
var app = builder.Build();
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
app.Run();
- sealed class SampleChatHub : WebPubSubHub
+ sealed class Sample_ChatApp : WebPubSubHub
{ } ```
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
```csharp app.UseEndpoints(endpoints => {
- endpoints.MapGet("/negotiate", async (WebPubSubServiceClient<SampleChatHub> serviceClient, HttpContext context) =>
+ endpoints.MapGet("/negotiate", async (WebPubSubServiceClient<Sample_ChatApp> serviceClient, HttpContext context) =>
{ var id = context.Request.Query["id"]; if (id.Count != 1)
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
const { WebPubSubServiceClient } = require('@azure/web-pubsub'); const app = express();
- const hubName = 'chat';
+ const hubName = 'Sample_ChatApp';
const port = 8080; let serviceClient = new WebPubSubServiceClient(process.env.WebPubSubConnectionString, hubName);
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
// create the service client WebPubSubServiceClient service = new WebPubSubServiceClientBuilder() .connectionString(args[0])
- .hub("chat")
+ .hub("Sample_ChatApp")
.buildClient(); // start a server
Here we're using Web PubSub middleware SDK, there is already an implementation t
```csharp app.UseEndpoints(endpoints => {
- endpoints.MapGet("/negotiate", async (WebPubSubServiceClient<SampleChatHub> serviceClient, HttpContext context) =>
+ endpoints.MapGet("/negotiate", async (WebPubSubServiceClient<Sample_ChatApp> serviceClient, HttpContext context) =>
{ var id = context.Request.Query["id"]; if (id.Count != 1)
Here we're using Web PubSub middleware SDK, there is already an implementation t
await context.Response.WriteAsync(serviceClient.GetClientAccessUri(userId: id).AbsoluteUri); });
- endpoints.MapWebPubSubHub<SampleChatHub>("/eventhandler/{*path}");
+ endpoints.MapWebPubSubHub<Sample_ChatApp>("/eventhandler/{*path}");
}); ```
-2. Go the `SampleChatHub` we created in previous step. Add a constructor to work with `WebPubSubServiceClient<SampleChatHub>` so we can use to invoke service. And override `OnConnectedAsync()` method to respond when `connected` event is triggered.
+2. Go to the `Sample_ChatApp` class we created in the previous step. Add a constructor that accepts a `WebPubSubServiceClient<Sample_ChatApp>` so we can use it to invoke the service. Then override the `OnConnectedAsync()` method to respond when the `connected` event is triggered.
```csharp
- sealed class SampleChatHub : WebPubSubHub
+ sealed class Sample_ChatApp : WebPubSubHub
{
- private readonly WebPubSubServiceClient<SampleChatHub> _serviceClient;
+ private readonly WebPubSubServiceClient<Sample_ChatApp> _serviceClient;
- public SampleChatHub(WebPubSubServiceClient<SampleChatHub> serviceClient)
+ public Sample_ChatApp(WebPubSubServiceClient<Sample_ChatApp> serviceClient)
{ _serviceClient = serviceClient; }
Use the Azure CLI [az webpubsub hub create](/cli/azure/webpubsub/hub#az-webpubsu
> Replace &lt;domain-name&gt; with the name ngrok printed. ```azurecli-interactive
-az webpubsub hub create -n "<your-unique-resource-name>" -g "myResourceGroup" --hub-name "SampleChatHub" --event-handler url-template="https://<domain-name>.ngrok.io/eventHandler" user-event-pattern="*" system-event="connected"
+az webpubsub hub create -n "<your-unique-resource-name>" -g "myResourceGroup" --hub-name "Sample_ChatApp" --event-handler url-template="https://<domain-name>.ngrok.io/eventHandler" user-event-pattern="*" system-event="connected"
``` After the update is completed, open the home page http://localhost:8080/index.html, enter your user name, and you'll see the connected message printed in the server console.
Besides system events like `connected` or `disconnected`, client can also send m
# [C#](#tab/csharp)
-Implement the OnMessageReceivedAsync() method in SampleChatHub.
+Implement the OnMessageReceivedAsync() method in Sample_ChatApp.
1. Handle message event. ```csharp
- sealed class SampleChatHub : WebPubSubHub
+ sealed class Sample_ChatApp : WebPubSubHub
{
- private readonly WebPubSubServiceClient<SampleChatHub> _serviceClient;
+ private readonly WebPubSubServiceClient<Sample_ChatApp> _serviceClient;
- public SampleChatHub(WebPubSubServiceClient<SampleChatHub> serviceClient)
+ public Sample_ChatApp(WebPubSubServiceClient<Sample_ChatApp> serviceClient)
{ _serviceClient = serviceClient; }
backup Delete Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/delete-recovery-services-vault.md
This script helps you to delete a Recovery Services vault.
``` 1. Launch PowerShell 7 as Administrator.
-1. Before you run the script for vault deletion, run the following command to upgrade the _Az module_ to the latest version:
-
- ```azurepowershell-interactive
- Uninstall-Module -Name Az.RecoveryServices
- Set-ExecutionPolicy -ExecutionPolicy Unrestricted
- Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber
- ```
1. In the PowerShell window, change the directory to the location where the file is saved, and then run the file using **./NameOfFile.ps1**.
1. Provide authentication via browser by signing into your Azure account.
The script will continue to delete all the backup items and ultimately the entir
## Script ```azurepowershell-interactive
+Write-Host "WARNING: Please ensure that you have at least PowerShell 7 before running this script. Visit https://go.microsoft.com/fwlink/?linkid=2181071 for the procedure." -ForegroundColor Yellow
+$RSmodule = Get-Module -Name Az.RecoveryServices -ListAvailable | Sort-Object -Property Version -Descending | Select-Object -First 1
+$NWmodule = Get-Module -Name Az.Network -ListAvailable | Sort-Object -Property Version -Descending | Select-Object -First 1
+$RSversion = $RSmodule.Version.ToString()
+$NWversion = $NWmodule.Version.ToString()
+
+if([version]$RSversion -lt [version]"5.3.0")
+{
+ Uninstall-Module -Name Az.RecoveryServices
+ Set-ExecutionPolicy -ExecutionPolicy Unrestricted
+ Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber
+}
+
+if([version]$NWversion -lt [version]"4.15.0")
+{
+ Uninstall-Module -Name Az.Network
+ Set-ExecutionPolicy -ExecutionPolicy Unrestricted
+ Install-Module -Name Az.Network -Repository PSGallery -Force -AllowClobber
+}
+ Connect-AzAccount $VaultName = "Vault name" #enter vault name
$backupServersMARSFin = Get-AzRecoveryServicesBackupContainer -ContainerType "Wi
$backupServersMABSFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" } $backupServersDPMFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" } $pvtendpointsFin = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
-Write-Host "Number of backup items left in the vault and which need to be deleted:" $backupItemsVMFin.count "Azure VMs" $backupItemsSQLFin.count "SQL Server Backup Items" $backupContainersSQLFin.count "SQL Server Backup Containers" $protectableItemsSQLFin.count "SQL Server Instances" $backupItemsSAPFin.count "SAP HANA backup items" $backupContainersSAPFin.count "SAP HANA Backup Containers" $backupItemsAFSFin.count "Azure File Shares" $StorageAccountsFin.count "Storage Accounts" $backupServersMARSFin.count "MARS Servers" $backupServersMABSFin.count "MAB Servers" $backupServersDPMFin.count "DPM Servers" $pvtendpointsFin.count "Private endpoints"
-Write-Host "Number of ASR items left in the vault and which need to be deleted:" $ASRProtectedItems "ASR protected items" $ASRPolicyMappings "ASR policy mappings" $fabricCount "ASR Fabrics" $pvtendpointsFin.count "Private endpoints. Warning: This script will only remove the replication configuration from Azure Site Recovery and not from the source. Please cleanup the source manually. Visit https://go.microsoft.com/fwlink/?linkid=2182781 to learn more"
-Remove-AzRecoveryServicesVault -Vault $VaultToDelete
+
+#Display items which are still present in the vault and might be preventing vault deletion.
+
+if($backupItemsVMFin.count -ne 0) {Write-Host $backupItemsVMFin.count "Azure VM backups are still present in the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($backupItemsSQLFin.count -ne 0) {Write-Host $backupItemsSQLFin.count "SQL Server Backup Items are still present in the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($backupContainersSQLFin.count -ne 0) {Write-Host $backupContainersSQLFin.count "SQL Server Backup Containers are still registered to the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($protectableItemsSQLFin.count -ne 0) {Write-Host $protectableItemsSQLFin.count "SQL Server Instances are still present in the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($backupItemsSAPFin.count -ne 0) {Write-Host $backupItemsSAPFin.count "SAP HANA Backup Items are still present in the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($backupContainersSAPFin.count -ne 0) {Write-Host $backupContainersSAPFin.count "SAP HANA Backup Containers are still registered to the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($backupItemsAFSFin.count -ne 0) {Write-Host $backupItemsAFSFin.count "Azure File Shares are still present in the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($StorageAccountsFin.count -ne 0) {Write-Host $StorageAccountsFin.count "Storage Accounts are still registered to the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($backupServersMARSFin.count -ne 0) {Write-Host $backupServersMARSFin.count "MARS Servers are still registered to the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($backupServersMABSFin.count -ne 0) {Write-Host $backupServersMABSFin.count "MAB Servers are still registered to the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($backupServersDPMFin.count -ne 0) {Write-Host $backupServersDPMFin.count "DPM Servers are still registered to the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($ASRProtectedItems -ne 0) {Write-Host $ASRProtectedItems "ASR protected items are still present in the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($ASRPolicyMappings -ne 0) {Write-Host $ASRPolicyMappings "ASR policy mappings are still present in the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($fabricCount -ne 0) {Write-Host $fabricCount "ASR Fabrics are still present in the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+if($pvtendpointsFin.count -ne 0) {Write-Host $pvtendpointsFin.count "Private endpoints are still linked to the vault. Remove the same for successful vault deletion." -ForegroundColor Red}
+
+$accesstoken = Get-AzAccessToken
+$token = $accesstoken.Token
+$authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token
+}
+$restUri = "https://management.azure.com/subscriptions/"+$SubscriptionId+'/resourcegroups/'+$ResourceGroup+'/providers/Microsoft.RecoveryServices/vaults/'+$VaultName+'?api-version=2021-06-01&operation=DeleteVaultUsingPS'
+$response = Invoke-RestMethod -Uri $restUri -Headers $authHeader -Method DELETE
+
+$VaultDeleted = Get-AzRecoveryServicesVault -Name $VaultName -ResourceGroupName $ResourceGroup -erroraction 'silentlycontinue'
+if ($null -eq $VaultDeleted){
+    Write-Host "Recovery Services Vault" $VaultName "successfully deleted"
+}
#Finish ```
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 02/14/2022 Last updated : 05/16/2022
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- May 2022
+ - [Archive tier support for Azure Virtual Machines is now generally available](#archive-tier-support-for-azure-virtual-machines-is-now-generally-available)
- February 2022 - [Multiple backups per day for Azure Files is now generally available](#multiple-backups-per-day-for-azure-files-is-now-generally-available) - January 2022
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Archive tier support for Azure Virtual Machines is now generally available
+
+Azure Backup now supports the movement of recovery points to the Vault-archive tier for Azure Virtual Machines from the Azure portal. This allows you to move the archivable/recommended recovery points (corresponding to a backup item) to the Vault-archive tier in one go.
+
+Azure Backup also supports the Vault-archive tier for SQL Server in Azure VM and SAP HANA in Azure VM. This support has been extended to the Azure portal.
+
+For more information, see [Archive tier support in Azure Backup](archive-tier-support.md).
+
## Multiple backups per day for Azure Files is now generally available
Low RPO (Recovery Point Objective) is a key requirement for Azure Files that contains the frequently updated, business-critical data. To ensure minimal data loss in the event of a disaster or unwanted changes to file share content, you may prefer to take backups more frequently than once a day.
For more information, see [how to protect Recovery Services vault and manage cri
Low RPO (Recovery Point Objective) is a key requirement for Azure Files that contains the frequently updated, business-critical data. To ensure minimal data loss in the event of a disaster or unwanted changes to file share content, you may prefer to take backups more frequently than once a day.
-Using Azure Backup you can now create a backup policy or modify an existing backup policy to take multiple snapshots in a day. With this capability, you can also define the duration in which your backup jobs would trigger. This capability empowers you to align your backup schedule with the working hours when there are frequent updates to Azure Files content.
+Using Azure Backup, you can now create a backup policy or modify an existing backup policy to take multiple snapshots in a day. With this capability, you can also define the duration in which your backup jobs would trigger. This capability empowers you to align your backup schedule with the working hours when there are frequent updates to Azure Files content.
For more information, see [how to configure multiple backups per day via backup policy](./manage-afs-backup.md#create-a-new-policy).
cognitive-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/copy-move-projects.md
After you've created and trained a Custom Vision project, you may want to copy your project to another resource. If your app or business depends on the use of a Custom Vision project, we recommend you copy your model to another Custom Vision account in another region. Then if a regional outage occurs, you can access your project in the region where it was copied.
-As a part of Azure, Custom Vision Service has components that are maintained across multiple regions. Service zones and regions are used by all of our services to provide continued service to our customers. For more information on zones and regions, see [Azure regions](../../availability-zones/az-overview.md). If you need additional information or have any issues, please [contact support](/answers/topics/azure-custom-vision.html).
- The **[ExportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** and **[ImportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** APIs enable this scenario by allowing you to copy projects from one Custom Vision account into others. This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests. > [!TIP]
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
The Custom Vision portal can be used by the following web browsers:
![Custom Vision website in a Chrome browser window](media/browser-home.png)
+## Backup and disaster recovery
+
+As a part of Azure, Custom Vision Service has components that are maintained across multiple regions. Service zones and regions are used by all of our services to provide continued service to our customers. For more information on zones and regions, see [Azure regions](../../availability-zones/az-overview.md). If you need additional information or have any issues, please [contact support](/answers/topics/azure-custom-vision.html).
+
## Data privacy and security
As with all of the Cognitive Services, developers using the Custom Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+## Data residency
+
+Custom Vision doesn't replicate data outside of the specified region, except for one region, `NorthCentralUS`, where there is no local Azure Support.
+
## Next steps
Follow the [Build a classifier](getting-started-build-a-classifier.md) quickstart to get started using Custom Vision on the web portal, or complete an [SDK quickstart](quickstarts/image-classification.md) to implement the basic scenarios in code.
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
Previously updated : 01/24/2022 Last updated : 05/16/2022 ms.devlang: csharp
Before you use the speech-to-text REST API for short audio, consider the followi
* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio. * The REST API for short audio returns only final results. It doesn't provide partial results.
+* [Speech translation](speech-translation.md) is not supported via the REST API for short audio. You need to use the [Speech SDK](speech-sdk.md) (see the sketch below).
> [!TIP] > For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
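The following is a minimal C# sketch, not part of the original article, of performing speech translation with the Speech SDK instead of the REST API for short audio. The key, region, and language values are placeholders, and the default microphone is assumed as the audio source.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Translation;

class TranslateOnce
{
    static async Task Main()
    {
        // Placeholder key and region: replace with your own Speech resource values.
        var config = SpeechTranslationConfig.FromSubscription("<your-speech-key>", "<your-region>");
        config.SpeechRecognitionLanguage = "en-US";
        config.AddTargetLanguage("de"); // translate recognized English speech into German

        // Uses the default microphone as the audio input.
        using var recognizer = new TranslationRecognizer(config);
        var result = await recognizer.RecognizeOnceAsync();

        if (result.Reason == ResultReason.TranslatedSpeech)
        {
            Console.WriteLine($"Recognized: {result.Text}");
            foreach (var (language, translation) in result.Translations)
            {
                Console.WriteLine($"{language}: {translation}");
            }
        }
    }
}
```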
container-apps Application Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/application-lifecycle-management.md
As a container app is deployed, the first revision is automatically created.
## Update
-As a container app is updated with a [revision scope-change](revisions.md#revision-scope-changes), a new revision is created. You can choose whether to [automatically deactivate new old revisions, or allow them to remain available](revisions.md).
+As a container app is updated with a [revision scope-change](revisions.md#revision-scope-changes), a new revision is created. You can choose whether to [automatically deactivate old revisions, or allow them to remain available](revisions.md).
:::image type="content" source="media/application-lifecycle-management/azure-container-apps-lifecycle-update.png" alt-text="Azure Container Apps: Update phase"::: ## Deactivate
-Once a revision is no longer needed, you can deactivate a revision with the option to reactivate later. During deactivation, the container is [shut down](#shutdown).
+Once a revision is no longer needed, you can deactivate a revision with the option to reactivate later. During deactivation, containers in the revision are [shut down](#shutdown).
:::image type="content" source="media/application-lifecycle-management/azure-container-apps-lifecycle-deactivate.png" alt-text="Azure Container Apps: Deactivation phase":::
The containers are shut down in the following situations:
When a shutdown is initiated, the container host sends a [SIGTERM message](https://wikipedia.org/wiki/Signal_(IPC)) to your container. The code implemented in the container can respond to this operating system-level message to handle termination.
-If your application does not respond to the `SIGTERM` message, then [SIGKILL](https://wikipedia.org/wiki/Signal_(IPC)) terminates your container.
+If your application does not respond within 30 seconds to the `SIGTERM` message, then [SIGKILL](https://wikipedia.org/wiki/Signal_(IPC)) terminates your container.
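As an illustration only (not from the article), here is a minimal C# sketch of one way a .NET 6 or later app running in a container might handle `SIGTERM` so that cleanup completes within the grace period; the cleanup work shown is a placeholder.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

class Program
{
    static void Main()
    {
        using var shutdown = new ManualResetEventSlim();

        // Register a handler for the SIGTERM signal sent by the container host.
        using var sigterm = PosixSignalRegistration.Create(PosixSignal.SIGTERM, context =>
        {
            context.Cancel = true; // take over the default termination behavior
            Console.WriteLine("SIGTERM received, cleaning up...");
            // Placeholder: flush buffers, close connections, finish in-flight work here.
            shutdown.Set();        // let Main return so the process exits cleanly
        });

        Console.WriteLine("Running. Waiting for a shutdown signal.");
        shutdown.Wait();           // block until SIGTERM arrives
    }
}
```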
## Next steps
container-apps Github Actions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions-cli.md
# Publish revisions with GitHub Actions in Azure Container Apps Preview
-Azure Container Apps allows you to use GitHub Actions to publish [revisions](revisions.md) to your container app. As commits are pushed to your GitHub repository, a GitHub Action is triggered which updates the [container](containers.md) image in the container registry. Once the container is updated in the registry, Azure Container Apps creates a new revision based on the updated container image.
+Azure Container Apps allows you to use GitHub Actions to publish [revisions](revisions.md) to your container app. As commits are pushed to your GitHub repository, a GitHub Actions workflow is triggered that updates the [container](containers.md) image in the container registry. Once the container is updated in the registry, Azure Container Apps creates a new revision based on the updated container image.
:::image type="content" source="media/github-actions/azure-container-apps-github-actions.png" alt-text="Changes to a GitHub repo trigger an action to create a new revision.":::
-The GitHub action is triggered by commits to a specific branch in your repository. When creating the integration link, you decide which branch triggers the action.
+The GitHub Actions workflow is triggered by commits to a specific branch in your repository. When creating the integration link, you decide which branch triggers the workflow.
## Authentication
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
If you've changed the `STORAGE_ACCOUNT_CONTAINER` variable from its original val
Navigate to the directory in which you stored the *statestore.yaml* file and run the following command to configure the Dapr component in the Container Apps environment.
+If you need to add multiple components, run the `az containerapp env dapr-component set` command multiple times to add each component.
+ # [Bash](#tab/bash) ```azurecli
This command deletes the resource group that includes all of the resources creat
> [!div class="nextstepaction"] > [Application lifecycle management](application-lifecycle-management.md)+
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
There are two scale properties that apply to all rules in your container app:
- Individual scale rules are defined in the `rules` array.
- If you want to ensure that an instance of your application is always running, set `minReplicas` to 1 or higher.
- Replicas that aren't processing, but remain in memory, are billed in the "idle charge" category.
-- Changes to scaling rules are a [revision-scope](./revisions.md#revision-scope-changes) change.
-- When using non-HTTP event scale rules, setting the `activeRevisionMode` to `single` is recommended.
+- Changes to scaling rules are a [revision-scope](overview.md) change.
+- When using non-HTTP event scale rules, setting the `properties.configuration.activeRevisionsMode` property of the container app to `single` is recommended.
+
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
Previously updated : 2/18/2022 Last updated : 5/16/2022 zone_pivot_groups: azure-cli-or-portal
-# Provide an virtual network to an internal Azure Container Apps (Preview) environment
+# Provide a virtual network to an internal Azure Container Apps (Preview) environment
The following example shows you how to create a Container Apps environment in an existing virtual network. > [!IMPORTANT]
-> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. Refer to [Known issues for public preview](https://github.com/microsoft/azure-container-apps/wiki/Known-Issues-for-public-preview) for additional details.
+> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. For more information, see [Known issues for public preview](https://github.com/microsoft/azure-container-apps/wiki/Known-Issues-for-public-preview).
::: zone pivot="azure-portal"
The following example shows you how to create a Container Apps environment in an
7. Select the **Networking** tab to create a VNET. 8. Select **Yes** next to *Use your own virtual network*.
-9. Next to the *Virtual network* box, select the **Create new** link.
-10. Enter **my-custom-vnet** in the name box.
-11. Select the **OK** button.
-12. Next to the *Control plane subnet* box, select the **Create new** link and enter the following values:
+9. Next to the *Virtual network* box, select the **Create new** link and enter the following value.
| Setting | Value |
- |||
- | Subnet name | Enter **my-control-plane-vnet**. |
- | Virtual Network Address Block | Keep the default values. |
- | Subnet Address Block | Keep the default values. |
+ |--|--|
+ | Name | Enter **my-custom-vnet**. |
-13. Select the **OK** button.
-14. Next to the *Control plane subnet* box, select the **Create new** link and enter the following values:
+10. Select the **OK** button.
+11. Next to the *Infrastructure subnet* box, select the **Create new** link and enter the following values:
| Setting | Value |
|||
- | Subnet name | Enter **my-apps-vnet**. |
+ | Subnet Name | Enter **infrastructure-subnet**. |
| Virtual Network Address Block | Keep the default values. |
| Subnet Address Block | Keep the default values. |
-15. Under *Virtual IP*, select **Internal**.
-16. Select **Create**.
+12. Select the **OK** button.
+13. Under *Virtual IP*, select **Internal**.
+14. Select **Create**.
<!-- Deploy --> [!INCLUDE [container-apps-create-portal-deploy.md](../../includes/container-apps-create-portal-deploy.md)]
az network vnet create \
az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --vnet-name $VNET_NAME \
- --name control-plane \
- --address-prefixes 10.0.0.0/21
-```
-
-```azurecli
-az network vnet subnet create \
- --resource-group $RESOURCE_GROUP \
- --vnet-name $VNET_NAME \
- --name applications \
- --address-prefixes 10.0.8.0/21
+ --name infrastructure \
+ --address-prefixes 10.0.0.0/23
``` # [PowerShell](#tab/powershell)
az network vnet create `
az network vnet subnet create ` --resource-group $RESOURCE_GROUP ` --vnet-name $VNET_NAME `
- --name control-plane `
- --address-prefixes 10.0.0.0/21
-```
-
-```powershell
-az network vnet subnet create `
- --resource-group $RESOURCE_GROUP `
- --vnet-name $VNET_NAME `
- --name applications `
- --address-prefixes 10.0.8.0/21
+ --name infrastructure-subnet `
+ --address-prefixes 10.0.0.0/23
```
-With the VNET established, you can now query for the VNET, control plane, and app subnet IDs.
+With the VNET established, you can now query for the VNET and infrastructure subnet ID.
# [Bash](#tab/bash)
VNET_RESOURCE_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name
``` ```bash
-CONTROL_PLANE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name control-plane --query "id" -o tsv | tr -d '[:space:]'`
-```
-
-```bash
-APP_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name ${VNET_NAME} --name applications --query "id" -o tsv | tr -d '[:space:]'`
+INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'`
``` # [PowerShell](#tab/powershell)
$VNET_RESOURCE_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name
``` ```powershell
-$CONTROL_PLANE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name control-plane --query "id" -o tsv)
-```
-
-```powershell
-$APP_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name applications --query "id" -o tsv)
+$INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv)
```
-Finally, create the Container Apps environment with the internal VNET and subnets.
+Finally, create the Container Apps environment with the VNET and subnet.
# [Bash](#tab/bash)
Finally, create the Container Apps environment with the internal VNET and subnet
az containerapp env create \ --name $CONTAINERAPPS_ENVIRONMENT \ --resource-group $RESOURCE_GROUP \
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
--location "$LOCATION" \
- --app-subnet-resource-id $APP_SUBNET \
- --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET \
+ --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
--internal-only ```
az containerapp env create \
az containerapp env create ` --name $CONTAINERAPPS_ENVIRONMENT ` --resource-group $RESOURCE_GROUP `
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
--location "$LOCATION" `
- --app-subnet-resource-id $APP_SUBNET `
- --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET `
+ --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET `
--internal-only ```
The following table describes the parameters used in for `containerapp env creat
| `logs-workspace-id` | The ID of the Log Analytics workspace. |
| `logs-workspace-key` | The Log Analytics client secret. |
| `location` | The Azure location where the environment is to deploy. |
-| `app-subnet-resource-id` | The resource ID of a subnet where containers are injected into the container app. This subnet must be in the same VNET as the subnet defined in `--control-plane-subnet-resource-id`. |
-| `controlplane-subnet-resource-id` | The resource ID of a subnet for control plane infrastructure components. This subnet must be in the same VNET as the subnet defined in `--app-subnet-resource-id`. |
+| `infrastructure-subnet-resource-id` | Resource ID of a subnet for infrastructure components and user application containers. |
| `internal-only` | Optional parameter that scopes the environment to IP addresses only available in the custom VNET. |
-With your environment created with your custom-virtual network, you can create container apps into the environment using the `az containerapp create` command.
+With your environment created in your custom virtual network, you can deploy container apps into the environment using the `az containerapp create` command.
### Optional configuration
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
Previously updated : 2/18/2022 Last updated : 05/16/2022 zone_pivot_groups: azure-cli-or-portal
-# Provide an virtual network to an external Azure Container Apps (Preview) environment
+# Provide a virtual network to an external Azure Container Apps (Preview) environment
The following example shows you how to create a Container Apps environment in an existing virtual network. > [!IMPORTANT]
-> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. Refer to [Known issues for public preview](https://github.com/microsoft/azure-container-apps/wiki/Known-Issues-for-public-preview) for additional details.
+> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. For more information, see [Known issues for public preview](https://github.com/microsoft/azure-container-apps/wiki/Known-Issues-for-public-preview).
::: zone pivot="azure-portal"
The following example shows you how to create a Container Apps environment in an
7. Select the **Networking** tab to create a VNET. 8. Select **Yes** next to *Use your own virtual network*.
-9. Next to the *Virtual network* box, select the **Create new** link.
-10. Enter **my-custom-vnet** in the name box.
-11. Select the **OK** button.
-12. Next to the *Control plane subnet* box, select the **Create new** link and enter the following values:
+9. Next to the *Virtual network* box, select the **Create new** link and enter the following value.
| Setting | Value |
- |||
- | Subnet name | Enter **my-control-plane-vnet**. |
- | Virtual Network Address Block | Keep the default values. |
- | Subnet Address Block | Keep the default values. |
+ |--|--|
+ | Name | Enter **my-custom-vnet**. |
-13. Select the **OK** button.
-14. Next to the *Control plane subnet* box, select the **Create new** link and enter the following values:
+10. Select the **OK** button.
+11. Next to the *Infrastructure subnet* box, select the **Create new** link and enter the following values:
| Setting | Value |
|||
- | Subnet name | Enter **my-apps-vnet**. |
+ | Subnet Name | Enter **infrastructure-subnet**. |
| Virtual Network Address Block | Keep the default values. |
| Subnet Address Block | Keep the default values. |
-15. Under *Virtual IP*, select **External**.
-16. Select **Create**.
+12. Select the **OK** button.
+13. Under *Virtual IP*, select **External**.
+14. Select **Create**.
<!-- Deploy --> [!INCLUDE [container-apps-create-portal-deploy.md](../../includes/container-apps-create-portal-deploy.md)]
$VNET_NAME="my-custom-vnet"
-Now create an instance of the virtual network to associate with the Container Apps environment. The virtual network must have two subnets available for the container apps instance.
+Now create an Azure virtual network to associate with the Container Apps environment. The virtual network must have a subnet available for the environment deployment.
> [!NOTE]
-> You can use an existing virtual network, but two empty subnets are required to use with Container Apps.
+> You can use an existing virtual network, but a dedicated subnet is required for use with Container Apps.
# [Bash](#tab/bash)
az network vnet create \
az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --vnet-name $VNET_NAME \
- --name control-plane \
- --address-prefixes 10.0.0.0/21
-```
-
-```azurecli
-az network vnet subnet create \
- --resource-group $RESOURCE_GROUP \
- --vnet-name $VNET_NAME \
- --name applications \
- --address-prefixes 10.0.8.0/21
+ --name infrastructure-subnet \
+ --address-prefixes 10.0.0.0/23
``` # [PowerShell](#tab/powershell)
az network vnet create `
az network vnet subnet create ` --resource-group $RESOURCE_GROUP ` --vnet-name $VNET_NAME `
- --name control-plane `
- --address-prefixes 10.0.0.0/21
-```
-
-```powershell
-az network vnet subnet create `
- --resource-group $RESOURCE_GROUP `
- --vnet-name $VNET_NAME `
- --name applications `
- --address-prefixes 10.0.8.0/21
+ --name infrastructure-subnet `
+ --address-prefixes 10.0.0.0/23
```
-With the VNET established, you can now query for the VNET, control plane, and app subnet IDs.
+With the virtual network created, you can retrieve the IDs for both the VNET and the infrastructure subnet.
# [Bash](#tab/bash)
VNET_RESOURCE_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name
``` ```bash
-CONTROL_PLANE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name control-plane --query "id" -o tsv | tr -d '[:space:]'`
-```
-
-```bash
-APP_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name ${VNET_NAME} --name applications --query "id" -o tsv | tr -d '[:space:]'`
+INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'`
``` # [PowerShell](#tab/powershell)
$VNET_RESOURCE_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name
``` ```powershell
-$CONTROL_PLANE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name control-plane --query "id" -o tsv)
-```
-
-```powershell
-$APP_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name applications --query "id" -o tsv)
+$INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv)
```
-Finally, create the Container Apps environment with the VNET and subnets.
+Finally, create the Container Apps environment using the custom VNET deployed in the preceding steps.
# [Bash](#tab/bash)
az containerapp env create \
--name $CONTAINERAPPS_ENVIRONMENT \ --resource-group $RESOURCE_GROUP \ --location "$LOCATION" \
- --app-subnet-resource-id $APP_SUBNET \
- --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET
+ --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET
``` # [PowerShell](#tab/powershell)
az containerapp env create `
--name $CONTAINERAPPS_ENVIRONMENT ` --resource-group $RESOURCE_GROUP ` --location "$LOCATION" `
- --app-subnet-resource-id $APP_SUBNET `
- --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET
+ --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET
```
The following table describes the parameters used in `containerapp env create`.
| `name` | Name of the container apps environment. |
| `resource-group` | Name of the resource group. |
| `location` | The Azure location where the environment is to deploy. |
-| `app-subnet-resource-id` | The resource ID of a subnet where containers are injected into the container app. This subnet must be in the same VNET as the subnet defined in `--control-plane-subnet-resource-id`. |
-| `controlplane-subnet-resource-id` | The resource ID of a subnet for control plane infrastructure components. This subnet must be in the same VNET as the subnet defined in `--app-subnet-resource-id`. |
-| `internal-only` | Optional parameter that scopes the environment to IP addresses only available the custom VNET. |
+| `infrastructure-subnet-resource-id` | Resource ID of a subnet for infrastructure components and user application containers. |
-With your environment created with your custom-virtual network, you can create container apps into the environment using the `az containerapp create` command.
+With your environment created using a custom virtual network, you can now deploy container apps using the `az containerapp create` command.
### Optional configuration
container-instances Container Instances Egress Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-egress-ip-address.md
Title: Configure static outbound IP description: Configure Azure firewall and user-defined routes for Azure Container Instances workloads that use the firewall's public IP address for ingress and egress Previously updated : 07/16/2020 Last updated : 05/03/2022 # Configure a single public IP address for outbound and inbound traffic to a container group
Setting up a [container group](container-instances-container-groups.md) with an
This article provides steps to configure a container group in a [virtual network](container-instances-virtual-network-concepts.md) integrated with [Azure Firewall](../firewall/overview.md). By setting up a user-defined route to the container group and firewall rules, you can route and identify traffic to and from the container group. Container group ingress and egress use the public IP address of the firewall. A single egress IP address can be used by multiple container groups deployed in the virtual network's subnet delegated to Azure Container Instances.
-In this article you use the Azure CLI to create the resources for this scenario:
+In this article, you use the Azure CLI to create the resources for this scenario:
-* Container groups deployed on a delegated subnet [in the virtual network](container-instances-vnet.md)
+* Container groups deployed on a delegated subnet [in the virtual network](container-instances-vnet.md)
* An Azure firewall deployed in the network with a static public IP address * A user-defined route on the container groups' subnet * A NAT rule for firewall ingress and an application rule for egress You then validate ingress and egress from example container groups through the firewall.
-## Deploy ACI in a virtual network
-In a typical case, you might already have an Azure virtual network in which to deploy a container group. For demonstration purposes, the following commands create a virtual network and subnet when the container group is created. The subnet is delegated to Azure Container Instances.
-The container group runs a small web app from the `aci-helloworld` image. As shown in other articles in the documentation, this image packages a small web app written in Node.js that serves a static HTML page.
-If you need one, first create an Azure resource group with the [az group create][az-group-create] command. For example:
+> [!NOTE]
+> To download the complete script, go to [full script](https://github.com/Azure-Samples/azure-cli-samples/blob/master/container-instances/egress-ip-address.sh).
-```azurecli
-az group create --name myResourceGroup --location eastus
-```
+## Get started
-To simplify the following command examples, use an environment variable for the resource group's name:
+This tutorial makes use of a randomized variable. If you are using an existing resource group, modify the value of this variable appropriately.
-```console
-export RESOURCE_GROUP_NAME=myResourceGroup
-```
+
+**Azure resource group**: If you don't have an Azure resource group already, create a resource group with the [az group create][az-group-create] command. Modify the location value as appropriate.
++
+## Deploy ACI in a virtual network
+
+In a typical case, you might already have an Azure virtual network in which to deploy a container group. For demonstration purposes, the following commands create a virtual network and subnet when the container group is created. The subnet is delegated to Azure Container Instances.
+
+The container group runs a small web app from the `aci-helloworld` image. As shown in other articles in the documentation, this image packages a small web app written in Node.js that serves a static HTML page.
Create the container group with the [az container create][az-container-create] command:
-```azurecli
-az container create \
- --name appcontainer \
- --resource-group $RESOURCE_GROUP_NAME \
- --image mcr.microsoft.com/azuredocs/aci-helloworld \
- --vnet aci-vnet \
- --vnet-address-prefix 10.0.0.0/16 \
- --subnet aci-subnet \
- --subnet-address-prefix 10.0.0.0/24
-```
> [!TIP] > Adjust the value of `--subnet address-prefix` for the IP address space you need in your subnet. The smallest supported subnet is /29, which provides eight IP addresses. Some IP addresses are reserved for use by Azure. For use in a later step, get the private IP address of the container group by running the [az container show][az-container-show] command:
-```azurecli
-ACI_PRIVATE_IP="$(az container show --name appcontainer \
- --resource-group $RESOURCE_GROUP_NAME \
- --query ipAddress.ip --output tsv)"
-```
## Deploy Azure Firewall in network
In the following sections, use the Azure CLI to deploy an Azure firewall in the
First, use the [az network vnet subnet create][az-network-vnet-subnet-create] to add a subnet named AzureFirewallSubnet for the firewall. AzureFirewallSubnet is the *required* name of this subnet.
-```azurecli
-az network vnet subnet create \
- --name AzureFirewallSubnet \
- --resource-group $RESOURCE_GROUP_NAME \
- --vnet-name aci-vnet \
- --address-prefix 10.0.1.0/26
-```
Use the following [Azure CLI commands](../firewall/deploy-cli.md) to create a firewall in the subnet. If not already installed, add the firewall extension to the Azure CLI using the [az extension add][az-extension-add] command:
-```azurecli
-az extension add --name azure-firewall
-```
Create the firewall resources:
-```azurecli
-az network firewall create \
- --name myFirewall \
- --resource-group $RESOURCE_GROUP_NAME \
- --location eastus
-
-az network public-ip create \
- --name fw-pip \
- --resource-group $RESOURCE_GROUP_NAME \
- --location eastus \
- --allocation-method static \
- --sku standard
-
-az network firewall ip-config create \
- --firewall-name myFirewall \
- --name FW-config \
- --public-ip-address fw-pip \
- --resource-group $RESOURCE_GROUP_NAME \
- --vnet-name aci-vnet
-```
Update the firewall configuration using the [az network firewall update][az-network-firewall-update] command:
-```azurecli
-az network firewall update \
- --name myFirewall \
- --resource-group $RESOURCE_GROUP_NAME
-```
Get the firewall's private IP address using the [az network firewall ip-config list][az-network-firewall-ip-config-list] command. This private IP address is used in a later command.
-```azurecli
-FW_PRIVATE_IP="$(az network firewall ip-config list \
- --resource-group $RESOURCE_GROUP_NAME \
- --firewall-name myFirewall \
- --query "[].privateIpAddress" --output tsv)"
-```
Get the firewall's public IP address using the [az network public-ip show][az-network-public-ip-show] command. This public IP address is used in a later command.
-```azurecli
-FW_PUBLIC_IP="$(az network public-ip show \
- --name fw-pip \
- --resource-group $RESOURCE_GROUP_NAME \
- --query ipAddress --output tsv)"
-```
## Define user-defined route on ACI subnet
Define a user-defined route on the ACI subnet to divert traffic to the Azure fir
First, run the following [az network route-table create][az-network-route-table-create] command to create the route table. Create the route table in the same region as the virtual network.
-```azurecli
-az network route-table create \
- --name Firewall-rt-table \
- --resource-group $RESOURCE_GROUP_NAME \
- --location eastus \
- --disable-bgp-route-propagation true
-```
### Create route Run [az network-route-table route create][az-network-route-table-route-create] to create a route in the route table. To route traffic to the firewall, set the next hop type to `VirtualAppliance`, and pass the firewall's private IP address as the next hop address.
-```azurecli
-az network route-table route create \
- --resource-group $RESOURCE_GROUP_NAME \
- --name DG-Route \
- --route-table-name Firewall-rt-table \
- --address-prefix 0.0.0.0/0 \
- --next-hop-type VirtualAppliance \
- --next-hop-ip-address $FW_PRIVATE_IP
-```
### Associate route table to ACI subnet Run the [az network vnet subnet update][az-network-vnet-subnet-update] command to associate the route table with the subnet delegated to Azure Container Instances.
-```azurecli
-az network vnet subnet update \
- --name aci-subnet \
- --resource-group $RESOURCE_GROUP_NAME \
- --vnet-name aci-vnet \
- --address-prefixes 10.0.0.0/24 \
- --route-table Firewall-rt-table
-```
## Configure rules on firewall
-By default, Azure Firewall denies (blocks) inbound and outbound traffic.
+By default, Azure Firewall denies (blocks) inbound and outbound traffic.
### Configure NAT rule on firewall to ACI subnet
Create a [NAT rule](../firewall/rule-processing.md) on the firewall to translate
Create a NAT rule and collection by using the [az network firewall nat-rule create][az-network-firewall-nat-rule-create] command:
-```azurecli
-az network firewall nat-rule create \
- --firewall-name myFirewall \
- --collection-name myNATCollection \
- --action dnat \
- --name myRule \
- --protocols TCP \
- --source-addresses '*' \
- --destination-addresses $FW_PUBLIC_IP \
- --destination-ports 80 \
- --resource-group $RESOURCE_GROUP_NAME \
- --translated-address $ACI_PRIVATE_IP \
- --translated-port 80 \
- --priority 200
-```
Add NAT rules as needed to filter traffic to other IP addresses in the subnet. For example, other container groups in the subnet could expose IP addresses for inbound traffic, or other internal IP addresses could be assigned to the container group after a restart.
Add NAT rules as needed to filter traffic to other IP addresses in the subnet. F
Run the following [az network firewall application-rule create][az-network-firewall-application-rule-create] command to create an outbound rule on the firewall. This sample rule allows access from the subnet delegated to Azure Container Instances to the FQDN `checkip.dyndns.org`. HTTP access to the site is used in a later step to confirm the egress IP address from Azure Container Instances.
-```azurecli
-az network firewall application-rule create \
- --collection-name myAppCollection \
- --firewall-name myFirewall \
- --name Allow-CheckIP \
- --protocols Http=80 Https=443 \
- --resource-group $RESOURCE_GROUP_NAME \
- --target-fqdns checkip.dyndns.org \
- --source-addresses 10.0.0.0/24 \
- --priority 200 \
- --action Allow
-```
## Test container group access through the firewall
The following sections verify that the subnet delegated to Azure Container Insta
### Test ingress to a container group
-Test inbound access to the *appcontainer* running in the virtual network by browsing to the firewall's public IP address. Previously, you stored the public IP address in variable $FW_PUBLIC_IP:
+Test inbound access to the `appcontainer` running in the virtual network by browsing to the firewall's public IP address. Previously, you stored the public IP address in variable $FW_PUBLIC_IP:
-```bash
-echo $FW_PUBLIC_IP
-```
Output is similar to:
If the NAT rule on the firewall is configured properly, you see the following wh
### Test egress from a container group
-
Deploy the following sample container into the virtual network. When it runs, it sends a single HTTP request to `http://checkip.dyndns.org`, which displays the IP address of the sender (the egress IP address). If the application rule on the firewall is configured properly, the firewall's public IP address is returned.
-```azurecli
-az container create \
- --resource-group $RESOURCE_GROUP_NAME \
- --name testegress \
- --image mcr.microsoft.com/azuredocs/aci-tutorial-sidecar \
- --command-line "curl -s http://checkip.dyndns.org" \
- --restart-policy OnFailure \
- --vnet aci-vnet \
- --subnet aci-subnet
-```
View the container logs to confirm the IP address is the same as the public IP address of the firewall.
Output is similar to:
<html><head><title>Current IP Check</title></head><body>Current IP Address: 52.142.18.133</body></html> ```
+## Clean up resources
+
+When no longer needed, you can use [az group delete](/cli/azure/group) to remove the resource group and all related resources as follows. The `--no-wait` parameter returns control to the prompt without waiting for the operation to complete. The `--yes` parameter confirms that you wish to delete the resources without an additional prompt to do so.
+
+```azurecli-interactive
+az group delete --name $resourceGroup --yes --no-wait
+```
+ ## Next steps In this article, you set up container groups in a virtual network behind an Azure firewall. You configured a user-defined route and NAT and application rules on the firewall. By using this configuration, you set up a single, static IP address for ingress and egress from Azure Container Instances. For more information about managing traffic and protecting Azure resources, see the [Azure Firewall](../firewall/index.yml) documentation. -- [az-group-create]: /cli/azure/group#az_group_create [az-container-create]: /cli/azure/container#az_container_create [az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create
container-instances Container Instances Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-nat-gateway.md
Previously updated : 02/28/2022 Last updated : 05/03/2022 # Configure a NAT gateway for static IP address for outbound traffic from a container group
Setting up a [container group](container-instances-container-groups.md) with an
This article provides steps to configure a container group in a [virtual network](container-instances-virtual-network-concepts.md) integrated with a [Network Address Translation (NAT) gateway](../virtual-network/nat-gateway/nat-overview.md). By configuring a NAT gateway to SNAT a subnet address range delegated to Azure Container Instances (ACI), you can identify outbound traffic from your container groups. The container group egress traffic will use the public IP address of the NAT gateway. A single NAT gateway can be used by multiple container groups deployed in the virtual network's subnet delegated to ACI.
-In this article you use the Azure CLI to create the resources for this scenario:
+In this article, you use the Azure CLI to create the resources for this scenario:
-* Container groups deployed on a delegated subnet [in the virtual network](container-instances-vnet.md)
+* Container groups deployed on a delegated subnet [in the virtual network](container-instances-vnet.md)
* A NAT gateway deployed in the network with a static public IP address You then validate egress from example container groups through the NAT gateway. > [!NOTE]
-> The ACI service recommends integrating with a NAT gateway for containerized workoads that have static egress but not static ingress requirements. For ACI architecture that supports both static ingress and egress, please see the following tutorial: [Use Azure Firewall for ingress and egress](container-instances-egress-ip-address.md).
-## Before you begin
-You must satisfy the following requirements to complete this tutorial:
+> The ACI service recommends integrating with a NAT gateway for containerized workloads that have static egress but not static ingress requirements. For ACI architecture that supports both static ingress and egress, please see the following tutorial: [Use Azure Firewall for ingress and egress](container-instances-egress-ip-address.md).
-**Azure CLI**: You must have Azure CLI version installed on your local computer. If you need to install or upgrade, see [Install the Azure CLI][azure-cli-install]
+++
+> [!NOTE]
+> To download the complete script, go to [full script](https://github.com/Azure-Samples/azure-cli-samples/blob/master/container-instances/nat-gateway.sh).
+
+## Get started
+
+This tutorial makes use of a randomized variable. If you are using an existing resource group, modify the value of this variable appropriately.
++
+**Azure resource group**: If you don't have an Azure resource group already, create a resource group with the [az group create][az-group-create] command. Modify the location value as appropriate.
+
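As a minimal sketch of how that setup might look in the full script, assuming illustrative variable names such as `randomIdentifier`, `location`, and `resourceGroup` (not necessarily the sample script's exact names):

```azurecli
# Illustrative only; variable names are assumptions, not the sample script's exact values.
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="aci-nat-rg-$randomIdentifier"

# Create the resource group that holds the virtual network, NAT gateway, and container groups.
az group create --name $resourceGroup --location $location
```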
-**Azure resource group**: If you don't have an Azure resource group already, create a resource group with the [az group create][az-group-create] command. Below is an example.
-```azurecli
-az group create --name myResourceGroup --location eastus
-```
## Deploy ACI in a virtual network
-In a typical case, you might already have an Azure virtual network in which to deploy a container group. For demonstration purposes, the following commands create a virtual network and subnet when the container group is created. The subnet is delegated to Azure Container Instances.
+In a typical case, you might already have an Azure virtual network in which to deploy a container group. For demonstration purposes, the following commands create a virtual network and subnet when the container group is created. The subnet is delegated to Azure Container Instances.
The container group runs a small web app from the `aci-helloworld` image. As shown in other articles in the documentation, this image packages a small web app written in Node.js that serves a static HTML page.
-> [!TIP]
-> To simplify the following command examples, use an environment variable for the resource group's name:
-> ```console
-> export RESOURCE_GROUP_NAME=myResourceGroup
-> ```
-> This tutorial will make use of the environment variable going forward.
Create the container group with the [az container create][az-container-create] command:
-```azurecli
-az container create \
- --name appcontainer \
- --resource-group $RESOURCE_GROUP_NAME \
- --image mcr.microsoft.com/azuredocs/aci-helloworld \
- --vnet aci-vnet \
- --vnet-address-prefix 10.0.0.0/16 \
- --subnet aci-subnet \
- --subnet-address-prefix 10.0.0.0/24
-```
> [!NOTE]
-> Adjust the value of `--subnet address-prefix` for the IP address space you need in your subnet. The smallest supported subnet is /29, which provides eight IP addresses. Some >IP addresses are reserved for use by Azure, which you can read more about [here](../virtual-network/ip-services/private-ip-addresses.md).
+> Adjust the value of `--subnet-address-prefix` for the IP address space you need in your subnet. The smallest supported subnet is /29, which provides eight IP addresses. Some IP addresses are reserved for use by Azure, which you can read more about [here](../virtual-network/ip-services/private-ip-addresses.md).
+ ## Create a public IP address In the following sections, use the Azure CLI to deploy an Azure NAT gateway in the virtual network. For background, see [Quickstart: Create a NAT gateway using Azure CLI](../virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md).
-First, use the [az network vnet public-ip create][az-network-public-ip-create] to create a public IP address for the NAT gateway. This will be used to access the Internet. You will receive a warning about an upcoming breaking change where Standard SKU IP addresses will be availability zone aware by default. You can learn more about the use of availability zones and public IP addresses [here](../virtual-network/ip-services/virtual-network-network-interface-addresses.md).
+First, use the [az network public-ip create][az-network-public-ip-create] command to create a public IP address for the NAT gateway. The gateway uses this address to access the Internet. You'll receive a warning about an upcoming breaking change where Standard SKU IP addresses will be availability zone aware by default. You can learn more about the use of availability zones and public IP addresses [here](../virtual-network/ip-services/virtual-network-network-interface-addresses.md).
-```azurecli
-az network public-ip create \
- --name myPublicIP \
- --resource-group $RESOURCE_GROUP_NAME \
- --sku standard \
- --allocation static
-```
-Store the public IP address in a variable. We will use this later during the validation step.
+Store the public IP address in a variable for use during the validation step later in this script.
-```azurecli
-NG_PUBLIC_IP="$(az network public-ip show \
- --name myPublicIP \
- --resource-group $RESOURCE_GROUP_NAME \
- --query ipAddress --output tsv)"
-```
## Deploy a NAT gateway into a virtual network Use the following [az network nat gateway create][az-network-nat-gateway-create] to create a NAT gateway that uses the public IP you created in the previous step.
-```azurecli
-az network nat gateway create \
- --resource-group $RESOURCE_GROUP_NAME \
- --name myNATgateway \
- --public-ip-addresses myPublicIP \
- --idle-timeout 10
-```
+ ## Configure NAT service for source subnet
-We'll configure the source subnet **aci-subnet** to use a specific NAT gateway resource **myNATgateway** with [az network vnet subnet update][az-network-vnet-subnet-update]. This command will activate the NAT service on the specified subnet.
+We'll configure the source subnet **aci-subnet** to use a specific NAT gateway resource **myNATgateway** with [az network vnet subnet update][az-network-vnet-subnet-update]. This command will activate the NAT service on the specified subnet.
-```azurecli
-az network vnet subnet update \
- --resource-group $RESOURCE_GROUP_NAME \
- --vnet-name aci-vnet \
- --name aci-subnet \
- --nat-gateway myNATgateway
-```
## Test egress from a container group
-Test inbound access to the *appcontainer* running in the virtual network by browsing to the firewall's public IP address. Previously, you stored the public IP address in variable $NG_PUBLIC_IP
+The `appcontainer` running in the virtual network sends its outbound traffic through the NAT gateway's public IP address, which you stored earlier in the variable `$NG_PUBLIC_IP`.
Deploy the following sample container into the virtual network. When it runs, it sends a single HTTP request to `http://checkip.dyndns.org`, which displays the IP address of the sender (the egress IP address). If the NAT gateway is configured properly, its public IP address is returned.
-```azurecli
-az container create \
- --resource-group $RESOURCE_GROUP_NAME \
- --name testegress \
- --image mcr.microsoft.com/azuredocs/aci-tutorial-sidecar \
- --command-line "curl -s http://checkip.dyndns.org" \
- --restart-policy OnFailure \
- --vnet aci-vnet \
- --subnet aci-subnet
-```
View the container logs to confirm the IP address is the same as the public IP address we created in the first step of the tutorial.
-```azurecli
-az container logs \
- --resource-group $RESOURCE_GROUP_NAME \
- --name testegress
-```
Output is similar to: ```console <html><head><title>Current IP Check</title></head><body>Current IP Address: 52.142.18.133</body></html> ```
-This IP address should match the public IP address created in the first step of the tutorial.
-```Bash
-echo $NG_PUBLIC_IP
+This IP address should match the public IP address created in the first step of the tutorial.
++
+## Clean up resources
+
+When no longer needed, you can use [az group delete](/cli/azure/group) to remove the resource group and all related resources as follows. The `--no-wait` parameter returns control to the prompt without waiting for the operation to complete. The `--yes` parameter confirms that you wish to delete the resources without an additional prompt to do so.
+
+```azurecli-interactive
+az group delete --name $resourceGroup --yes --no-wait
``` ## Next steps
cosmos-db Configure Custom Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-custom-partitioning.md
df = spark.read\
.option("spark.cosmos.asns.basePath", "/mnt/CosmosDBPartitionedStore/") \ .load()
-df_filtered = df.filter("readDate='2020-11-27 00:00:00.000'")
+df_filtered = df.filter("readDate='2020-11-01 00:00:00.000'")
display(df_filtered.limit(10)) ```
val df = spark.read.
option("spark.cosmos.asns.partition.keys", "readDate String"). option("spark.cosmos.asns.basePath", "/mnt/CosmosDBPartitionedStore/"). load()
-val df_filtered = df.filter("readDate='2020-11-27 00:00:00.000'")
+val df_filtered = df.filter("readDate='2020-11-01 00:00:00.000'")
display(df_filtered.limit(10)) ```
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
For example, if you have 1 TB of data in two regions then:
* Restore cost is calculated as (1000 * 0.15) = $150 per restore
+> [!TIP]
+> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](/azure/azure-monitor/insights/cosmosdb-insights-overview#view-utilization-and-performance-metrics-for-azure-cosmos-db).
+ ## Customer-managed keys See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk.md#how-do-customer-managed-keys-affect-continuous-backups) to learn:
cosmos-db Create Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-dotnet.md
> * [PHP](create-graph-php.md) >
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases. All of these benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](graph-introduction.md) account, database, and graph (container) using the Azure portal. You then build and run a console app built using the open-source driver [Gremlin.Net](https://tinkerpop.apache.org/docs/3.2.7/reference/#gremlin-DotNet).
Now let's clone a Gremlin API app from GitHub, set the connection string, and ru
cd "C:\git-samples" ```
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+3. Run the following command to clone the sample repository. The ``git clone`` command creates a copy of the sample app on your computer.
```bash git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-gremlindotnet-getting-started.git
Now let's clone a Gremlin API app from GitHub, set the connection string, and ru
4. Then open Visual Studio and open the solution file.
-5. Restore the NuGet packages in the project. This should include the Gremlin.Net driver, and the Newtonsoft.Json package.
-
+5. Restore the NuGet packages in the project. The restore operation should include the Gremlin.Net driver and the Newtonsoft.Json package.
6. You can also install the Gremlin.Net@v3.4.6 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
Now go back to the Azure portal to get your connection string information and co
:::image type="content" source="./media/create-graph-dotnet/endpoint.png" alt-text="Copy the endpoint":::
- To run this sample, copy the **Gremlin Endpoint** value, delete the port number at the end, that is the URI becomes `https://<your cosmos db account name>.gremlin.cosmosdb.azure.com`. The endpoint value should look like `testgraphacct.gremlin.cosmosdb.azure.com`
+ For this sample, record the *Host* value of the **Gremlin Endpoint**. For example, if the URI is ``https://graphtest.gremlin.cosmosdb.azure.com``, the *Host* value would be ``graphtest.gremlin.cosmosdb.azure.com``.
-1. Next, navigate to the **Keys** tab and copy the **PRIMARY KEY** value from the Azure portal.
+1. Next, navigate to the **Keys** tab and record the *PRIMARY KEY* value from the Azure portal.
-1. After you've copied the URI and PRIMARY KEY of your account, save them to a new environment variable on the local machine running the application. To set the environment variable, open a command prompt window, and run the following command. Make sure to replace <Your_Azure_Cosmos_account_URI> and <Your_Azure_Cosmos_account_PRIMARY_KEY> values.
+1. After you've copied the URI and PRIMARY KEY of your account, save them to a new environment variable on the local machine running the application. To set the environment variable, open a command prompt window, and run the following command. Make sure to replace ``<cosmos-account-name>`` and ``<cosmos-account-primary-key>`` values.
- ```console
- setx Host "<your Azure Cosmos account name>.gremlin.cosmosdb.azure.com"
- setx PrimaryKey "<Your_Azure_Cosmos_account_PRIMARY_KEY>"
- ```
+ ### [Windows](#tab/windows)
+
+ ```powershell
+ setx Host "<cosmos-account-name>.gremlin.cosmosdb.azure.com"
+ setx PrimaryKey "<cosmos-account-primary-key>"
+ ```
+
+ ### [Linux / macOS](#tab/linux+macos)
+
+ ```bash
+ export Host=<cosmos-account-name>.gremlin.cosmosdb.azure.com
+ export PrimaryKey=<cosmos-account-primary-key>
+ ```
+
+
1. Open the *Program.cs* file and update the "database" and "container" variables with the database and container (which is also the graph name) names created above.
cosmos-db Integrated Power Bi Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-power-bi-synapse-link.md
Use the following steps to build a Power BI report from Azure Cosmos DB data in
1. From the **Select workspace** tab, choose the Azure Synapse Analytics workspace and select **Next**. This will automatically create T-SQL views in Synapse Analytics, for the containers selected earlier. For more information on T-SQL views required to connect your Cosmos DB to Power BI, see [Prepare views](../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#3prepare-view) article. > [!NOTE]
- > Your Cosmos DB container proprieties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, etc. Please note that advanced scenarios may demand more complex views or fine tuning of these views, for better performance. For more information. see [best practices for Synapse Link when using Synapse serverless SQL pools](../synapse-analytics/sql/resources-self-help-sql-on-demand.md#cosmos-db-performance-issues) article.
+ > Your Cosmos DB container properties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, etc. Please note that advanced scenarios may demand more complex views or fine-tuning of these views for better performance. For more information, see the [best practices for Synapse Link when using Synapse serverless SQL pools](../synapse-analytics/sql/resources-self-help-sql-on-demand.md#azure-cosmos-db-performance-issues) article.
1. You can either choose an existing workspace or create a new one. To select an existing workspace, provide the **Subscription**, **Workspace**, and the **Database** details. Azure portal will use your Azure AD credentials to automatically connect to your Synapse workspace and create T-SQL views. Make sure you have "Synapse administrator" permissions to this workspace.
cosmos-db Tutorial Mongotools Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-mongotools-cosmos-db.md
The rest of this section will guide you through using the pair of tools you sele
1. To export the data from the source MongoDB instance, open a terminal on the MongoDB instance machine. If it is a Linux machine, type
- `mongoexport --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx.json`
+ ```bash
+ mongoexport --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx.json
+ ```
On Windows, the executable will be `mongoexport.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the properties of your existing MongoDB database instance. You may also choose to export only a subset of the MongoDB dataset. One way to do this is by adding an additional filter argument:
- `mongoexport --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx.json --query '{"field1":"value1"}'`
+ ```bash
+ mongoexport --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx.json --query '{"field1":"value1"}'
+ ```
Only documents which match the filter `{"field1":"value1"}` will be exported.
The rest of this section will guide you through using the pair of tools you sele
![Screenshot of mongoexport call.](media/tutorial-mongotools-cosmos-db/mongo-export-output.png) 1. You can use the same terminal to import `edx.json` into Azure Cosmos DB. If you are running `mongoimport` on a Linux machine, type
- `mongoimport --host HOST:PORT -u USERNAME -p PASSWORD --db edx --collection importedQuery --ssl --type json --writeConcern="{w:0}" --file edx.json`
+ ```bash
+ mongoimport --host HOST:PORT -u USERNAME -p PASSWORD --db edx --collection importedQuery --ssl --type json --writeConcern="{w:0}" --file edx.json
+ ```
On Windows, the executable will be `mongoimport.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the Azure Cosmos DB credentials you collected earlier. 1. **Monitor** the terminal output from *mongoimport*. You should see that it prints lines of text to the terminal containing updates on the migration status:
The rest of this section will guide you through using the pair of tools you sele
1. To create a BSON data dump of your MongoDB instance, open a terminal on the MongoDB instance machine. If it is a Linux machine, type
- `mongodump --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx-dump`
+ ```bash
+ mongodump --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx-dump
+ ```
*HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the properties of your existing MongoDB database instance. You should see that an `edx-dump` directory is produced and that the directory structure of `edx-dump` reproduces the resource hierarchy (database and collection structure) of your source MongoDB instance. Each collection is represented by a BSON file: ![Screenshot of mongodump call.](media/tutorial-mongotools-cosmos-db/mongo-dump-output.png) 1. You can use the same terminal to restore the contents of `edx-dump` into Azure Cosmos DB. If you are running `mongorestore` on a Linux machine, type
- `mongorestore --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection importedQuery --ssl edx-dump/edx/query.bson`
+ ```bash
+ mongorestore --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection importedQuery --writeConcern="{w:0}" --ssl edx-dump/edx/query.bson
+ ```
On Windows, the executable will be `mongorestore.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the Azure Cosmos DB credentials you collected earlier. 1. **Monitor** the terminal output from *mongorestore*. You should see that it prints lines to the terminal updating on the migration status:
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-sink.md
Previously updated : 06/28/2021 Last updated : 05/13/2022
-# Kafka Connect for Azure Cosmos DB - Sink connector
+# Kafka Connect for Azure Cosmos DB - sink connector
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription. ## Prerequisites
-* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you do not wish to use Confluent Platform, then you need to install and configure Zookeeper, Apache Kafka, Kafka Connect, yourself. You will also need to install and configure the Azure Cosmos DB connectors manually.
+* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you don't wish to use Confluent Platform, then you need to install and configure Zookeeper, Apache Kafka, and Kafka Connect yourself. You'll also need to install and configure the Azure Cosmos DB connectors manually.
* Create an Azure Cosmos DB account, container [setup guide](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md) * Bash shell, which is tested on GitHub Codespaces, Mac, Ubuntu, Windows with WSL2. This shell doesn't work in Cloud Shell or WSL1. * Download [Java 11+](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
Kafka Connect for Azure Cosmos DB is a connector to read from and write data to
## Install sink connector
-If you are using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB sink connector is included in the installation, and you can skip this step.
+If you're using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB sink connector is included in the installation, and you can skip this step.
Otherwise, you can download the JAR file from the latest [Release](https://github.com/microsoft/kafka-connect-cosmosdb/releases) or package this repo to create a new JAR file. To install the connector manually using the JAR file, refer to these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually). You can also package a new JAR file from the source code.
ls target/*dependencies.jar
## Create a Kafka topic and write data
-If you are using the Confluent Platform, the easiest way to create a Kafka topic is by using the supplied Control Center UX. Otherwise, you can create a Kafka topic manually using the following syntax:
+If you're using the Confluent Platform, the easiest way to create a Kafka topic is by using the supplied Control Center UX. Otherwise, you can create a Kafka topic manually using the following syntax:
```bash ./kafka-topics.sh --create --zookeeper <ZOOKEEPER_URL:PORT> --replication-factor <NO_OF_REPLICATIONS> --partitions <NO_OF_PARTITIONS> --topic <TOPIC_NAME> ```
-For this scenario, we will create a Kafka topic named ΓÇ£hotelsΓÇ¥ and will write non-schema embedded JSON data to the topic. To create a topic inside Control Center, see the [Confluent guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
+For this scenario, we'll create a Kafka topic named "hotels" and will write non-schema embedded JSON data to the topic. To create a topic inside Control Center, see the [Confluent guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
Next, start the Kafka console producer to write a few records to the "hotels" topic.
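As a rough sketch of those two steps with the command-line tools, assuming the Confluent platform's default local endpoints (`localhost:2181` for Zookeeper, `localhost:9092` for the broker) and a few illustrative hotel records:

```bash
# Create the topic manually (endpoints are assumptions; skip if you created it in Control Center).
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hotels

# Start a console producer, then paste one JSON record per line.
# On newer Kafka releases, use --bootstrap-server localhost:9092 instead of --broker-list.
./kafka-console-producer.sh --broker-list localhost:9092 --topic hotels
# Example records (illustrative values only):
# {"id": "h1", "HotelName": "Marriott", "Description": "Marriott description"}
# {"id": "h2", "HotelName": "HolidayInn", "Description": "HolidayInn description"}
# {"id": "h3", "HotelName": "Motel8", "Description": "Motel8 description"}
```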
The three records entered are published to the "hotels" Kafka topic in JSON ## Create the sink connector
## Create the sink connector
-Create the Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines config for the sink connector. Make sure to replace the values for `connect.cosmos.connection.endpoint` and `connect.cosmos.master.key`, properties that you should have saved from the Azure Cosmos DB setup guide in the prerequisites.
+Create an Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines config for the sink connector. Make sure to replace the values for `connect.cosmos.connection.endpoint` and `connect.cosmos.master.key`, properties that you should have saved from the Azure Cosmos DB setup guide in the prerequisites.
-Refer to the [sink properties](#sink-configuration-properties) section for more information on each of these configuration properties.
+For more information on each of these configuration properties, see [sink properties](#sink-configuration-properties).
```json {
Once you have all the values filled out, save the JSON file somewhere locally. Y
### Create connector using Control Center
-An easy option to create the connector is by going through the Control Center webpage. Follow this [installation guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. Instead of using the `DatagenConnector` option, use the `CosmosDBSinkConnector` tile instead. When configuring the sink connector, fill out the values as you have filled in the JSON file.
+An easy option to create the connector is by going through the Control Center webpage. Follow this [installation guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. Instead of using the `DatagenConnector` option, use the `CosmosDBSinkConnector` tile instead. When configuring the sink connector, fill out the values as you've filled in the JSON file.
Alternatively, in the connectors page, you can upload the JSON file created earlier by using the **Upload connector config file** option. ### Create connector using REST API
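As a sketch of that REST call, assuming Kafka Connect's REST endpoint is listening on its default port `8083` locally and that you saved the connector config as `cosmosdb-sink-config.json` (both names are assumptions):

```bash
# Post the sink connector configuration to the Kafka Connect REST API.
curl -H "Content-Type: application/json" -X POST \
  -d @cosmosdb-sink-config.json \
  http://localhost:8083/connectors
```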
Sign into the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com)
## Cleanup
-To delete the connector from the Control Center, navigate to the sink connector you created and click the **Delete** icon.
+To delete the connector from the Control Center, navigate to the sink connector you created and select the **Delete** icon.
Alternatively, use the Connect REST API to delete:
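For example, a minimal sketch against Kafka Connect's REST API, assuming the default local endpoint and a connector named `cosmosdb-sink-connector` (use whatever name your config defines):

```bash
# Delete the sink connector by name; adjust the endpoint and connector name for your environment.
curl -X DELETE http://localhost:8083/connectors/cosmosdb-sink-connector
```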
To delete the created Azure Cosmos DB service and its resource group using Azure
## <a id="sink-configuration-properties"></a>Sink configuration properties
-The following settings are used to configure the Cosmos DB Kafka sink connector. These configuration values determine which Kafka topics data is consumed, which Azure Cosmos DB containerΓÇÖs data is written into, and formats to serialize the data. For an example configuration file with the default values, refer to [this config]( https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/sink.example.json).
+The following settings are used to configure an Azure Cosmos DB Kafka sink connector. These configuration values determine which Kafka topics the data is consumed from, which Azure Cosmos DB container the data is written to, and the formats used to serialize the data. For an example configuration file with the default values, refer to [this config](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/sink.example.json).
| Name | Type | Description | Required/Optional | | : | : | : | : |
The sink Connector also supports the following AVRO logical types:
## Single Message Transforms(SMT)
-Along with the sink connector settings, you can specify the use of Single Message Transformations (SMTs) to modify messages flowing through the Kafka Connect platform. For more information, refer to the [Confluent SMT Documentation](https://docs.confluent.io/platform/current/connect/transforms/overview.html).
+Along with the sink connector settings, you can specify the use of Single Message Transformations (SMTs) to modify messages flowing through the Kafka Connect platform. For more information, see [Confluent SMT Documentation](https://docs.confluent.io/platform/current/connect/transforms/overview.html).
### Using the InsertUUID SMT
-You can use InsertUUID SMT to automatically add item IDs. With the custom `InsertUUID` SMT, you can insert the `id` field with a random UUID value for each message, before it is written to Azure Cosmos DB.
+You can use InsertUUID SMT to automatically add item IDs. With the custom `InsertUUID` SMT, you can insert the `id` field with a random UUID value for each message, before it's written to Azure Cosmos DB.
> [!WARNING] > Use this SMT only if the messages don't contain the `id` field. Otherwise, the `id` values will be overwritten and you may end up with duplicate items in your database. Using UUIDs as the message ID can be quick and easy but are [not an ideal partition key](https://stackoverflow.com/questions/49031461/would-using-a-substring-of-a-guid-in-cosmosdb-as-partitionkey-be-a-bad-idea) to use in Azure Cosmos DB. ### Install the SMT
-Before you can use the `InsertUUID` SMT, you will need to install this transform in your Confluent Platform setup. If you are using the Confluent Platform setup from this repo, the transform is already included in the installation, and you can skip this step.
+Before you can use the `InsertUUID` SMT, you'll need to install this transform in your Confluent Platform setup. If you're using the Confluent Platform setup from this repo, the transform is already included in the installation, and you can skip this step.
Alternatively, you can package the [InsertUUID source](https://github.com/confluentinc/kafka-connect-insert-uuid) to create a new JAR file. To install the connector manually using the JAR file, refer to these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually).
Here are solutions to some common problems that you may encounter when working w
### Read non-JSON data with JsonConverter
-If you have non-JSON data on your source topic in Kafka and attempt to read it using the `JsonConverter`, you will see the following exception:
+If you have non-JSON data on your source topic in Kafka and attempt to read it using the `JsonConverter`, you'll see the following exception:
```console org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
This error is likely caused by data in the source topic being serialized in eith
### Read non-Avro data with AvroConverter
-This scenario is applicable when you try to use the Avro converter to read data from a topic that is not in Avro format. Which, includes data written by an Avro serializer other than the Confluent Schema RegistryΓÇÖs Avro serializer, which has its own wire format.
+This scenario is applicable when you try to use the Avro converter to read data from a topic that isn't in Avro format. This includes data written by an Avro serializer other than the Confluent Schema Registry's Avro serializer, which has its own wire format.
```console org.apache.kafka.connect.errors.DataException: my-topic-name
Kafka Connect supports a special structure of JSON messages containing both payl
} ```
-If you try to read JSON data that does not contain the data in this structure, you will get the following error:
+If you try to read JSON data that doesn't contain the data in this structure, you'll get the following error:
```none org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
To be clear, the only JSON structure that is valid for `schemas.enable=true` has
## Limitations
-* Autocreation of databases and containers in Azure Cosmos DB are not supported. The database and containers must already exist, and they must be configured correctly.
+* Autocreation of databases and containers in Azure Cosmos DB isn't supported. The database and containers must already exist, and they must be configured correctly.
## Next steps
cosmos-db Kafka Connector Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-source.md
Previously updated : 06/28/2021 Last updated : 05/13/2022
-# Kafka Connect for Azure Cosmos DB - Source connector
+# Kafka Connect for Azure Cosmos DB - source connector
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB source connector provides the capability to read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic. ## Prerequisites
-* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you do not wish to use Confluent Platform, then you need to install and configure Zookeeper, Apache Kafka, Kafka Connect, yourself. You will also need to install and configure the Azure Cosmos DB connectors manually.
+* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you don't wish to use Confluent Platform, then you need to install and configure Zookeeper, Apache Kafka, and Kafka Connect yourself. You'll also need to install and configure the Azure Cosmos DB connectors manually.
* Create an Azure Cosmos DB account, container [setup guide](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md) * Bash shell, which is tested on GitHub Codespaces, Mac, Ubuntu, Windows with WSL2. This shell doesn't work in Cloud Shell or WSL1. * Download [Java 11+](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
Kafka Connect for Azure Cosmos DB is a connector to read from and write data to
## Install the source connector
-If you are using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB source connector is included in the installation, and you can skip this step.
+If you're using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB source connector is included in the installation, and you can skip this step.
Otherwise, you can use JAR file from latest [Release](https://github.com/microsoft/kafka-connect-cosmosdb/releases) and install the connector manually. To learn more, see these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually). You can also package a new JAR file from the source code:
ls target/*dependencies.jar
## Create a Kafka topic
-Create a Kafka topic using Confluent Control Center. For this scenario, we will create a Kafka topic named "apparels" and write non-schema embedded JSON data to the topic. To create a topic inside the Control Center, see [create Kafka topic doc](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
+Create a Kafka topic using Confluent Control Center. For this scenario, we'll create a Kafka topic named "apparels" and write non-schema embedded JSON data to the topic. To create a topic inside the Control Center, see [create Kafka topic doc](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
## Create the source connector
For more information on each of the above configuration properties, see the [sou
#### Create connector using Control Center
-An easy option to create the connector is from the Confluent Control Center portal. Follow the [Confluent setup guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. When setting up, instead of using the `DatagenConnector` option, use the `CosmosDBSourceConnector` tile instead. When configuring the source connector, fill out the values as you have filled in the JSON file.
+An easy option to create the connector is from the Confluent Control Center portal. Follow the [Confluent setup guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. When setting up, instead of using the `DatagenConnector` option, use the `CosmosDBSourceConnector` tile instead. When configuring the source connector, fill out the values as you've filled in the JSON file.
Alternatively, in the connectors page, you can upload the JSON file built from the previous section by using the **Upload connector config file** option. #### Create connector using REST API
curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file>
To delete the connector from the Confluent Control Center, navigate to the source connector you created and select the **Delete** icon. Alternatively, use the connector's REST API:
cosmos-db Powerbi Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powerbi-visualize.md
To build a Power BI report/dashboard:
1. From the **Select workspace** tab, choose the Azure Synapse Analytics workspace and select **Next**. This step will automatically create T-SQL views in Synapse Analytics, for the containers selected earlier. For more information on T-SQL views required to connect your Cosmos DB to Power BI, see [Prepare views](../../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#3prepare-view) article. > [!NOTE]
- > Your Cosmos DB container proprieties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, etc. Please note that advanced scenarios may demand more complex views or fine tuning of these views, for better performance. For more information. see [best practices for Synapse Link when using Synapse serverless SQL pools](../../synapse-analytics/sql/resources-self-help-sql-on-demand.md#cosmos-db-performance-issues) article.
+ > Your Cosmos DB container properties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, etc. Please note that advanced scenarios may demand more complex views or fine-tuning of these views for better performance. For more information, see the [best practices for Synapse Link when using Synapse serverless SQL pools](../../synapse-analytics/sql/resources-self-help-sql-on-demand.md#azure-cosmos-db-performance-issues) article.
1. You can either choose an existing workspace or create a new one. To select an existing workspace, provide the **Subscription**, **Workspace**, and the **Database** details. Azure portal will use your Azure AD credentials to automatically connect to your Synapse workspace and create T-SQL views. Make sure you have "Synapse administrator" permissions to this workspace.
cost-management-billing Cost Management Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-management-error-codes.md
When using the Query or Forecast APIs to retrieve cost data, validate the query
When using portal experiences and you see the `object ID cannot be null` error, try refreshing your view.
+When using Power BI to pull reservation usage data for more than 3 months, you will need to break down the call into 3-month chunks.
+ Also, see [SubscriptionTypeNotSupported](#SubscriptionTypeNotSupported). ### More information
For more information about the Query - Usage API body examples, see [Query - Usa
For more information about the Forecast - Usage API body examples, see [Forecast - Usage](/rest/api/cost-management/forecast/usage).
+For more information about chunking reservation usage calls in Power BI, see [Power BI considerations and limitations](/power-bi/connect-data/desktop-connect-azure-cost-management#considerations-and-limitations).
+ ## BillingAccessDenied Error message `BillingAccessDenied`.
If you're facing an error not listed above or need more help, file a [support re
## Next steps -- Read the [Cost Management + Billing frequently asked questions (FAQ)](../cost-management-billing-faq.yml).
+- Read the [Cost Management + Billing frequently asked questions (FAQ)](../cost-management-billing-faq.yml).
databox Data Box Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-overview.md
Previously updated : 04/06/2022 Last updated : 05/06/2022 + #Customer intent: As an IT admin, I need to understand what Data Box is and how it works so I can use it to import on-premises data into Azure or export data from Azure. # What is Azure Data Box?
In the extreme event of any Azure region being affected by a disaster, the Data
For regions paired with a region within the same country or commerce boundary, no action is required. Microsoft is responsible for recovery, which could take up to 72 hours.
-For regions that donΓÇÖt have a paired region within the same geographic or commerce boundary, the customer will be notified to create a new Data Box order from a different, available region and copy their data to Azure in the new region. New orders would be required for the Brazil South and Southeast Asia regions.
+For regions that don't have a paired region within the same geographic or commerce boundary, the customer will be notified to create a new Data Box order from a different, available region and copy their data to Azure in the new region. New orders would be required for the Brazil South, Southeast Asia, and East Asia regions.
For more information, see [Business continuity and disaster recovery (BCDR): Azure Paired Regions](../best-practices-availability-paired-regions.md).
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
At the bottom of this page, there's a table describing the Microsoft Defender fo
## <a name="alerts-windows"></a>Alerts for Windows machines
+Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in addition to the ones provided by Microsoft Defender for Endpoint. The alerts provided for Windows machines are:
+ [Further details and notes](defender-for-servers-introduction.md) | Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
At the bottom of this page, there's a table describing the Microsoft Defender fo
## <a name="alerts-linux"></a>Alerts for Linux machines
+Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in addition to the ones provided by Microsoft Defender for Endpoint. The alerts provided for Linux machines are:
+ [Further details and notes](defender-for-servers-introduction.md) |Alert (alert type)|Description|MITRE tactics<br>([Learn more](#intentions))|Severity|
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/12/2022 Last updated : 05/16/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in May include: -- [General availability (GA) of Defender for SQL for AWS and GCP environments](#general-availability-ga-of-defender-for-sql-for-aws-and-gcp-environments) - [Multi-cloud settings of Servers plan are now available in connector level](#multi-cloud-settings-of-servers-plan-are-now-available-in-connector-level)
-### General availability (GA) of Defender for SQL for AWS and GCP environments
-
-The database protection capabilities provided by Microsoft Defender for Cloud now include support for your SQL databases hosted in AWS and GCP environments.
-
-Using Defender for SQL, enterprises can now protect their data, whether hosted in Azure, AWS, GCP, or on-premises machines.
-
-Microsoft Defender for SQL now provides a unified cross-environment experience to view security recommendations, security alerts and vulnerability assessment findings encompassing SQL servers and the underlying Windows OS.
-
-Using the multi-cloud onboarding experience, you can enable and enforce databases protection for VMs in AWS and GCP. After enabling multi-cloud protection, all supported resources covered by your subscription are protected. Future resources created within the same subscription will also be protected.
-
-Learn how to protect and connect your [AWS accounts](quickstart-onboard-aws.md) and your [GCP projects](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
- ### Multi-cloud settings of Servers plan are now available in connector level There are now connector-level settings for Defender for Servers in multi-cloud.
defender-for-iot Dell Edge 5200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-5200.md
This article describes the Dell Edge 5200 appliance for OT sensors.
| Appliance characteristic |Details | ||| |**Hardware profile** | SMB|
-|**Performance** | Max bandwidth: 60 Mbp/s<br>Max devices: 1,000 |
+|**Performance** | Max bandwidth: 60 Mbps<br>Max devices: 1,000 |
|**Physical specifications** | Mounting: Wall Mount<br>Ports: 3x RJ45 |
-|**Status** | Supported, Not available pre-configured|
+|**Status** | Supported, Not available preconfigured|
## Specifications
-|Component | Technical specifications|
+|Component |Technical specifications|
|:-|:-| |Chassis| Desktop / Wall mount server Rugged MIL-STD-810G| |Dimensions| 211 mm (W) x 240 mm (D) x 86 mm (H)|
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
Legacy appliances are certified but aren't currently offered as pre-configured a
|Appliance characteristic | Description| ||| |**Hardware profile** | Enterprise|
-|**Performance** | Max bandwidth: 1 Gbp/s<br>Max devices: 10,000 |
+|**Performance** | Max bandwidth: 1 Gbps<br>Max devices: 10,000 |
|**Physical Specifications** | Mounting: 1U<br>Ports: 8x RJ45 or 6x SFP (OPT)| |**Status** | Supported, not available as a preconfigured appliance|
In this image, numbers refer to the following components:
## Specifications
-|Component| Technical specifications|
+|Component| Technical specifications|
|:-|:-|
-|Chassis| 1U rack server|
-|Dimensions| 42.8 x 434.0 x 596 (mm) /1.67ΓÇ¥ x 17.09ΓÇ¥ x 23.5ΓÇ¥ (in)|
-|Weight| Max 29.98 lb/13.6 Kg|
-|Processor| Intel Xeon E-2144G 3.6 GHz <br>8M cache <br> 4C/8T <br> turbo (71 W|
+|Chassis| 1U rack server|
+|Dimensions| 42.8 x 434.0 x 596 (mm) / 1.67" x 17.09" x 23.5" (in)|
+|Weight| Max 29.98 lb/13.6 Kg|
+|Processor| Intel Xeon E-2144G 3.6 GHz <br>8M cache <br> 4C/8T <br> turbo (71 W)|
|Chipset|Intel C246|
-|Memory |32 GB = Two 16 GB 2666MT/s DDR4 ECC UDIMM|
+|Memory|32 GB = Two 16 GB 2666MT/s DDR4 ECC UDIMM|
|Storage| Three 2 TB 7.2 K RPM SATA 6 Gbps 512n 3.5in Hot-plug Hard Drive - RAID 5| |Network controller|On-board: Two 1 Gb Broadcom BCM5720 <br>On-board LOM: iDRAC Port Card 1 Gb Broadcom BCM5720 <br>External: One Intel Ethernet i350 QP 1 Gb Server Adapter Low Profile| |Management|iDRAC9 Enterprise|
-|Device access| Two rear USB 3.0|
+|Device access| Two rear USB 3.0|
|One front| USB 3.0|
-|Power| Dual Hot Plug Power Supplies 350 W|
-|Rack support| ReadyRailsΓäó II sliding rails for tool-less mounting in 4-post racks with square or unthreaded round holes or tooled mounting in 4-post threaded hole racks, with support for optional tool-less cable management arm.|
+|Power| Dual Hot Plug Power Supplies 350 W|
+|Rack support| ReadyRails™ II sliding rails for tool-less mounting in four-post racks with square or unthreaded round holes. Or tooled mounting in four-post threaded hole racks with support for optional tool-less cable management arm.|
## Dell PowerEdgeR340XL installation
When the connection is established, the BIOS is configurable.
This procedure describes how to update the Dell PowerEdge R340 XL configuration for your OT deployment.
-Configure the appliance BIOS only if you did not purchase your appliance from Arrow, or if you have an appliance, but do not have access to the XML configuration file.
+Configure the appliance BIOS only if you didn't purchase your appliance from Arrow, or if you have an appliance, but don't have access to the XML configuration file.
1. Access the appliance's BIOS directly by using a keyboard and screen, or use iDRAC.
- - If the appliance is not a Defender for IoT appliance, open a browser and go to the IP address that was configured before. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
+ - If the appliance isn't a Defender for IoT appliance, open a browser and go to the IP address that was configured before. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
- If the appliance is a Defender for IoT appliance, sign in by using **XXX** for the username and **XXX** for the password.
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
Legacy appliances are certified but aren't currently offered as pre-configured a
| Appliance characteristic |Details | ||| |**Hardware profile** | SMB|
-|**Performance** | Max bandwidth: 100 Mbp/s<br>Max devices: 800 |
+|**Performance** |Max bandwidth: 100 Mbps<br>Max devices: 800 |
|**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45| |**Status** | Supported, Not available pre-configured|
A default administrative user is provided. We recommend that you change the pass
:::image type="content" source="../media/tutorial-install-components/wired-and-wireless.png" alt-text="Screenshot of the Wired and Wireless Network screen.":::
-1. Toggle off the **DHCP** option..
+1. Toggle off the **DHCP** option.
1. Configure the IPv4 addresses as such: - **IPV4 Address**: `192.168.1.125`
devtest-labs How To Move Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-labs.md
In this article, you moved a DevTest lab from one region to another and cleaned
- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md) - [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
+- [Move Microsoft.DevtestLab/schedules to another region](./how-to-move-schedule-to-new-region.md)
devtest-labs How To Move Schedule To New Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-schedule-to-new-region.md
+
+ Title: How to move a schedule to another region
+description: This article explains how to move schedules to another Azure region.
+++ Last updated : 05/09/2022+
+# Move schedules to another region
+
+In this article, you'll learn how to move schedules by using an Azure Resource Manager (ARM) template.
+
+DevTest Labs supports two types of schedules.
+
+- Schedules apply only to compute virtual machines (VMs). They're stored as microsoft.devtestlab/schedules resources and are often referred to as top-level schedules, or simply schedules.
+
+- Lab schedules apply only to DevTest Labs (DTL) VMs. They're stored as microsoft.devtestlab/labs/schedules resources. This type of schedule isn't covered in this article.
+
+In this article, you'll learn how to:
+> [!div class="checklist"]
+>
+> - Export an ARM template that contains your schedules.
+> - Modify the template by adding or updating the target region and other parameters.
+> - Deploy the template to create the schedules in the target region.
+> - Delete the resources in the source region.
+
+## Prerequisites
+
+- Ensure that the services and features that your account uses are supported in the target region.
+- For preview features, ensure that your subscription is allowlisted for the target region.
+- Ensure a Compute VM exists in the target region.
+
+## Move an existing schedule
+There are two ways to move a schedule:
+
+ - Manually recreate the schedules on the moved VMs. This process can be time consuming and error prone. This approach is most useful when you have a few schedules and VMs.
+ - Export and redeploy the schedules by using ARM templates.
+
+Use the following steps to export and redeploy your schedule in another Azure region by using an ARM template:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Go to the source resource group that held your VMs.
+
+3. On the **Resource Group Overview** page, under **Resources**, select **Show hidden types**.
+
+4. Select all resources with the type **microsoft.devtestlab/schedules**.
+
+5. Select **Export template**.
+
+ :::image type="content" source="./media/how-to-move-schedule-to-new-region/move-compute-schedule.png" alt-text="Screenshot that shows the hidden resources in a resource group, with schedules selected.":::
+
+6. On the **Export resource group template** page, select **Deploy**.
+
+7. On the **Custom deployment** page, select **Edit template**.
+
+8. In the template code, change all instances of `"location": "<old location>"` to `"location": "<new location>"` and then select **Save**.
+
+9. On the **Custom deployment** page, enter values that match the target VM:
+
+ |Name|Value|
+ |-|-|
+ |**Subscription**|Select an Azure subscription.|
+ |**Resource group**|Select the resource group name. |
+ |**Region**|Select a location for the schedule. For example, **Central US**. |
+ |**Schedule Name**|Must be a globally unique name. |
+ |**VirtualMachine_xxx_externalId**|Must be the target VM. |
+
+ :::image type="content" source="./media/how-to-move-schedule-to-new-region/move-schedule-custom-deployment.png" alt-text="Screenshot that shows the custom deployment page, with new location values for the relevant settings.":::
+
+ >[!IMPORTANT]
+ >Each schedule must have a globally unique name; you will need to change the schedule name for the new location.
+
+10. Select **Review and create** to create the deployment.
+
+11. When the deployment is complete, verify that the new schedule is configured correctly on the new VM.
+
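If you prefer the Azure CLI to the portal's custom deployment experience, a rough equivalent of steps 6 through 10 is to save the edited template locally and deploy it into the target resource group. The file name and placeholder below are assumptions, not values from this article:

```azurecli
# Deploy the edited template (saved locally as template.json, an assumed file name)
# into the resource group in the target region.
az deployment group create \
  --resource-group <target-resource-group> \
  --template-file template.json
```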
+## Discard or clean up
+
+Now you can choose to clean up the original schedules if they're no longer used. Go to the original schedule resource group (where you exported templates from in step 5 above) and delete the schedule resource.
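You can delete the schedule in the portal, or with a CLI call along these lines; the resource group and schedule names are placeholders to replace with your own:

```azurecli
# Delete a single schedule resource from the source resource group (placeholder names).
az resource delete \
  --resource-group <source-resource-group> \
  --resource-type "Microsoft.DevTestLab/schedules" \
  --name <schedule-name>
```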
+
+## Next steps
+
+In this article, you moved a schedule from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+- [Move DevTest Labs to another region](./how-to-move-labs.md).
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md).
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
+
+ Title: Quickstart - Create an Azure private DNS resolver using the Azure portal
+description: In this quickstart, you create and test a private DNS resolver in Azure DNS. This article is a step-by-step guide to create and manage your first private DNS resolver using the Azure portal.
+++ Last updated : 05/11/2022+++
+#Customer intent: As an experienced network administrator, I want to create an Azure private DNS resolver, so I can resolve host names on my private virtual networks.
++
+# Quickstart: Create an Azure DNS Private Resolver using the Azure portal
+
+This quickstart walks you through the steps to create an Azure DNS Private Resolver (Public Preview) using the Azure portal. If you prefer, you can complete this quickstart using [Azure PowerShell](dns-private-resolver-get-started-powershell.md).
+
+Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM-based DNS servers. You no longer need to provision IaaS-based solutions on your virtual networks to resolve names registered in Azure private DNS zones. You can configure conditional forwarding of domains back to on-premises, multicloud, and public DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+
+## Prerequisites
+
+An Azure subscription is required.
+- If you don't already have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Register the Microsoft.Network provider namespace
+
+Before you can use **Microsoft.Network** services with your Azure subscription, you must register the **Microsoft.Network** namespace:
+
+1. In the Azure portal, search for and select **Subscriptions**, and then select your subscription.
+2. Under **Settings**, select **Resource providers**.
+3. Select **Microsoft.Network**, and then select **Register**.
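+
+If you prefer, you can also register the provider namespace by using Azure PowerShell instead of the portal. For example:
+
+```azurepowershell
+# Register the Microsoft.Network resource provider for the current subscription
+Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+```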
+
+## Create a resource group
+
+First, create or choose an existing resource group to host the resources for your DNS resolver. The resource group must be in a [supported region](dns-private-resolver-overview.md#regional-availability). In this example, the location is **West Central US**. To create a new resource group:
+
+1. Select [Create a resource group](https://ms.portal.azure.com/#create/Microsoft.ResourceGroup).
+2. Select your subscription name, enter a name for the resource group, and choose a supported region.
+3. Select **Review + create**, and then select **Create**.
+
+ ![create resource group](./media/dns-resolver-getstarted-portal/resource-group.png)
+
+## Create a virtual network
+
+Next, add a virtual network to the resource group that you created, and configure subnets.
+
+1. Select the resource group you created, select **Create**, select **Networking** from the list of categories, and then next to **Virtual network**, select **Create**.
+2. On the **Basics** tab, enter a name for the new virtual network and select the **Region** that is the same as your resource group.
+3. On the **IP Addresses** tab, modify the **IPv4 address space** to be 10.0.0.0/8.
+4. Select **Add subnet** and enter the subnet name and address range:
+ - Subnet name: snet-inbound
+ - Subnet address range: 10.0.0.0/28
+ - Select **Add** to add the new subnet.
+5. Select **Add subnet** and configure the outbound endpoint subnet:
+ - Subnet name: snet-outbound
+ - Subnet address range: 10.1.1.0/28
+ - Select **Add** to add this subnet.
+6. Select **Review + create** and then select **Create**.
+
+ ![create virtual network](./media/dns-resolver-getstarted-portal/virtual-network.png)
+
+## Create a DNS resolver inside the virtual network
+
+1. To display the **DNS Private Resolvers** resource during public preview, open the following [preview-enabled Azure portal link](https://go.microsoft.com/fwlink/?linkid=2194569).
+2. Search for and select **DNS Private Resolvers**, select **Create**, and then on the **Basics** tab for **Create a DNS Private Resolver** enter the following:
+ - Subscription: Choose the subscription name you're using.
+ - Resource group: Choose the name of the resource group that you created.
+ - Name: Enter a name for your DNS resolver (ex: mydnsresolver).
+ - Region: Choose the region you used for the virtual network.
+ - Virtual Network: Select the virtual network that you created.
+
+ Don't create the DNS resolver yet.
+
+ ![create resolver - basics](./media/dns-resolver-getstarted-portal/dns-resolver.png)
+
+3. Select the **Inbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myinboundendpoint).
+4. Next to **Subnet**, select the inbound endpoint subnet you created (ex: snet-inbound, 10.0.0.0/28) and then select **Save**.
+5. Select the **Outbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myoutboundendpoint).
+6. Next to **Subnet**, select the outbound endpoint subnet you created (ex: snet-outbound, 10.1.1.0/28) and then select **Save**.
+7. Select the **Ruleset** tab, select **Add a ruleset**, and enter the following:
+ - Ruleset name: Enter a name for your ruleset (ex: myruleset).
+ - Endpoints: Select the outbound endpoint that you created (ex: myoutboundendpoint).
+8. Under **Rules**, select **Add** and enter your conditional DNS forwarding rules. For example:
+ - Rule name: Enter a rule name (ex: contosocom).
+ - Domain Name: Enter a domain name with a trailing dot (ex: contoso.com.).
+ - Rule State: Choose **Enabled** or **Disabled**. The default is enabled.
+ - Select **Add a destination** and enter a desired destination IPv4 address (ex: 11.0.1.4).
+ - If desired, select **Add a destination** again to add another destination IPv4 address (ex: 11.0.1.5).
+ - When you're finished adding destination IP addresses, select **Add**.
+9. Select **Review and Create**, and then select **Create**.
+
+ ![create resolver - ruleset](./media/dns-resolver-getstarted-portal/resolver-ruleset.png)
+
+ This example has only one conditional forwarding rule, but you can create many. Edit the rules to enable or disable them as needed.
+
+ ![create resolver - review](./media/dns-resolver-getstarted-portal/resolver-review.png)
+
+ After selecting **Create**, the new DNS resolver will begin deployment. This process might take a minute or two, and you'll see the status of each component as it is deployed.
+
+ ![create resolver - status](./media/dns-resolver-getstarted-portal/resolver-status.png)
+
+## Create a second virtual network
+
+Create a second virtual network to simulate an on-premises or other environment:
+
+1. Select **Virtual Networks** from the **Azure services** list, or search for **Virtual Networks** and then select **Virtual Networks**.
+2. Select **Create**, and then on the **Basics** tab select your subscription and choose the same resource group that you have been using in this guide (ex: myresourcegroup).
+3. Next to **Name**, enter a name for the new virtual network (ex: myvnet2).
+4. Verify that the **Region** selected is the same region used previously in this guide (ex: West Central US).
+5. Select the **IP Addresses** tab and edit the default IP address space. Replace the address space with a simulated on-premises address space (ex: 12.0.0.0/8).
+6. Select **Add subnet** and enter the following:
+ - Subnet name: backendsubnet
+ - Subnet address range: 12.2.0.0/24
+7. Select **Add**, select **Review + create**, and then select **Create**.
+
+ ![second vnet review](./media/dns-resolver-getstarted-portal/vnet-review.png)
+
+ ![second vnet create](./media/dns-resolver-getstarted-portal/vnet-create.png)
+
+## Test the private resolver
+
+You should now be able to send DNS traffic to your DNS resolver and resolve records based on your forwarding rulesets, including:
+- Azure DNS private zones linked to the virtual network where the resolver is deployed.
+- DNS zones in the public internet DNS namespace.
+- Private DNS zones that are hosted on-premises.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [What is Azure DNS Private Resolver?](dns-private-resolver-overview.md)
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
+
+ Title: Quickstart - Create an Azure DNS Private Resolver using Azure PowerShell
+description: In this quickstart, you learn how to create and manage your first private DNS resolver using Azure PowerShell.
+++ Last updated : 05/10/2022+++
+#Customer intent: As an experienced network administrator, I want to create an Azure private DNS resolver, so I can resolve host names on my private virtual networks.
++
+# Quickstart: Create an Azure DNS Private Resolver using Azure PowerShell
+
+This quickstart walks you through the steps to create an Azure DNS Private Resolver (Public Preview) using Azure PowerShell. If you prefer, you can complete this quickstart using the [Azure portal](dns-private-resolver-get-started-portal.md).
++
+Azure DNS Private Resolver is a new service that's currently in public preview. It enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM-based DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+This article assumes you've [installed the Az Azure PowerShell module](/powershell/azure/install-az-ps).
++
+## Install the Az.DnsResolver PowerShell module
+
+> [!NOTE]
+> If you previously installed the Az.DnsResolver module for evaluation during private preview, you can [unregister](/powershell/module/powershellget/unregister-psrepository) and delete the local PSRepository that was created. Then, install the latest version of the Az.DnsResolver module using the steps provided in this article.
+
+Install the Az.DnsResolver module.
+
+```Azure PowerShell
+Install-Module Az.DnsResolver
+```
+
+Confirm that the Az.DnsResolver module was installed. The current version of this module is 0.2.0.
+
+```Azure PowerShell
+Get-InstalledModule -Name Az.DnsResolver
+```
+
+## Set subscription context in Azure PowerShell
+
+Connect PowerShell to Azure cloud.
+
+```Azure PowerShell
+Connect-AzAccount -Environment AzureCloud
+```
+
+If multiple subscriptions are present, the first subscription ID will be used. To specify a different subscription ID, use the following command.
+
+```Azure PowerShell
+Select-AzSubscription -SubscriptionObject (Get-AzSubscription -SubscriptionId <your-sub-id>)
+```
+
+## Register the Microsoft.Network provider namespace
+
+Before you can use Microsoft.Network services with your Azure subscription, you must register the Microsoft.Network namespace. Use the following command to register it:
+
+```Azure PowerShell
+Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+```
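+
+Registration can take a few minutes. Optionally, you can check the registration state with the following command:
+
+```azurepowershell
+# Show the registration state of the Microsoft.Network resource provider
+Get-AzResourceProvider -ProviderNamespace Microsoft.Network | Select-Object ProviderNamespace, RegistrationState -Unique
+```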
+
+## Create a DNS resolver instance
+
+Create a resource group to host the resources. The resource group must be in a [supported region](dns-private-resolver-overview.md#regional-availability). In this example, the location is westcentralus.
+
+```Azure PowerShell
+New-AzResourceGroup -Name myresourcegroup -Location westcentralus
+```
+
+Create a virtual network in the resource group that you created.
+
+```Azure PowerShell
+New-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "10.0.0.0/8"
+```
+
+Create a DNS resolver in the virtual network that you created.
+
+```Azure PowerShell
+New-AzDnsResolver -Name mydnsresolver -ResourceGroupName myresourcegroup -Location westcentralus -VirtualNetworkId "/subscriptions/<your subs id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet"
+```
+
+Optionally, verify that the DNS resolver was created successfully and that its state is connected.
+
+```Azure PowerShell
+$dnsResolver = Get-AzDnsResolver -Name mydnsresolver -ResourceGroupName myresourcegroup
+$dnsResolver.ToJsonString()
+```
+## Create a DNS resolver inbound endpoint
+
+### Create a subnet in the virtual network
+
+Create a subnet in the virtual network (Microsoft.Network/virtualNetworks/subnets) from the IP address space that you previously assigned. The subnet needs to be at least /28 in size (16 IP addresses).
+
+```Azure PowerShell
+$virtualNetwork = Get-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup
+Add-AzVirtualNetworkSubnetConfig -Name snet-inbound -VirtualNetwork $virtualNetwork -AddressPrefix "10.0.0.0/28"
+$virtualNetwork | Set-AzVirtualNetwork
+```
+
+### Create the inbound endpoint
+
+Create an inbound endpoint to enable name resolution from on-premises or another private location by using an IP address that is part of your private virtual network address space.
+
+```Azure PowerShell
+$ipconfig = New-AzDnsResolverIPConfigurationObject -PrivateIPAllocationMethod Dynamic -SubnetId /subscriptions/<your sub id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/snet-inbound
+New-AzDnsResolverInboundEndpoint -DnsResolverName mydnsresolver -Name myinboundendpoint -ResourceGroupName myresourcegroup -Location westcentralus -IpConfiguration $ipconfig
+```
+
+### Confirm your inbound endpoint
+
+Confirm that the inbound endpoint was created and allocated an IP address within the assigned subnet.
+
+```Azure PowerShell
+$inboundEndpoint = Get-AzDnsResolverInboundEndpoint -Name myinboundendpoint -DnsResolverName mydnsresolver -ResourceGroupName myresourcegroup
+$inboundEndpoint.ToJsonString()
+```
+
+## Create a DNS resolver outbound endpoint
+
+### Create a subnet in the virtual network
+
+Create a subnet in the virtual network (Microsoft.Network/virtualNetworks/subnets) from the IP address space that you previously assigned, different from your inbound subnet (snet-inbound). The outbound subnet also needs to be at least /28 in size (16 IP addresses).
+
+```Azure PowerShell
+$virtualNetwork = Get-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup
+Add-AzVirtualNetworkSubnetConfig -Name snet-outbound -VirtualNetwork $virtualNetwork -AddressPrefix "10.1.1.0/28"
+$virtualNetwork | Set-AzVirtualNetwork
+```
+
+### Create the outbound endpoint
+
+An outbound endpoint enables conditional forwarding name resolution from Azure to external DNS servers.
+
+```Azure PowerShell
+New-AzDnsResolverOutboundEndpoint -DnsResolverName mydnsresolver -Name myoutboundendpoint -ResourceGroupName myresourcegroup -Location westcentralus -SubnetId /subscriptions/<your sub id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/snet-outbound
+```
+
+### Confirm your outbound endpoint
+
+Confirm that the outbound endpoint was created and allocated an IP address within the assigned subnet.
+
+```Azure PowerShell
+$outboundEndpoint = Get-AzDnsResolverOutboundEndpoint -Name myoutboundendpoint -DnsResolverName mydnsresolver -ResourceGroupName myresourcegroup
+$outboundEndpoint.ToJsonString()
+```
+
+## Create DNS resolver forwarding ruleset
+
+Create a DNS forwarding ruleset for the outbound endpoint that you created.
+
+```Azure PowerShell
+New-AzDnsForwardingRuleset -Name myruleset -ResourceGroupName myresourcegroup -DnsResolverOutboundEndpoint $outboundEndpoint -Location westcentralus
+```
+
+### Confirm your DNS forwarding ruleset
+
+Confirm the forwarding ruleset was created.
+
+```Azure PowerShell
+$dnsForwardingRuleset = Get-AzDnsForwardingRuleset -Name myruleset -ResourceGroupName myresourcegroup
+$dnsForwardingRuleset.ToJsonString()
+```
+
+## Create a virtual network link to a DNS forwarding ruleset
+
+Virtual network links enable name resolution for virtual networks that are linked to an outbound endpoint with a DNS forwarding ruleset.
+
+```Azure PowerShell
+$vnet = Get-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup
+$vnetlink = New-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName $dnsForwardingRuleset.Name -ResourceGroupName myresourcegroup -VirtualNetworkLinkName "vnetlink" -VirtualNetworkId $vnet.Id -SubscriptionId <your sub id>
+```
+
+### Confirm the virtual network link
+
+Confirm the virtual network link was created.
+
+```Azure PowerShell
+$virtualNetworkLink = Get-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName $dnsForwardingRuleset.Name -ResourceGroupName myresourcegroup
+$virtualNetworkLink.ToJsonString()
+```
+
+## Create a second virtual network and link it to your DNS forwarding ruleset
+
+Create a second virtual network to simulate an on-premises or other environment.
+
+```Azure PowerShell
+$vnet2 = New-AzVirtualNetwork -Name myvnet2 -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "12.0.0.0/8"
+$vnetlink2 = New-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName $dnsForwardingRuleset.Name -ResourceGroupName myresourcegroup -VirtualNetworkLinkName "vnetlink2" -VirtualNetworkId $vnet2.Id -SubscriptionId <your sub id>
+```
+
+### Confirm the second virtual network link
+
+Confirm that the second virtual network link was created.
+
+```Azure PowerShell
+$virtualNetworkLink2 = Get-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName $dnsForwardingRuleset.Name -ResourceGroupName myresourcegroup
+$virtualNetworkLink2.ToJsonString()
+```
+
+## Create a forwarding rule
+
+Create a forwarding rule in the ruleset that forwards queries for a domain to one or more target DNS servers. You must specify the fully qualified domain name (FQDN) with a trailing dot. The **New-AzDnsResolverTargetDnsServerObject** cmdlet sets the default port to 53, but you can also specify a different port.
+
+```Azure PowerShell
+$targetDNS1 = New-AzDnsResolverTargetDnsServerObject -IPAddress 11.0.1.4 -Port 53
+$targetDNS2 = New-AzDnsResolverTargetDnsServerObject -IPAddress 11.0.1.5 -Port 53
+$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "contosocom" -DomainName "contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS1,$targetDNS2)
+```
+
+## Test the private resolver
+
+You should now be able to send DNS traffic to your DNS resolver and resolve records based on your forwarding rulesets, including:
+- Azure DNS private zones linked to the virtual network where the resolver is deployed.
+- DNS zones in the public internet DNS namespace.
+- Private DNS zones that are hosted on-premises.
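+
+For example, from an on-premises Windows client that can reach the inbound endpoint over your private connection, you might verify resolution by querying the inbound endpoint directly. The record name and endpoint IP address below are illustrative; replace them with a name from your private zone and your inbound endpoint's IP address:
+
+```azurepowershell
+# Query a private DNS record through the resolver's inbound endpoint
+Resolve-DnsName -Name myvm.private.contoso.com -Server 10.0.0.4
+```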
+
+## Delete a DNS resolver
+
+To delete the DNS resolver, you must first delete the inbound endpoints created within it. After the inbound endpoints are removed, the parent DNS resolver can be deleted.
+
+### Delete the inbound endpoint
+
+```Azure PowerShell
+Remove-AzDnsResolverInboundEndpoint -Name myinboundendpoint -DnsResolverName mydnsresolver -ResourceGroupName myresourcegroup
+```
+
+### Delete the virtual network link
+
+```Azure PowerShell
+Remove-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName $dnsForwardingRuleset.Name -Name vnetlink -ResourceGroupName myresourcegroup
+```
+
+### Delete the DNS forwarding ruleset
+
+```Azure PowerShell
+Remove-AzDnsForwardingRuleset -Name $dnsForwardingRuleset.Name -ResourceGroupName myresourcegroup
+```
+
+### Delete the outbound endpoint
+
+```Azure PowerShell
+Remove-AzDnsResolverOutboundEndpoint -DnsResolverName mydnsresolver -ResourceGroupName myresourcegroup -Name myoutboundendpoint
+```
+
+### Delete the DNS resolver
+
+```Azure PowerShell
+Remove-AzDnsResolver -Name mydnsresolver -ResourceGroupName myresourcegroup
+```
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [What is Azure DNS Private Resolver?](dns-private-resolver-overview.md)
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
+
+ Title: What is Azure DNS Private Resolver?
+description: In this article, get started with an overview of the Azure DNS Private Resolver service.
+++++ Last updated : 05/10/2022+
+#Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
++
+# What is Azure DNS Private Resolver?
+
+Azure DNS Private Resolver is a new service that enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM-based DNS servers.
+
+> [!IMPORTANT]
+> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## How does it work?
+
+Azure DNS Private Resolver requires an [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview). When you create an Azure DNS Private Resolver inside a virtual network, one or more [inbound endpoints](#inbound-endpoints) are established that can be used as the destination for DNS queries. The resolver's [outbound endpoint](#outbound-endpoints) processes DNS queries based on a [DNS forwarding ruleset](#dns-forwarding-rulesets) that you configure. DNS queries that are initiated in networks linked to a ruleset can be sent to other DNS servers.
+
+The DNS query process when using an Azure DNS Private Resolver is summarized below:
+
+1. A client in a virtual network issues a DNS query.
+2. If the DNS servers for this virtual network are [specified as custom](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances#specify-dns-servers), then the query is forwarded to the specified IP addresses.
+3. If Default (Azure-provided) DNS servers are configured in the virtual network, and there are Private DNS zones [linked to the same virtual network](private-dns-virtual-network-links.md), these zones are consulted.
+4. If the query doesn't match a Private DNS zone linked to the virtual network, then [Virtual network links](#virtual-network-links) for [DNS forwarding rulesets](#dns-forwarding-rulesets) are consulted.
+5. If no ruleset links are present, then Azure DNS is used to resolve the query.
+6. If ruleset links are present, the [DNS forwarding rules](#dns-forwarding-rules) are evaluated.
+7. If a suffix match is found, the query is forwarded to the specified address.
+8. If multiple matches are present, the longest suffix is used.
+9. If no match is found, no DNS forwarding occurs and Azure DNS is used to resolve the query.
+
+The architecture for Azure DNS Private Resolver is summarized in the following figure. DNS resolution between Azure virtual networks and on-premises networks requires [Azure ExpressRoute](/azure/expressroute/expressroute-introduction) or a [VPN](/azure/vpn-gateway/vpn-gateway-about-vpngateways).
+
+[ ![Azure DNS Private Resolver architecture](./media/dns-resolver-overview/resolver-architecture.png) ](./media/dns-resolver-overview/resolver-architecture.png#lightbox)
+
+Figure 1: Azure DNS Private Resolver architecture
+
+For more information about creating a private DNS resolver, see:
+- [Quickstart: Create an Azure DNS Private Resolver using the Azure portal](dns-private-resolver-get-started-portal.md)
+- [Quickstart: Create an Azure DNS Private Resolver using Azure PowerShell](dns-private-resolver-get-started-powershell.md)
+
+## Azure DNS Private Resolver benefits
+
+Azure DNS Private Resolver provides the following benefits:
+* Fully managed: Built-in high availability, zone redundancy.
+* Cost reduction: Reduce operating costs and run at a fraction of the price of traditional IaaS solutions.
+* Private access to your Private DNS Zones: Conditionally forward to and from on-premises.
+* Scalability: High performance per endpoint.
+* DevOps Friendly: Build your pipelines with Terraform, ARM, or Bicep.
+
+## Regional availability
+
+Azure DNS Private Resolver is available in the following regions:
+
+- Australia East
+- UK South
+- North Europe
+- South Central US
+- West US 3
+- East US
+- North Central US
+- Central US EUAP
+- East US 2 EUAP
+- West Central US
+- East US 2
+- West Europe
+
+## DNS resolver endpoints
+
+### Inbound endpoints
+
+An inbound endpoint enables name resolution from on-premises or other private locations via an IP address that is part of your private virtual network address space. This endpoint requires a subnet in the VNet where it's provisioned. The subnet can only be delegated to **Microsoft.Network/dnsResolvers** and can't be used for other services. DNS queries received by the inbound endpoint will ingress to Azure. You can resolve names in scenarios where you have Private DNS Zones, including VMs that are using auto registration, or Private Link enabled services.
+
+### Outbound endpoints
+
+An outbound endpoint enables conditional forwarding name resolution from Azure to on-premises, other cloud providers, or external DNS servers. This endpoint requires a dedicated subnet in the VNet where it's provisioned, with no other service running in the subnet, and can only be delegated to **Microsoft.Network/dnsResolvers**. DNS queries sent to the outbound endpoint will egress from Azure.
+
+## Virtual network links
+
+Virtual network links enable name resolution for virtual networks that are linked to an outbound endpoint with a DNS forwarding ruleset. This is a 1:1 relationship.
+
+## DNS forwarding rulesets
+
+A DNS forwarding ruleset is a group of DNS forwarding rules (up to 1,000) that can be applied to one or more outbound endpoints, or linked to one or more virtual networks. This is a 1:N relationship.
+
+## DNS forwarding rules
+
+A DNS forwarding rule includes one or more target DNS servers that will be used for conditional forwarding, and is represented by:
+- A domain name
+- A target IP address
+- A target port and protocol (UDP or TCP)
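+
+For example, the cmdlets shown in the [Azure PowerShell quickstart](dns-private-resolver-get-started-powershell.md) create a rule for the domain `contoso.com.` (note the trailing dot) with a target DNS server; a condensed sketch:
+
+```azurepowershell
+# Define a target DNS server for conditional forwarding (port 53 is the default)
+$targetDNS1 = New-AzDnsResolverTargetDnsServerObject -IPAddress 11.0.1.4 -Port 53
+
+# Create a forwarding rule for contoso.com. in an existing ruleset
+New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "contosocom" -DomainName "contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS1)
+```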
+
+## Restrictions
+
+### Virtual network restrictions
+
+The following restrictions hold with respect to virtual networks:
+- A DNS resolver can only reference a virtual network in the same region as the DNS resolver.
+- A virtual network can't be shared between multiple DNS resolvers. A single virtual network can only be referenced by a single DNS resolver.
+
+### Subnet restrictions
+
+Subnets used for DNS resolver have the following limitations:
+- A subnet must be a minimum of /28 and a maximum of /24 in size.
+- A subnet can't be shared between multiple DNS resolver endpoints. A single subnet can only be used by a single DNS resolver endpoint.
+- All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint is not allowed.
+- The subnet used for a DNS resolver inbound endpoint must be within the virtual network referenced by the parent DNS resolver.
+
+### Outbound endpoint restrictions
+
+Outbound endpoints have the following limitations:
+- An outbound endpoint can't be deleted unless the DNS forwarding ruleset and the virtual network links under it are deleted
+
+### DNS forwarding ruleset restrictions
+
+DNS forwarding rulesets have the following limitations:
+- A DNS forwarding ruleset can't be deleted unless the virtual network links under it are deleted
+
+### Other restrictions
+
+- DNS resolver endpoints can't be updated to include IP configurations from a different subnet
+- IPv6-enabled subnets aren't supported in the public preview
++
+## Next steps
+
+* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md).
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
event-hubs Store Captured Data Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/store-captured-data-data-warehouse.md
Title: 'Tutorial: Migrate event data to Azure Synapse Analytics - Azure Event Hubs' description: Describes how to use Azure Event Grid and Functions to migrate Event Hubs captured data to Azure Synapse Analytics. Previously updated : 03/08/2022 Last updated : 04/29/2022
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
While FastPath supports most configurations, it doesn't support the following fe
### IP address limits
-| ExpressRoute SKU | Bandwidth | FathPath IP limit |
+| ExpressRoute SKU | Bandwidth | FastPath IP limit |
| -- | -- | -- | | ExpressRoute Direct Port | 100Gbps | 200,000 | | ExpressRoute Direct Port | 10Gbps | 100,000 |
While FastPath supports most configurations, it doesn't support the following fe
> [!NOTE] > * ExpressRoute Direct has a cumulative limit at the port level. > * Traffic will flow through the ExpressRoute gateway when these limits are reached.
->
- ## Public preview The following FastPath features are in Public preview:
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
When you have a large distributed enterprise network, you're likely to have mult
Let's consider the example illustrated in the following diagram. In the example, Contoso has two on-premises locations connected to two Contoso IaaS deployment in two different Azure regions via ExpressRoute circuits in two different peering locations.
-[![6]][6]
How we architect the disaster recovery has an impact on how cross-regional to cross location (region1/region2 to location2/location1) traffic is routed. Let's consider two different disaster architectures that routes cross region-location traffic differently.
In the first scenario, let's design disaster recovery such that all the traffic
Scenario 1 is illustrated in the following diagram. In the diagram, green lines indicate paths for traffic flow between VNet1 and on-premises networks. The blue lines indicate paths for traffic flow between VNet2 and on-premises networks. Solid lines indicate desired path in the steady-state and the dashed lines indicate traffic path in the failure of the corresponding ExpressRoute circuit that carries steady-state traffic flow.
-[![7]][7]
You can architect the scenario using connection weight to influence VNets to prefer the connection to the local peering location ExpressRoute circuit for on-premises-bound traffic. To complete the solution, you need to ensure symmetrical reverse traffic flow. You can use local preference on the iBGP session between your BGP routers (on which ExpressRoute circuits are terminated on the on-premises side) to prefer an ExpressRoute circuit. The solution is illustrated in the following diagram.
-[![8]][8]
### Scenario 2 Scenario 2 is illustrated in the following diagram. In the diagram, green lines indicate paths for traffic flow between VNet1 and on-premises networks. The blue lines indicate paths for traffic flow between VNet2 and on-premises networks. In the steady-state (solid lines in the diagram), all the traffic between VNets and on-premises locations flows via the Microsoft backbone for the most part, and flows through the interconnection between on-premises locations only in the failure state (dotted lines in the diagram) of an ExpressRoute circuit.
-[![9]][9]
The solution is illustrated in the following diagram. As illustrated, you can architect the scenario using either a more specific route (Option 1) or AS-path prepending (Option 2) to influence VNet path selection. To influence on-premises network route selection for Azure-bound traffic, you need to configure the interconnection between the on-premises locations as less preferable. How you make the interconnection link less preferable depends on the routing protocol used within the on-premises network. You can use local preference with iBGP or metric with IGP (OSPF or IS-IS).
-[![10]][10]
+ > [!IMPORTANT] > When one or multiple ExpressRoute circuits are connected to multiple virtual networks, virtual network to virtual network traffic can route via ExpressRoute. However, this is not recommended. To enable virtual network to virtual network connectivity, [configure virtual network peering](../virtual-network/virtual-network-manage-peering.md).
In this article, we discussed how to design for disaster recovery of an ExpressR
- [Enterprise-scale disaster recovery][Enterprise DR] - [SMB disaster recovery with Azure Site Recovery][SMB DR]
-<!--Image References-->
-[6]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region.png "large distributed on-premises network considerations"
-[7]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region-arch1.png "scenario 1"
-[8]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region-sol1.png "active-active ExpressRoute circuits solution 1"
-[9]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region-arch2.png "scenario 2"
-[10]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region-sol2.png "active-active ExpressRoute circuits solution 2"
- <!--Link References--> [HA]: ./designing-for-high-availability-with-expressroute.md [Enterprise DR]: https://azure.microsoft.com/solutions/architecture/disaster-recovery-enterprise-scale-dr/
expressroute Expressroute Howto Set Global Reach Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-portal.md
To disable connectivity between an individual circuit, select the delete button
After the operation is complete, you no longer have connectivity between your on-premises network through your ExpressRoute circuits.
+## Update configuration
+
+1. To update the configuration for a Global Reach connection, select the connection name.
++
+1. Update the configuration on the **Edit Global Reach** page, and then select **Save**.
++
+1. Select **Save** on the main overview page to apply the configuration to the circuit.
++ ## Next steps - [Learn more about ExpressRoute Global Reach](expressroute-global-reach.md) - [Verify ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md)
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-certificates.md
There are three types of certificates used in a typical deployment:
- **Server Certificate (Website certificate)**
- A certificate associated with to specific domain name. If a website has a valid certificate, it means that a certificate authority has taken steps to verify that the web address actually belongs to that organization. When you type a URL or follow a link to a secure website, your browser checks the certificate for the following characteristics:
+ A certificate associated with a specific domain name. If a website has a valid certificate, it means that a certificate authority has taken steps to verify that the web address actually belongs to that organization. When you type a URL or follow a link to a secure website, your browser checks the certificate for the following characteristics:
- The website address matches the address on the certificate. - The certificate is signed by a certificate authority that the browser recognizes as a *trusted* authority.
governance Pattern Deploy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pattern-deploy-resources.md
Title: "Pattern: Deploy resources with a policy definition" description: This Azure Policy pattern provides an example of how to deploy resources with a deployIfNotExists policy definition. Previously updated : 08/17/2021 Last updated : 05/16/2022 ++ # Azure Policy pattern: deploy resources
updated. When that resource is a _Microsoft.Network/virtualNetworks_, the policy
watcher in the location of the new or updated resource. If a matching network watcher isn't located, the ARM template is deployed to create the missing resource.
+> [!NOTE]
+> This policy requires that you have a resource group named **NetworkWatcherRG** in your subscription. Azure
+> creates the **NetworkWatcherRG** resource group when you enable Network Watcher in a region.
+ :::code language="json" source="~/policy-templates/patterns/pattern-deploy-resources.json"::: ### Explanation
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
An active Azure subscription. If you don't have an Azure subscription, create a
> [!TIP] > You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md)
-An Android or iOS phone on which you're able to install a free app from one of the official app stores.
+An Android or iOS smartphone on which you're able to install a free app from one of the official app stores.
## Create an application
IoT Central provides various industry-focused application templates to help you
## Register a device
-To connect a device to to your IoT Central application, you need some connection information. An easy way to get this connection information is to register your device.
+To connect a device to your IoT Central application, you need some connection information. An easy way to get this connection information is to register your device.
To register your device:
To register your device:
1. On the device page, select **Connect** and then **QR Code**:
- :::image type="content" source="media/quick-deploy-iot-central/device-registration.png" alt-text="Screenshot that shows the QR code you can use to connect the phone app.":::
+ :::image type="content" source="media/quick-deploy-iot-central/device-registration.png" alt-text="Screenshot that shows the QR code you can use to connect the smartphone app.":::
-Keep this page open. In the next section you scan this QR code using the phone app to connect it to IoT Central.
+Keep this page open. In the next section, you scan this QR code using the smartphone app to connect it to IoT Central.
## Connect your device
-To get you started quickly, this article uses the **IoT Plug and Play** smartphone app as an IoT device. The app sends telemetry collected from the phone's sensors, responds to commands invoked from IoT Central, and reports property values to IoT Central.
+To get you started quickly, this article uses the **IoT Plug and Play** smartphone app as an IoT device. The app sends telemetry collected from the smartphone's sensors, responds to commands invoked from IoT Central, and reports property values to IoT Central.
[!INCLUDE [iot-phoneapp-install](../../../includes/iot-phoneapp-install.md)]
-To connect the **IoT Plug and Play** app to you Iot Central application:
+To connect the **IoT Plug and Play** app to your IoT Central application:
1. Open the **IoT PnP** app on your smartphone.
-1. On the welcome page, select **Scan QR code**. Point the phone's camera at the QR code. Then wait for a few seconds while the connection is established.
+1. On the welcome page, select **Scan QR code**. Point the smartphone's camera at the QR code. Then wait for a few seconds while the connection is established.
1. On the telemetry page in the app, you can see the data the app is sending to IoT Central. On the logs page, you can see the device connecting and several initialization messages.
To view the telemetry from the smartphone app in IoT Central:
> [!TIP] > The smartphone app only sends data when the screen is on.+
+## Control your device
+
+To send a command from IoT Central to your device, select the **Commands** view for your device. The smartphone app can respond to three commands:
++
+To make the light on your smartphone flash, use the **LightOn** command. Set the duration to three seconds, the pulse interval to five seconds, and the number of pulses to two. Select **Run** to send the command to the smartphone app. The light on your smartphone flashes twice.
+
+To see the acknowledgment from the smartphone app, select **command history**.
+ ## Clean up resources [!INCLUDE [iot-central-clean-up-resources](../../../includes/iot-central-clean-up-resources.md)]
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-event-grid.md
devices/{deviceId}
Event Grid also allows for filtering on attributes of each event, including the data content. This allows you to choose what events are delivered based on the contents of the telemetry message. See [advanced filtering](../event-grid/event-filtering.md#advanced-filtering) for examples. For filtering on the telemetry message body, you must set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](./iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these properties are case insensitive.
-For non-telemetry events like DeviceConnected, DeviceDisconnected, DeviceCreated and DeviceDeleted, the Event Grid filtering can be used when creating the subscription. For telemetry events, in addition to the filtering in Event Grid, users can also filter on device twins, message properties and body through the message routing query.
-
-When you subscribe to telemetry events via Event Grid, IoT Hub creates a default message route to send data source type device messages to Event Grid. For more information about message routing, see [IoT Hub message routing](iot-hub-devguide-messages-d2c.md). This route will be visible in the portal under IoT Hub > Message Routing. Only one route to Event Grid is created regardless of the number of EG subscriptions created for telemetry events. So, if you need several subscriptions with different filters, you can use the OR operator in these queries on the same route. The creation and deletion of the route is controlled through subscription of telemetry events via Event Grid. You cannot create or delete a route to Event Grid using IoT Hub Message Routing.
-
-To filter messages before telemetry data are sent, you can update your [routing query](iot-hub-devguide-routing-query-syntax.md). Note that routing query can be applied to the message body only if the body is JSON. You must also set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](./iot-hub-devguide-routing-query-syntax.md#system-properties).
+For non-telemetry events like DeviceConnected, DeviceDisconnected, DeviceCreated and DeviceDeleted, the Event Grid filtering can be used when creating the subscription.
## Limitations for device connected and device disconnected events
load-testing How To Configure User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-user-properties.md
+
+ Title: Configure JMeter user properties
+
+description: Learn how to use JMeter user properties with Azure Load Testing.
++++ Last updated : 04/27/2022+
+zone_pivot_groups: load-testing-config
++
+# Use JMeter user properties with Azure Load Testing Preview
+
+In this article, learn how to configure and use Apache JMeter user properties with Azure Load Testing Preview. With user properties, you can make your test configurable by keeping test settings outside of the JMeter test script. Use cases for user properties include:
+
+- You want to use the JMX test script in multiple deployment environments with different application endpoints.
+- Your test script needs to accommodate multiple load patterns, such as smoke tests, peak load, or soak tests.
+- You want to override default JMeter behavior by configuring JMeter settings, such as the results file format.
+
+Azure Load Testing supports the standard [Apache JMeter properties](https://jmeter.apache.org/usermanual/test_plan.html#properties) and enables you to upload a user properties file. You can configure one user properties file per load test.
+
+Alternatively, you can [use environment variables and secrets in Azure Load Testing](./how-to-parameterize-load-tests.md) to make your tests configurable.
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+
+## Add a JMeter user properties file to your load test
+
+You can define user properties for your JMeter test script by uploading a *.properties* file to the load test. The following code snippet shows an example user properties file:
+
+```properties
+# peak-load.properties
+# User properties for testing peak load
+threadCount=250
+rampUpSeconds=30
+durationSeconds=600
+
+# Override default JMeter properties
+jmeter.save.saveservice.thread_name=false
+```
+
+Azure Load Testing supports using a single properties file per load test. Additional property files are ignored.
+
+You can also specify [JMeter configuration settings](https://jmeter.apache.org/usermanual/properties_reference.html) in the user properties file to override default behavior. For example, you can modify any of the `jmeter.save.saveservice.*` settings to configure the JMeter results file.
++
+To add a user properties file to your load test by using the Azure portal, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+1. On the left pane, select **Tests** to view the list of tests.
+1. Select your test from the list by selecting the checkbox, and then select **Edit**. Alternatively, select **Create test** to create a new load test.
+1. Select the **Test plan** tab.
+1. Select the properties file from your computer, and then select **Upload** to upload the file to Azure.
+
+ :::image type="content" source="media/how-to-configure-user-properties/edit-test-upload-properties.png" alt-text="Screenshot that shows the steps to upload a user properties file on the Test plan tab on the Edit test pane.":::
+
+1. Select **User properties** in the **File relevance** dropdown list.
+
+ :::image type="content" source="media/how-to-configure-user-properties/edit-test-upload-properties-file-relevance.png" alt-text="Screenshot that highlights the file relevance dropdown for a user properties file on the Test plan pane.":::
+
+ You can select only one file as a user properties file for a load test.
+
+1. Select **Apply** to modify the test, or **Review + create**, and then **Create** to create the new test.
++
+If you run a load test within your CI/CD workflow, you add the user properties file to the source control repository. You then specify this properties file in the [load test configuration YAML file](./reference-test-config-yaml.md).
+
+For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-cicd-azure-pipelines.md).
+
+To add a user properties file to your load test, follow these steps:
+
+1. Add the *.properties* file to the source control repository.
+1. Open your YAML test configuration file in Visual Studio Code or your editor of choice.
+1. Specify the *.properties* file in the `properties.userPropertyFile` setting.
+ ```yaml
+ testName: MyTest
+ testPlan: SampleApp.jmx
+ description: Configure a load test with peak load properties.
+ engineInstances: 1
+ properties:
+ userPropertyFile: peak-load.properties
+ configurationFiles:
+ - input-data.csv
+ ```
+
+ > [!NOTE]
+ > If you store the properties file in a separate folder, specify the file with a relative path name. For more information, see the [Test configuration YAML syntax](./reference-test-config-yaml.md).
+
+1. Save the YAML configuration file and commit it to your source control repository.
+
+ The next time the CI/CD workflow runs, it will use the updated configuration.
++
+## Reference properties in JMeter
+
+Azure Load Testing supports the built-in Apache JMeter functionality to reference user properties in your JMeter test script (JMX). You can use the [**__property**](https://jmeter.apache.org/usermanual/functions.html#__property) or [**__P**](https://jmeter.apache.org/usermanual/functions.html#__P) functions to retrieve the property values from the property file you uploaded previously.
+
+The following code snippet shows an example of how to reference properties in a JMX file:
+
+ ```xml
+ <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Test home page" enabled="true">
+ <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
+ <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
+ <boolProp name="LoopController.continue_forever">false</boolProp>
+ <intProp name="LoopController.loops">-1</intProp>
+ </elementProp>
+ <stringProp name="ThreadGroup.num_threads">${__P(threadCount,1)}</stringProp>
+ <stringProp name="ThreadGroup.ramp_time">${__P(rampUpSeconds,1)}</stringProp>
+ <boolProp name="ThreadGroup.scheduler">true</boolProp>
+ <stringProp name="ThreadGroup.duration">${__P(durationSeconds,30)}</stringProp>
+ <stringProp name="ThreadGroup.delay"></stringProp>
+ <boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
+ </ThreadGroup>
+ ```
+
+Alternatively, you can also specify user properties in the JMeter user interface. The following image shows how to use properties to configure a JMeter thread group:
+
+ :::image type="content" source="media/how-to-configure-user-properties/jmeter-user-properties.png" alt-text="Screenshot that shows how to reference user properties in the JMeter user interface.":::
+
+You can [download the JMeter error logs](./how-to-find-download-logs.md) to troubleshoot errors during the load test.
+
+## Next steps
+
+- Learn more about [parameterizing a load test by using environment variables and secrets](./how-to-parameterize-load-tests.md).
+- Learn more about [troubleshooting load test execution errors](./how-to-find-download-logs.md).
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
To add a CSV file to your load test by using the Azure portal:
1. Select the CSV file from your computer, and then select **Upload** to upload the file to Azure.
- :::image type="content" source="media/how-to-read-csv-data/edit-test-upload-csv.png" alt-text="Screenshot of the 'Load' tab on the 'Edit test' pane.":::
+ :::image type="content" source="media/how-to-read-csv-data/edit-test-upload-csv.png" alt-text="Screenshot of the Test plan tab on the Edit test pane.":::
1. Select **Apply** to modify the test and to use the new configuration when you rerun it.
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
Previously updated : 11/30/2021 Last updated : 05/03/2022 adobe-target: true
A test configuration uses the following keys:
| `configurationFiles` | array | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. | | `description` | string | Short description of the test run. | | `failureCriteria` | object | Criteria that indicate failure of the test. Each criterion is in the form of:<BR>`[Aggregate_function] ([client_metric]) > [value]`<BR><BR>- `[Aggregate function] ([client_metric])` is either `avg(response_time_ms)` or `percentage(error).`<BR>- `value` is an integer number. |
+| `properties` | object | List of properties to configure the load test. |
+| `properties.userPropertyFile` | string | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file will be uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
| `secrets` | object | List of secrets that the Apache JMeter script references. | | `secrets.name` | string | Name of the secret. This name should match the secret name that you use in the Apache JMeter script. | | `secrets.value` | string | URI for the Azure Key Vault secret. | | `env` | object | List of environment variables that the Apache JMeter script references. | | `env.name` | string | Name of the environment variable. This name should match the secret name that you use in the Apache JMeter script. | | `env.value` | string | Value of the environment variable. |
-| `keyVaultReferenceIdentity` | string | Resource ID of the user-assigned managed identity for accessing the secrets from your Azure Key Vault. If you use a system-managed identity, this information is not needed. Make sure to grant this user-assigned identity access to your Azure key vault. |
+| `keyVaultReferenceIdentity` | string | Resource ID of the user-assigned managed identity for accessing the secrets from your Azure Key Vault. If you use a system-managed identity, this information isn't needed. Make sure to grant this user-assigned identity access to your Azure key vault. |
-The following example contains the configuration for a load test:
+The following YAML snippet contains an example load test configuration:
```yaml version: v0.1
testName: SampleTest
testPlan: SampleTest.jmx description: Load test website home page engineInstances: 1
+properties:
+ userPropertyFile: 'user.properties'
configurationFiles: - 'SampleData.csv' failureCriteria:
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
Last updated 05/13/2022
In this article, you'll learn about network isolation changes with our new v2 API platform on Azure Resource Manager (ARM) and its effect on network isolation. +
+## Prerequisites
+
+* The [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install) or [Azure CLI extension for machine learning v1](reference-azure-machine-learning-cli.md).
+
+ > [!IMPORTANT]
+ > The v1 extension (`azure-cli-ml`) version must be 1.41.0 or greater. Use the `az version` command to view version information.
+
## What is the new API platform on Azure Resource Manager (ARM) There are two types of operations used by the v1 and v2 APIs, __Azure Resource Manager (ARM)__ and __Azure Machine Learning workspace__.
ws.update(v1_legacy_mode=False)
# [Azure CLI extension v1](#tab/azurecliextensionv1)
-The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable the parameter for a workspace, add the parameter `--set v1_legacy_mode=true`.
+The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable the parameter for a workspace, add the parameter `--v1-legacy-mode true`.
+
+> [!IMPORTANT]
+> The `v1-legacy-mode` parameter is only available in version 1.41.0 or newer of the Azure CLI extension for machine learning v1 (`azure-cli-ml`). Use the `az version` command to view version information.
+
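For example, a command along the following lines sets the parameter (a sketch; the resource group and workspace names are placeholders):

```azurecli
# Sketch: enable v1 legacy mode on a workspace (placeholder names)
az ml workspace update -g <myresourcegroup> -w <myworkspace> --v1-legacy-mode true
```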
+The return value of the `az ml workspace update` command may not show the updated value. To view the current state of the parameter, use the following command:
+
+```azurecli
+az ml workspace show -g <myresourcegroup> -w <myworkspace> --query v1LegacyMode
+```
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-in-studio.md
To create an environment:
1. Select the **Create** button. Create an environment by specifying one of the following:
-* Pip requirements [file](https://pip.pypa.io/en/stable/cli/pip_install)
-* Conda yaml [file](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html)
-* Docker [image](https://hub.docker.com/search?q=&type=image)
-* [Dockerfile](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
+* Create a new Docker [context](https://docs.docker.com/engine/reference/commandline/build/)
+* Start from an existing custom or curated environment
+* Upload an existing Docker context
+* Use an existing Docker image with Conda
:::image type="content" source="media/how-to-manage-environments-in-studio/create-page.jpg" alt-text="Environment creation wizard":::
If a new environment is given the same name as an existing environment in the wo
## View and edit environment details
-Once an environment has been created, view its details by clicking on the name. Use the dropdown menu to select different versions of the environment. Here you can view metadata and the contents of the environment through its Docker and Conda layers.
+Once an environment has been created, view its details by clicking on the name. Use the dropdown menu to select different versions of the environment. Here you can view metadata and the contents of the environment through its various dependencies.
-Click on the pencil icons to edit tags and descriptions as well as the configuration files or image. Keep in mind that any changes to the Docker or Conda sections will create a new version of the environment.
+Click on the pencil icons to edit tags and descriptions as well as the configuration files under the **Context** tab.
+
+Keep in mind that any changes to the Docker or Conda sections will create a new version of the environment.
:::image type="content" source="media/how-to-manage-environments-in-studio/details-page.jpg" alt-text="Environments details page":::
openshift Howto Create Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md
To interact with Azure APIs, an Azure Red Hat OpenShift cluster requires an Azur
This article explains how to create and use a service principal for your Azure Red Hat OpenShift clusters using the Azure command-line interface (Azure CLI) or the Azure portal.
-## Before you begin
-
-The user creating an Azure AD service principal must have permissions to register an application with your Azure AD tenant and to assign the application to a role in your subscription. You need **User Access Administrator** and **Contributor** permissions at the resource-group level to create service principals.
-
-Use the following Azure CLI command to add these permissions.
-
-```azurecli-interactive
-az role assignment create \
- --role 'User Access Administrator' \
- --assignee-object-id $SP_OBJECT_ID \
- --resource-group $RESOURCEGROUP \
- --assignee-principal-type 'ServicePrincipal'
-
-az role assignment create \
- --role 'Contributor' \
- --assignee-object-id $SP_OBJECT_ID \
- --resource-group $RESOURCEGROUP \
- --assignee-principal-type 'ServicePrincipal'
-```
-
-If you don't have the required permissions, you can ask your Azure AD or subscription administrator to assign them. Alternatively, your Azure AD or subscription administrator can create a service principal in advance for you to use with the Azure Red Hat OpenShift cluster.
-
-If you're using a service principal from a different Azure AD tenant, there are more considerations regarding the permissions available when you deploy the cluster. For example, you may not have the appropriate permissions to read and write directory information.
-
-For more information on user roles and permissions, seeΓÇ»[What are the default user permissions in Azure Active Directory?](../active-directory/fundamentals/users-default-permissions.md).
- > [!NOTE] > Service principals expire in one year unless configured for longer periods. For information on extending your service principal expiration period, see [Rotate service principal credentials for your Azure Red Hat OpenShift (ARO) Cluster](howto-service-principal-credential-rotation.md).
For more information on user roles and permissions, see [What are the default
## Create a service principal with Azure CLI
-The following sections explain how to use the Azure CLI to create a service principal for your Azure Red Hat OpenShift cluster.
+The following sections explain how to use the Azure CLI to create a service principal for your Azure Red Hat OpenShift cluster.
## Prerequisite

If you're using the Azure CLI, you'll need Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+## Create a resource group
-## Create a service principal - Azure CLI
+```azurecli-interactive
+AZ_RG=$(az group create -n test-aro-rg -l eastus2 --query name -o tsv)
+```
- To create a service principal with the Azure CLI, run the `az ad sp create-for-rbac` command.
+## Create a service principal - Azure CLI
-> [!NOTE]
-> When using a service principal to create a new cluster, you may need to assign a Contributor role here.
+ To create a service principal with the Azure CLI, run the following command.
-```azure-cli
-az ad sp create-for-rbac --name myAROClusterServicePrincipal
+```azurecli-interactive
+# Get Azure subscription ID
+AZ_SUB_ID=$(az account show --query id -o tsv)
+# Create a service principal with contributor role and scoped to the ARO resource group
+az ad sp create-for-rbac -n "test-aro-SP" --role contributor --scopes "/subscriptions/${AZ_SUB_ID}/resourceGroups/${AZ_RG}"
```

The output is similar to the following example.
"name": "http://myAROClusterServicePrincipal",
- "password": "",
+ "password": "yourpassword",
- "tenant": ""
+ "tenant": "yourtenantname" t
}
```

Retain your `appId` and `password`. These values are used when you create an Azure Red Hat OpenShift cluster below.
+
+> [!NOTE]
+> This service principal only has Contributor permissions on the resource group where the ARO cluster is located. If your VNet is in another resource group, you need to assign the service principal the Contributor role on that resource group as well.
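If the virtual network is in a different resource group, an extra role assignment similar to the following sketch grants that access (the `appId` value and the VNet resource group name are placeholders; `$AZ_SUB_ID` is the subscription ID captured earlier):

```azurecli-interactive
# Sketch: grant the service principal Contributor on the VNet resource group (placeholder values)
az role assignment create \
  --assignee "<appId>" \
  --role "Contributor" \
  --scope "/subscriptions/${AZ_SUB_ID}/resourceGroups/<vnet-resource-group>"
```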
-## Grant permissions to the service principal - Azure CLI
-
-Grant permissions to an existing service principal with Azure CLI, as shown in the following command.
+For more information, see [Manage service principal roles](/cli/azure/create-an-azure-service-principal-azure-cli#3-manage-service-principal-roles).
-```azurecli-interactive
-az role assignment create \
- --role 'Contributor' \
- --assignee-object-id $SP_OBJECT_ID \
- --resource-group $RESOURCEGROUP \
- --assignee-principal-type 'ServicePrincipal'
-```
+To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
## Use the service principal to create a cluster - Azure CLI
az aro create \
The following sections explain how to use the Azure portal to create a service principal for your Azure Red Hat OpenShift cluster.
-## Create a service principal - Azure portal
+## Create a service principal - Azure portal
-To create a service principal using the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md).
-
-## Grant permissions to the service principal - Azure portal
-
-To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
+To create a service principal using the Azure portal, complete the following steps.
-## Use the service principal - Azure portal
+1. On the Create Azure Red Hat OpenShift **Basics** tab, create a resource group for your subscription, as shown in the following example.
-When deploying an Azure Red Hat OpenShift cluster using the Azure portal, configure the service principal on the **Authentication** page of the **Azure Red Hat OpenShift** dialog.
+ :::image type="content" source="./media/basics-openshift-sp.png" alt-text="Screenshot that shows how to use the Azure Red Hat service principal with Azure portal to create a cluster." lightbox="./media/basics-openshift-sp.png":::
+2. Click **Next: Authentication** to configure and deploy the service principal on the **Authentication** page of the **Azure Red Hat OpenShift** dialog.
-Specify the following values, and then select **Review + Create**.
+ :::image type="content" source="./media/openshift-service-principal-portal.png" alt-text="Screenshot that shows how to use the Authentication tab with Azure portal to create a service principal." lightbox="./media/openshift-service-principal-portal.png":::
In the **Service principal information** section:
In the **Cluster pull secret** section: -- **Pull secret** is your cluster's pull secret's decrypted value.
+- **Pull secret** is the decrypted value of your cluster's pull secret. If you don't have a pull secret, leave this field blank.
+
+After completing this tab, select **Next: Networking** to continue creating your cluster. Select **Review + Create** when you complete the remaining tabs.
+
+> [!NOTE]
+> This service principal only has Contributor permissions on the resource group where the Azure Red Hat OpenShift cluster is located. If your VNet is in another resource group, you need to assign the service principal the Contributor role on that resource group as well.
+
+## Grant permissions to the service principal - Azure portal
+
+To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
+ ::: zone-end
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
You can set up authentication for an Azure Synapse source in either of two ways:
- Use a service principal > [!IMPORTANT]
-> These steps for serverless databases **do not** apply to replicated databases. Currently in Synapse, serverless databases that are replicated from Spark databases are read-only. For more information, go [here](../synapse-analytics/sql/resources-self-help-sql-on-demand.md#operation-is-not-allowed-for-a-replicated-database).
+> These steps for serverless databases **do not** apply to replicated databases. Currently in Synapse, serverless databases that are replicated from Spark databases are read-only. For more information, see [Operation isn't allowed for a replicated database](../synapse-analytics/sql/resources-self-help-sql-on-demand.md#operation-isnt-allowed-for-a-replicated-database).
> [!NOTE] > You must set up authentication on each SQL database that you intend to register and scan from your Azure Synapse workspace.
role-based-access-control Conditions Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-format.md
Previously updated : 12/07/2021 Last updated : 05/16/2022 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
AND
## Actions
-Currently, conditions can be added to built-in or custom role assignments that have storage blob data actions. These include the following built-in roles:
+Currently, conditions can be added to built-in or custom role assignments that have blob storage or queue storage data actions. These include the following built-in roles:
- [Storage Blob Data Contributor](built-in-roles.md#storage-blob-data-contributor)
- [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner)
- [Storage Blob Data Reader](built-in-roles.md#storage-blob-data-reader)
+- [Storage Queue Data Contributor](built-in-roles.md#storage-queue-data-contributor)
+- [Storage Queue Data Message Processor](built-in-roles.md#storage-queue-data-message-processor)
+- [Storage Queue Data Message Sender](built-in-roles.md#storage-queue-data-message-sender)
+- [Storage Queue Data Reader](built-in-roles.md#storage-queue-data-reader)
-For a list of the storage blob actions you can use in conditions, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/common/storage-auth-abac-attributes.md).
+For a list of the blob storage or queue storage actions you can use in conditions, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/common/storage-auth-abac-attributes.md).
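A condition is attached when you create or edit a role assignment. As a rough illustration only (none of these values come from the article), a role assignment with a condition could be created with the Azure CLI along these lines, where the principal ID, scope, and condition expression are placeholders:

```azurecli
# Sketch: assign a storage data role with an ABAC condition (all values are placeholders)
az role assignment create \
  --role "Storage Blob Data Reader" \
  --assignee-object-id "<principal-object-id>" \
  --assignee-principal-type "User" \
  --scope "<storage-account-resource-id>" \
  --condition "<condition-expression>" \
  --condition-version "2.0"
```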
## Attributes
Depending on the selected actions, the attribute might be found in different pla
#### Resource and request attributes
-For a list of the storage blob attributes you can use in conditions, see:
+For a list of the blob storage or queue storage attributes you can use in conditions, see:
- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/common/storage-auth-abac-attributes.md)
For more information about custom security attributes, see:
- [Principal does not appear in Attribute source when adding a condition](conditions-troubleshoot.md#symptomprincipal-does-not-appear-in-attribute-source-when-adding-a-condition) - [Add or deactivate custom security attributes in Azure AD](../active-directory/fundamentals/custom-security-attributes-add.md)
-## Operators
-
-The following table lists the operators that are available to construct conditions.
-
-| Category | Operator | Description |
-| | | |
-| Logical comparison |`AND`<br/>`&&` | And operator. |
-| | `OR`<br/>`||` | Or operator. |
-| | `NOT`<br/>`!` | Not or negation operator. |
-| String comparison | `StringEquals`<br/>`StringEqualsIgnoreCase` | Case-sensitive (or case-insensitive) matching. The values must exactly match the string. |
-| | `StringNotEquals`<br/>`StringNotEqualsIgnoreCase` | Negation of `StringEquals` (or `StringEqualsIgnoreCase`) operator |
-| | `StringStartsWith`<br/>`StringStartsWithIgnoreCase` | Case-sensitive (or case-insensitive) matching. The values start with the string. |
-| | `StringNotStartsWith`<br/>`StringNotStartsWithIgnoreCase` | Negation of `StringStartsWith` (or `StringStartsWithIgnoreCase`) operator |
-| | `StringLike`<br/>`StringLikeIgnoreCase` | Case-sensitive (or case-insensitive) matching. The values can include a multi-character match wildcard (`*`) or a single-character match wildcard (`?`) anywhere in the string. If needed, these characters can be escaped by add a backslash `\*` and `\?`. |
-| | `StringNotLike`<br/>`StringNotLikeIgnoreCase` | Negation of `StringLike` (or `StringLikeIgnoreCase`) operator |
-| Numeric comparison | `NumericEquals`<br/>`NumericNotEquals`<br/>`NumericLessThan`<br/>`NumericLessThanEquals`<br/>`NumericGreaterThan`<br/>`NumericGreaterThanEquals` | Currently, only integers are supported. |
-| Higher-level functions | `ActionMatches` | Checks if Action[ID] value matches the specified action pattern. This operator is equivalent to the action matching logic that the SDK uses when comparing an action to an action pattern inside a Permission. |
-| Cross product comparison | `ForAnyOfAnyValues:StringEquals`<br/>`ForAnyOfAnyValues:StringEqualsIgnoreCase`<br/>`ForAnyOfAnyValues:StringNotEquals`<br/>`ForAnyOfAnyValues:StringNotEqualsIgnoreCase`<br/>`ForAnyOfAnyValues:StringLike`<br/>`ForAnyOfAnyValues:StringLikeIgnoreCase`<br/>`ForAnyOfAnyValues:StringNotLike`<br/>`ForAnyOfAnyValues:StringNotLikeIgnoreCase`<br/>`ForAnyOfAnyValues:NumericEquals`<br/>`ForAnyOfAnyValues:NumericNotEquals`<br/>`ForAnyOfAnyValues:NumericGreaterThan`<br/>`ForAnyOfAnyValues:NumericGreaterThanEquals`<br/>`ForAnyOfAnyValues:NumericLessThan`<br/>`ForAnyOfAnyValues:NumericLessThanEquals` | If at least one value on the left-hand side satisfies the comparison to at least one value on the right-hand side, then the expression evaluates to true. Has the format: `ForAnyOfAnyValues:<BooleanFunction>`. Supports multiple strings and numbers. |
-| | `ForAllOfAnyValues:StringEquals`<br/>`ForAllOfAnyValues:StringEqualsIgnoreCase`<br/>`ForAllOfAnyValues:StringNotEquals`<br/>`ForAllOfAnyValues:StringNotEqualsIgnoreCase`<br/>`ForAllOfAnyValues:StringLike`<br/>`ForAllOfAnyValues:StringLikeIgnoreCase`<br/>`ForAllOfAnyValues:StringNotLike`<br/>`ForAllOfAnyValues:StringNotLikeIgnoreCase`<br/>`ForAllOfAnyValues:NumericEquals`<br/>`ForAllOfAnyValues:NumericNotEquals`<br/>`ForAllOfAnyValues:NumericGreaterThan`<br/>`ForAllOfAnyValues:NumericGreaterThanEquals`<br/>`ForAllOfAnyValues:NumericLessThan`<br/>`ForAllOfAnyValues:NumericLessThanEquals` | If every value on the left-hand side satisfies the comparison to at least one value on the right-hand side, then the expression evaluates to true. Has the format: `ForAllOfAnyValues:<BooleanFunction>`. Supports multiple strings and numbers. |
-| | `ForAnyOfAllValues:StringEquals`<br/>`ForAnyOfAllValues:StringEqualsIgnoreCase`<br/>`ForAnyOfAllValues:StringNotEquals`<br/>`ForAnyOfAllValues:StringNotEqualsIgnoreCase`<br/>`ForAnyOfAllValues:StringLike`<br/>`ForAnyOfAllValues:StringLikeIgnoreCase`<br/>`ForAnyOfAllValues:StringNotLike`<br/>`ForAnyOfAllValues:StringNotLikeIgnoreCase`<br/>`ForAnyOfAllValues:NumericEquals`<br/>`ForAnyOfAllValues:NumericNotEquals`<br/>`ForAnyOfAllValues:NumericGreaterThan`<br/>`ForAnyOfAllValues:NumericGreaterThanEquals`<br/>`ForAnyOfAllValues:NumericLessThan`<br/>`ForAnyOfAllValues:NumericLessThanEquals` | If at least one value on the left-hand side satisfies the comparison to every value on the right-hand side, then the expression evaluates to true. Has the format: `ForAnyOfAllValues:<BooleanFunction>`. Supports multiple strings and numbers. |
-| | `ForAllOfAllValues:StringEquals`<br/>`ForAllOfAllValues:StringEqualsIgnoreCase`<br/>`ForAllOfAllValues:StringNotEquals`<br/>`ForAllOfAllValues:StringNotEqualsIgnoreCase`<br/>`ForAllOfAllValues:StringLike`<br/>`ForAllOfAllValues:StringLikeIgnoreCase`<br/>`ForAllOfAllValues:StringNotLike`<br/>`ForAllOfAllValues:StringNotLikeIgnoreCase`<br/>`ForAllOfAllValues:NumericEquals`<br/>`ForAllOfAllValues:NumericNotEquals`<br/>`ForAllOfAllValues:NumericGreaterThan`<br/>`ForAllOfAllValues:NumericGreaterThanEquals`<br/>`ForAllOfAllValues:NumericLessThan`<br/>`ForAllOfAllValues:NumericLessThanEquals` | If every value on the left-hand side satisfies the comparison to every value on the right-hand side, then the expression evaluates to true. Has the format: `ForAllOfAllValues:<BooleanFunction>`. Supports multiple strings and numbers. |
-
-## Operator examples
+## Function operators
-> [!div class="mx-tableFixed"]
-> | Example | Result |
+This section lists the function operators that are available to construct conditions.
+
+### ActionMatches
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operator** | `ActionMatches` |
+> | **Description** | Checks if the current action matches the specified action pattern. |
+> | **Examples** | `ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}`<br/>If the action being checked equals "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read", then true<br/><br/>`ActionMatches{'Microsoft.Authorization/roleAssignments/*'}`<br/>If the action being checked equals "Microsoft.Authorization/roleAssignments/write", then true<br/><br/>`ActionMatches{'Microsoft.Authorization/roleDefinitions/*'}`<br/>If the action being checked equals "Microsoft.Authorization/roleAssignments/write", then false |
+
+### SubOperationMatches
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operator** | `SubOperationMatches` |
+> | **Description** | Checks if the current suboperation matches the specified suboperation pattern. |
+> | **Examples** | `SubOperationMatches{'Blob.List'}` |
+
+### Exists
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operator** | `Exists` |
+> | **Description** | Checks if the specified attribute exists. |
+> | **Examples** | `Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]` |
+> | **Attributes support** | [Encryption scope name](../storage/common/storage-auth-abac-attributes.md#encryption-scope-name)<br/>[Snapshot](../storage/common/storage-auth-abac-attributes.md#snapshot)<br/>[Version ID](../storage/common/storage-auth-abac-attributes.md#version-id) |
+
+## Logical operators
+
+This section lists the logical operators that are available to construct conditions.
+
+### And
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `AND`<br/>`&&` |
+> | **Description** | And operator. |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})` |
+
+### Or
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `OR`<br/>`||` |
+> | **Description** | Or operator. |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T00:00:00.0Z' OR NOT Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId]` |
+
+### Not
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `NOT`<br/>`!` |
+> | **Description** | Not or negation operator. |
+> | **Examples** | `NOT Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId]` |
+
+## Boolean comparison operators
+
+This section lists the Boolean comparison operators that are available to construct conditions.
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `BoolEquals`<br/>`BoolNotEquals` |
+> | **Description** | Boolean comparison. |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true` |
+
+## String comparison operators
+
+This section lists the string comparison operators that are available to construct conditions.
+
+### StringEquals
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `StringEquals`<br/>`StringEqualsIgnoreCase` |
+> | **Description** | Case-sensitive (or case-insensitive) matching. The values must exactly match the string. |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'` |
+
+### StringNotEquals
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `StringNotEquals`<br/>`StringNotEqualsIgnoreCase` |
+> | **Description** | Negation of `StringEquals` (or `StringEqualsIgnoreCase`) operator. |
+
+### StringStartsWith
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `StringStartsWith`<br/>`StringStartsWithIgnoreCase` |
+> | **Description** | Case-sensitive (or case-insensitive) matching. The values start with the string. |
+
+### StringNotStartsWith
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `StringNotStartsWith`<br/>`StringNotStartsWithIgnoreCase` |
+> | **Description** | Negation of `StringStartsWith` (or `StringStartsWithIgnoreCase`) operator. |
+
+### StringLike
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `StringLike`<br/>`StringLikeIgnoreCase` |
+> | **Description** | Case-sensitive (or case-insensitive) matching. The values can include a multi-character match wildcard (`*`) or a single-character match wildcard (`?`) anywhere in the string. If needed, these characters can be escaped by adding a backslash: `\*` and `\?`. |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'`<br/><br/>`Resource[name1] StringLike 'a*c?'`<br/>If Resource[name1] equals "abcd", then true<br/><br/>`Resource[name1] StringLike 'A*C?'`<br/>If Resource[name1] equals "abcd", then false<br/><br/>`Resource[name1] StringLike 'a*c'`<br/>If Resource[name1] equals "abcd", then false |
+
+### StringNotLike
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `StringNotLike`<br/>`StringNotLikeIgnoreCase` |
+> | **Description** | Negation of `StringLike` (or `StringLikeIgnoreCase`) operator. |
+
+## Numeric comparison operators
+
+This section lists the numeric comparison operators that are available to construct conditions.
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `NumericEquals`<br/>`NumericNotEquals`<br/>`NumericGreaterThan`<br/>`NumericGreaterThanEquals`<br/>`NumericLessThan`<br/>`NumericLessThanEquals` |
+> | **Description** | Number matching. Only integers are supported. |
+
+## DateTime comparison operators
+
+This section lists the date/time comparison operators that are available to construct conditions.
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `DateTimeEquals`<br/>`DateTimeNotEquals`<br/>`DateTimeGreaterThan`<br/>`DateTimeGreaterThanEquals`<br/>`DateTimeLessThan`<br/>`DateTimeLessThanEquals` |
+> | **Description** | Full-precision check with the format: `yyyy-mm-ddThh:mm:ss.mmmmmmmZ`. Used for blob version ID and blob snapshot. |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T00:00:00.0Z'` |
+
+## Cross product comparison operators
+
+This section lists the cross product comparison operators that are available to construct conditions.
+
+### ForAnyOfAnyValues
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `ForAnyOfAnyValues:StringEquals`<br/>`ForAnyOfAnyValues:StringEqualsIgnoreCase`<br/>`ForAnyOfAnyValues:StringNotEquals`<br/>`ForAnyOfAnyValues:StringNotEqualsIgnoreCase`<br/>`ForAnyOfAnyValues:StringLike`<br/>`ForAnyOfAnyValues:StringLikeIgnoreCase`<br/>`ForAnyOfAnyValues:StringNotLike`<br/>`ForAnyOfAnyValues:StringNotLikeIgnoreCase`<br/>`ForAnyOfAnyValues:NumericEquals`<br/>`ForAnyOfAnyValues:NumericNotEquals`<br/>`ForAnyOfAnyValues:NumericGreaterThan`<br/>`ForAnyOfAnyValues:NumericGreaterThanEquals`<br/>`ForAnyOfAnyValues:NumericLessThan`<br/>`ForAnyOfAnyValues:NumericLessThanEquals` |
+> | **Description** | If at least one value on the left-hand side satisfies the comparison to at least one value on the right-hand side, then the expression evaluates to true. Has the format: `ForAnyOfAnyValues:<BooleanFunction>`. Supports multiple strings and numbers. |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}`<br/>If encryption scope name equals `validScope1` or `validScope2`, then true.<br/><br/>`{'red', 'blue'} ForAnyOfAnyValues:StringEquals {'blue', 'green'}`<br/>true<br/><br/>`{'red', 'blue'} ForAnyOfAnyValues:StringEquals {'orange', 'green'}`<br/>false |
+
+### ForAllOfAnyValues
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `ForAllOfAnyValues:StringEquals`<br/>`ForAllOfAnyValues:StringEqualsIgnoreCase`<br/>`ForAllOfAnyValues:StringNotEquals`<br/>`ForAllOfAnyValues:StringNotEqualsIgnoreCase`<br/>`ForAllOfAnyValues:StringLike`<br/>`ForAllOfAnyValues:StringLikeIgnoreCase`<br/>`ForAllOfAnyValues:StringNotLike`<br/>`ForAllOfAnyValues:StringNotLikeIgnoreCase`<br/>`ForAllOfAnyValues:NumericEquals`<br/>`ForAllOfAnyValues:NumericNotEquals`<br/>`ForAllOfAnyValues:NumericGreaterThan`<br/>`ForAllOfAnyValues:NumericGreaterThanEquals`<br/>`ForAllOfAnyValues:NumericLessThan`<br/>`ForAllOfAnyValues:NumericLessThanEquals` |
+> | **Description** | If every value on the left-hand side satisfies the comparison to at least one value on the right-hand side, then the expression evaluates to true. Has the format: `ForAllOfAnyValues:<BooleanFunction>`. Supports multiple strings and numbers. |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}`<br/><br/>`{'red', 'blue'} ForAllOfAnyValues:StringEquals {'orange', 'red', 'blue'}`<br/>true<br/><br/>`{'red', 'blue'} ForAllOfAnyValues:StringEquals {'red', 'green'}`<br/>false |
+
+### ForAnyOfAllValues
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Operators** | `ForAnyOfAllValues:StringEquals`<br/>`ForAnyOfAllValues:StringEqualsIgnoreCase`<br/>`ForAnyOfAllValues:StringNotEquals`<br/>`ForAnyOfAllValues:StringNotEqualsIgnoreCase`<br/>`ForAnyOfAllValues:StringLike`<br/>`ForAnyOfAllValues:StringLikeIgnoreCase`<br/>`ForAnyOfAllValues:StringNotLike`<br/>`ForAnyOfAllValues:StringNotLikeIgnoreCase`<br/>`ForAnyOfAllValues:NumericEquals`<br/>`ForAnyOfAllValues:NumericNotEquals`<br/>`ForAnyOfAllValues:NumericGreaterThan`<br/>`ForAnyOfAllValues:NumericGreaterThanEquals`<br/>`ForAnyOfAllValues:NumericLessThan`<br/>`ForAnyOfAllValues:NumericLessThanEquals` |
+> | **Description** | If at least one value on the left-hand side satisfies the comparison to every value on the right-hand side, then the expression evaluates to true. Has the format: `ForAnyOfAllValues:<BooleanFunction>`. Supports multiple strings and numbers. |
+> | **Examples** | `{10, 20} ForAnyOfAllValues:NumericLessThan {15, 18}`<br/>true |
+
+### ForAllOfAllValues
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
> | | |
-> | `ActionMatches{'Microsoft.Authorization/roleAssignments/*'}` | If the action being checked equals "Microsoft.Authorization/roleAssignments/write" then true |
-> | `ActionMatches{'Microsoft.Authorization/roleDefinitions/*'}` | If the action being checked equals "Microsoft.Authorization/roleAssignments/write" then false |
-> | `Resource[name1] StringLike 'a*c?'` | If Resource[name1] equals "abcd", then true |
-> | `Resource[name1] StringLike 'A*C?'` | If Resource[name1] equals "abcd", then false |
-> | `Resource[name1] StringLike 'a*c'` | If Resource[name1] equals "abcd", then false |
-> | `{'red', 'blue'} ForAnyOfAnyValues:StringEquals {'blue', 'green'}` | true |
-> | `{'red', 'blue'} ForAnyOfAnyValues:StringEquals {'orange', 'green'}` | false |
-> | `{'red', 'blue'} ForAllOfAnyValues:StringEquals {'orange', 'red', 'blue'}` | true |
-> | `{'red', 'blue'} ForAllOfAnyValues:StringEquals {'red', 'green'}` | false |
-> | `{10, 20} ForAnyOfAllValues:NumericLessThan {15, 18}` | true |
-> | `{10, 20} ForAllOfAllValues:NumericLessThan {5, 15, 18}` | false |
-> | `{10, 20} ForAllOfAllValues:NumericLessThan {25, 30}` | true |
-> | `{10, 20} ForAllOfAllValues:NumericLessThan {15, 25, 30}` | false |
+> | **Operators** | `ForAllOfAllValues:StringEquals`<br/>`ForAllOfAllValues:StringEqualsIgnoreCase`<br/>`ForAllOfAllValues:StringNotEquals`<br/>`ForAllOfAllValues:StringNotEqualsIgnoreCase`<br/>`ForAllOfAllValues:StringLike`<br/>`ForAllOfAllValues:StringLikeIgnoreCase`<br/>`ForAllOfAllValues:StringNotLike`<br/>`ForAllOfAllValues:StringNotLikeIgnoreCase`<br/>`ForAllOfAllValues:NumericEquals`<br/>`ForAllOfAllValues:NumericNotEquals`<br/>`ForAllOfAllValues:NumericGreaterThan`<br/>`ForAllOfAllValues:NumericGreaterThanEquals`<br/>`ForAllOfAllValues:NumericLessThan`<br/>`ForAllOfAllValues:NumericLessThanEquals` |
+> | **Description** | If every value on the left-hand side satisfies the comparison to every value on the right-hand side, then the expression evaluates to true. Has the format: `ForAllOfAllValues:<BooleanFunction>`. Supports multiple strings and numbers. |
+> | **Examples** | `{10, 20} ForAllOfAllValues:NumericLessThan {5, 15, 18}`<br/>false<br/><br/>`{10, 20} ForAllOfAllValues:NumericLessThan {25, 30}`<br/>true<br/><br/>`{10, 20} ForAllOfAllValues:NumericLessThan {15, 25, 30}`<br/>false |
## Special characters
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Previously updated : 11/16/2021 Last updated : 05/16/2022 #Customer intent: As a dev, devops, or it admin, I want to learn how to constrain access within a role assignment by using conditions.
For more information about how to create these examples, see [Examples of Azure
## Where can conditions be added?
-Currently, conditions can be added to built-in or custom role assignments that have [storage blob data actions](conditions-format.md#actions). These include the following built-in roles:
+Currently, conditions can be added to built-in or custom role assignments that have [blob storage or queue storage data actions](conditions-format.md#actions). Conditions are added at the same scope as the role assignment. Just like role assignments, you must have `Microsoft.Authorization/roleAssignments/write` permissions to add a condition.
-- [Storage Blob Data Contributor](built-in-roles.md#storage-blob-data-contributor)-- [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner)-- [Storage Blob Data Reader](built-in-roles.md#storage-blob-data-reader)
+Here are some of the [blob storage attributes](../storage/common/storage-auth-abac-attributes.md#azure-blob-storage-attributes) you can use in your conditions.
-Conditions are added at the same scope as the role assignment. Just like role assignments, you must have `Microsoft.Authorization/roleAssignments/write` permissions to add a condition.
-
-Here are the storage attributes you can use in your conditions.
--- Container name-- Blob path-- Blob index tags keys
+- Account name
- Blob index tags-
-> [!TIP]
-> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../storage/blobs/storage-manage-find-blobs.md).
+- Blob path
+- Blob prefix
+- Container name
+- Encryption scope name
+- Is hierarchical namespace enabled
+- Snapshot
+- Version ID
## What does a condition look like?
If Chandra tries to read a blob without the Project=Cascade tag, access will not
Here is what the condition looks like in the Azure portal:
-![Build expression section with values for blob index tags.](./media/shared/condition-expressions.png)
Here is what the condition looks like in code:
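A condition that limits read access to blobs tagged Project=Cascade might look roughly like the following sketch, built from the action and attribute syntax shown elsewhere in this article (illustrative only, not the exact expression):

```
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
 )
)
```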
For more information about the format of conditions, see [Azure role assignment
## Features of conditions
-Here's a list of the some of the primary features of conditions:
+Here's a list of the primary features of conditions:
| Feature | Status | Date |
| | | |
-| Add conditions to Storage Blob Data role assignments | Preview | May 2021 |
+| Use the following [attributes](../storage/common/storage-auth-abac-attributes.md#azure-blob-storage-attributes) in a condition: Account name, Blob prefix, Encryption scope name, Is hierarchical namespace enabled, Snapshot, Version ID | Preview | May 2022 |
+| Use [custom security attributes on a principal in a condition](conditions-format.md#principal-attributes) | Preview | November 2021 |
+| Add conditions to blob storage data role assignments | Preview | May 2021 |
| Use attributes on a resource in a condition | Preview | May 2021 |
| Use attributes that are part of the action request in a condition | Preview | May 2021 |
-| Use custom security attributes on a principal in a condition | Preview | November 2021 |
## Conditions and Privileged Identity Management (PIM)
To better understand Azure RBAC and Azure ABAC, you can refer back to the follow
| attribute | In this context, a key-value pair such as Project=Blue, where Project is the attribute key and Blue is the attribute value. Attributes and tags are synonymous for access control purposes. | | expression | A statement in a condition that evaluates to true or false. An expression has the format of &lt;attribute&gt; &lt;operator&gt; &lt;value&gt;. |
+## Limits
+
+Here are some of the limits for conditions.
+
+| Resource | Limit | Notes |
+| | | |
+| Number of expressions per condition using the visual editor | 5 | You can add more than five expressions using the code editor |
## Known issues
role-based-access-control Conditions Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-portal.md
Previously updated : 11/16/2021 Last updated : 05/16/2022
Once you have the Add role assignment condition page open, you can review the ba
1. In the **Operator** list, select an operator.
- For more information, see [Operators](conditions-format.md#operators).
+ For more information, see [Azure role assignment condition format and syntax](conditions-format.md).
1. In the **Value** box, enter a value for the right side of the expression.
role-based-access-control Conditions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-troubleshoot.md
Previously updated : 11/16/2021 Last updated : 05/16/2022 #Customer intent:
Fix any [condition format or syntax](conditions-format.md) issues. Alternatively
**Cause**
-If you copy a condition from a document, it might include special characters and cause errors. Some editors (such as Microsoft Word) add control characters when formatting text that does not appear.
+If you use PowerShell and copy a condition from a document, it might include special characters that cause the following error. Some editors (such as Microsoft Word) add control characters when formatting text that does not appear.
+
+`The given role assignment condition is invalid.`
**Solution**

If you copied a condition from a rich text editor and you are certain the condition is correct, delete all spaces and returns and then add back the relevant spaces. Alternatively, use a plain text editor or a code editor, such as Visual Studio Code.
+## Symptom - Attribute does not apply error in visual editor for previously saved condition
+
+When you open a previously saved condition in the visual editor, you get the following message:
+
+`Attribute does not apply for the selected actions. Select a different set of actions.`
+
+**Cause**
+
+In May 2022, the Read a blob action was changed from the following format:
+
+`!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})`
+
+To the following format, which excludes the `Blob.List` suboperation:
+
+`!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})`
+
+If you created a condition with the Read a blob action prior to May 2022, you might see this error message in the visual editor.
+
+**Solution**
+
+Open the **Select an action** pane and reselect the **Read a blob** action.
+ ## Next steps - [Azure role assignment condition format and syntax (preview)](conditions-format.md)
route-server About Dual Homed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/about-dual-homed-network.md
Title: 'About dual-homed network with Azure Route Server ' description: Learn about how Azure Route Server works in a dual-homed network. -+ Last updated 09/01/2021-+ # About dual-homed network with Azure Route Server
route-server Anycast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/anycast.md
Title: 'Propagating anycast routes to on-premises' description: Learn about how to advertise the same route from different regions with Azure Route Server. -+ Last updated 02/03/2022-+ # Anycast routing with Azure Route Server
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
Title: 'About Azure Route Server supports for ExpressRoute and Azure VPN' description: Learn about how Azure Route Server interacts with ExpressRoute and Azure VPN gateways. -+ Last updated 10/01/2021-+ # About Azure Route Server support for ExpressRoute and Azure VPN
route-server Monitor Route Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/monitor-route-server.md
+
+ Title: 'Monitoring Azure Route Server'
+description: Learn Azure Route Server monitoring using Azure Monitor.
++++ Last updated : 05/16/2022+
+# Monitor Azure Route Server
+
+This article helps you understand Azure Route Server monitoring and metrics using Azure Monitor. Azure Monitor is the one-stop shop for metrics, alerting, and diagnostic logs across Azure.
+
+>[!NOTE]
+>Using **Classic Metrics** is not recommended.
+>
+
+## Route Server metrics
+
+To view Azure Route Server metrics, go to your Route Server resource in the Azure portal and select **Metrics**.
+
+Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
++
+> [!IMPORTANT]
+> When viewing Route Server metrics in the Azure portal, select a time granularity of **5 minutes or greater** for best possible results.
+>
+> :::image type="content" source="./media/monitor-route-server/route-server-metrics-granularity.png" alt-text="Screenshot of time granularity options.":::
+
+### Aggregation types
+
+Metrics explorer supports Sum, Count, Average, Minimum, and Maximum as [aggregation types](../azure-monitor/essentials/metrics-charts.md#aggregation). Use the recommended aggregation type when reviewing the insights for each Route Server metric.
+
+* Sum: The sum of all values captured during the aggregation interval.
+* Count: The number of measurements captured during the aggregation interval.
+* Average: The average of the metric values captured during the aggregation interval.
+* Minimum: The smallest value captured during the aggregation interval.
+* Maximum: The largest value captured during the aggregation interval.
++
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
+| | | | | | | |
+| [BGP Peer Status](#bgp) | Scalability | Count | Maximum | BGP availability from Route Server to Peer | BGP Peer IP, BGP Peer Type, Route Server Instance | Yes |
+| [Count of Routes Advertised to Peer](#advertised) | Scalability | Count | Maximum | Count of routes advertised from Route Server to Peer | BGP Peer IP, BGP Peer Type, Route Server Instance | Yes|
+| [Count of Routes Learned from Peer](#received) | Scalability | Count | Maximum | Count of routes learned from Peer | BGP Peer IP, BGP Peer Type, Route Server Instance | Yes |
+
+> [!IMPORTANT]
+> Azure Monitor exposes another metric for Route Server, **Data Processed by the Virtual Hub Router**. This metric doesn't apply to Route Server and should be ignored.
+>
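The same metrics can also be pulled programmatically, for example with the Azure Monitor CLI. The following is a minimal sketch; the Route Server resource ID is a placeholder, and the metric name must be one of the names returned by the first command (the table above lists only display names):

```azurecli
# Discover the metric names the Route Server resource exposes
az monitor metrics list-definitions --resource <route-server-resource-id> --output table

# Retrieve one metric with the Maximum aggregation and a 5-minute interval, as recommended above
az monitor metrics list \
  --resource <route-server-resource-id> \
  --metric "<metric-name>" \
  --aggregation Maximum \
  --interval PT5M
```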
++
+### <a name = "bgp"></a>BGP Peer Status
+
+Aggregation type: **Max**
+
+This metric shows the BGP availability of peer NVA connections. The BGP Peer Status is a binary metric. 1 = BGP is up-and-running. 0 = BGP is unavailable.
++
+To check the BGP status of a specific NVA peer, select **Apply splitting** and choose **BgpPeerIp**.
++
+### <a name = "advertised"></a>Count of Routes Advertised to Peer
+
+Aggregation type: **Max**
+
+This metric shows the number of routes the Route Server advertised to NVA peers.
++
+### <a name = "received"></a>Count of Routes Learned from Peer
+
+Aggregation type: **Max**
+
+This metric shows the number of routes the Route Server learned from NVA peers.
++
+To show the number of routes the Route Server received from a specific NVA peer, select **Apply splitting** and choose **BgpPeerIp**.
+++
+## Next steps
+
+Configure Route Server.
+
+* [Create and configure Route Server](quickstart-configure-route-server-portal.md)
+* [Configure peering between Azure Route Server and an NVA](tutorial-configure-route-server-with-quagga.md)
route-server Multiregion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/multiregion.md
Title: 'Multi-region designs with Azure Route Server' description: Learn about how Azure Route Server enables multi-region designs. -+ Last updated 02/03/2022-+ # Multi-region networking with Azure Route Server
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
Title: 'What is Azure Route Server?' description: Learn how Azure Route Server can simplify routing between your network virtual appliance (NVA) and your virtual network. -+ Last updated 09/27/2021-+ #Customer intent: As an IT administrator, I want to learn about Azure Route Server and what I can use it for.
route-server Path Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/path-selection.md
Title: 'Path selection with Azure Route Server' description: Learn about how Azure Route Server enables path selection for your network virtual appliance. -+ Last updated 11/09/2021-+ #Customer intent: As a network administrator, I want to control how traffic is routed from Azure to my on-premises network.
route-server Quickstart Configure Route Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-cli.md
Title: 'Quickstart: Create and configure Route Server using Azure CLI' description: In this quickstart, you learn how to create and configure a Route Server using Azure CLI. -+ Last updated 09/01/2021-+ ms.devlang: azurecli
az network vnet create \
Azure Route Server requires a dedicated subnet named *RouteServerSubnet*. The subnet size has to be at least /27 or short prefix (such as /26 or /25) or you'll receive an error message when deploying the Route Server. Create a subnet configuration named **RouteServerSubnet** with [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create):
-1. Run the follow command to add the *RouteServerSubnet* to your virtual network.
+1. Run the following command to add the *RouteServerSubnet* to your virtual network.
```azurecli-interactive az network vnet subnet create \
If you have an ExpressRoute and an Azure VPN gateway in the same virtual network
> For greenfield deployments make sure to create the Azure VPN gateway before creating Azure Route Server; otherwise the deployment of Azure VPN Gateway will fail. >
-1. To enable route exchange between Azure Route Server and the gateway(s) use [az network routerserver update](/cli/azure/network/routeserver#az-network-routeserver-update) with the `--allow-b2b-traffic`` flag set to **true**:
+1. To enable route exchange between Azure Route Server and the gateway(s), use [az network routeserver update](/cli/azure/network/routeserver#az-network-routeserver-update) with the `--allow-b2b-traffic` flag set to **true**:
```azurecli-interactive az network routeserver update \
route-server Quickstart Configure Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-portal.md
Title: 'Quickstart: Create and configure Route Server using the Azure portal' description: In this quickstart, you learn how to create and configure a Route Server using the Azure portal. -+ Last updated 09/08/2021-+
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-powershell.md
Title: 'Quickstart: Create and configure Route Server using Azure PowerShell' description: In this quickstart, you learn how to create and configure a Route Server using Azure PowerShell. --++ Last updated 09/01/2021
route-server Quickstart Configure Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-template.md
Title: 'Quickstart: Create an Azure Route Server by using an Azure Resource Manager template (ARM template)' description: This quickstart shows you how to create an Azure Route Server by using Azure Resource Manager template (ARM template). -+ Last updated 04/05/2021-+ # Quickstart: Create an Azure Route Server using an ARM template
route-server Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/resource-manager-template-samples.md
Title: Resource Manager template samples - Azure Route Server description: Information about sample Azure Resource Manager templates provided for Azure Route Server. --++ Last updated 09/01/2021
route-server Route Injection In Spokes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-injection-in-spokes.md
Title: 'Default route injection in spoke VNets' description: Learn about how Azure Route Server injects routes in VNets. -+ Last updated 02/03/2022-+ # Default route injection in spoke VNets
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Title: Frequently asked questions about Azure Route Server description: Find answers to frequently asked questions about Azure Route Server. -+ Last updated 03/25/2022-+ # Azure Route Server FAQ
route-server Troubleshoot Route Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/troubleshoot-route-server.md
Title: Troubleshoot Azure Route Server issues description: Learn how to troubleshoot issues for Azure Route Server. -+ Last updated 09/23/2021-+ # Troubleshooting Azure Route Server issues
route-server Tutorial Configure Route Server With Quagga https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-configure-route-server-with-quagga.md
Title: "Tutorial: Configure peering between Azure Route Server and Quagga network virtual appliance" description: This tutorial shows you how to configure an Azure Route Server and peer it with a Quagga network virtual appliance.--++ Last updated 08/23/2021
route-server Vmware Solution Default Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/vmware-solution-default-route.md
Title: 'Injecting default route to Azure VMware Solution' description: Learn about how to advertise a default route to Azure VMware Solution with Azure Route Server. -+ Last updated 02/03/2022-+ # Injecting a default route to Azure VMware Solution
search Search Get Started Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-arm.md
Previously updated : 12/07/2021 Last updated : 05/16/2022 # Quickstart: Deploy Cognitive Search using an Azure Resource Manager template
This article walks you through the process for using an Azure Resource Manager (
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
+Only those properties included in the template are used in the deployment. If more customization is required, such as [setting up network security](search-security-overview.md#network-security), you can [update the service configuration](/cli/azure/search/service?view=azure-cli-latest#az-search-service-update) as a post-deployment task.
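For example, a post-deployment update might look like the following sketch (the service and resource group names are placeholders; this example adjusts capacity, and other configuration properties such as network settings are changed through the same command surface):

```azurecli
# Sketch: update a deployed search service after the template deployment (placeholder names)
az search service update \
  --name <search-service-name> \
  --resource-group <resource-group-name> \
  --partition-count 1 \
  --replica-count 2
```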
+ If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal. [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.search%2Fazure-search-create%2Fazuredeploy.json)
search Search Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-bicep.md
Previously updated : 03/18/2022 Last updated : 05/16/2022 # Quickstart: Deploy Cognitive Search using Bicep
This article walks you through the process for using a Bicep file to deploy an A
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
+Only those properties included in the template are used in the deployment. If more customization is required, such as [setting up network security](search-security-overview.md#network-security), you can [update the service configuration](/cli/azure/search/service?view=azure-cli-latest#az-search-service-update) as a post-deployment task.
+ ## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
service-fabric Service Fabric Cluster Nodetypes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-nodetypes.md
The following are the property descriptions:
| type | "ServiceFabricLinuxNode" or "ServiceFabricWindowsNode" | Identifies OS Service Fabric is bootstrapping to | | autoUpgradeMinorVersion | true or false | Enable Auto Upgrade of SF Runtime Minor Versions | | publisher | Microsoft.Azure.ServiceFabric | Name of the Service Fabric extension publisher |
-| clusterEndpont | string | URI:PORT to Management endpoint |
+| clusterEndpoint | string | URI:PORT to Management endpoint |
| nodeTypeRef | string | Name of nodeType |
| durabilityLevel | bronze, silver, gold, platinum | Time allowed to pause immutable Azure Infrastructure |
| enableParallelJobs | true or false | Enable Compute ParallelJobs like remove VM and reboot VM in the same scale set in parallel |
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic | ||| 20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic |
-20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure |
+20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure |
20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic | |||
-20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.11.0-1007-azure </br> 5.11.0-1012-azure </br> 5.11.0-1013-azure </br> 5.11.0-1015-azure </br> 5.11.0-1017-azure </br> 5.11.0-1019-azure </br> 5.11.0-1020-azure </br> 5.11.0-1021-azure </br> 5.11.0-1022-azure </br> 5.11.0-1023-azure </br> 5.11.0-1025-azure </br> 5.11.0-1027-azure </br> 5.11.0-1028-azure </br> 5.11.0-22-generic </br> 5.11.0-25-generic </br> 5.11.0-27-generic </br> 5.11.0-34-generic </br> 5.11.0-36-generic </br> 5.11.0-37-generic </br> 5.11.0-38-generic </br> 5.11.0-40-generic </br> 5.11.0-41-generic </br> 5.11.0-43-generic </br> 5.11.0-44-generic </br> 5.11.0-46-generic </br> 5.4.0-100-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic |
-20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic |
+20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-100-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic |
+20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic |
20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-88-generic </br> 5.4.0-89-generic | 20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure | 20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 5.4.0-26-generic to 5.4.0-80 </br> 5.4.0-1010-azure to 5.4.0-1048-azure </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azu
Debian 9.1 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64 </br> Debian 9.1 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> |||
-Debian 10 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.10.0-0.bpo.11-amd64 </br> 5.10.0-0.bpo.11-cloud-amd64 </br> 5.10.0-0.bpo.7-amd64 </br> 5.10.0-0.bpo.7-cloud-amd64 </br> 5.10.0-0.bpo.9-amd64 </br> 5.10.0-0.bpo.9-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64 </br> 5.9.0-0.bpo.2-amd64 </br> 5.9.0-0.bpo.2-cloud-amd64 </br> 5.9.0-0.bpo.5-amd64 </br> 5.9.0-0.bpo.5-cloud-amd64
-Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.10.0-0.bpo.9-cloud-amd64 </br> 5.10.0-0.bpo.9-amd64
+Debian 10 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | Not supported.
+Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | Not supported.
Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64 Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
-Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
+Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64
### SUSE Linux Enterprise Server 12 supported kernel versions
spatial-anchors Create Locate Anchors Unity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-locate-anchors-unity.md
Learn more about the [AnchorLocatedDelegate](/dotnet/api/microsoft.azure.spatial
Learn more about the [DeleteAnchorAsync](/dotnet/api/microsoft.azure.spatialanchors.cloudspatialanchorsession.deleteanchorasync) method.
+### Delete anchor after locating (recommended)
```csharp
await this.cloudSession.DeleteAnchorAsync(cloudAnchor);
// Perform any processing you may want when delete finishes
```
+### Delete anchor without locating
+If you can't locate an anchor but still want to delete it, use the ``GetAnchorPropertiesAsync`` API, which takes an anchor ID as input and returns the corresponding ``CloudSpatialAnchor`` object. You can then pass this object to ``DeleteAnchorAsync`` to delete the anchor.
+```csharp
+var anchor = await this.cloudSession.GetAnchorPropertiesAsync(@"anchorId");
+await this.cloudSession.DeleteAnchorAsync(anchor);
+```
+
+
[!INCLUDE [Stopping](../../../includes/spatial-anchors-create-locate-anchors-stopping.md)]

Learn more about the [Stop](/dotnet/api/microsoft.azure.spatialanchors.cloudspatialanchorsession.stop) method.
spring-cloud Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/faq.md
We are not actively developing additional capabilities for Service Binding in fa
If you encounter any issues with Azure Spring Cloud, create an [Azure Support Request](../azure-portal/supportability/how-to-create-azure-support-request.md). To submit a feature request or provide feedback, go to [Azure Feedback](https://feedback.azure.com/d365community/forum/79b1327d-d925-ec11-b6e6-000d3a4f06a4).
+### How do I get VMware Spring Runtime support (Enterprise tier only)?
+
+Enterprise tier has built-in VMware Spring Runtime Support, so you can open support tickets with [VMware](https://aka.ms/ascevsrsupport) if you think your issue falls within the scope of VMware Spring Runtime Support. To better understand VMware Spring Runtime Support itself, see <https://tanzu.vmware.com/spring-runtime>. For details about how to register for and use this support service, see the Support section in the [Enterprise tier FAQ from VMware](https://aka.ms/EnterpriseTierFAQ). For any other issues, open support tickets with Microsoft.
+
## Development

### I am a Spring Cloud developer but new to Azure. What is the quickest way for me to learn how to develop an application in Azure Spring Cloud?
spring-cloud Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/overview.md
The following quickstarts apply to Basic/Standard tier only. For Enterprise tier
## Enterprise Tier overview
-Based on our learnings from customer engagements, we built Azure Spring Cloud Enterprise tier with commercially supported Spring runtime components to help enterprise customers to ship faster and unlock SpringΓÇÖs full potential.
+Based on our learnings from customer engagements, we built Azure Spring Cloud Enterprise tier with commercially supported Spring runtime components to help enterprise customers ship faster and unlock Spring's full potential, including feature parity and region parity with Standard tier.
The following video introduces Azure Spring Cloud Enterprise tier.
spring-cloud Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/troubleshoot.md
You can view the billing account for your subscription if you have admin access.
### I need VMware Spring Runtime Support (Enterprise tier only)
-Enterprise tier has built-in VMware Spring Runtime Support so you can directly open support tickets to [VMware](https://aka.ms/ascevsrsupport) if you think your issue is in scope of VMware Spring Runtime Support. For more information, see [https://tanzu.vmware.com/spring-runtime](https://tanzu.vmware.com/spring-runtime). For any other issues, directly open support tickets with Microsoft.
+Enterprise tier has built-in VMware Spring Runtime Support, so you can open support tickets with [VMware](https://aka.ms/ascevsrsupport) if you think your issue falls within the scope of VMware Spring Runtime Support. To better understand VMware Spring Runtime Support itself, see <https://tanzu.vmware.com/spring-runtime>. For details about how to register for and use this support service, see the Support section in the [Enterprise tier FAQ from VMware](https://aka.ms/EnterpriseTierFAQ). For any other issues, open support tickets with Microsoft.
## Next steps
static-web-apps Build Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/build-configuration.md
To skip building the front-end app:
- Set `skip_app_build` to `true`.
- Set `output_location` to an empty string (`''`).
+> [!NOTE]
+> Make sure that your `staticwebapp.config.json` file is also copied into the *output* directory.
+
# [GitHub Actions](#tab/github-actions)

```yml
static-web-apps Deploy Nuxtjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nuxtjs.md
-# Deploy server-rendered Nuxt.js websites on Azure Static Web Apps
+# Deploy static-rendered Nuxt.js websites on Azure Static Web Apps
In this tutorial, you learn to deploy a [Nuxt.js](https://nuxtjs.org)-generated static website to [Azure Static Web Apps](overview.md). To begin, you learn to set up, configure, and deploy a Nuxt.js app. During this process, you also learn to deal with common challenges often faced when generating static pages with Nuxt.js.
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-attributes.md
Previously updated : 05/06/2021 Last updated : 05/16/2022
In this case, the optional suboperation `Blob.Write.WithTagHeaders` can be used
Similarly, only select operations on the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` action support blob index tags as a precondition for access. This subset of operations is identified by the `Blob.Read.WithTagConditions` suboperation.

> [!NOTE]
-> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find data on Azure Blob Storage with Blob Index (preview)](../blobs/storage-manage-find-blobs.md).
+> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md).
In this preview, storage accounts support the following suboperations:

> [!div class="mx-tableFixed"]
-> | DataAction | Suboperation | Display name | Description |
-> | : | : | : | : |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | `Blob.Read.WithTagConditions` | Blob read operations that support conditions on tags | Includes REST operations Get Blob, Get Blob Metadata, Get Blob Properties, Get Block List, Get Page Ranges, Query Blob Contents. |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` <br/> `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | `Blob.Write.WithTagHeaders` | Blob writes for content with optional tags | Includes REST operations Put Blob, Copy Blob, Copy Blob From URL and Put Block List. |
+> | Display name | DataAction | Suboperation |
+> | : | : | : |
+> | [List blobs](#list-blobs) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | `Blob.List` |
+> | [Read a blob](#read-a-blob) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | **NOT** `Blob.List` |
+> | [Read content from a blob with tag conditions](#read-content-from-a-blob-with-tag-conditions) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | `Blob.Read.WithTagConditions` |
+> | [Sets the access tier on a blob](#sets-the-access-tier-on-a-blob) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | `Blob.Write.Tier` |
+> | [Write to a blob with blob index tags](#write-to-a-blob-with-blob-index-tags) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` <br/> `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | `Blob.Write.WithTagHeaders` |
-## Actions and suboperations
+## Azure Blob storage actions and suboperations
-The following table lists the supported actions and suboperations for conditions in Azure Storage.
+This section lists the supported Azure Blob storage actions and suboperations you can target for conditions.
-> [!div class="mx-tableFixed"]
-> | Display name | Description | DataAction |
-> | | | |
-> | Delete a blob | DataAction for deleting blobs. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` |
-> | Read a blob | DataAction for reading blobs. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
-> | Read content from a blob with tag conditions | REST operations: Get Blob, Get Blob Metadata, Get Blob Properties, Get Block List, Get Page Ranges and Query Blob Contents. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read`<br/>**Suboperation**<br/>`Blob.Read.WithTagConditions` |
-> | Write to a blob | DataAction for writing to blobs. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` |
-> | Write to a blob with blob index tags | REST operations: Put Blob, Put Block List, Copy Blob and Copy Blob From URL. |`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write`<br/>**Suboperation**<br/>`Blob.Write.WithTagHeaders` |
-> | Create a blob or snapshot, or append data | DataAction for creating blobs. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` |
-> | Write content to a blob with blob index tags | REST operations: Put Blob, Put Block List, Copy Blob and Copy Blob From URL. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action`<br/>**Suboperation**<br/>`Blob.Write.WithTagHeaders` |
-> | Delete a version of a blob | DataAction for deleting a version of a blob. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action` |
-> | Changes ownership of a blob | DataAction for changing ownership of a blob. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/manageOwnership/action` |
-> | Modify permissions of a blob | DataAction for modifying permissions of a blob. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action` |
-> | Rename file or directory | DataAction for renaming files or directories. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action` |
-> | Permanently delete a blob overriding soft-delete | DataAction for permanently deleting a blob overriding soft-delete. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/permanentDelete/action` |
-> | All data operations for accounts with HNS | DataAction for all data operations on storage accounts with HNS. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` |
-> | Read blob index tags | DataAction for reading blob index tags. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read` |
-> | Write blob index tags | DataAction for writing blob index tags. | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` |
-
-## Attributes
-
-The following table lists the descriptions for the supported attributes for conditions in Azure Storage.
+### List blobs
-> [!div class="mx-tableFixed"]
-> | Display name | Description | Attribute |
-> | | | |
-> | Container name| Name of a storage container or file system. Use when you want to check the container name. | `containers:name` |
-> | Blob path | Path of a virtual directory, blob, folder or file resource. Use when you want to check the blob name or folders in a blob path. | `blobs:path` |
-> | Blob index tags [Keys] | Index tags on a blob resource. Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check the key in blob index tags. | `tags&$keys$&` |
-> | Blob index tags [Values in key] | Index tags on a blob resource. Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check both the key (case-sensitive) and value in blob index tags. | `tags:`*keyname*`<$key_case_sensitive$>` |
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | List blobs |
+> | **Description** | List blobs operation. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
+> | **Suboperation** | `Blob.List` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name) |
+> | **Request attributes** | [Blob prefix](#blob-prefix) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})`<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path) |
+
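To make the `Blob.List` suboperation more concrete, here's a hypothetical sketch (not from the source article) of a List Blobs call issued with the Azure.Storage.Blobs .NET client library; the listing prefix is what the [Blob prefix](#blob-prefix) request attribute evaluates. The account, container, and prefix values are placeholders.

```csharp
// Hypothetical illustration: a List Blobs call whose prefix feeds the
// "Blob prefix" request attribute under the Blob.List suboperation.
// Account, container, and prefix values are placeholders.
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

var containerClient = new BlobContainerClient(
    new Uri("https://sampleaccount.blob.core.windows.net/blobs-example-container"),
    new DefaultAzureCredential());

// With a condition like "Read or list blobs in named containers with a path",
// only listings scoped to the allowed prefix succeed.
await foreach (var blobItem in containerClient.GetBlobsAsync(prefix: "readonly/"))
{
    Console.WriteLine(blobItem.Name);
}
```
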
+### Read a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Read a blob |
+> | **Description** | All blob read operations excluding list. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
+> | **Suboperation** | NOT `Blob.List` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})`<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path) |
+
+### Read content from a blob with tag conditions
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Read content from a blob with tag conditions |
+> | **Description** | Read blobs with tags. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
+> | **Suboperation** | `Blob.Read.WithTagConditions` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})`<br/>[Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+
+### Read blob index tags
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Read blob index tags |
+> | **Description** | DataAction for reading blob index tags. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+
+### Find blobs by tags
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Find blobs by tags |
+> | **Description** | DataAction for finding blobs by index tags. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Write to a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Write to a blob |
+> | **Description** | DataAction for writing to blobs. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
+
+### Sets the access tier on a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Sets the access tier on a blob |
+> | **Description** | DataAction for writing to blobs. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` |
+> | **Suboperation** | `Blob.Write.Tier` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.Tier'})` |
+
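As a rough, hypothetical illustration of the `Blob.Write.Tier` suboperation, the sketch below changes a blob's access tier with the Azure.Storage.Blobs .NET client library (a Set Blob Tier request). The account, container, and blob names are placeholders and this isn't taken from the source article.

```csharp
// Hypothetical illustration: a Set Blob Tier request, which is the kind of
// write that the Blob.Write.Tier suboperation targets. Names are placeholders.
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var blobClient = new BlobClient(
    new Uri("https://sampleaccount.blob.core.windows.net/blobs-example-container/reports/report.txt"),
    new DefaultAzureCredential());

// Move the blob to the Cool tier; a condition targeting Blob.Write.Tier
// is evaluated against this call.
await blobClient.SetAccessTierAsync(AccessTier.Cool);
```
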
+### Write to a blob with blob index tags
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Write to a blob with blob index tags |
+> | **Description** | REST operations: Put Blob, Put Block List, Copy Blob and Copy Blob From URL. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write`<br/>`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` |
+> | **Suboperation** | `Blob.Write.WithTagHeaders` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | [Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})`<br/>`!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})`<br/>[Example: New blobs must include a blob index tag](storage-auth-abac-examples.md#example-new-blobs-must-include-a-blob-index-tag) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+
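For context, here's a hypothetical sketch (not from the source article) of a Put Blob request that includes blob index tags in the same call, which is the pattern the `Blob.Write.WithTagHeaders` suboperation and the "New blobs must include a blob index tag" example constrain. It uses the Azure.Storage.Blobs .NET client library; the account, container, blob, and tag values are placeholders.

```csharp
// Hypothetical illustration: upload a blob and send blob index tags with the
// same request (Put Blob with tags). The tags arrive as request attributes
// that a Blob.Write.WithTagHeaders condition can check. Names are placeholders.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var blobClient = new BlobClient(
    new Uri("https://sampleaccount.blob.core.windows.net/blobs-example-container/uploads/data.csv"),
    new DefaultAzureCredential());

var options = new BlobUploadOptions
{
    Tags = new Dictionary<string, string> { { "Project", "Cascade" } }
};

using var content = new MemoryStream(Encoding.UTF8.GetBytes("sample,content"));
await blobClient.UploadAsync(content, options);
```
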
+### Create a blob or snapshot, or append data
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Create a blob or snapshot, or append data |
+> | **Description** | DataAction for creating blobs. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
+
+### Write blob index tags
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Write blob index tags |
+> | **Description** | DataAction for writing blob index tags. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
+> | **Request attributes** | [Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys)<br/>[Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})`<br/>[Example: Existing blobs must have blob index tag keys](storage-auth-abac-examples.md#example-existing-blobs-must-have-blob-index-tag-keys) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+
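To illustrate the tag DataActions above (`.../blobs/tags/read` and `.../blobs/tags/write`), here's a hypothetical sketch using the Azure.Storage.Blobs .NET client library; the names and tag values are placeholders, not part of the source article.

```csharp
// Hypothetical illustration: read a blob's index tags (tags/read), then
// replace the tag set (tags/write). Names and tag values are placeholders.
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

var blobClient = new BlobClient(
    new Uri("https://sampleaccount.blob.core.windows.net/blobs-example-container/uploads/data.csv"),
    new DefaultAzureCredential());

// Get Blob Tags: governed by .../blobs/tags/read.
var tags = (await blobClient.GetTagsAsync()).Value.Tags;

// Set Blob Tags: governed by .../blobs/tags/write. A condition such as
// "Existing blobs must have blob index tag keys" applies to this call.
tags["Project"] = "Cascade";
await blobClient.SetTagsAsync(tags);
```
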
+### Write Blob legal hold and immutability policy
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Write Blob legal hold and immutability policy |
+> | **Description** | DataAction for writing Blob legal hold and immutability policy. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/immutableStorage/runAsSuperUser/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Delete a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Delete a blob |
+> | **Description** | DataAction for deleting blobs. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
+
+### Delete a version of a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Delete a version of a blob |
+> | **Description** | DataAction for deleting a version of a blob. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | [Version ID](#version-id) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action'})`<br/>[Example: Delete old blob versions](storage-auth-abac-examples.md#example-delete-old-blob-versions) |
+
+### Permanently delete a blob overriding soft-delete
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Permanently delete a blob overriding soft-delete |
+> | **Description** | DataAction for permanently deleting a blob overriding soft-delete. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/permanentDelete/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+
+### Modify permissions of a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Modify permissions of a blob |
+> | **Description** | DataAction for modifying permissions of a blob. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Change ownership of a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Change ownership of a blob |
+> | **Description** | DataAction for changing ownership of a blob. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/manageOwnership/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Rename a file or a directory
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Rename a file or a directory |
+> | **Description** | DataAction for renaming files or directories. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### All data operations for accounts with hierarchical namespace enabled
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | All data operations for accounts with hierarchical namespace enabled |
+> | **Description** | DataAction for all data operations on storage accounts with hierarchical namespace enabled.<br/>If your role definition includes the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` action, you should target this action in your condition. Targeting this action ensures the condition will still work as expected if hierarchical namespace is enabled for a storage account. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+> | **Examples** | [Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled)<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers)<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path)<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path)<br/>[Example: Write blobs in named containers with a path](storage-auth-abac-examples.md#example-write-blobs-in-named-containers-with-a-path) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+## Azure Queue storage actions
+
+This section lists the supported Azure Queue storage actions you can target for conditions.
+
+### Peek messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Peek messages |
+> | **Description** | DataAction for peeking messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/read` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Put a message
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Put a message |
+> | **Description** | DataAction for putting a message. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/add/action` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Put or update a message
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Put or update a message |
+> | **Description** | DataAction for putting or updating a message. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/write` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Clear messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Clear messages |
+> | **Description** | DataAction for clearing messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/delete` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Get or delete messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Get or delete messages |
+> | **Description** | DataAction for getting or deleting messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/process/action` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
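As a hypothetical end-to-end sketch (not from the source article), the following maps the queue DataActions listed above to calls in the Azure.Storage.Queues .NET client library; the account and queue names are placeholders.

```csharp
// Hypothetical illustration: the Queue service calls behind the DataActions
// listed above. Account and queue names are placeholders.
using System;
using Azure.Identity;
using Azure.Storage.Queues;

var queueClient = new QueueClient(
    new Uri("https://sampleaccount.queue.core.windows.net/sample-queue"),
    new DefaultAzureCredential());

await queueClient.SendMessageAsync("hello");   // Put a message (.../messages/add/action)
await queueClient.PeekMessagesAsync(10);       // Peek messages (.../messages/read)

var received = await queueClient.ReceiveMessagesAsync(1);  // Get or delete messages
foreach (var message in received.Value)                    // (.../messages/process/action)
{
    await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
}

await queueClient.ClearMessagesAsync();        // Clear messages (.../messages/delete)
```
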
+## Azure Blob storage attributes
+
+This section lists the Azure Blob storage attributes you can use in your condition expressions depending on the action you target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for your condition because the attributes must be available across the selected actions.
> [!NOTE]
> Attributes and values listed are considered case-insensitive, unless stated otherwise.
+### Account name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Account name |
+> | **Description** | Name of a storage account. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts:name] StringEquals 'sampleaccount'`<br/>[Example: Read or write blobs in named storage account with specific encryption scope](storage-auth-abac-examples.md#example-read-or-write-blobs-in-named-storage-account-with-specific-encryption-scope) |
+
+### Blob index tags [Keys]
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Blob index tags [Keys] |
+> | **Description** | Index tags on a blob resource.<br/>Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check the key in blob index tags. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&` |
+> | **Attribute source** | Resource<br/>Request |
+> | **Attribute type** | StringList |
+> | **Is key case sensitive** | True |
+> | **Hierarchical namespace support** | False |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}`<br/>[Example: Existing blobs must have blob index tag keys](storage-auth-abac-examples.md#example-existing-blobs-must-have-blob-index-tag-keys) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+### Blob index tags [Values in key]
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Blob index tags [Values in key] |
+> | **Description** | Index tags on a blob resource.<br/>Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check both the key (case-sensitive) and value in blob index tags. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags` |
+> | **Attribute source** | Resource<br/>Request |
+> | **Attribute type** | String |
+> | **Is key case sensitive** | True |
+> | **Hierarchical namespace support** | False |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:`*keyname*`<$key_case_sensitive$>`<br/>`@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'`<br/>[Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+### Blob path
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Blob path |
+> | **Description** | Path of a virtual directory, blob, folder or file resource.<br/>Use when you want to check the blob name or folders in a blob path. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'`<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path) |
+ > [!NOTE]
-> When specifying conditions for `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path` attribute, the values shouldn't include the container name or a preceding '/' character. Use the path characters without any URL encoding.
+> When specifying conditions for the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path` attribute, the values shouldn't include the container name or a preceding slash (`/`) character. Use the path characters without any URL encoding.
+
+### Blob prefix
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Blob prefix |
+> | **Description** | Allowed prefix of blobs to be listed.<br/>Path of a virtual directory or folder resource. Use when you want to check the folders in a blob path. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix` |
+> | **Attribute source** | Request |
+> | **Attribute type** | String |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix] StringStartsWith 'readonly/'`<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path) |
> [!NOTE]
-> Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which have a [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS). You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.
+> When specifying conditions for the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix` attribute, the values shouldn't include the container name or a preceding slash (`/`) character. Use the path characters without any URL encoding.
-## Attributes available for each action
+### Container name
-The following table lists which attributes you can use in your condition expressions depending on the action you target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for your condition because the attributes must be available across the selected actions.
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Container name |
+> | **Description** | Name of a storage container or file system.<br/>Use when you want to check the container name. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
-> [!div class="mx-tableFixed"]
-> | DataAction | Attribute | Type | Applies to |
-> | | | | |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read`<br/>**Suboperation**<br/>`Blob.Read.WithTagConditions` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | | `tags` | dictionaryOfString | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write`<br/>**Suboperation**<br/>`Blob.Write.WithTagHeaders` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | | `tags` | dictionaryOfString | RequestAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action`<br/>**Suboperation**<br/>`Blob.Write.WithTagHeaders` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | | `tags` | dictionaryOfString | RequestAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/manageOwnership/action` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/permanentDelete/action` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | | `tags` | dictionaryOfString | ResourceAttributeOnly |
-> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` | `containers:name` | string | ResourceAttributeOnly |
-> | | `blobs:path` | string | ResourceAttributeOnly |
-> | | `tags` | dictionaryOfString | RequestAttributeOnly |
+### Encryption scope name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Encryption scope name |
+> | **Description** | Name of the encryption scope used to encrypt data.<br/>Available only for storage accounts where hierarchical namespace is not enabled. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/encryptionScopes:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Exists support** | True |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}`<br/>[Example: Read blobs with specific encryption scopes](storage-auth-abac-examples.md#example-read-blobs-with-specific-encryption-scopes) |
+> | **Learn more** | [Create and manage encryption scopes](../blobs/encryption-scope-manage.md) |
+
+### Is hierarchical namespace enabled
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Is hierarchical namespace enabled |
+> | **Description** | Whether hierarchical namespace is enabled on the storage account.<br/>Applicable only at resource group scope or above. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts:isHnsEnabled` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | Boolean |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true`<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+### Snapshot
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Snapshot |
+> | **Description** | The Snapshot identifier for the Blob snapshot. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot` |
+> | **Attribute source** | Request |
+> | **Attribute type** | DateTime |
+> | **Exists support** | True |
+> | **Hierarchical namespace support** | False |
+> | **Examples** | `Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]`<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+### Version ID
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Version ID |
+> | **Description** | The version ID of the versioned Blob.<br/>Available only for storage accounts where hierarchical namespace is not enabled. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId` |
+> | **Attribute source** | Request |
+> | **Attribute type** | DateTime |
+> | **Exists support** | True |
+> | **Hierarchical namespace support** | False |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'`<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+## Azure Queue storage attributes
+
+This section lists the Azure Queue storage attributes you can use in your condition expressions depending on the action you target.
+
+### Queue name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Queue name |
+> | **Description** | Name of a storage queue. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/queueServices/queues:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
## See also
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-examples.md
Previously updated : 11/16/2021 Last updated : 05/16/2022 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
This article lists some examples of role assignment conditions.
For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
-## Example 1: Read access to blobs with a tag
+## Blob index tags
-This condition allows users to read blobs with a blob index tag key of Project and a tag value of Cascade. Attempts to access blobs without this key-value tag will not be allowed.
+### Example: Read blobs with a blob index tag
-> [!TIP]
-> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md).
+This condition allows users to read blobs with a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and a value of Cascade. Attempts to access blobs without this key-value tag will not be allowed.
-![Diagram of example 1 condition showing read access to some blob with a tag.](./media/storage-auth-abac-examples/example-1.png)
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+
+![Diagram of condition showing read access to blobs with a blob index tag.](./media/storage-auth-abac-examples/blob-index-tags-read.png)
``` (
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
- AND
- SubOperationMatches{'Blob.Read.WithTagConditions'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
- )
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ )
) ```
This condition allows users to read blobs with a blob index tag key of Project a
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Read content from a blob with tag conditions |
-| Attribute source | Resource |
-| Attribute | Blob index tags [Values in key] |
-| Key | {keyName} |
-| Operator | StringEquals |
-| Value | {keyValue} |
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read content from a blob with tag conditions](storage-auth-abac-attributes.md#read-content-from-a-blob-with-tag-conditions) |
+> | Attribute source | Resource |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | {keyName} |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {keyValue} |
-![Screenshot of example 1 condition editor in Azure portal.](./media/storage-auth-abac-examples/example-1-condition-1-portal.png)
#### Azure PowerShell
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
Get-AzStorageBlob -Container <containerName> -Blob <blobName> -Context $bearerCtx ```
-## Example 2: New blobs must include a tag
+### Example: New blobs must include a blob index tag
-This condition requires that any new blobs must include a blob index tag key of Project and a tag value of Cascade.
+This condition requires that any new blobs must include a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and a value of Cascade.
-> [!TIP]
-> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md).
+There are two actions that allow you to create new blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
-There are two permissions that allow you to create new blobs, so you must target both. You must add this condition to any role assignments that include one of the following permissions.
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
-- /blobs/write (create or update)-- /blobs/add/action (create)-
-![Diagram of example 2 condition showing new blobs must include a tag.](./media/storage-auth-abac-examples/example-2.png)
+![Diagram of condition showing new blobs must include a blob index tag.](./media/storage-auth-abac-examples/blob-index-tags-new-blobs.png)
``` (
There are two permissions that allow you to create new blobs, so you must target
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Write to a blob with blob index tags<br/>Write content to a blob with blob index tags |
-| Attribute source | Request |
-| Attribute | Blob index tags [Values in key] |
-| Key | {keyName} |
-| Operator | StringEquals |
-| Value | {keyValue} |
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags)<br/>[Write content to a blob with blob index tags](storage-auth-abac-attributes.md#write-content-to-a-blob-with-blob-index-tags) |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | {keyName} |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {keyValue} |
-![Screenshot of example 2 condition 1 editor in Azure portal.](./media/storage-auth-abac-examples/example-2-condition-1-portal.png)
#### Azure PowerShell
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blo
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blob "Example2.txt" -Tag $grantedTag -Context $bearerCtx ```
-## Example 3: Existing blobs must have tag keys
-
-This condition requires that any existing blobs be tagged with at least one of the allowed blob index tag keys: Project or Program. This condition is useful for adding governance to existing blobs.
+### Example: Existing blobs must have blob index tag keys
-> [!TIP]
-> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md).
+This condition requires that any existing blobs be tagged with at least one of the allowed [blob index tag](../blobs/storage-blob-index-how-to.md) keys: Project or Program. This condition is useful for adding governance to existing blobs.
-There are two permissions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following permissions.
+There are two actions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
-- /blobs/write (update or create, cannot exclude create)-- /blobs/tags/write
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` | |
-![Diagram of example 3 condition showing existing blobs must have tag keys.](./media/storage-auth-abac-examples/example-3.png)
+![Diagram of condition showing existing blobs must have blob index tag keys.](./media/storage-auth-abac-examples/blob-index-tags-keys.png)
``` (
There are two permissions that allow you to update tags on existing blobs, so yo
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Write to a blob with blob index tags<br/>Write blob index tags |
-| Attribute source | Request |
-| Attribute | Blob index tags [Keys] |
-| Operator | ForAllOfAnyValues:StringEquals |
-| Value | {keyName1}<br/>{keyName2} |
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags)<br/>[Write blob index tags](storage-auth-abac-attributes.md#write-blob-index-tags) |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Keys]](storage-auth-abac-attributes.md#blob-index-tags-keys) |
+> | Operator | [ForAllOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#forallofanyvalues) |
+> | Value | {keyName1}<br/>{keyName2} |
-![Screenshot of example 3 condition 1 editor in Azure portal.](./media/storage-auth-abac-examples/example-3-condition-1-portal.png)
#### Azure PowerShell
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blo
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blob "Example3.txt" -Tag $grantedTag -Context $bearerCtx ```
-## Example 4: Existing blobs must have a tag key and values
+### Example: Existing blobs must have a blob index tag key and values
-This condition requires that any existing blobs to have a blob index tag key of Project and tag values of Cascade, Baker, or Skagit. This condition is useful for adding governance to existing blobs.
+This condition requires that any existing blobs have a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and values of Cascade, Baker, or Skagit. This condition is useful for adding governance to existing blobs.
-> [!TIP]
-> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md).
+There are two actions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
-There are two permissions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following permissions.
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` | |
-- /blobs/write (update or create, cannot exclude create)-- /blobs/tags/write-
-![Diagram of example 4 condition showing existing blobs must have a tag key and values.](./media/storage-auth-abac-examples/example-4.png)
+![Diagram of condition showing existing blobs must have a blob index tag key and values.](./media/storage-auth-abac-examples/blob-index-tags-key-values.png)
``` (
There are two permissions that allow you to update tags on existing blobs, so yo
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Write to a blob with blob index tags<br/>Write blob index tags |
-| Attribute source | Request |
-| Attribute | Blob index tags [Keys] |
-| Operator | ForAnyOfAnyValues:StringEquals |
-| Value | {keyName} |
-| Operator | And |
-| **Expression 2** | |
-| Attribute source | Request |
-| Attribute | Blob index tags [Values in key] |
-| Key | {keyName} |
-| Operator | ForAllOfAnyValues:StringEquals |
-| Value | {keyValue1}<br/>{keyValue2}<br/>{keyValue3} |
-
-![Screenshot of example 4 condition 1 editor in Azure portal.](./media/storage-auth-abac-examples/example-4-condition-1-portal.png)
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags)<br/>[Write blob index tags](storage-auth-abac-attributes.md#write-blob-index-tags) |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Keys]](storage-auth-abac-attributes.md#blob-index-tags-keys) |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Value | {keyName} |
+> | Operator | And |
+> | **Expression 2** | |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | {keyName} |
+> | Operator | [ForAllOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#forallofanyvalues) |
+> | Value | {keyValue1}<br/>{keyValue2}<br/>{keyValue3} |
+ #### Azure PowerShell
Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag2
Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag3 -Context $bearerCtx ```
-## Example 5: Read, write, or delete blobs in named containers
+## Blob container names or paths
+
+### Example: Read, write, or delete blobs in named containers
This condition allows users to read, write, or delete blobs in storage containers named blobs-example-container. This condition is useful for sharing specific storage containers with other users in a subscription.
-There are four permissions for read, write, and delete of existing blobs, so you must target all permissions. You must add this condition to any role assignments that include one of the following permissions.
+There are five actions for read, write, and delete of existing blobs. You must add this condition to any role assignments that include one of the following actions.
-- /blobs/delete-- /blobs/read-- /blobs/write (update or create)-- /blobs/add/action (create)
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
Suboperations are not used in this condition because the subOperation is needed only when conditions are authored based on tags.
-![Diagram of example 5 condition showing read, write, or delete blobs in named containers.](./media/storage-auth-abac-examples/example-5.png)
+![Diagram of condition showing read, write, or delete blobs in named containers.](./media/storage-auth-abac-examples/containers-read-write-delete.png)
+
+Storage Blob Data Owner
``` (
Suboperations are not used in this condition because the subOperation is needed
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
) OR (
Suboperations are not used in this condition because the subOperation is needed
) ```
+Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ )
+)
+```
++ #### Azure portal Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Delete a blob<br/>Read a blob<br/>Write to a blob<br/>Create a blob or snapshot, or append data |
-| Attribute source | Resource |
-| Attribute | Container name |
-| Operator | StringEquals |
-| Value | {containerName} |
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Delete a blob](storage-auth-abac-attributes.md#delete-a-blob)<br/>[Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
-![Screenshot of example 5 condition 1 editor in Azure portal.](./media/storage-auth-abac-examples/example-5-condition-1-portal.png)
#### Azure PowerShell Here's how to add this condition using Azure PowerShell. ```azurepowershell
-$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'))"
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID $testRa.Condition = $condition $testRa.ConditionVersion = "2.0"
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Example5
$content = Remove-AzStorageBlob -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx ```
-## Example 6: Read access to blobs in named containers with a path
+### Example: Read blobs in named containers with a path
This condition allows read access to storage containers named blobs-example-container with a blob path of readonly/*. This condition is useful for sharing specific parts of storage containers for read access with other users in the subscription.
-You must add this condition to any role assignments that include the following permission.
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
+
+![Diagram of condition showing read access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-read.png)
+
+Storage Blob Data Owner
-- /blobs/read
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'
+ )
+)
+```
-![Diagram of example 6 condition showing read access to blobs in named containers with a path.](./media/storage-auth-abac-examples/example-6.png)
+Storage Blob Data Reader, Storage Blob Data Contributor
``` (
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
- AND
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'
- )
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'
+ )
) ```
You must add this condition to any role assignments that include the following p
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Read a blob |
-| Attribute source | Resource |
-| Attribute | Container name |
-| Operator | StringEquals |
-| Value | {containerName} |
-| **Expression 2** | |
-| Operator | And |
-| Attribute source | Resource |
-| Attribute | Blob path |
-| Operator | StringLike |
-| Value | {pathString} |
-
-![Screenshot of example 6 condition 1 editor in Azure portal.](./media/storage-auth-abac-examples/example-6-condition-1-portal.png)
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Resource |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
+> | Value | {pathString} |
+ #### Azure PowerShell Here's how to add this condition using Azure PowerShell. ```azurepowershell
-$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'))"
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID $testRa.Condition = $condition $testRa.ConditionVersion = "2.0"
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Ungrante
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "readonly/Example6.txt" -Context $bearerCtx ```
-## Example 7: Write access to blobs in named containers with a path
+### Example: Read or list blobs in named containers with a path
+
+This condition allows read and list access to storage containers named blobs-example-container with a blob path of readonly/*. Condition #1 applies to read actions excluding list blobs. Condition #2 applies to list blobs. This condition is useful for sharing specific parts of storage containers for read or list access with other users in the subscription.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
+
+![Diagram of condition showing read and list access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-read.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringStartsWith 'readonly/'
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix] StringStartsWith 'readonly/'
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringStartsWith 'readonly/'
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix] StringStartsWith 'readonly/'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!NOTE]
+> The Azure portal uses prefix='' to list blobs from the container's root directory. After the condition that uses prefix StringStartsWith 'readonly/' for the list blobs operation is added, targeted users won't be able to list blobs from the container's root directory in the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Resource |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringStartsWith](../../role-based-access-control/conditions-format.md#stringstartswith) |
+> | Value | {pathString} |
+
+> [!div class="mx-tableFixed"]
+> | Condition #2 | Setting |
+> | | |
+> | Actions | [List blobs](storage-auth-abac-attributes.md#list-blobs)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Request |
+> | Attribute | [Blob prefix](storage-auth-abac-attributes.md#blob-prefix) |
+> | Operator | [StringStartsWith](../../role-based-access-control/conditions-format.md#stringstartswith) |
+> | Value | {pathString} |
+
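+
+As a rough sketch, the Storage Blob Data Reader or Storage Blob Data Contributor variant of this condition could be applied with Azure PowerShell, following the same role assignment pattern as the earlier examples; `$scope`, `$roleDefinitionName`, and `$userObjectID` are assumed to be set as in those examples.
+
+```azurepowershell
+# Two-part condition copied from above: read (excluding list) is limited by blob path, list is limited by blob prefix
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringStartsWith 'readonly/')) AND ((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container' AND @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix] StringStartsWith 'readonly/'))"
+# Update an existing role assignment with the condition
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```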
+### Example: Write blobs in named containers with a path
This condition allows a partner (an Azure AD guest user) to drop files into storage containers named contosocorp with a path of uploads/contoso/*. This condition is useful for allowing other users to put data in storage containers.
-You must add this condition to any role assignments that include the following permissions.
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
+
+![Diagram of condition showing write access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-write.png)
+
+Storage Blob Data Owner
-- /blobs/write (create or update)-- /blobs/add/action (create)
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/contoso/*'
+ )
+)
+```
-![Diagram of example 7 condition showing write access to blobs in named containers with a path.](./media/storage-auth-abac-examples/example-7.png)
+Storage Blob Data Contributor
``` (
You must add this condition to any role assignments that include the following p
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Write to a blob<br/>Create a blob or snapshot, or append data |
-| Attribute source | Resource |
-| Attribute | Container name |
-| Operator | StringEquals |
-| Value | {containerName} |
-| **Expression 2** | |
-| Operator | And |
-| Attribute source | Resource |
-| Attribute | Blob path |
-| Operator | StringLike |
-| Value | {pathString} |
-
-![Screenshot of example 7 condition 1 editor in Azure portal.](./media/storage-auth-abac-examples/example-7-condition-1-portal.png)
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Resource |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
+> | Value | {pathString} |
+ #### Azure PowerShell Here's how to add this condition using Azure PowerShell. ```azurepowershell
-$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/contoso/*'))"
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/contoso/*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID $testRa.Condition = $condition $testRa.ConditionVersion = "2.0"
$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "Example7
$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "uploads/contoso/Example7.txt" -Context $bearerCtx -File $localSrcFile ```
-## Example 8: Read access to blobs with a tag and a path
+### Example: Read blobs with a blob index tag and a path
-This condition allows a user to read blobs with a blob index tag key of Program, a tag value of Alpine, and a blob path of logs*. The blob path of logs* also includes the blob name.
+This condition allows a user to read blobs with a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Program, a value of Alpine, and a blob path of logs*. The blob path of logs* also includes the blob name.
-> [!TIP]
-> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md).
+You must add this condition to any role assignments that include the following action.
-You must add this condition to any role assignments that includes the following permission.
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
-- /blobs/read-
-![Diagram of example 8 condition showing read access to blobs with a tag and a path.](./media/storage-auth-abac-examples/example-8.png)
+![Diagram of condition showing read access to blobs with a blob index tag and a path.](./media/storage-auth-abac-examples/blob-index-tags-path-read.png)
``` (
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
- AND
- SubOperationMatches{'Blob.Read.WithTagConditions'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<$key_case_sensitive$>] StringEquals 'Alpine'
- )
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<$key_case_sensitive$>] StringEquals 'Alpine'
+ )
) AND (
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'
- )
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'
+ )
) ```
AND
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Read content from a blob with tag conditions |
-| Attribute source | Resource |
-| Attribute | Blob index tabs [Values in key] |
-| Key | {keyName} |
-| Operator | StringEquals |
-| Value | {keyValue} |
-
-![Screenshot of example 8 condition 1 editor in Azure portal.](./media/storage-auth-abac-examples/example-8-condition-1-portal.png)
-
-| Condition #2 | Setting |
-| | |
-| Actions | Read a blob |
-| Attribute source | Resource |
-| Attribute | Blob path |
-| Operator | StringLike |
-| Value | {pathString} |
-
-![Screenshot of example 8 condition 2 editor in Azure portal.](./media/storage-auth-abac-examples/example-8-condition-2-portal.png)
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read content from a blob with tag conditions](storage-auth-abac-attributes.md#read-content-from-a-blob-with-tag-conditions) |
+> | Attribute source | Resource |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | {keyName} |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {keyValue} |
++
+> [!div class="mx-tableFixed"]
+> | Condition #2 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Resource |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
+> | Value | {pathString} |
+ #### Azure PowerShell Here's how to add this condition using Azure PowerShell. ```azurepowershell
-$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<`$key_case_sensitive`$>] StringEquals 'Alpine')) AND ((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'))"
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<`$key_case_sensitive`$>] StringEquals 'Alpine')) AND ((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID $testRa.Condition = $condition $testRa.ConditionVersion = "2.0"
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logsAlpi
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logs/AlpineFile.txt" -Context $bearerCtx ```
-## Example 9: Allow read and write access to blobs based on tags and custom security attributes
+## Blob versions or blob snapshots
-This condition allows read and write access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) that matches the blob index tag.
+### Example: Read current blob versions and a specific blob version
+
+This condition allows a user to read current blob versions as well as read blobs with a version ID of 2022-06-01T23:38:32.8883645Z. The user cannot read other blob versions.
+
+> [!NOTE]
+> The condition includes a `NOT Exists` expression for the version ID attribute. This expression is included so that the Azure portal can list the current version of the blob.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+
+![Diagram of condition showing read access to a specific blob version.](./media/storage-auth-abac-examples/version-id-specific-blob-read.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'
+ OR
+ NOT Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId]
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Request |
+> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
+> | Operator | [DateTimeEquals](../../role-based-access-control/conditions-format.md#datetime-comparison-operators) |
+> | Value | &lt;blobVersionId&gt; |
+> | **Expression 2** | |
+> | Operator | Or |
+> | Attribute source | Request |
+> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
+> | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) |
+> | Negate this expression | Checked |
+
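+
+Here's a minimal Azure PowerShell sketch of applying this condition, reusing the pattern from the earlier PowerShell sections; the scope and principal variables are placeholders assumed to be set beforehand.
+
+```azurepowershell
+# Condition text from above as a one-line string: allow the current version or the specific version ID
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z' OR NOT Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId]))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```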
+### Example: Delete old blob versions
+
+This condition allows a user to delete blob versions that are older than 06/01/2022 to perform cleanup.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action` | |
+
+![Diagram of condition showing delete access to old blob versions.](./media/storage-auth-abac-examples/version-id-blob-delete.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeLessThan '2022-06-01T00:00:00.0Z'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Delete a blob](storage-auth-abac-attributes.md#delete-a-blob)<br/>[Delete a version of a blob](storage-auth-abac-attributes.md#delete-a-version-of-a-blob) |
+> | Attribute source | Request |
+> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
+> | Operator | [DateTimeLessThan](../../role-based-access-control/conditions-format.md#datetime-comparison-operators) |
+> | Value | &lt;blobVersionId&gt; |
+
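+
+A quick sketch of adding this condition to an existing role assignment with Azure PowerShell, assuming the same placeholder variables as the earlier examples:
+
+```azurepowershell
+# Condition text from above: only versions older than 2022-06-01 can be deleted
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeLessThan '2022-06-01T00:00:00.0Z'))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```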
+### Example: Read current blob versions and any blob snapshots
+
+This condition allows a user to read current blob versions and any blob snapshots.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+
+![Diagram of condition showing read access to current blob versions and any blob snapshots.](./media/storage-auth-abac-examples/version-id-snapshot-blob-read.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]
+ OR
+ NOT Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId]
+ OR
+ @Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Request |
+> | Attribute | [Snapshot](storage-auth-abac-attributes.md#snapshot) |
+> | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) |
+> | **Expression 2** | |
+> | Operator | Or |
+> | Attribute source | Request |
+> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
+> | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) |
+> | Negate this expression | Checked |
+> | **Expression 3** | |
+> | Operator | Or |
+> | Attribute source | Resource |
+> | Attribute | [Is hierarchical namespace enabled](storage-auth-abac-attributes.md#is-hierarchical-namespace-enabled) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
+
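+
+If you prefer Azure PowerShell, the same condition could be applied to an existing role assignment roughly as follows (placeholder variables as in the earlier examples):
+
+```azurepowershell
+# Condition text from above: allow snapshots, current versions, or any blob in HNS-enabled accounts
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot] OR NOT Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] OR @Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```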
+## Hierarchical namespace
+
+### Example: Read only storage accounts with hierarchical namespace enabled
+
+This condition allows a user to only read blobs in storage accounts with [hierarchical namespace](../blobs/data-lake-storage-namespace.md) enabled. This condition is applicable only at resource group scope or above.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing read access to storage accounts with hierarchical namespace enabled.](./media/storage-auth-abac-examples/hierarchical-namespace-accounts-read.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Is hierarchical namespace enabled](storage-auth-abac-attributes.md#is-hierarchical-namespace-enabled) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
+
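+
+As a sketch, the Storage Blob Data Reader/Contributor variant of this condition could be set on an existing role assignment with Azure PowerShell (placeholder variables as in the earlier examples):
+
+```azurepowershell
+# Condition text from above: read is allowed only when the storage account has hierarchical namespace enabled
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```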
+## Encryption scope
+
+### Example: Read blobs with specific encryption scopes
+
+This condition allows a user to read blobs encrypted with encryption scope `validScope1` or `validScope2`.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+
+![Diagram of condition showing read access to blobs with encryption scope validScope1 or validScope2.](./media/storage-auth-abac-examples/encryption-scope-read-blobs.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Resource |
+> | Attribute | [Encryption scope name](storage-auth-abac-attributes.md#encryption-scope-name) |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Value | &lt;scopeName&gt; |
+
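+
+Here's how this condition might be applied with Azure PowerShell, following the role assignment pattern used throughout this article; the scope and principal variables are placeholders.
+
+```azurepowershell
+# Condition text from above: read only blobs encrypted with validScope1 or validScope2
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```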
+### Example: Read or write blobs in named storage account with specific encryption scope
+
+This condition allows a user to read or write blobs in a storage account named `sampleaccount` and encrypted with encryption scope `ScopeCustomKey1`. If blobs are not encrypted or decrypted with `ScopeCustomKey1`, the request returns a forbidden error.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
+
+> [!NOTE]
+> Because encryption scopes can differ across storage accounts, we recommend using the `storageAccounts:name` attribute together with the `encryptionScopes:name` attribute to restrict which encryption scopes are allowed.
+
+![Diagram of condition showing read or write access to blobs in sampleaccount storage account with encryption scope ScopeCustomKey1.](./media/storage-auth-abac-examples/encryption-scope-account-name-read-wite-blobs.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts:name] StringEquals 'sampleaccount'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'ScopeCustomKey1'}
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data) |
+> | Attribute source | Resource |
+> | Attribute | [Account name](storage-auth-abac-attributes.md#account-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | &lt;accountName&gt; |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Resource |
+> | Attribute | [Encryption scope name](storage-auth-abac-attributes.md#encryption-scope-name) |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Value | &lt;scopeName&gt; |
+
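+
+A rough Azure PowerShell sketch of attaching this account-plus-scope condition to an existing role assignment, with the same placeholder variables as the earlier examples:
+
+```azurepowershell
+# Condition text from above: read or write only in 'sampleaccount' and only with encryption scope ScopeCustomKey1
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})) OR (@Resource[Microsoft.Storage/storageAccounts:name] StringEquals 'sampleaccount' AND @Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'ScopeCustomKey1'}))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```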
+## Principal attributes
+
+### Example: Read or write blobs based on blob index tags and custom security attributes
+
+This condition allows read or write access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) that matches the [blob index tag](../blobs/storage-blob-index-how-to.md).
-For example, if Brenda has the attribute `Project=Baker`, she can only read and write blobs with the `Project=Baker` blob index tag. Similarly, Chandra can only read and write blobs with `Project=Cascade`.
+For example, if Brenda has the attribute `Project=Baker`, she can only read or write blobs with the `Project=Baker` blob index tag. Similarly, Chandra can only read or write blobs with `Project=Cascade`.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
For more information, see [Allow read access to blobs based on tags and custom security attributes](../../role-based-access-control/conditions-custom-security-attributes.md).
-![Diagram of example 9 condition showing read and write access to blobs based on tags and custom security attributes.](./media/storage-auth-abac-examples/condition-principal-attribute-example.png)
+![Diagram of condition showing read or write access to blobs based on blob index tags and custom security attributes.](./media/storage-auth-abac-examples/principal-blob-index-tags-read-write.png)
``` (
AND
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Read content from a blob with tag conditions |
-| Attribute source | Principal |
-| Attribute | &lt;attributeset&gt;_&lt;key&gt; |
-| Operator | StringEquals |
-| Option | Attribute |
-| Attribute source | Resource |
-| Attribute | Blob index tags [Values in key] |
-| Key | &lt;key&gt; |
-
-| Condition #2 | Setting |
-| | |
-| Actions | Write to a blob with blob index tags<br/>Write to a blob with blob index tags |
-| Attribute source | Principal |
-| Attribute | &lt;attributeset&gt;_&lt;key&gt; |
-| Operator | StringEquals |
-| Option | Attribute |
-| Attribute source | Request |
-| Attribute | Blob index tags [Values in key] |
-| Key | &lt;key&gt; |
-
-## Example 10: Allow read access to blobs based on tags and multi-value custom security attributes
-
-This condition allows read access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) with any values that matches the blob index tag.
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read content from a blob with tag conditions](storage-auth-abac-attributes.md#read-content-from-a-blob-with-tag-conditions) |
+> | Attribute source | [Principal](../../role-based-access-control/conditions-format.md#principal-attributes) |
+> | Attribute | &lt;attributeset&gt;_&lt;key&gt; |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Option | Attribute |
+> | Attribute source | Resource |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | &lt;key&gt; |
+
+> [!div class="mx-tableFixed"]
+> | Condition #2 | Setting |
+> | | |
+> | Actions | [Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags)<br/>[Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags) |
+> | Attribute source | [Principal](../../role-based-access-control/conditions-format.md#principal-attributes) |
+> | Attribute | &lt;attributeset&gt;_&lt;key&gt; |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Option | Attribute |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | &lt;key&gt; |
+
+### Example: Read blobs based on blob index tags and multi-value custom security attributes
+
+This condition allows read access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) with any value that matches the [blob index tag](../blobs/storage-blob-index-how-to.md).
For example, if Chandra has the Project attribute with the values Baker and Cascade, she can only read blobs with the `Project=Baker` or `Project=Cascade` blob index tag.
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+ For more information, see [Allow read access to blobs based on tags and custom security attributes](../../role-based-access-control/conditions-custom-security-attributes.md).
-![Diagram of example 10 condition showing read access to blobs based on tags and multi-value custom security attributes.](./media/storage-auth-abac-examples/condition-principal-attribute-multi-value-example.png)
+![Diagram of condition showing read access to blobs based on blob index tags and multi-value custom security attributes.](./media/storage-auth-abac-examples/principal-blob-index-tags-multi-value-read.png)
``` (
For more information, see [Allow read access to blobs based on tags and custom s
Here are the settings to add this condition using the Azure portal.
-| Condition #1 | Setting |
-| | |
-| Actions | Read content from a blob with tag conditions |
-| Attribute source | Resource |
-| Attribute | Blob index tags [Values in key] |
-| Key | &lt;key&gt; |
-| Operator | ForAnyOfAnyValues:StringEquals |
-| Option | Attribute |
-| Attribute source | Principal |
-| Attribute | &lt;attributeset&gt;_&lt;key&gt; |
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read content from a blob with tag conditions](storage-auth-abac-attributes.md#read-content-from-a-blob-with-tag-conditions) |
+> | Attribute source | Resource |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | &lt;key&gt; |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Option | Attribute |
+> | Attribute source | [Principal](../../role-based-access-control/conditions-format.md#principal-attributes) |
+> | Attribute | &lt;attributeset&gt;_&lt;key&gt; |
## Next steps
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac.md
Previously updated : 10/14/2021 Last updated : 05/16/2022
In this preview, you can add conditions to built-in roles or custom roles. The b
- [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) - [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) - [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner).
+- [Storage Queue Data Contributor](../../role-based-access-control/built-in-roles.md#storage-queue-data-contributor)
+- [Storage Queue Data Message Processor](../../role-based-access-control/built-in-roles.md#storage-queue-data-message-processor)
+- [Storage Queue Data Message Sender](../../role-based-access-control/built-in-roles.md#storage-queue-data-message-sender)
+- [Storage Queue Data Reader](../../role-based-access-control/built-in-roles.md#storage-queue-data-reader)
-You can use conditions with custom roles so long as the role includes [actions that support conditions](storage-auth-abac-attributes.md#actions-and-suboperations).
+You can use conditions with custom roles so long as the role includes [actions that support conditions](storage-auth-abac-attributes.md#azure-blob-storage-actions-and-suboperations).
If you're working with conditions based on [blob index tags](../blobs/storage-manage-find-blobs.md), you should use the *Storage Blob Data Owner* since permissions for tag operations are included in this role.
storage Storage Ref Azcopy Load Avere Cloud File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-load-avere-cloud-file-system.md
- Title: azcopy load clfs | Microsoft Docs-
-description: This article provides reference information for the azcopy load clfs command.
--- Previously updated : 07/24/2020-----
-# azcopy load clfs
-
-Transfers local data into a Container and stores it in Microsoft's Avere Cloud FileSystem (CLFS) format.
-
-## Synopsis
-
-The load command copies data into Azure Blob storage containers and then stores that data in Microsoft's Avere Cloud FileSystem (CLFS) format.
-The proprietary CLFS format is used by the Azure HPC Cache and Avere vFXT for Azure products.
-
-To leverage this command, install the necessary extension via: pip3 install clfsload~=1.0.23. Make sure CLFSLoad.py is
-in your PATH. For more information on this step, visit [https://aka.ms/azcopy/clfs](https://aka.ms/azcopy/clfs).
-
-This command is a simple option for moving existing data to cloud storage for use with specific Microsoft high-performance computing cache products.
-
-Because these products use a proprietary cloud filesystem format to manage data, that data cannot be loaded through the native copy command.
-
-Instead, the data must be loaded through the cache product itself or via this load command, which uses the correct proprietary format.
-This command lets you transfer data without using the cache. For example, to pre-populate storage or to add files to a working set without increasing cache load.
-
-The destination is an empty Azure Storage Container. When the transfer is complete, the destination container can be used with an Azure HPC Cache instance or Avere vFXT for Azure cluster.
-
-> [!NOTE]
-> This is a preview release of the load command. Please report any issues on the AzCopy GitHub repo.
-
-```
-azcopy load clfs [local dir] [container URL] [flags]
-```
-
-## Related conceptual articles
--- [Get started with AzCopy](storage-use-azcopy-v10.md)-- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data)-- [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)-
-## Examples
-
-Load an entire directory to a container with a SAS in CLFS format:
-
-```azcopy
-azcopy load clfs "/path/to/dir" "https://[account].blob.core.windows.net/[container]?[SAS]" --state-path="/path/to/state/path"
-```
-
-## Options
-
-**--compression-type** string specify the compression type to use for the transfers. Available values are: `DISABLED`,`LZ4`. (default `LZ4`)
-
-**--help** help for the `azcopy load clfs` command.
-
-**--log-level** string Define the log verbosity for the log file, available levels: `DEBUG`, `INFO`, `WARNING`, `ERROR`. (default `INFO`)
-
-**--max-errors** uint32 Specify the maximum number of transfer failures to tolerate. If enough errors occur, stop the job immediately.
-
-**--new-session** Start a new job rather than continuing an existing one whose tracking information is kept at `--state-path`. (default true)
-
-**--preserve-hardlinks** Preserve hard link relationships.
-
-**--state-path** string Required path to a local directory for job state tracking. The path must point to an existing directory in order to resume a job. It must be empty for a new job.
-
-## Options inherited from parent commands
-
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string | Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
-
-## See also
--- [azcopy](storage-ref-azcopy.md)
storage Storage Ref Azcopy Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-load.md
- Title: azcopy load | Microsoft Docs-
-description: This article provides reference information for the azcopy load command.
--- Previously updated : 07/24/2020-----
-# azcopy load
-
-Subcommands related to transferring data in specific formats
-
-## Synopsis
-
-Subcommands related to transferring data in specific formats, such as Microsoft's Avere Cloud FileSystem (CLFS) format.
-
-## Related conceptual articles
--- [Get started with AzCopy](storage-use-azcopy-v10.md)-- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data)-- [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)-
-## Examples
-
-Load an entire directory to a container with a SAS in CLFS format:
-
-```azcopy
-azcopy load clfs "/path/to/dir" "https://[account].blob.core.windows.net/[container]?[SAS]" --state-path="/path/to/state/path"
-```
-
-## Options
-
-|Option|Description|
-|--|--|
-|-h, --help|Shows help content for the load command.|
-
-## Options inherited from parent commands
-
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string | Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
-
-## See also
--- [azcopy](storage-ref-azcopy.md)
storage Storage Use Azcopy V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md
The following table lists all AzCopy v10 commands. Each command links to a refer
|[azcopy jobs remove](storage-ref-azcopy-jobs-remove.md?toc=/azure/storage/blobs/toc.json)|Remove all files associated with the given job ID.| |[azcopy jobs resume](storage-ref-azcopy-jobs-resume.md?toc=/azure/storage/blobs/toc.json)|Resumes the existing job with the given job ID.| |[azcopy jobs show](storage-ref-azcopy-jobs-show.md?toc=/azure/storage/blobs/toc.json)|Shows detailed information for the given job ID.|
-|[azcopy load](storage-ref-azcopy-load.md)|Subcommands related to transferring data in specific formats.|
-|[azcopy load clfs](storage-ref-azcopy-load-avere-cloud-file-system.md?toc=/azure/storage/blobs/toc.json)|Transfers local data into a Container and stores it in Microsoft's Avere Cloud FileSystem (CLFS) format.|
|[azcopy list](storage-ref-azcopy-list.md?toc=/azure/storage/blobs/toc.json)|Lists the entities in a given resource.| |[azcopy login](storage-ref-azcopy-login.md?toc=/azure/storage/blobs/toc.json)|Logs in to Azure Active Directory to access Azure Storage resources.| |[azcopy logout](storage-ref-azcopy-logout.md?toc=/azure/storage/blobs/toc.json)|Logs the user out and terminates access to Azure Storage resources.|
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
Future versions of Windows Server will be added as they are released.
> We recommend keeping all servers that you use with Azure File Sync up to date with the latest updates from Windows Update. ### Minimum system resources
-Azure File Sync requires a server, either physical or virtual, with at least one CPU and a minimum of 2 GiB of memory.
+Azure File Sync requires a server, either physical or virtual, with at least one CPU, a minimum of 2 GiB of memory, and a locally attached volume formatted with the NTFS file system.
> [!Important] > If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum of 2048 MiB of memory.
In the following table, we have provided both the size of the namespace as well
> > Typical churn is 0.5% of the namespace changing per day. For higher levels of churn, consider adding more CPU. -- A locally attached volume formatted with the NTFS file system.- ### Evaluation cmdlet Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation cmdlet. This cmdlet checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported operating system version. Its checks cover most but not all of the features mentioned below; we recommend you read through the rest of this section carefully to ensure your deployment goes smoothly.
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
In this example, the number of cases is stored either as `int32`, `int64`, or `f
## Troubleshooting
-Review the [self-help page](resources-self-help-sql-on-demand.md#cosmos-db) to find the known issues or troubleshooting steps that can help you to resolve potential problems with Cosmos DB queries.
+Review the [self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) to find the known issues or troubleshooting steps that can help you to resolve potential problems with Cosmos DB queries.
## Next steps
For more information, see the following articles:
- [Use Power BI and serverless SQL pool with Azure Synapse Link](../../cosmos-db/synapse-link-power-bi.md) - [Create and use views in a serverless SQL pool](create-use-views.md) - [Tutorial on building serverless SQL pool views over Azure Cosmos DB and connecting them to Power BI models via DirectQuery](./tutorial-data-analyst.md)-- Visit [Synapse link for Cosmos DB self-help page](resources-self-help-sql-on-demand.md#cosmos-db) if you are getting some errors or experiencing performance issues.
+- Visit [Synapse link for Cosmos DB self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) if you are getting some errors or experiencing performance issues.
- Checkout the learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/).
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Title: Serverless SQL pool self-help
-description: This section contains information that can help you troubleshoot problems with serverless SQL pool.
+description: This article contains information that can help you troubleshoot problems with serverless SQL pool.
# Self-help for serverless SQL pool
-This article contains information about how to troubleshoot most frequent problems with serverless SQL pool in Azure Synapse Analytics.
+This article contains information about how to troubleshoot the most frequent problems with serverless SQL pool in Azure Synapse Analytics.
## Synapse Studio
-Synapse studio is easy to use tool that enables you to access your data using a browser without a need to install database access tools. However, Synapse studio isn't designed to read a large set of data or full management of SQL objects.
+Synapse Studio is an easy-to-use tool for accessing your data from a browser, with no need to install database access tools. However, Synapse Studio isn't designed to read large sets of data or to fully manage SQL objects.
### Serverless SQL pool is grayed out in Synapse Studio
-If Synapse Studio can't establish connection to serverless SQL pool, you'll notice that serverless SQL pool is grayed out or shows status "Offline". Usually, this problem occurs when one of the following cases happens:
+If Synapse Studio can't establish a connection to serverless SQL pool, you'll notice that serverless SQL pool is grayed out or shows the status **Offline**. Usually, this problem occurs when one of the following cases happens:
-1) Your network prevents communication to Azure Synapse backend. Most frequent case is that port 1443 is blocked. To get the serverless SQL pool to work, unblock this port. Other problems could prevent serverless SQL pool to work as well, [visit full troubleshooting guide for more information](../troubleshoot/troubleshoot-synapse-studio.md).
-2) You don't have permissions to log into serverless SQL pool. To gain access, one of the Azure Synapse workspace administrators should add you to workspace administrator or SQL administrator role. [Visit full guide on access control for more information](../security/synapse-workspace-access-control-overview.md).
+- Your network prevents communication to the Azure Synapse Analytics back-end. The most frequent case is that port 1443 is blocked. To get serverless SQL pool to work, unblock this port. Other problems could prevent serverless SQL pool from working too. For more information, see the [Troubleshooting guide](../troubleshoot/troubleshoot-synapse-studio.md).
+- You don't have permission to sign in to serverless SQL pool. To gain access, an Azure Synapse workspace administrator must add you to the workspace administrator role or the SQL administrator role. For more information, see [Azure Synapse access control](../security/synapse-workspace-access-control-overview.md).
-### Websocket connection was closed unexpectedly
+### Websocket connection closed unexpectedly
-If your query fails with the error message: 'Websocket connection was closed unexpectedly', it means that your browser connection to Synapse Studio was interrupted, for example because of a network issue.
+Your query might fail with the error message "Websocket connection was closed unexpectedly." This message means that your browser connection to Synapse Studio was interrupted, for example, because of a network issue.
-To resolve this issue, rerun this query. If this message occurs often in your environment, advise help from your network administrator, check firewall settings, and [visit this troubleshooting guide for more information](../troubleshoot/troubleshoot-synapse-studio.md).
+To resolve this issue, rerun this query. If this message occurs often in your environment, get help from your network administrator. You can also check firewall settings, and check the [Troubleshooting guide](../troubleshoot/troubleshoot-synapse-studio.md).
-If the issue still continues, create a [support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md) through the Azure portal and try [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) for the same queries instead of Synapse Studio for further investigation.
+If the issue continues, create a [support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md) through the Azure portal. Try [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) for the same queries instead of Synapse Studio for further investigation.
-### Serverless databases are not shown in Synapse studio
+### Serverless databases aren't shown in Synapse Studio
-If you don't see the databases that are created in serverless SQL pool, check is your serverless SQL pool started. If the serverless SQL pool is deactivated, the databases won't be shown. Execute any query (for example `SELECT 1`) on the serverless pool to activate it, and the databases will be shown.
+If you don't see the databases that are created in serverless SQL pool, check to see if your serverless SQL pool started. If serverless SQL pool is deactivated, the databases won't show. Execute any query, for example, `SELECT 1`, on serverless SQL pool to activate it and make the databases appear.
-### Synapse Serverless SQL pool is showing as unavailable
-Wrong network configuration is often the cause for this behavior. Make sure the ports are appropriately configured. In case you use firewall or Private Endpoint check their settings as well. Finally, make sure the appropriate roles are granted.
+### Synapse Serverless SQL pool shows as unavailable
+
+Incorrect network configuration is often the cause of this behavior. Make sure the ports are properly configured. If you use a firewall or private endpoints, check these settings too. Finally, make sure the appropriate roles are granted.
## Storage access
-If you're getting the errors while trying to access the files on storage, make sure that you have permissions to access data. You should be able to access publicly available files. If you're accessing data without credentials, make sure that your Azure AD identity can directly access the files.
-If you have SAS key that you should use to access files, make sure that you created a credential ([server-level](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential) or [database-scoped](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential)) that contains that credential. The credentials are required if you need to access data using the workspace [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) and custom [service principal name](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential).
+If you get errors while you try to access files in storage, make sure that you have permission to access data. You should be able to access publicly available files. If you try to access data without credentials, make sure that your Azure Active Directory (Azure AD) identity can directly access the files.
-### Cannot read, list or access files on data lake storage
+If you have a shared access signature key that you should use to access files, make sure that you created a [server-level](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential) or [database-scoped](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) credential that contains that key. The credentials are required if you need to access data by using the workspace [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) or a custom [service principal name (SPN)](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential).
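
For instance, here's a minimal sketch of wrapping a shared access signature token in a database-scoped credential; the credential name and secret are placeholders, not values from this article:

```sql
-- A master key must exist in the database before a credential with a secret can be created.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Database-scoped credential that stores the SAS token (placeholder values).
CREATE DATABASE SCOPED CREDENTIAL [SasTokenCredential]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';
```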
-If you're using Azure AD login without explicit credential, make sure that your Azure AD identity can access the files on storage. Your Azure AD identity need to have Blob Data Reader or list/read ACL permissions to access the files - see [Query fails because file cannot be opened](#query-fails-because-file-cannot-be-opened).
+### Can't read, list, or access files in Azure Data Lake Storage
-If you're accessing storage using [credentials](develop-storage-files-storage-access-control.md#credentials), make sure that your [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) or [SPN](develop-storage-files-storage-access-control.md?tabs=service-principal) has Data Reader/Contributor role, or ACL permissions. If you have used [SAS token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) make sure that it has `rl` permission and that it hasn't expired.
-If you are using SQL login and the `OPENROWSET` function [without data source](develop-storage-files-overview.md#query-files-using-openrowset), make sure that you have a server-level credential that matches the storage URI and has permission to access the storage.
+If you use an Azure AD sign-in without explicit credentials, make sure that your Azure AD identity can access the files in storage. Your Azure AD identity must have Blob Data Reader or list/read access control list (ACL) permissions to access the files. For more information, see [Query fails because file cannot be opened](#query-fails-because-file-cant-be-opened).
-### Query fails because file cannot be opened
+If you access storage by using [credentials](develop-storage-files-storage-access-control.md#credentials), make sure that your [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) or [SPN](develop-storage-files-storage-access-control.md?tabs=service-principal) has a Data Reader or Contributor role or ACL permissions. If you used a [shared access signature token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), make sure that it has `rl` permission and that it hasn't expired.
-If your query fails with the error 'File cannot be opened because it does not exist or it is used by another process' and you're sure that both files exist and aren't used by another process, then serverless SQL pool can't access the file. This problem usually happens because your Azure Active Directory identity doesn't have rights to access the file or because a firewall is blocking access to the file. By default, serverless SQL pool is trying to access the file using your Azure Active Directory identity. To resolve this issue, you need to have proper rights to access the file. The easiest way is to grant yourself a 'Storage Blob Data Contributor' role on the storage account you're trying to query.
-- [Visit full guide on Azure Active Directory access control for storage for more information](../../storage/blobs/assign-azure-role-data-access.md). -- [Visit Control storage account access for serverless SQL pool in Azure Synapse Analytics](develop-storage-files-storage-access-control.md)
+If you use a SQL sign-in and the `OPENROWSET` function [without a data source](develop-storage-files-overview.md#query-files-using-openrowset), make sure that you have a server-level credential that matches the storage URI and has permission to access the storage.
-**Alternative to Storage Blob Data Contributor role**
+### Query fails because file can't be opened
-Instead of granting Storage Blob Data Contributor, you can also grant more granular permissions on a subset of files.
+If your query fails with the error "File cannot be opened because it does not exist or it is used by another process" and you're sure that both files exist and aren't used by another process, serverless SQL pool can't access the file. This problem usually happens because your Azure AD identity doesn't have rights to access the file or because a firewall is blocking access to the file.
-* All users that need access to some data in this container also need to have the EXECUTE permission on all parent folders up to the root (the container).
-Learn more about [how to set ACLs in Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-explorer-acl.md).
+By default, serverless SQL pool tries to access the file by using your Azure AD identity. To resolve this issue, you must have proper rights to access the file. The easiest way is to grant yourself a Storage Blob Data Contributor role on the storage account you're trying to query.
-> [!NOTE]
-> Execute permission on the container level needs to be set within the Azure Data Lake Gen2.
-> Permissions on the folder can be set within Azure Synapse.
+For more information, see:
+
+- [Azure AD access control for storage](../../storage/blobs/assign-azure-role-data-access.md)
+- [Control storage account access for serverless SQL pool in Synapse Analytics](develop-storage-files-storage-access-control.md)
+#### Alternative to Storage Blob Data Contributor role
-If you would like to query data2.csv in this example, the following permissions are needed:
- - execute permission on container
- - execute permission on folder1
- - read permission on data2.csv
+Instead of granting yourself a Storage Blob Data Contributor role, you can also grant more granular permissions on a subset of files.
-![Drawing showing permission structure on data lake.](./media/resources-self-help-sql-on-demand/folder-structure-data-lake.png)
+All users who need access to some data in this container also must have EXECUTE permission on all parent folders up to the root (the container).
+
+Learn more about how to [set ACLs in Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-explorer-acl.md).
+
+> [!NOTE]
+> Execute permission on the container level must be set within Data Lake Storage Gen2.
+> Permissions on the folder can be set within Azure Synapse.
-* Log into Azure Synapse with an admin user that has full permissions on the data you want to access.
+If you want to query data2.csv in this example, the following permissions are needed:
-* In the data pane, right-click on the file and select MANAGE ACCESS.
+ - Execute permission on container
+ - Execute permission on folder1
+ - Read permission on data2.csv
-![Screenshot showing manage access UI.](./media/resources-self-help-sql-on-demand/manage-access.png)
+![Diagram that shows permission structure on data lake.](./media/resources-self-help-sql-on-demand/folder-structure-data-lake.png)
-* Choose at least "read" permission, type in the users UPN or Object ID, for example user@contoso.com and click Add
+1. Sign in to Azure Synapse with an admin user that has full permissions on the data you want to access.
+1. In the data pane, right-click the file and select **Manage access**.
-* Grant read permission for this user.
-![Screenshot showing grant read permissions UI](./media/resources-self-help-sql-on-demand/grant-permission.png)
+ ![Screenshot that shows the Manage access option.](./media/resources-self-help-sql-on-demand/manage-access.png)
+
+1. Select at least **Read** permission. Enter the user's UPN or object ID, for example, user@contoso.com. Select **Add**.
+1. Grant read permission for this user.
+
+ ![Screenshot that shows granting read permissions.](./media/resources-self-help-sql-on-demand/grant-permission.png)
> [!NOTE]
-> For guest users, this needs to be done directly with the Azure Data Lake Service as it can not be done directly through Azure Synapse.
+> For guest users, this step needs to be done directly with Azure Data Lake because it can't be done directly through Azure Synapse.
+
+### Content of directory on the path can't be listed
+
+This error indicates that the user who's querying Azure Data Lake can't list the files in storage. There are several scenarios where this error might happen:
+
+- The Azure AD user who's using [Azure AD pass-through authentication](develop-storage-files-storage-access-control.md?tabs=user-identity) doesn't have permission to list the files in Data Lake Storage.
+- The Azure AD or SQL user reads data by using a [shared access signature key](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) or [workspace managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), and that key or identity doesn't have permission to list the files in storage.
+- The user who's accessing Dataverse data doesn't have permission to query data in Dataverse. This scenario might happen if you use SQL users.
+- The user who's accessing Delta Lake might not have permission to read the Delta Lake transaction log.
-### Content of directory on the path cannot be listed
+The easiest way to resolve this issue is to grant yourself the Storage Blob Data Contributor role in the storage account you're trying to query.
-This error indicates that the user who is querying Azure Data Lake cannot list the files on storage. There are several scenarios where this error might happen:
-- Azure AD user who is using [Azure AD pass-through authentication](develop-storage-files-storage-access-control.md?tabs=user-identity) does not have permissions to list the files on Azure Data Lake storage.-- Azure AD or SQL user is reading data using [SAS key](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) or [workspace Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), and that key/identity does not have permission to list the files on the storage.-- User who is accessing DataVerse data does not have permission to query data in DataVerse. This might happen if you are using SQL users.-- User who is accessing Delta Lake might not have permission to read Delta Lake transaction log.
-
-The easiest way is to resolve this issue is grant yourself `Storage Blob DataContributor` role on the storage account you're trying to query.
-- [Visit full guide on Azure Active Directory access control for storage for more information](../../storage/blobs/assign-azure-role-data-access.md).-- [Visit Control storage account access for serverless SQL pool in Azure Synapse Analytics](develop-storage-files-storage-access-control.md)
-
-#### Content of Dataverse table cannot be listed
+For more information, see:
-If you are using the Synapse link for Dataverse to read the linked DataVerse tables, you need to use Azure AD account to access the linked data using the serverless SQL pool.
-If you try to use a SQL login to read an external table that is referencing the DataVerse table, you will get the following error:
+- [Azure AD access control for storage](../../storage/blobs/assign-azure-role-data-access.md)
+- [Control storage account access for serverless SQL pool in Synapse Analytics](develop-storage-files-storage-access-control.md)
+
+#### Content of Dataverse table can't be listed
+
+If you use the Azure Synapse link for Dataverse to read the linked Dataverse tables, you must use an Azure AD account to access the linked data by using serverless SQL pool. If you try to use a SQL sign-in to read an external table that's referencing the Dataverse table, you'll get the following error:
``` External table '???' is not accessible because content of directory cannot be listed. ```
-DataVerse external tables always use **Azure AD passthrough** authentication. You **cannot** configure them to use [SAS key](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) or [workspace Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity).
+Dataverse external tables always use Azure AD passthrough authentication. You *can't* configure them to use a [shared access signature key](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) or [workspace managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity).
-#### Content of Delta Lake transaction log cannot be listed
+#### Content of Delta Lake transaction log can't be listed
-The following error is returned when a serverless SQL pool cannot read the Delta Lake transaction log folder.
+The following error is returned when serverless SQL pool can't read the Delta Lake transaction log folder:
```Msg 13807, Level 16, State 1, Line 6 Content of directory on path 'https://.....core.windows.net/.../_delta_log/*.json' cannot be listed. ```
-Make sure that `_delta_log` folder exists (maybe you're querying plain Parquet files that aren't converted to Delta Lake format). If the `_delta_log` folder exists, make sure that you have both read and list permission on the underlying Delta Lake folders. Try to read \*.json files directly using FORMAT='CSV' (put your URI in the BULK parameter):
+Make sure the `_delta_log` folder exists. Maybe you're querying plain Parquet files that aren't converted to Delta Lake format. If the `_delta_log` folder exists, make sure you have both read and list permission on the underlying Delta Lake folders. Try to read \*.json files directly by using `FORMAT='csv'`. Put your URI in the BULK parameter:
```sql select top 10 *
If this query fails, the caller doesn't have permission to read the underlying s
## Query execution
-You might get the errors during the query execution in the following cases:
-- The caller [cannot access some objects](develop-storage-files-overview.md#permissions),-- The query [cannot access external data](develop-storage-files-storage-access-control.md#storage-permissions),-- The query contains some functionalities that are [not supported in serverless SQL pools](overview-features.md).
+You might get errors during the query execution in the following cases:
-### Query fails because it cannot be executed due to current resource constraints
+- The caller [can't access some objects](develop-storage-files-overview.md#permissions).
+- The query [can't access external data](develop-storage-files-storage-access-control.md#storage-permissions).
+- The query contains some functionalities that [aren't supported in serverless SQL pools](overview-features.md).
-If your query fails with the error message 'This query can't be executed due to current resource constraints', it means that serverless SQL pool isn't able to execute it at this moment due to resource constraints:
+### Query fails because it can't be executed due to current resource constraints
-- Make sure data types of reasonable sizes are used.
+Your query might fail with the error message "This query cannot be executed due to current resource constraints." This message means serverless SQL pool can't execute the query at this moment because of resource constraints. Here are some troubleshooting options:
+- Make sure data types of reasonable sizes are used.
- If your query targets Parquet files, consider defining explicit types for string columns because they'll be VARCHAR(8000) by default. [Check inferred data types](./best-practices-serverless-sql-pool.md#check-inferred-data-types). A sketch of an explicit `WITH` clause follows this list.
+- If your query targets CSV files, consider [creating statistics](develop-tables-statistics.md#statistics-in-serverless-sql-pool).
+- To optimize your query, see [Performance best practices for serverless SQL pool](./best-practices-serverless-sql-pool.md).
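
As a rough sketch of the explicit-types suggestion above, an OPENROWSET `WITH` clause can right-size string columns; the storage path and column names here are illustrative placeholders:

```sql
SELECT TOP 100 *
FROM OPENROWSET(
        BULK 'https://<account>.dfs.core.windows.net/<container>/data/*.parquet',
        FORMAT = 'PARQUET'
    )
    -- Explicit, right-sized types instead of the inferred VARCHAR(8000).
    WITH (
        product_id   INT,
        product_name VARCHAR(200)
    ) AS rows;
```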
-- If your query targets CSV files, consider [creating statistics](develop-tables-statistics.md#statistics-in-serverless-sql-pool).
+### Query timeout expired
-- Visit [performance best practices for serverless SQL pool](./best-practices-serverless-sql-pool.md) to optimize query.
+The error "Query timeout expired" is returned if the query executed more than 30 minutes on serverless SQL pool. This limit for serverless SQL pool can't be changed.
-### Query timeout expired
+Try to optimize your query by applying [best practices](best-practices-serverless-sql-pool.md#prepare-files-for-querying). Or try to materialize parts of your queries by using [create external table as select (CETAS)](create-external-table-as-select.md). Check if there's a concurrent workload running on serverless SQL pool because the other queries might take the resources. In that case, you might split the workload on multiple workspaces.
-The error *Query timeout expired* is returned if the query executed more than 30 minutes on serverless SQL pool. This is a limit of serverless SQL pool that cannot be changed. Try to optimize your query by applying [best practices](best-practices-serverless-sql-pool.md#prepare-files-for-querying), or try to materialize parts of your queries using [CETAS](create-external-table-as-select.md). Check is there a concurrent workload running on the serverless pool because the other queries might take the resources. In that case you might split the workload on multiple workspaces.
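
For illustration, a minimal CETAS sketch that materializes part of a query into storage; the schema, table, data source, and file format names are assumptions that must already exist in your environment:

```sql
-- Materialize an intermediate result so later queries read the smaller output.
CREATE EXTERNAL TABLE curated.DailyTotals
WITH (
    LOCATION = 'curated/daily-totals/',
    DATA_SOURCE = MyDataSource,      -- assumed external data source
    FILE_FORMAT = ParquetFileFormat  -- assumed external file format
)
AS
SELECT CAST(order_date AS date) AS order_date,
       SUM(amount) AS total_amount
FROM staging.Orders
GROUP BY CAST(order_date AS date);
```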
+### Invalid object name
-### Invalid object name
+The error "Invalid object name 'table name'" indicates that you're using an object, such as a table or view, that doesn't exist in the serverless SQL pool database. Try these options:
-The error *Invalid object name 'table name'* indicates that you are using an object (table or view) that doesn't exist in the serverless SQL pool database.
-- List the tables/views and check does the object exists. Use SSMS or ADS because Synapse studio might show some tables that are not available in the serverless SQL pool.-- If you see the object, check are you using some case-sensitive/binary database collation. Maybe the object name does not match the name that you used in the query. With a binary database collation, `Employee` and `employee` are two different objects.-- If you don't see the object, maybe you are trying to query a table from a Lake/Spark database. There are a few reasons why the table might not be available in the serverless pool:
-  - The table has some column types that cannot be represented in serverless SQL.
-  - The table has a format that is not supported in serverless SQL pool (Delta, ORC, etc.)
+- List the tables or views and check if the object exists. Use SQL Server Management Studio or Azure Data Studio because Synapse Studio might show some tables that aren't available in serverless SQL pool.
+- If you see the object, check whether you're using a case-sensitive or binary database collation. Maybe the object name doesn't match the name that you used in the query. With a binary database collation, `Employee` and `employee` are two different objects.
+- If you don't see the object, maybe you're trying to query a table from a lake or Spark database. The table might not be available in the serverless SQL pool because:
-### Unclosed quotation mark after the character string
+ - The table has some column types that can't be represented in serverless SQL pool.
+ - The table has a format that isn't supported in serverless SQL pool. Examples are Delta or ORC.
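
To check which objects actually exist in the serverless database, a quick catalog query such as the following sketch can help; the filter values are illustrative:

```sql
-- List user tables and views in the current serverless SQL pool database.
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.type IN ('U', 'V')
ORDER BY s.name, o.name;

-- External tables are also listed in their own catalog view.
SELECT * FROM sys.external_tables;
```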
-In some rare cases, where you're using `LIKE` operator on a string column or some comparison with the string literals, you might get the following error:
+### Unclosed quotation mark after the character string
+
+In rare cases where you use the LIKE operator on a string column or some comparison with string literals, you might get the following error:
``` Msg 105, Level 15, State 1, Line 88 Unclosed quotation mark after the character string ```
-This error might happen if you're using `Latin1_General_100_BIN2_UTF8` collation on the column. Try to set `Latin1_General_100_CI_AS_SC_UTF8` collation on the column instead of the `Latin1_General_100_BIN2_UTF8` collation to resolve the issue. If the error is still returned, raise a support request through the Azure portal.
+This error might happen if you use the `Latin1_General_100_BIN2_UTF8` collation on the column. Try to set `Latin1_General_100_CI_AS_SC_UTF8` collation on the column instead of the `Latin1_General_100_BIN2_UTF8` collation to resolve the issue. If the error is still returned, raise a support request through the Azure portal.
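
As an illustrative sketch, the collation can be overridden per column in an OPENROWSET `WITH` clause; the storage path and column name are placeholders:

```sql
SELECT *
FROM OPENROWSET(
        BULK 'https://<account>.dfs.core.windows.net/<container>/data/*.parquet',
        FORMAT = 'PARQUET'
    )
    WITH (
        -- Use the UTF-8 case-insensitive collation instead of Latin1_General_100_BIN2_UTF8.
        title VARCHAR(200) COLLATE Latin1_General_100_CI_AS_SC_UTF8
    ) AS rows
WHERE title LIKE 'Azure%';
```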
-### Could not allocate tempdb space while transferring data from one distribution to another
+### Couldn't allocate tempdb space while transferring data from one distribution to another
-The error *Could not allocate tempdb space while transferring data from one distribution to another* is returned when the query execution engine cannot process data and transfer it between the nodes that are executing the query.
-It is a special case of the generic [query fails because it cannot be executed due to current resource constraints](#query-fails-because-it-cannot-be-executed-due-to-current-resource-constraints) error. This error is returned when the resources allocated to the `tempdb` database are insufficient to run the query.
+The error "Could not allocate tempdb space while transferring data from one distribution to another" is returned when the query execution engine can't process data and transfer it between the nodes that are executing the query.
+It's a special case of the generic [query fails because it cannot be executed due to current resource constraints](#query-fails-because-it-cant-be-executed-due-to-current-resource-constraints) error. This error is returned when the resources allocated to the `tempdb` database are insufficient to run the query.
-Apply the best practices before you file a support ticket.
+Apply best practices before you file a support ticket.
-### Query fails with error while handling an external file (max error count reached)
+### Query fails with an error handling an external file (max errors count reached)
-If your query fails with the error message 'error handling external file: Max errors count reached', it means that there is a mismatch of a specified column type and the data that needs to be loaded.
-To get more information about the error and which rows and columns to look at, change the parser version from '2.0' to '1.0'.
+If your query fails with the error message "Error handling external file: Max errors count reached," it means there's a mismatch between a specified column type and the data that needs to be loaded. To get more information about the error and which rows and columns to look at, change the parser version from 2.0 to 1.0.
**Example**
-If you would like to query the file 'names.csv' with this query 1, Azure Synapse SQL serverless will return with such error.
+
+If you want to query the file names.csv with this Query 1, Azure Synapse serverless SQL pool returns with the following error:
names.csv+ ```csv Id,first name, 1, Adam
Id,first name,
``` Query 1:+ ```sql SELECT     TOP 100 *
FROM
AS [result] ```
-causes:
-
-`Error handling external file: 'Max error count reached'. File/External table name: [filepath].`
+Causes:
-As soon as parser version is changed from version 2.0 to version 1.0, the error messages help to identify the problem. The new error message is now instead:
+"Error handling external file: 'Max error count reached'. File/External table name: [filepath]."
-`Bulk load data conversion error (truncation) for row 1, column 2 (Text) in data file [filepath]`
+As soon as the parser version is changed from version 2.0 to 1.0, the error messages help to identify the problem. The new error message is now "Bulk load data conversion error (truncation) for row 1, column 2 (Text) in data file [filepath]."
-Truncation tells us that our column type is too small to fit our data. The longest first name in this 'names.csv' file has seven characters. Therefore, the according data type to be used should be at least VARCHAR(7).
-The error is caused by this line of code:
+Truncation tells you that your column type is too small to fit your data. The longest first name in this names.csv file has seven characters. The data type should therefore be at least VARCHAR(7). The error is caused by this line of code:
```sql [Text] VARCHAR (1) COLLATE Latin1_General_BIN2 ```
-Changing the query accordingly resolves the error: After debugging, change the parser version to 2.0 again to achieve maximum performance. Read more about when to use which parser version [here](develop-openrowset.md).
+
+Changing the query accordingly resolves the error. After debugging, change the parser version to 2.0 again to achieve maximum performance.
+
+For more information about when to use which parser version, see [Use OPENROWSET using serverless SQL pool in Synapse Analytics](develop-openrowset.md).
```sql SELECT
FROM
AS [result] ```
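
Spelled out as a rough sketch, a corrected query for this example could look like the following; the file path, parser version, and FIRSTROW value are assumptions for illustration:

```sql
SELECT TOP 100 *
FROM OPENROWSET(
        BULK 'https://<account>.dfs.core.windows.net/<container>/names.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '1.0',   -- switch back to '2.0' after debugging, as noted above
        FIRSTROW = 2
    )
    WITH (
        [ID]   SMALLINT,
        -- Wide enough for the longest first name in the sample file.
        [Text] VARCHAR(7) COLLATE Latin1_General_BIN2
    ) AS [result];
```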
-### Cannot bulk load because the file could not be opened
+### Can't bulk load because the file couldn't be opened
+
+The error "Cannot bulk load because the file could not be opened" is returned if a file is modified during the query execution. Usually, you might get an error like "Cannot bulk load because the file {file path} could not be opened. Operating system error code 12. (The access code is invalid.)"
-The error *Cannot bulk load because the file could not be opened* is returned if a file is modified during the query execution. Usually, you might get an error like:
-`Cannot bulk load because the file {file path} could not be opened. Operating system error code 12(The access code is invalid.).`
+The serverless SQL pools can't read files that are being modified while the query is running. The query can't take a lock on the files. If you know that the modification operation is *append*, you can try to set the following option:
-The serverless sql pools cannot read files that are being modified while the query is running. The query cannot take a lock on the files.
-If you know that the modification operation is **append**, you can try to set the following option `{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}`. See how to [query append-only files](query-single-csv-file.md#querying-appendable-files) or [create tables on append-only files](create-use-external-tables.md#external-table-on-appendable-files).
+ `{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}`.
+
+For more information, see how to [query append-only files](query-single-csv-file.md#querying-appendable-files) or [create tables on append-only files](create-use-external-tables.md#external-table-on-appendable-files).
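
A sketch of passing that option through OPENROWSET, assuming the `ROWSET_OPTIONS` argument described in the linked article; the storage path is a placeholder:

```sql
SELECT *
FROM OPENROWSET(
        BULK 'https://<account>.dfs.core.windows.net/<container>/logs/*.csv',
        FORMAT = 'CSV',
        -- Tolerate files that are appended to while the query runs.
        ROWSET_OPTIONS = '{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'
    ) AS rows;
```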
### Query fails with data conversion error
-If your query fails with the error message 'bulk load data conversion error (type mismatches or invalid character for the specified codepage) for row n, column m [columnname] in the data file [filepath]', it means that your data types did not match the actual data for row number n and column m.
+Your query might fail with the error message "Bulk load data conversion error (type mismatches or invalid character for the specified code page) for row n, column m [columnname] in the data file [filepath]." This message means your data types didn't match the actual data for row number n and column m.
+
+For instance, if you expect only integers in your data, but in row n there's a string, this error message is the one you'll get.
-For instance, if you expect only integers in your data but in row n there might be a string, this is the error message you will get.
-To resolve this problem, inspect the file and the data types you chose. Also check if your row delimiter and field terminator settings are correct. The following example shows how inspecting can be done using VARCHAR as column type.
-Read more on field terminators, row delimiters and escape quoting characters [here](query-single-csv-file.md).
+To resolve this problem, inspect the file and the data types you chose. Also check if your row delimiter and field terminator settings are correct. The following example shows how inspecting can be done by using VARCHAR as the column type.
-**Example**
-If you would like to query the file 'names.csv':
+For more information on field terminators, row delimiters, and escape quoting characters, see [Query CSV files](query-single-csv-file.md).
+
+**Example**
+
+If you want to query the file names.csv:
names.csv ```csv
Id, first name,
4,David five,Eva ```
-with the following 'query 1':
+with the following Query 1:
Query 1: ```sql
FROM
ASΓÇ»[result] ```
-Azure Synapse SQL serverless will return the error:
-`Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 6, column 1 (ID) in data file [filepath]`
+Azure Synapse serverless SQL pool returns the error "Bulk load data conversion error (type mismatch or invalid character for the specified code page) for row 6, column 1 (ID) in data file [filepath]."
+
+It's necessary to browse the data and make an informed decision to handle this problem. To look at the data that causes this problem, the data type needs to be changed first. Instead of querying the ID column with the data type SMALLINT, VARCHAR(100) is now used to analyze this issue.
-It is necessary to browse the data and make an informed decision to handle this problem.
-To look at the data that causes this problem, the data type needs to be changed first. Instead of querying column "ID" with the data type "SMALLINT", VARCHAR(100) is now used to analyze this issue.
-Using this slightly changed Query 2, the data can now be processed to return the list of names.
+With this slightly changed Query 2, the data can now be processed to return the list of names.
+
+Query 2:
-Query 2:
```sql SELECT     TOP 100 *
FROM
``` names.csv+ ```csv Id, first name, 1, Adam
Id, first name,
five, Eva ```
-You might observe that the data has unexpected values for ID in the fifth row.
-In such circumstances, it is important to align with the business owner of the data to agree on how corrupt data like this can be avoided. If prevention isn't possible at application level, reasonable sized VARCHAR might be the only option here.
+You might observe that the data has unexpected values for ID in the fifth row. In such circumstances, it's important to align with the business owner of the data to agree on how corrupt data like this example can be avoided. If prevention isn't possible at the application level, reasonable-sized VARCHAR might be the only option here.
> [!Tip]
-> Try to make VARCHAR() as short as possible. Avoid VARCHAR(MAX) if possible as this can impair performance.
+> Try to make VARCHAR() as short as possible. Avoid VARCHAR(MAX) if possible because it can impair performance.
+
+### The query result doesn't look as expected
-### The query result does not look expected. Resulting columns either empty or unexpected data is returned.
+Your query might not fail, but you might see that your result set isn't as expected. The resulting columns might be empty or unexpected data might be returned. In this scenario, it's likely that a row delimiter or field terminator was incorrectly chosen.
-If your query does not fail but you find that your resultset is not as expected, it is likely that row delimiter or field terminator have been chosen wrongly.
-To resolve this problem, it is needed to have another look at the data and change those settings. As shown next, debugging this query is easy like in the upcoming example.
+To resolve this problem, take another look at the data and change those settings. Debugging this query is easy, as shown in the following example.
**Example**
-If you would like to query the file 'names.csv' with the query in 'Query 1', Azure Synapse SQL serverless will return with result that looks odd.
+
+If you want to query the file names.csv with the query in Query 1, Azure Synapse serverless SQL pool returns with a result that looks odd:
names.csv+ ```csv Id,first name, 1, Adam
FROM
AS [result] ```
-| ID | firstname |
+| ID | Firstname |
| - |- | | 1,Adam | NULL | | 2,Bob | NULL |
FROM
| 4,David | NULL | | 5,Eva | NULL |
-There seems to be no value in our column "firstname". Instead, all values ended up being in column "ID". Those values are separated by comma.
-The problem was caused by this line of code as it is necessary to choose the comma instead of the semicolon symbol as field terminator:
+There seems to be no value in the column Firstname. Instead, all values ended up being in the ID column. Those values are separated by a comma. The problem was caused by this line of code because it's necessary to choose the comma instead of the semicolon symbol as field terminator:
```sql FIELDTERMINATOR =';',
Changing this single character solves the problem:
FIELDTERMINATOR =',', ```
-The resultset created by query 2 looks now as expected.
+The result set created by Query 2 now looks as expected:
Query 2:

```sql
FROM
```

returns
-| ID | firstname |
+| ID | Firstname |
| - | - |
| 1 | Adam |
| 2 | Bob |
| 4 | David |
| 5 | Eva |
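Because the digest shows only fragments of the corrected query, here's a hedged sketch of what Query 2 can look like with the comma field terminator; the storage URL, the FIRSTROW value, and the column names are assumptions.

```sql
SELECT TOP 100 *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/names.csv',
        FORMAT = 'CSV',
        FIELDTERMINATOR = ',',   -- comma instead of semicolon
        FIRSTROW = 2             -- skip the header row
    )
    WITH (
        Id VARCHAR(100) 1,
        Firstname VARCHAR(100) 2
    ) AS [result];
```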
+### Column of type isn't compatible with external data type
-### Column [column-name] of type [type-name] is not compatible with external data type [external-data-type-name]
-
-If your query fails with the error message 'Column [column-name] of type [type-name] is not compatible with external data type […]', it is likely that a PARQUET data type was mapped to a wrong SQL data type.
-For instance, if your parquet file has a column price with float numbers (like 12.89) and you tried to map it to INT, this is the error message you will get.
+If your query fails with the error message "Column [column-name] of type [type-name] is not compatible with external data type […]," it's likely that a PARQUET data type was mapped to an incorrect SQL data type.
+For instance, if your Parquet file has a column price with float numbers (like 12.89) and you tried to map it to INT, this error message is the one you'll get.
-To resolve this, inspect the file and the data types you chose. This [mapping table](develop-openrowset.md#type-mapping-for-parquet) helps to choose a correct SQL data type.
-Best practice hint: Specify mapping only for columns that would otherwise resolve into VARCHAR data type.
-Avoiding VARCHAR when possible, leads to better performance in queries.
+To resolve this issue, inspect the file and the data types you chose. This [mapping table](develop-openrowset.md#type-mapping-for-parquet) helps to choose a correct SQL data type. As a best practice, specify mapping only for columns that would otherwise resolve into the VARCHAR data type. Avoiding VARCHAR when possible leads to better performance in queries.
**Example**
-If you would like to query the file 'taxi-data.parquet' with this Query 1, Azure Synapse SQL serverless will return the following error.
+
+If you want to query the file taxi-data.parquet with this Query 1, Azure Synapse serverless SQL pool returns the following error:
taxi-data.parquet:
| 5 | 13091570.2799993 | 111.065989028627 |

Query 1:

```sql
SELECT *
FROM
AS [result]
```
-`Column 'SumTripDistance' of type 'INT' is not compatible with external data type 'Parquet physical type: DOUBLE', please try with 'FLOAT'. File/External table name: '<filepath>taxi-data.parquet'.`
+"Column 'SumTripDistance' of type 'INT' is not compatible with external data type 'Parquet physical type: DOUBLE', please try with 'FLOAT'. File/External table name: '<filepath>taxi-data.parquet'."
-This error message tells us that data types are not compatible and already comes with the suggestion to use the FLOAT instead of INT.
-The error is hence caused by this line of code:
+This error message tells you that data types aren't compatible and comes with the suggestion to use FLOAT instead of INT. The error is caused by this line of code:
```sql SumTripDistance INT, ```
-Using this slightly changed Query 2, the data can now be processed and shows all three columns.
+With this slightly changed Query 2, the data can now be processed and shows all three columns:
+
+Query 2:
-Query 2:
```sql
SELECT *
FROM
AS [result]
```
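The WITH clause itself is elided in the fragment above, so as a rough, hedged reconstruction, the corrected mapping can look like the following; only SumTripDistance comes from the error message, and the other column names and the storage URL are placeholders.

```sql
SELECT *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/taxi-data.parquet',
        FORMAT = 'PARQUET'
    )
    WITH (
        PassengerCount INT,       -- placeholder column
        SumTripDistance FLOAT,    -- FLOAT instead of INT, as the error message suggests
        AvgTripDistance FLOAT     -- placeholder column
    ) AS [result];
```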
-### The query references an object that is not supported in distributed processing mode
+### Query references an object that isn't supported in distributed processing mode
+
+The error "The query references an object that is not supported in distributed processing mode" indicates that you've used an object or function that can't be used while you query data in Azure Storage or Azure Cosmos DB analytical storage.
-The error *The query references an object that is not supported in distributed processing mode* indicates that you have used for object or function that cannot be used while querying data in Azure storage or Cosmos DB analytical storage. Some objects (such as system views) and functions cannot be used while querying data stored in Azure data lake or Cosmos DB analytical storage. Avoid using the queries that join external data with system views, load external data in a temp table, or use some security or metadata functions to filter external data.
+Some objects, like system views, and functions can't be used while you query data stored in Azure Data Lake or Azure Cosmos DB analytical storage. Avoid using the queries that join external data with system views, load external data in a temp table, or use some security or metadata functions to filter external data.
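As an illustration only (not taken from the article), the following hypothetical query joins external data with a system view, which is the kind of pattern that can raise this error; the storage URL and the join condition are invented for the example.

```sql
-- Illustrative pattern to avoid: joining external data with a system view.
SELECT c.name, r.*
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/data/*.parquet',
        FORMAT = 'PARQUET'
    ) AS r
INNER JOIN sys.columns AS c
    ON c.name = r.column_name;   -- hypothetical join condition
```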
-### `WaitIOCompletion` call failed
+### WaitIOCompletion call failed
-The error message `WaitIOCompletion call failed` indicates that the query failed while waiting to complete I/O operation that reads data from the remote storage (Azure Data Lake).
+The error message "WaitIOCompletion call failed" indicates that the query failed while waiting to complete the I/O operation that reads data from the remote storage, Azure Data Lake.
The error message has the following pattern:
```
Error handling external file: 'WaitIOCompletion call failed. HRESULT = ???'. File/External table name...
```
-Make sure that your storage is placed in the same region as serverless SQL pool. Check the storage metrics and verify that there are no other workloads on the storage layer (uploading new files) that could saturate I/O requests.
+Make sure that your storage is placed in the same region as serverless SQL pool. Check the storage metrics and verify there are no other workloads on the storage layer, such as uploading new files, that could saturate I/O requests.
-The field HRESULT contains the result code, below are the most common error codes and potential solutions:
+The field HRESULT contains the result code. The following error codes are the most common along with their potential solutions.
### [0x80070002](#tab/x80070002)
This error code means the source file isn't in storage.
There are reasons why this error code can happen:

- The file was deleted by another application.
-- Invalid execution plan cached
- - As a temporary mitigation, run the command `DBCC FREEPROCCACHE`. If the problem persists create a support ticket.
-
+ - In this common scenario, the query execution starts, it enumerates the files, and the files are found. Later, during the query execution, a file is deleted. For example, it could be deleted by Databricks, Spark, or Azure Data Factory. The query fails because the file isn't found.
+ - This issue can also occur with the Delta format. The query might succeed on retry because there's a new version of the table and the deleted file isn't queried again.
+- An invalid execution plan is cached.
+ - As a temporary mitigation, run the command `DBCC FREEPROCCACHE`. If the problem persists, create a support ticket.
### [0x80070005](#tab/x80070005)
-This error can occur when the authentication method is User Identity, also known as "Azure AD pass-through" and the Azure AD access token expires.
+This error can occur when the authentication method is user identity, which is also known as Azure AD pass-through, and the Azure AD access token expires.
The error message might also resemble:
```
File {path} cannot be opened because it does not exist or it is used by another process.
```

-- If an Azure AD user has a connection open for more than 1 hour during query execution, any query that relies on Azure AD fails, including queries that access storage using Azure AD pass-through authentication, and statements that interact with Azure AD (like CREATE EXTERNAL PROVIDER). This issue frequently affects tools that keep connections open, like in query editor in SSMS and ADS. Tools that open new connections to execute a query, like Synapse Studio, aren't affected.
-- Azure AD authentication token might be cached by the client applications. For example, Power BI caches Azure Active Directory token and reuses the same token for one hour. The long-running queries might fail if the token expires during execution.
+- If an Azure AD user has a connection open for more than one hour during query execution, any query that relies on Azure AD fails. This scenario includes queries that access storage by using Azure AD pass-through authentication and statements that interact with Azure AD like CREATE EXTERNAL PROVIDER. This issue frequently affects tools that keep connections open, like in the query editor in SQL Server Management Studio and Azure Data Studio. Tools that open new connections to execute a query, like Synapse Studio, aren't affected.
+- The Azure AD authentication token might be cached by the client applications. For example, Power BI caches the Azure AD token and reuses the same token for one hour. The long-running queries might fail if the token expires during execution.
Consider the following mitigations:

-- Restart the client application to obtain a new Azure Active Directory token.
-- Consider switching to:
- - [Service Principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types)
- - [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types)
- - or [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types)
-
+- Restart the client application to obtain a new Azure AD token.
+- Consider switching to:
+ - [Service principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types)
+ - [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types)
+ - [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types)
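If you switch to a shared access signature, a minimal setup sketch looks like the following; the credential name, data source name, container URL, and SAS token are placeholders, and a database master key must already exist (see the Configuration section later in this article).

```sql
CREATE DATABASE SCOPED CREDENTIAL sas_token_credential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token>';    -- placeholder shared access signature

CREATE EXTERNAL DATA SOURCE my_data_source
WITH (
    LOCATION = 'https://<storage-account>.dfs.core.windows.net/<container>',
    CREDENTIAL = sas_token_credential
);
```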
### [0x80070008](#tab/x80070008)
-This error message can occur when the serverless SQL pool is experiencing resource constraints, or if there was a transient platform issue.
+This error message can occur when serverless SQL pool experiences resource constraints, or if there was a transient platform issue.
- Transient issues:
  - This error can occur when Azure detects a potential platform issue that results in a change in topology to keep the service in a healthy state.
  - This type of issue happens infrequently and is transient. Retry the query.
- High concurrency or query complexity:
- - Serverless SQL doesn't impose a maximum limit in query concurrency, it depends on the query complexity and the amount of data scanned.
- - One serverless SQL pool can concurrently handle 1000 active sessions that are executing lightweight queries, but the numbers will drop if the queries are more complex or scan a larger amount of data. For more information, see [Concurrency limits for Serverless SQL Pool](resources-self-help-sql-on-demand.md#constraints).
- - Try reducing the number of queries executing simultaneously or the query complexity.
+ - Serverless SQL doesn't impose a maximum limit in query concurrency. It depends on the query complexity and the amount of data scanned.
+ - One serverless SQL pool can concurrently handle 1,000 active sessions that are executing lightweight queries, but the numbers will drop if the queries are more complex or scan a larger amount of data. For more information, see [Concurrency limits for serverless SQL pool](resources-self-help-sql-on-demand.md#constraints).
+ - Try reducing the number of queries that execute simultaneously or the query complexity.
If the issue is non-transient or you confirmed the problem isn't related to high concurrency or query complexity, create a support ticket.

### [0x8007000C](#tab/x8007000C)

This error code occurs when a query is executing and the source files are modified at the same time.
The error message returned can also have the following format:

```
"Cannot bulk load because the file 'https://????.dfs.core.windows.net/????' could not be opened. Operating system error code 12 (The access code is invalid.)."
```
-If the source files are updated while the query is executing, it can cause inconsistent reads. For example, half row is read with the old version of the data, and half row is read with the newer version of the data.
-
+If the source files are updated while the query is executing, it can cause inconsistent reads. For example, one half of a row is read with the old version of the data and the other half of the row is read with the newer version of the data.
### CSV files
-If the problem occurs when reading CSV files, you can allow appendable files to be queried and updated at the same time, by using the option ALLOW_INCONSISTENT_READS.
+If the problem occurs when reading CSV files, you can allow appendable files to be queried and updated at the same time by using the option ALLOW_INCONSISTENT_READS.
More information about syntax and usage:

- [OPENROWSET syntax](query-single-csv-file.md#querying-appendable-files)

  `ROWSET_OPTIONS = '{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'`
- - [External Tables syntax](create-use-external-tables.md#external-table-on-appendable-files)
+ - [External tables syntax](create-use-external-tables.md#external-table-on-appendable-files)
  `TABLE_OPTIONS = N'{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'`

### Parquet files

When the file format is Parquet, the query won't recover automatically. It needs to be retried by the client application.
-### Synapse Link for Dataverse
-
-This error can occur when reading data from Synapse Link for Dataverse, when Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group has a goal to improve this behavior.
+### Azure Synapse Link for Dataverse
+This error can occur when reading data from Azure Synapse Link for Dataverse, when Azure Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group has a goal to improve this behavior.
### [0x800700A1](#tab/x800700A1)
-Confirm the storage account accessed is using the "Archive" access tier.
-
-The `archive access` tier is an offline tier. While a blob is in the `archive access` tier, it can't be read or modified.
+Confirm the storage account accessed is using the Archive access tier.
-To read or download a blob in the Archive tier, rehydrate it to an online tier: [Archive access tier](/azure/storage/blobs/access-tiers-overview#archive-access-tier)
+The Archive access tier is an offline tier. While a blob is in the Archive access tier, it can't be read or modified.
+To read or download a blob in the Archive tier, rehydrate it to an online tier. See [Archive access tier](/azure/storage/blobs/access-tiers-overview#archive-access-tier).
### [0x80070057](#tab/x80070057)
-This error can occur when the authentication method is User Identity, also known as "Azure AD pass-through" and the Azure Active Directory access token expires.
+This error can occur when the authentication method is user identity, which is also known as Azure AD pass-through, and the Azure AD access token expires.
The error message might also resemble the following pattern:
```
File {path} cannot be opened because it does not exist or it is used by another process.
```

-- If an Azure AD user has a connection open for more than 1 hour during query execution, any query that relies on Azure AD fails, including queries that access storage using Azure AD pass-through authentication and statements that interact with Azure AD (like CREATE EXTERNAL PROVIDER). This issue frequently affects tools that keep connections open, like the query editor in SQL Server Management Studio (SSMS) and Azure Data Studio (ADS). Client tools that open new connections to execute a query, like Synapse Studio, aren't affected.
-- Azure AD authentication token might be cached by the client applications. For example, Power BI caches an Azure AD token and reuses it for one hour. The long-running queries might fail if the token expires in the middle of execution.
+- If an Azure AD user has a connection open for more than one hour during query execution, any query that relies on Azure AD fails, including queries that access storage by using Azure AD pass-through authentication and statements that interact with Azure AD like CREATE EXTERNAL PROVIDER. This issue frequently affects tools that keep connections open, like the query editor in SQL Server Management Studio and Azure Data Studio. Client tools that open new connections to execute a query, like Synapse Studio, aren't affected.
+- The Azure AD authentication token might be cached by the client applications. For example, Power BI caches an Azure AD token and reuses it for one hour. The long-running queries might fail if the token expires in the middle of execution.
Consider the following mitigations to resolve the issue:

-- Restart the client application to obtain a new Azure Active Directory token.
-- Consider switching to:
- - [Service Principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types)
- - [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types)
- - or [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types)
-
+- Restart the client application to obtain a new Azure AD token.
+- Consider switching to:
+ - [Service principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types)
+ - [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types)
+ - [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types)
### [0x80072EE7](#tab/x80072EE7)
-This error code can occur when there's a transient issue in the serverless SQL pool.
-It happens infrequently and is temporary by nature. Retry the query.
+This error code can occur when there's a transient issue in the serverless SQL pool. It happens infrequently and is temporary by nature. Retry the query.
-If the issue persists create a support ticket.
+If the issue persists, create a support ticket.
+### Incorrect syntax near NOT
-### Incorrect syntax near 'NOT'
-
-The error *Incorrect syntax near 'NOT'* indicates that there are some external tables with the columns containing `NOT NULL` constraint in the column definition. Update the table to remove `NOT NULL` from the column definition. This error can sometimes also occur transiently with tables created from a CETAS statement. If the problem doesn't resolve, you can try dropping and recreating the external table.
+The error "Incorrect syntax near 'NOT'" indicates there are some external tables with columns that contain the NOT NULL constraint in the column definition. Update the table to remove NOT NULL from the column definition. This error can sometimes also occur transiently with tables created from a CETAS statement. If the problem doesn't resolve, you can try dropping and re-creating the external table.
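For example, a column list with no NOT NULL constraints avoids the problem. This is a minimal sketch; the table, location, data source, and file format names are hypothetical.

```sql
CREATE EXTERNAL TABLE dbo.population (
    country_code VARCHAR(5),    -- not: country_code VARCHAR(5) NOT NULL
    population_count BIGINT
)
WITH (
    LOCATION = 'csv/population/*.csv',
    DATA_SOURCE = population_data_source,
    FILE_FORMAT = census_file_format
);
```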
### Partitioning column returns NULL values
-If your query returns `NULL` values instead of partitioning columns or cannot find the partition columns, you have few possible troubleshooting steps:
-- If you are using tables to query partitioned data set, note that tables do not support partitioning. Replace the table with the [partitioned views](create-use-views.md#partitioned-views).
-- If you are using the [partitioned views](create-use-views.md#partitioned-views) with the OPENROWSET that [queries partitioned files using the FILEPATH() function](query-specific-files.md), make sure that you have correctly specified wildcard pattern in the location and that you have used the proper index for referencing the wildcard.
-- If you are querying the files directly in the partitioned folder, note that the partitioning columns are not the parts of the file columns. The partitioning values are placed in the folder paths and not the files. Therefore, the files do not contain the partitioning values.
+If your query returns NULL values instead of partitioning columns or can't find the partition columns, you have a few possible troubleshooting steps:
+
+- If you use tables to query a partitioned dataset, be aware that tables don't support partitioning. Replace the table with the [partitioned views](create-use-views.md#partitioned-views).
+- If you use the [partitioned views](create-use-views.md#partitioned-views) with the OPENROWSET that [queries partitioned files by using the FILEPATH() function](query-specific-files.md), make sure you correctly specified the wildcard pattern in the location and used the proper index for referencing the wildcard. A minimal sketch of this pattern follows the list.
+- If you're querying the files directly in the partitioned folder, be aware that the partitioning columns aren't the parts of the file columns. The partitioning values are placed in the folder paths and not the files. For this reason, the files don't contain the partitioning values.
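Here's a hedged sketch of the wildcard-plus-filepath pattern mentioned in the second bullet, assuming a hypothetical year=/month= folder layout and a placeholder storage URL.

```sql
SELECT
    result.filepath(1) AS [year],    -- value of the first wildcard in the path
    result.filepath(2) AS [month],   -- value of the second wildcard in the path
    COUNT(*) AS row_count
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/data/year=*/month=*/*.parquet',
        FORMAT = 'PARQUET'
    ) AS result
WHERE result.filepath(1) = '2021'
GROUP BY result.filepath(1), result.filepath(2);
```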
### Inserting value to batch for column type DATETIME2 failed
-The error *Inserting value to batch for column type DATETIME2 failed* indicates that the serverless pool cannot read the date values from the underlying files. The datetime value stored in Parquet/Delta Lake file cannot be represented as `DATETIME2` column. Inspect the minimum value in the file using spark and check are there some dates less than 0001-01-03. If you stored the files using the Spark 2.4, the date time values before are written using the Julian calendar that is not aligned with the Gregorian Proleptic calendar used in serverless SQL pools. There might be a 2-days difference between Julian calendar user to write the values in Parquet (in some Spark versions) and Gregorian Proleptic calendar used in serverless SQL pool, which might cause conversion to invalid (negative) date value.
+The error "Inserting value to batch for column type DATETIME2 failed" indicates that the serverless pool can't read the date values from the underlying files. The datetime value stored in the Parquet or Delta Lake file can't be represented as a `DATETIME2` column.
+
+Inspect the minimum value in the file by using Spark, and check that some dates are less than 0001-01-03. If you stored the files by using Spark 2.4, the datetime values before are written by using the Julian calendar that isn't aligned with the proleptic Gregorian calendar used in serverless SQL pools.
+
+There might be a two-day difference between the Julian calendar used to write the values in Parquet (in some Spark versions) and the proleptic Gregorian calendar used in serverless SQL pool. This difference might cause conversion to a negative date value, which is invalid.
-Try to use Spark to update these values because they are treated as invalid date values in SQL. The following sample shows how to update the values that are out of SQL date ranges to `NULL` in Delta Lake:
+Try to use Spark to update these values because they're treated as invalid date values in SQL. The following sample shows how to update the values that are out of SQL date ranges to NULL in Delta Lake:
```spark
from delta.tables import *
from pyspark.sql.functions import col

deltaTable = DeltaTable.forPath(spark,
deltaTable.update(col("MyDateTimeColumn") < '0001-02-02', { "MyDateTimeColumn": "NULL" } )
```
-Note this change will remove the values that cannot be represented. The other date values might be properly loaded but incorrectly represented because there is still a difference between Julian and Gregorian Proleptic calendars. You might see an unexpected date shifts even for the dates before `1900-01-01` if you are using Spark 3.0 or older versions.
-Consider [migrating to Spark 3.1 or higher](https://spark.apache.org/docs/latest/sql-migration-guide.html) where it is uses Gregorian Proleptic calendar that is aligned with the calendar in the serverless SQL pool.
-You should reload your legacy data with the higher version of Spark, and use the following setting to correct the dates:
+This change removes the values that can't be represented. The other date values might be properly loaded but incorrectly represented because there's still a difference between Julian and proleptic Gregorian calendars. You might see unexpected date shifts even for the dates before `1900-01-01` if you use Spark 3.0 or older versions.
+
+Consider [migrating to Spark 3.1 or higher](https://spark.apache.org/docs/latest/sql-migration-guide.html). It uses a proleptic Gregorian calendar that's aligned with the calendar in serverless SQL pool. Reload your legacy data with the higher version of Spark, and use the following setting to correct the dates:
```spark
spark.conf.set("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED")
```
### Query failed because of a topology change or compute container failure
-This error might indicate that some internal process issue happened in the serverless SQL pool. File a support ticket with all necessary details that could help Azure support team to investigate the issue.
+This error might indicate that some internal process issue happened in serverless SQL pool. File a support ticket with all necessary details that could help the Azure support team investigate the issue.
-Describe in the support requests anything that might be unusual compared to the regular workload, such as large number of concurrent requests or some special workload or query that started executing before this error happened.
+Describe anything that might be unusual compared to the regular workload. For example, perhaps there was a large number of concurrent requests or a special workload or query started executing before this error happened.
## Configuration
-Serverless pools enable you to use T-SQL to configure database objects. There are some constraints, such as - you cannot create objects in master and lake house/spark databases, you need to have master key to create credentials, you need to have permission to reference data that is used in the objects.
+Serverless SQL pools enable you to use T-SQL to configure database objects. There are some constraints:
+
+- You can't create objects in master and lakehouse or Spark databases.
+- You must have a master key to create credentials.
+- You must have permission to reference data that's used in the objects.
-### Cannot create a database
+### Can't create a database
+
+If you get the error "CREATE DATABASE failed. User database limit has been already reached," you've created the maximal number of databases that are supported in one workspace. For more information, see [Constraints](#constraints).
-If you are getting the error '*CREATE DATABASE failed. User database limit has been already reached.*' you have created the maximal number of databases that are supported in one workspace (see [Constraints](#constraints)).
- If you need to separate the objects, use schemas within the databases. A minimal sketch follows this list.
-- If you just need to reference Azure Data Lake storage, create Lake house databases or Spark databases that will be synchronized in the serverless SQL pool.
+- If you need to reference Azure Data Lake storage, create lakehouse databases or Spark databases that will be synchronized in serverless SQL pool.
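If the limit is the only problem and you just need to separate objects, a minimal sketch with hypothetical names shows how schemas inside one existing user database can take the place of extra databases.

```sql
-- Run inside an existing user database, not in master.
CREATE SCHEMA sales;
GO

CREATE VIEW sales.daily_orders AS
SELECT *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/orders/*.parquet',
        FORMAT = 'PARQUET'
    ) AS orders;
GO
```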
-### Please create a master key in the database or open the master key in the session before performing this operation.
+### Create a master key in the database or open the master key in the session before performing this operation
-If your query fails with the error message '*Please create a master key in the database or open the master key in the session before performing this operation*', it means that your user database has no access to a master key at the moment.
+If your query fails with the error message "Please create a master key in the database or open the master key in the session before performing this operation," it means that your user database has no access to a master key at the moment.
-Most likely, you just created a new user database and didn't create a master key yet.
+Most likely, you created a new user database and haven't created a master key yet.
To resolve this problem, create a master key with the following query:
```sql
CREATE MASTER KEY [ ENCRYPTION BY PASSWORD ='password' ];
```

> [!NOTE]
-> Replace 'password' with a different secret here.
+> Replace `'password'` with a different secret here.
+
+### CREATE statement isn't supported in the master database
-### CREATE STATEMENT is not supported in master database
+If your query fails with the error message "Failed to execute query. Error: CREATE EXTERNAL TABLE/DATA SOURCE/DATABASE SCOPED CREDENTIAL/FILE FORMAT is not supported in master database," it means that the master database in serverless SQL pool doesn't support the creation of:
-If your query fails with the error message `Failed to execute query. Error: CREATE EXTERNAL TABLE/DATA SOURCE/DATABASE SCOPED CREDENTIAL/FILE FORMAT is not supported in master database` it means that master database in serverless SQL pool does not support creation of:
- - External tables
- - External data sources
- - Database scoped credentials
- - External file formats
+ - External tables.
+ - External data sources.
+ - Database scoped credentials.
+ - External file formats.
-Solution:
+Here's the solution:
1. Create a user database:
-```sql
-CREATE DATABASE <DATABASE_NAME>
-```
+ ```sql
+ CREATE DATABASE <DATABASE_NAME>
+ ```
- 2. Execute create statement in the context of <DATABASE_NAME>, which failed earlier for master database.
+ 1. Execute a CREATE statement in the context of <DATABASE_NAME>, which failed earlier for the master database.
- Example for creation of External file format:
-
-```sql
-USE <DATABASE_NAME>
-CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat]
-WITH ( FORMAT_TYPE = PARQUET)
-```
+ Here's an example of the creation of an external file format:
+
+ ```sql
+ USE <DATABASE_NAME>
+ CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat]
+ WITH ( FORMAT_TYPE = PARQUET)
+ ```
+
+### Operation isn't allowed for a replicated database
+
+If you're trying to create SQL objects, users, or change permissions in a database, you might get errors like "Operation CREATE USER is not allowed for a replicated database." This error is returned when you try to create objects in a database that's [shared with Spark pool](../metadat). The databases that are replicated from Apache Spark pools are read only. You can't create new objects into a replicated database by using T-SQL.
-### Operation is not allowed for a replicated database.
-
-If you are trying to create some SQL objects, users, or change permissions in a database, you might get the errors like 'Operation CREATE USER is not allowed for a replicated database'. This error is returned when you try to create some objects in a database that is [shared with Spark pool](../metadat). The databases that are replicated from Apache Spark pools are read-only. You cannot create new objects into replicated database using T-SQL.
+Create a separate database and reference the synchronized [tables](../metadat) by using three-part names and cross-database queries.
-Create a separate database and reference the synchronized [tables](../metadat) using 3-part names and cross-database queries.
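As a hedged illustration with hypothetical database and table names, keep your own objects in a separate database and read the replicated table through a three-part name:

```sql
CREATE DATABASE reporting;
GO

USE reporting;
GO

-- Reference a table in the replicated (read-only) Spark database by its three-part name.
CREATE VIEW dbo.trip_summary AS
SELECT *
FROM sparkdb.dbo.trips;   -- sparkdb and trips are placeholder names
GO
```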
+### Can't create Azure AD sign-in or user
-### Cannot create Azure AD login or user
+If you get an error while you're trying to create a new Azure AD sign-in or user in a database, check the sign-in you used to connect to your database. The sign-in that's trying to create a new Azure AD user must have permission to access the Azure AD domain and check if the user exists. Be aware that:
-If you are getting an error while trying to create new Azure AD login or user in database, check what login you used to connect to your database. The login that is trying to create a new Azure AD user must have permission to access Azure AD domain and check if the user exists.
-- SQL logins do not have this permission, so you will always get this error if you use SQL authentication.
-- If you are using Azure AD login to create new logins, check do you have permission to access Azure AD domain.
+- SQL sign-ins don't have this permission, so you'll always get this error if you use SQL authentication.
+- If you use an Azure AD sign-in to create new sign-ins, check to see if you have permission to access the Azure AD domain.
-## Cosmos DB
+## Azure Cosmos DB
-Serverless SQL pools enable you to query Cosmos DB analytical storage using the `OPENROWSET` function. Make sure that your Cosmos DB container has analytical storage. Make sure that you correctly specified the account, database, and container name. Also, make sure that your Cosmos DB account key is valid - see [prerequisites](query-cosmos-db-analytical-store.md#prerequisites).
+Serverless SQL pools enable you to query Azure Cosmos DB analytical storage by using the `OPENROWSET` function. Make sure that your Azure Cosmos DB container has analytical storage. Make sure that you correctly specified the account, database, and container name. Also, make sure that your Azure Cosmos DB account key is valid. For more information, see [Prerequisites](query-cosmos-db-analytical-store.md#prerequisites).
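A minimal sketch of that `OPENROWSET` pattern follows; the account, database, key, and container values are placeholders.

```sql
SELECT TOP 10 *
FROM OPENROWSET(
        'CosmosDB',
        'Account=<account-name>;Database=<database-name>;Key=<account-key>',
        MyContainer     -- container name as an identifier, not a string literal
    ) AS documents;
```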
-### Cannot query Cosmos DB using the OPENROWSET function
+### Can't query Azure Cosmos DB by using the OPENROWSET function
-If you cannot connect to your Cosmos DB account, take a look at [prerequisites](query-cosmos-db-analytical-store.md#prerequisites). Possible errors and troubleshooting actions are listed in the following table.
+If you can't connect to your Azure Cosmos DB account, look at the [prerequisites](query-cosmos-db-analytical-store.md#prerequisites). Possible errors and troubleshooting actions are listed in the following table.
| Error | Root cause |
| - | - |
-| Syntax errors:<br/> - Incorrect syntax near `Openrowset`<br/> - `...` is not a recognized `BULK OPENROWSET` provider option.<br/> - Incorrect syntax near `...` | Possible root causes:<br/> - Not using Cosmos DB as the first parameter.<br/> - Using a string literal instead of an identifier in the third parameter.<br/> - Not specifying the third parameter (container name). |
-| There was an error in the Cosmos DB connection string. | - The account, database, or key isn't specified. <br/> - There's some option in a connection string that isn't recognized.<br/> - A semicolon (`;`) is placed at the end of a connection string. |
-| Resolving Cosmos DB path has failed with the error "Incorrect account name" or "Incorrect database name." | The specified account name, database name, or container can't be found, or analytical storage hasn't been enabled to the specified collection.|
-| Resolving Cosmos DB path has failed with the error "Incorrect secret value" or "Secret is null or empty." | The account key isn't valid or is missing. |
+| Syntax errors:<br/> - Incorrect syntax near `OPENROWSET`.<br/> - `...` isn't a recognized `BULK OPENROWSET` provider option.<br/> - Incorrect syntax near `...`. | Possible root causes:<br/> - Not using Azure Cosmos DB as the first parameter.<br/> - Using a string literal instead of an identifier in the third parameter.<br/> - Not specifying the third parameter (container name). |
+| There was an error in the Azure Cosmos DB connection string. | - The account, database, or key isn't specified. <br/> - An option in a connection string isn't recognized.<br/> - A semicolon (`;`) is placed at the end of a connection string. |
+| Resolving Azure Cosmos DB path has failed with the error "Incorrect account name" or "Incorrect database name." | The specified account name, database name, or container can't be found, or analytical storage hasn't been enabled to the specified collection.|
+| Resolving Azure Cosmos DB path has failed with the error "Incorrect secret value" or "Secret is null or empty." | The account key isn't valid or is missing. |
-### UTF-8 collation warning is returned while reading Cosmos DB string types
+### UTF-8 collation warning is returned while reading Azure Cosmos DB string types
-A serverless SQL pool will return a compile-time warning if the `OPENROWSET` column collation doesn't have UTF-8 encoding. You can easily change the default collation for all `OPENROWSET` functions running in the current database by using the T-SQL statement `alter database current collate Latin1_General_100_CI_AS_SC_UTF8`.
+Serverless SQL pool returns a compile-time warning if the `OPENROWSET` column collation doesn't have UTF-8 encoding. You can easily change the default collation for all `OPENROWSET` functions running in the current database by using the T-SQL statement `alter database current collate Latin1_General_100_CI_AS_SC_UTF8`.
-[Latin1_General_100_BIN2_UTF8 collation](best-practices-serverless-sql-pool.md#use-proper-collation-to-utilize-predicate-pushdown-for-character-columns) provides the best performance when you filter your data using string predicates.
+[Latin1_General_100_BIN2_UTF8 collation](best-practices-serverless-sql-pool.md#use-proper-collation-to-utilize-predicate-pushdown-for-character-columns) provides the best performance when you filter your data by using string predicates.
-### Missing rows in Cosmos DB analytical store
+### Missing rows in Azure Cosmos DB analytical store
-Some items from Cosmos DB might not be returned by the `OPENROWSET` function.
-- There is a synchronization delay between transactional and analytical store. The document that you entered in the Cosmos DB transactional store might appear in analytical store after 2-3 minutes.
-- The document might violate some [schema constraints](../../cosmos-db/analytical-store-introduction.md#schema-constraints).
+Some items from Azure Cosmos DB might not be returned by the `OPENROWSET` function. Be aware that:
-### Query returns `NULL` values in some Cosmos DB items
+- There's a synchronization delay between the transactional and analytical store. The document you entered in the Azure Cosmos DB transactional store might appear in the analytical store after two to three minutes.
+- The document might violate some [schema constraints](../../cosmos-db/analytical-store-introduction.md#schema-constraints).
-Azure Synapse SQL will return `NULL` instead of the values that you see in the transaction store in the following cases:
-- There is a synchronization delay between transactional and analytical store. The value that you entered in Cosmos DB transactional store might appear in analytical store after 2-3 minutes.
-- Possibly wrong column name or path expression in the `WITH` clause. Column name (or path expression after the column type) in the `WITH` clause must match the property names in Cosmos DB collection. Comparison is case-sensitive (for example, `productCode` and `ProductCode` are different properties). Make sure that your column names exactly match the Cosmos DB property names.
-- The property might not be moved to the analytical storage because it violates some [schema constraints](../../cosmos-db/analytical-store-introduction.md#schema-constraints), such as more than 1000 properties or more than 127 nesting levels.
-- If you are using well-defined [schema representation](../../cosmos-db/analytical-store-introduction.md#schema-representation) the value in transactional store might have a wrong type. Well-defined schema locks the types for each property by sampling the documents. Any value added in the transactional store that doesn't match the type is treated as a wrong value and not migrated to the analytical store.
-- If you are using full-fidelity [schema representation](../../cosmos-db/analytical-store-introduction.md#schema-representation) make sure that you are adding type suffix after property name like `$.price.int64`. If you don't see a value for the referenced path, maybe it is stored under different type path, for example `$.price.float64`. See [how to query Cosmos DB collections in the full-fidelity schema](query-cosmos-db-analytical-store.md#query-items-with-full-fidelity-schema).
+### Query returns NULL values in some Azure Cosmos DB items
-### Column is not compatible with external data type
+Azure Synapse SQL returns NULL instead of the values that you see in the transaction store in the following cases:
+- There's a synchronization delay between the transactional and analytical store. The value that you entered in the Azure Cosmos DB transactional store might appear in the analytical store after two to three minutes.
+- There might be a wrong column name or path expression in the WITH clause. The column name (or path expression after the column type) in the WITH clause must match the property names in the Azure Cosmos DB collection. Comparison is case sensitive. For example, `productCode` and `ProductCode` are different properties. Make sure that your column names exactly match the Azure Cosmos DB property names.
+- The property might not be moved to the analytical storage because it violates some [schema constraints](../../cosmos-db/analytical-store-introduction.md#schema-constraints), such as more than 1,000 properties or more than 127 nesting levels.
+- If you use well-defined [schema representation](../../cosmos-db/analytical-store-introduction.md#schema-representation), the value in the transactional store might have a wrong type. Well-defined schema locks the types for each property by sampling the documents. Any value added in the transactional store that doesn't match the type is treated as a wrong value and not migrated to the analytical store.
+- If you use full-fidelity [schema representation](../../cosmos-db/analytical-store-introduction.md#schema-representation), make sure that you're adding the type suffix after the property name like `$.price.int64`. If you don't see a value for the referenced path, maybe it's stored under a different type path, for example, `$.price.float64`. For more information, see [Query Azure Cosmos DB collections in the full-fidelity schema](query-cosmos-db-analytical-store.md#query-items-with-full-fidelity-schema).
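For the full-fidelity case in the last bullet, a hedged sketch of a WITH clause that uses type-suffixed paths looks like this; the account, database, key, container, and property names are placeholders.

```sql
SELECT TOP 10 *
FROM OPENROWSET(
        'CosmosDB',
        'Account=<account-name>;Database=<database-name>;Key=<account-key>',
        MyContainer
    )
    WITH (
        productCode VARCHAR(100) '$.productCode.string',   -- type suffix after the property name
        price FLOAT '$.price.float64'
    ) AS documents;
```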
-The error *Column `column name` of the type `type name` isn't compatible with the external data type `type name`* is returned is the specified column type in the `WITH` clause doesn't match the type in the Azure Cosmos DB container. Try to change the column type as it's described in the section [Azure Cosmos DB to SQL type mappings](query-cosmos-db-analytical-store.md#azure-cosmos-db-to-sql-type-mappings), or use the `VARCHAR` type.
+### Column isn't compatible with external data type
-### Resolving Cosmos DB path has failed
+The error "Column `column name` of the type `type name` is not compatible with the external data type `type name`" is returned if the specified column type in the WITH clause doesn't match the type in the Azure Cosmos DB container. Try to change the column type as it's described in the section [Azure Cosmos DB to SQL type mappings](query-cosmos-db-analytical-store.md#azure-cosmos-db-to-sql-type-mappings) or use the VARCHAR type.
-If you are getting the error: `Resolving Cosmos DB path has failed with error 'This request is not authorized to perform this operation.'`, check do you use private endpoints in Cosmos DB. To allow SQL serverless to access an analytical store with private endpoint, you need to [configure private endpoints for Azure Cosmos DB analytical store](../../cosmos-db/analytical-store-private-endpoints.md#using-synapse-serverless-sql-pools).
+### Resolving Azure Cosmos DB path has failed with error
-### Cosmos DB performance issues
+If you get the error "Resolving CosmosDB path has failed with error 'This request is not authorized to perform this operation'," check to see if you used private endpoints in Azure Cosmos DB. To allow serverless SQL pool to access an analytical store with private endpoints, you must [configure private endpoints for the Azure Cosmos DB analytical store](../../cosmos-db/analytical-store-private-endpoints.md#using-synapse-serverless-sql-pools).
-If you are experiencing some unexpected performance issues, make sure that you applied the best practices, such as:
-- Make sure that you have placed the client application, serverless pool, and Cosmos DB analytical storage in [the same region](best-practices-serverless-sql-pool.md#colocate-your-azure-cosmos-db-analytical-storage-and-serverless-sql-pool).
-- Make sure that you are using the `WITH` clause with [optimal data types](best-practices-serverless-sql-pool.md#use-appropriate-data-types).
-- Make sure that you are using [Latin1_General_100_BIN2_UTF8 collation](best-practices-serverless-sql-pool.md#use-proper-collation-to-utilize-predicate-pushdown-for-character-columns) when you filter your data using string predicates.
+### Azure Cosmos DB performance issues
+
+If you experience some unexpected performance issues, make sure that you applied best practices, such as:
+
+- Make sure that you placed the client application, serverless pool, and Azure Cosmos DB analytical storage in [the same region](best-practices-serverless-sql-pool.md#colocate-your-azure-cosmos-db-analytical-storage-and-serverless-sql-pool).
+- Make sure that you use the WITH clause with [optimal data types](best-practices-serverless-sql-pool.md#use-appropriate-data-types).
+- Make sure that you use [Latin1_General_100_BIN2_UTF8 collation](best-practices-serverless-sql-pool.md#use-proper-collation-to-utilize-predicate-pushdown-for-character-columns) when you filter your data by using string predicates.
- If you have repeating queries that might be cached, try to use [CETAS to store query results in Azure Data Lake Storage](best-practices-serverless-sql-pool.md#use-cetas-to-enhance-query-performance-and-joins).

## Delta Lake
-There are some limitations and known issues that you might see in Delta Lake support in serverless SQL pools.
-- Make sure that you are referencing root Delta Lake folder in the [OPENROWSET](./develop-openrowset.md) function or external table location.
- - Root folder must have a sub-folder named `_delta_log`. The query will fail if there is no `_delta_log` folder. If you don't see that folder, then you are referencing plain Parquet files that must be [converted to Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#convert-parquet-to-delta) using Apache Spark pools.
- - Do not specify wildcards to describe the partition schema. Delta Lake query will automatically identify the Delta Lake partitions.
-- Delta Lake tables created in the Apache Spark pools are not automatically available in serverless SQL pool. To query such Delta Lake tables using T-SQL language, run the [CREATE EXTERNAL TABLE](./create-use-external-tables.md#delta-lake-external-table) statement and specify Delta as format.
-- External tables do not support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on Delta Lake folder to use the partition elimination. See known issues and workarounds later in the article.
-- Serverless SQL pools do not support time travel queries. Use Apache Spark pools in Azure Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
-- Serverless SQL pools do not support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Azure Synapse Analytics [to update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
- - You cannot [store query results to storage in Delta Lake format](create-external-table-as-select.md) using the Create external table as select (CETAS) command. The CETAS command supports only Parquet and CSV as the output formats.
-- Serverless SQL pools in Azure Synapse Analytics do not support the datasets with the [BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters). The serverless SQL pool will ignore the BLOOM filters.
-- Delta Lake support is not available in dedicated SQL pools. Make sure that you are using serverless pools to query Delta Lake files.
+There are some limitations and known issues that you might see in Delta Lake support in serverless SQL pools:
+
+- Make sure that you're referencing the root Delta Lake folder in the [OPENROWSET](./develop-openrowset.md) function or external table location.
+ - The root folder must have a subfolder named `_delta_log`. The query fails if there's no `_delta_log` folder. If you don't see that folder, you're referencing plain Parquet files that must be [converted to Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#convert-parquet-to-delta) by using Apache Spark pools.
+ - Don't specify wildcards to describe the partition schema. The Delta Lake query automatically identifies the Delta Lake partitions.
+- Delta Lake tables created in the Apache Spark pools aren't automatically available in serverless SQL pool. To query such Delta Lake tables by using the T-SQL language, run the [CREATE EXTERNAL TABLE](./create-use-external-tables.md#delta-lake-external-table) statement and specify Delta as the format.
+- External tables don't support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on the Delta Lake folder to use the partition elimination. See known issues and workarounds later in the article.
+- Serverless SQL pools don't support time travel queries. Use Apache Spark pools in Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
+- Serverless SQL pools don't support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Synapse Analytics to [update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
+ - You can't [store query results to storage in Delta Lake format](create-external-table-as-select.md) by using the CETAS command. The CETAS command supports only Parquet and CSV as the output formats.
+- Serverless SQL pools in Synapse Analytics don't support the datasets with the [BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters). The serverless SQL pool ignores the BLOOM filters.
+- Delta Lake support isn't available in dedicated SQL pools. Make sure that you use serverless SQL pools to query Delta Lake files.
-### JSON text is not properly formatted
+### JSON text isn't properly formatted
-This error indicates that serverless SQL pool cannot read Delta Lake transaction log. You will probably see the following error:
+This error indicates that serverless SQL pool can't read the Delta Lake transaction log. You'll probably see the following error:
```
Msg 13609, Level 16, State 4, Line 1
JSON text is not properly formatted. Unexpected character '' is found at positio
Msg 16513, Level 16, State 0, Line 1
Error reading external metadata.
```
-Make sure that your Delta Lake data set is not corrupted. Verify that you can read the content of the Delta Lake folder using Apache Spark pool in Azure Synapse. This way you will ensure that the `_delta_log` file is not corrupted.
+Make sure that your Delta Lake dataset isn't corrupted. Verify that you can read the content of the Delta Lake folder by using Apache Spark pool in Azure Synapse. This way you'll ensure that the `_delta_log` file isn't corrupted.
-**Workaround** - try to create a checkpoint on Delta Lake data set using Apache Spark pool and re-run the query. The checkpoint will aggregate transactional json log files and might solve the issue.
+**Workaround**
-If the data set is valid, [create a support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md#create-a-support-request) and provide more info:
-- Do not make any changes like adding/removing the columns or optimizing the table because this operation might change the state of Delta Lake transaction log files.
-- Copy the content of `_delta_log` folder into a new empty folder. **DO NOT** copy `.parquet data` files.
-- Try to read the content that you copied in new folder and verify that you are getting the same error.
+Try to create a checkpoint on the Delta Lake dataset by using Apache Spark pool and rerun the query. The checkpoint aggregates transactional JSON log files and might solve the issue.
+
+If the dataset is valid, [create a support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md#create-a-support-request) and provide more information:
+
+- Don't make any changes like adding or removing the columns or optimizing the table because this operation might change the state of the Delta Lake transaction log files.
+- Copy the content of the `_delta_log` folder into a new empty folder. *Do not* copy the `.parquet data` files.
+- Try to read the content that you copied in the new folder and verify that you're getting the same error.
- Send the content of the copied `_delta_log` file to Azure support.
-Now you can continue using Delta Lake folder with Spark pool. You will provide copied data to Microsoft support if you are allowed to share this information. Azure team will investigate the content of the `delta_log` file and provide more info about the possible errors and the workarounds.
+Now you can continue using the Delta Lake folder with Spark pool. You'll provide copied data to Microsoft support if you're allowed to share this information. The Azure team will investigate the content of the `delta_log` file and provide more information about possible errors and workarounds.
## Performance
-The serverless SQL pool assigns the resources to the queries based on the size of data set and query complexity. You cannot change or limit the resources that are provided to the queries. There are some cases where you might experience unexpected query performance degradations and you might have to identify the root causes.
+Serverless SQL pool assigns the resources to the queries based on the size of the dataset and query complexity. You can't change or limit the resources that are provided to the queries. There are some cases where you might experience unexpected query performance degradations and you might have to identify the root causes.
### Query duration is very long
-If you have queries with the query duration longer than 30 min, the query is slowly returning results to the client is slow. Serverless SQL pool has 30 min limit for execution, and any additional time is spent on result streaming. Try with the following workarounds:
-- If you are using [Synapse studio](#query-is-slow-when-executed-using-synapse-studio), try to reproduce the issues with some other application like SQL Server Management Studio or Azure Data Studio.
-- If your query is slow when executed using [SSMS, ADS, Power BI, or some other application](#query-is-slow-when-executed-using-application) check networking issues and best practices.
-- Put the query in the CETAS command and measure the query duration. The CETAS command will store the results to Azure Data Lake Storage and will not depend on the client connection. If the CETAS command finishes faster than the original query, check the network bandwidth between the client and the serverless SQL pool.
+If you have queries with a query duration longer than 30 minutes, the query is slowly returning results to the client. Serverless SQL pool has a 30-minute limit for execution. Any more time is spent on result streaming. Try the following workarounds:
+
+- If you use [Synapse Studio](#query-is-slow-when-executed-by-using-synapse-studio), try to reproduce the issues with some other application like SQL Server Management Studio or Azure Data Studio.
+- If your query is slow when executed by using [SQL Server Management Studio, Azure Data Studio, Power BI, or some other application](#query-is-slow-when-executed-by-using-an-application), check networking issues and best practices.
+- Put the query in the CETAS command and measure the query duration. The CETAS command stores the results to Azure Data Lake Storage and doesn't depend on the client connection. If the CETAS command finishes faster than the original query, check the network bandwidth between the client and serverless SQL pool.
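A hedged sketch of that CETAS measurement follows; it assumes an external data source and file format already exist under these hypothetical names, and it uses a placeholder storage URL.

```sql
CREATE EXTERNAL TABLE dbo.query_duration_test
WITH (
    LOCATION = 'diagnostics/query_duration_test/',
    DATA_SOURCE = my_data_source,        -- hypothetical, must already exist
    FILE_FORMAT = parquet_file_format    -- hypothetical, must already exist
)
AS
SELECT *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/data/*.parquet',
        FORMAT = 'PARQUET'
    ) AS source_data;
```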
-#### Query is slow when executed using Synapse studio
+#### Query is slow when executed by using Synapse Studio
-If you are using Synapse Studio, try using some desktop client such as SQL Server Management Studio or Azure Data Studio. Synapse Studio is a web client that is connecting to serverless pool using HTTP protocol, that is generally slower than the native SQL connections used in SQL Server Management Studio or Azure Data Studio.
+If you use Synapse Studio, try using a desktop client such as SQL Server Management Studio or Azure Data Studio. Synapse Studio is a web client that connects to serverless SQL pool by using the HTTP protocol, which is generally slower than the native SQL connections used in SQL Server Management Studio or Azure Data Studio.
-#### Query is slow when executed using application
+#### Query is slow when executed by using an application
-Check the following issues if you are experiencing the slow query execution:
-- Make sure that the client applications are collocated with the serverless SQL pool endpoint. Executing a query across the region can cause additional latency and slow streaming of result set.
-- Make sure that you don't have networking issues that can cause the slow streaming of result set.
-- Make sure that the client application has enough resources (for example, not using 100% CPU).
-- Make sure that the storage account or Cosmos DB analytical storage is placed in the same region as your serverless SQL endpoint.
+Check the following issues if you experience slow query execution:
-See the best practices for [collocating the resources](best-practices-serverless-sql-pool.md#client-applications-and-network-connections).
+- Make sure that the client applications are collocated with the serverless SQL pool endpoint. Executing a query across the region can cause more latency and slow streaming of the result set.
+- Make sure that you don't have networking issues that can cause the slow streaming of the result set.
+- Make sure that the client application has enough resources. For example, it's not using 100% CPU.
+- Make sure that the storage account or Azure Cosmos DB analytical storage is placed in the same region as your serverless SQL endpoint.
+
+See best practices for [collocating the resources](best-practices-serverless-sql-pool.md#client-applications-and-network-connections).
### High variations in query durations
-If you are executing the same query and observing variations in the query durations, there might be several reasons that can cause this behavior:
-- Check is this a first execution of a query. The first execution of a query collects the statistics required to create a plan. The statistics are collected by scanning the underlying files and might increase the query duration. In the Synapse studio, you will see the "global statistics creation" queries in the SQL request list, that are executed before your query.
-- Statistics might expire after some time, so periodically you might observe an impact on performance because the serverless pool must scan and rebuild the statistics. You might notice additional "global statistics creation" queries in the SQL request list, that are executed before your query.
-- Check is there some workload that is running on the same endpoint when you executed the query with the longer duration. The serverless SQL endpoint will equally allocate the resources to all queries that are executed in parallel, and the query might be delayed.
+If you're executing the same query and observing variations in the query durations, several reasons might cause this behavior:
+
+- Check if this is the first execution of a query. The first execution of a query collects the statistics required to create a plan. The statistics are collected by scanning the underlying files and might increase the query duration. In Synapse Studio, you'll see the "global statistics creation" queries in the SQL request list that are executed before your query.
+- Statistics might expire after some time. Periodically, you might observe an impact on performance because the serverless pool must scan and rebuild the statistics. You might notice additional "global statistics creation" queries in the SQL request list that are executed before your query.
+- Check if there's some workload that's running on the same endpoint when you executed the query with the longer duration. The serverless SQL endpoint equally allocates the resources to all queries that are executed in parallel, and the query might be delayed.
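
As a quick check for the concurrent workload case, here's a hedged sketch that uses a standard dynamic management view; it should be available on serverless SQL pool, but column support can differ from SQL Server:

```sql
-- List requests that are currently active on the endpoint. Many parallel queries
-- can explain why the same query sometimes takes noticeably longer.
SELECT session_id, status, command, start_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE status IN ('running', 'runnable', 'suspended');
```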
## Connections
-Serverless SQL pool enables you to connect using TDS protocol and use T-SQL language to query data. Most of the tools that can connect to SQL server or Azure SQL database, can also connect to serverless SQL pool.
+Serverless SQL pool enables you to connect by using the TDS protocol and by using the T-SQL language to query data. Most of the tools that can connect to SQL Server or Azure SQL Database can also connect to serverless SQL pool.
### SQL pool is warming up
-Following a longer period of inactivity Serverless SQL pool will be deactivated. The activation will happen automatically on the first next activity, such as the first connection attempt. Activation process might take a bit longer than a single connection attempt interval, thus the error message is displayed. Retrying the connection attempt should be enough.
-As a best practice, for the clients that support it, use ConnectionRetryCount and ConnectRetryInterval connection string keywords to control the reconnect behavior.
+Following a longer period of inactivity, serverless SQL pool will be deactivated. The activation happens automatically on the next activity, such as the first connection attempt. The activation process might take a bit longer than a single connection attempt interval, so the error message is displayed. Retrying the connection attempt should be enough.
+
+As a best practice, for the clients that support it, use the ConnectRetryCount and ConnectRetryInterval connection string keywords to control the reconnect behavior.
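
As a hedged sketch, an ADO.NET-style connection string might carry the retry keywords like this; the server and database names are placeholders, and exact keyword support depends on your client driver:

```
Server=tcp:contosoworkspace-ondemand.sql.azuresynapse.net,1433;Database=master;ConnectRetryCount=3;ConnectRetryInterval=10;
```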
If the error message persists, file a support ticket through the Azure portal.
-### Cannot connect from Synapse Studio
+### Can't connect from Synapse Studio
See the [Synapse Studio section](#synapse-studio).
-### Cannot connect to Synapse pool from a tool
+### Can't connect to the Azure Synapse pool from a tool
-Some tools might not have an explicit option that enables you to connect to the Synapse serverless SQL pool.
-Use an option that you would use to connect to SQL Server or Azure SQL database. The connection dialog doesn't need to be branded as "Synapse" because the serverless SQL pool uses the same protocol as SQL Server or Azure SQL database.
+Some tools might not have an explicit option that you can use to connect to the Azure Synapse serverless SQL pool. Use an option that you would use to connect to SQL Server or SQL Database. The connection dialog doesn't need to be branded as "Synapse" because the serverless SQL pool uses the same protocol as SQL Server or SQL Database.
-Even if a tool enables you to enter only a logical server name and predefines `database.windows.net` domain, put the Synapse workspace name followed by `-ondemand` suffix and `database.windows.net` domain.
+Even if a tool enables you to enter only a logical server name and predefines the `database.windows.net` domain, put the Azure Synapse workspace name followed by the `-ondemand` suffix and the `database.windows.net` domain.
## Security
-Make sure that a user has permissions to access databases, [permissions to execute commands](develop-storage-files-overview.md#permissions), and permissions to access [data lake](develop-storage-files-storage-access-control.md?tabs=service-principal) or [Cosmos DB storage](query-cosmos-db-analytical-store.md#prerequisites).
+Make sure that a user has permissions to access databases, [permissions to execute commands](develop-storage-files-overview.md#permissions), and permissions to access [Azure Data Lake](develop-storage-files-storage-access-control.md?tabs=service-principal) or [Azure Cosmos DB storage](query-cosmos-db-analytical-store.md#prerequisites).
+
+### Can't access Azure Cosmos DB account
+
+You must use a read-only Azure Cosmos DB key to access your analytical storage, so make sure that it hasn't expired and hasn't been regenerated.
-### Cannot access Cosmos DB account
+If you get the error ["Resolving Azure Cosmos DB path has failed with error"](#resolving-azure-cosmos-db-path-has-failed-with-error), make sure that you configured a firewall.
-You must use read-only Cosmos DB key to access your analytical storage, so make sure that it didn't expire or that it isn't regenerated.
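
For reference, a hedged sketch of where the read-only key is used when you query the analytical store; the account, database, container, and key values are placeholders:

```sql
-- Account, database, container, and key are placeholders. Use the read-only key
-- from the Azure Cosmos DB account so the query can reach the analytical store.
SELECT TOP 10 *
FROM OPENROWSET(
    'CosmosDB',
    'Account=myCosmosAccount;Database=myDatabase;Key=<read-only-account-key>',
    myContainer
) AS documents;
```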
+### Can't access lakehouse or Spark database
-If you are getting the [Resolving Cosmos DB path has failed](#resolving-cosmos-db-path-has-failed) error, make sure that you configured firewall.
+If a user can't access a lakehouse or Spark database, the user might not have permission to access and read the database. A user with CONTROL SERVER permission should have full access to all databases. As a restricted permission, you might try to use [CONNECT ANY DATABASE and SELECT ALL USER SECURABLES](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-serverless-shared-database-and-tables-access-for-non/ba-p/2645947).
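
A minimal sketch of granting those two server-level permissions, assuming a hypothetical login name:

```sql
-- <login_name> is a placeholder for the login that needs shared read access
-- to lakehouse and Spark databases without full CONTROL SERVER permission.
GRANT CONNECT ANY DATABASE TO [<login_name>];
GRANT SELECT ALL USER SECURABLES TO [<login_name>];
```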
-### Cannot access Lakehouse/Spark database
+### SQL user can't access Dataverse tables
-If a user cannot access a lake house or Spark database, it might not have permissions to access and read the database. A user with `CONTROL SERVER` permission should have full access to all databases. As a restricted permission, you might try to use [CONNECT ANY DATABASE and SELECT ALL USER SECURABLES](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-serverless-shared-database-and-tables-access-for-non/ba-p/2645947).
+Dataverse tables access storage by using the caller's Azure AD identity. A SQL user with high permissions might try to select data from a table, but the table wouldn't be able to access Dataverse data. This scenario isn't supported.
-### SQL user cannot access Dataverse tables
+### Azure AD service principal sign-in failures when SPI creates a role assignment
-Dataverse tables are accessing storage using the callers Azure AD identity. SQL user with high permissions might try to select data from a table, but the table would not be able to access Dataverse data. This scenario isn't supported.
+If you want to create a role assignment for a service principal identifier (SPI) or Azure AD app by using another SPI, or you've already created one and it fails to sign in, you'll probably receive the following error:
-### Azure AD service principal login failures when SPI is creating a role assignment
-If you want to create role assignment for Service Principal Identifier/Azure AD app using another SPI, or have already created one and it fails to log in, you're probably receiving following error:
```
Login error: Login failed for user '<token-identified principal>'.
```
-For service principals login should be created with Application ID as SID (not with Object ID). There is a known limitation for service principals, which is preventing the Azure Synapse service from fetching Application ID from Microsoft Graph when creating role assignment for another SPI/app.
-**Solution #1**
+For service principals, the sign-in should be created with an application ID as a security ID (SID), not with an object ID. There's a known limitation for service principals, which prevents Azure Synapse from fetching the application ID from Microsoft Graph when it creates a role assignment for another SPI or app.
+
+**Solution 1**
+
+Go to the **Azure portal** > **Synapse Studio** > **Manage** > **Access control** and manually add **Synapse Administrator** or **Synapse SQL Administrator** for the desired service principal.
-Navigate to Azure portal > Synapse Studio > Manage > Access control and manually add Synapse Administrator or Synapse SQL Administrator for desired Service Principal.
+**Solution 2**
-**Solution #2**
+You must manually create a proper sign-in with SQL code:
-You need to manually create a proper login through SQL code:
```sql
use master
go
-- Create the sign-in for the service principal, then add it to the sysadmin role.
CREATE LOGIN [<service_principal_name>] FROM EXTERNAL PROVIDER;
go
ALTER SERVER ROLE sysadmin ADD MEMBER [<service_principal_name>];
go
```
-**Solution #3**
+**Solution 3**
+
+You can also set up a service principal Azure Synapse admin by using PowerShell. You must have the [Az.Synapse module](/powershell/module/az.synapse) installed.
+
+The solution is to use the cmdlet New-AzSynapseRoleAssignment with `-ObjectId "parameter"`. In that parameter field, provide the application ID instead of the object ID by using the workspace admin Azure service principal credentials.
+
+PowerShell script:
-You can also set up service principal Synapse Admin using PowerShell. You need to have [Az.Synapse module](/powershell/module/az.synapse) installed.
-The solution is to use cmdlet New-AzSynapseRoleAssignment with `-ObjectId "parameter"` - and in that parameter field to provide Application ID (instead of Object ID) using workspace admin Azure service principal credentials. PowerShell script:
```azurepowershell
$spAppId = "<app_id_which_is_already_an_admin_on_the_workspace>"
$SPPassword = "<application_secret>"
# Sign in with Connect-AzAccount by using these workspace admin service principal credentials first.
# Use "Synapse Administrator" or "Synapse SQL Administrator" as in Solution 1, and pass the
# application ID of the principal you want to add (not its object ID) in -ObjectId.
New-AzSynapseRoleAssignment -WorkspaceName "<workspaceName>" -RoleDefinitionName "Synapse Administrator" -ObjectId "<app_id_to_add_as_admin>"
```
**Validation**
-Connect to serverless SQL endpoint and verify that the external login with SID `app_id_to_add_as_admin` is created:
+Connect to the serverless SQL endpoint and verify that the external sign-in with SID `app_id_to_add_as_admin` is created:
+ ```sql
+ select name, convert(uniqueidentifier, sid) as sid, create_date from sys.server_principals where type in ('E', 'X')
+ ```
-or just try to log in on serverless SQL endpoint using the just set admin app.
+
+Or try to sign in on the serverless SQL endpoint by using the app that you just set as an admin.
## Constraints
-There are some general system constraints that might affect your workload:
+Some general system constraints might affect your workload:
| Property | Limitation |
|---|---|
-| Max number of Synapse workspaces per subscription | [See limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#synapse-workspace-limits) |
-| Max number of databases per serverless pool | 20 (not including databases synchronized from Apache Spark pool) |
-| Max number of databases synchronized from Apache Spark pool | Not limited |
-| Max number of databases objects per database | The sum of the number of all objects in a database cannot exceed 2,147,483,647 (see [limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects) ) |
-| Max identifier length (in characters) | 128 (see [limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects) )|
-| Max query duration | 30 min |
-| Max size of the result set | up to 200 GB (shared between concurrent queries) |
-| Max concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1000 active sessions that are executing lightweight queries. However, the numbers will drop if the queries are more complex or scan a larger amount of data. |
+| Maximum number of Azure Synapse workspaces per subscription | [See limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#synapse-workspace-limits). |
+| Maximum number of databases per serverless pool | 20 (not including databases synchronized from Apache Spark pool). |
+| Maximum number of databases synchronized from Apache Spark pool | Not limited. |
+| Maximum number of databases objects per database | The sum of the number of all objects in a database can't exceed 2,147,483,647. See [Limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects). |
+| Maximum identifier length in characters | 128. See [Limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects).|
+| Maximum query duration | 30 minutes. |
+| Maximum size of the result set | Up to 200 GB shared between concurrent queries. |
+| Maximum concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1,000 active sessions that are executing lightweight queries. The numbers will drop if the queries are more complex or scan a larger amount of data. |
-### Cannot create a database in serverless SQL pool
+### Can't create a database in serverless SQL pool
-The serverless SQL pools have limitations and you cannot create more than 20 databases per workspace. If you need to separate objects and isolate them, use schemas.
+Serverless SQL pools have limitations, and you can't create more than 20 databases per workspace. If you need to separate objects and isolate them, use schemas.
-If you are getting the error '*CREATE DATABASE failed. User database limit has been already reached.*' you have created the maximal number of databases that are supported in one workspace.
+If you get the error "CREATE DATABASE failed. User database limit has been already reached," you've created the maximum number of databases that are supported in one workspace.
-You don't need to use separate databases to isolate data for different tenants. All data is stored externally on a data lake and Cosmos DB. The metadata (table, views, function definitions) can be successfully isolated using schemas. Schema-based isolation is also used in Spark where databases and schemas are the same concepts.
+You don't need to use separate databases to isolate data for different tenants. All data is stored externally on a data lake and Azure Cosmos DB. The metadata like table, views, and function definitions can be successfully isolated by using schemas. Schema-based isolation is also used in Spark where databases and schemas are the same concepts.
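
For example, a hedged sketch of schema-based isolation; the schema name, view name, and storage path are placeholders:

```sql
-- One database, one schema per tenant. tenant1 and the storage path are placeholders.
CREATE SCHEMA tenant1;
GO
CREATE VIEW tenant1.Sales AS
SELECT *
FROM OPENROWSET(
    BULK 'https://myaccount.dfs.core.windows.net/container/tenant1/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS source_rows;
```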
-## Querying Azure data
+## Query Azure data
-Serverless SQL pools enable you to query data in Azure storage or Azure Cosmos DB using [external tables and the OPENROWSET function](develop-storage-files-overview.md).
-Make sure that you have proper [permission setup](develop-storage-files-overview.md#permissions) on your storage.
+Serverless SQL pools enable you to query data in Azure Storage or Azure Cosmos DB by using [external tables and the OPENROWSET function](develop-storage-files-overview.md). Make sure that you have proper [permission set up](develop-storage-files-overview.md#permissions) on your storage.
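
As a starting point, a hedged OPENROWSET sketch over CSV files; the storage URL is a placeholder and assumes your login can read that path:

```sql
-- The storage URL is a placeholder. PARSER_VERSION = '2.0' and HEADER_ROW apply to CSV.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://myaccount.dfs.core.windows.net/container/folder/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS source_rows;
```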
-### Querying CSV data
+### Query CSV data
-Learn here how to [query single CSV file](query-single-csv-file.md) or [folders and multiple CSV files](query-folders-multiple-csv-files.md). You can also [query partitioned files](query-specific-files.md)
+Learn how to [query a single CSV file](query-single-csv-file.md) or [folders and multiple CSV files](query-folders-multiple-csv-files.md). You can also [query partitioned files](query-specific-files.md).
-### Querying Parquet data
+### Query Parquet data
-Learn here how to [query Parquet files](query-parquet-files.md) with [nested types](query-parquet-nested-types.md). You can also [query partitioned files](query-specific-files.md).
+Learn how to [query Parquet files](query-parquet-files.md) with [nested types](query-parquet-nested-types.md). You can also [query partitioned files](query-specific-files.md).
-### Querying Delta Lake
+### Query Delta Lake
-Learn here how to [query Delta Lake files](query-delta-lake-format.md) with [nested types](query-parquet-nested-types.md).
+Learn how to [query Delta Lake files](query-delta-lake-format.md) with [nested types](query-parquet-nested-types.md).
-### Querying Cosmos DB data
+### Query Azure Cosmos DB data
-Learn here how to [query Cosmos DB analytical store](query-cosmos-db-analytical-store.md). You can use [online generator](https://htmlpreview.github.io/?https://github.com/Azure-Samples/Synapse/blob/main/SQL/tools/cosmosdb/generate-openrowset.html) to generate the `WITH` clause based on a sample Cosmos DB document.
-You can [create views](create-use-views.md#cosmosdb-view) on top of Cosmos DB containers.
+Learn how to [query Azure Cosmos DB analytical store](query-cosmos-db-analytical-store.md). You can use an [online generator](https://htmlpreview.github.io/?https://github.com/Azure-Samples/Synapse/blob/main/SQL/tools/cosmosdb/generate-openrowset.html) to generate the WITH clause based on a sample Azure Cosmos DB document. You can [create views](create-use-views.md#cosmosdb-view) on top of Azure Cosmos DB containers.
-### Querying JSON data
+### Query JSON data
-Learn here how to [query JSON files](query-json-files.md). You can also [query partitioned files](query-specific-files.md)
+Learn how to [query JSON files](query-json-files.md). You can also [query partitioned files](query-specific-files.md).
-### Create views, tables and other database objects
+### Create views, tables, and other database objects
-Learn here how to create and use [views](create-use-views.md), [external tables](create-use-external-tables.md), or setup [row-level security](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759).
-If you have [partitioned files](query-specific-files.md), make sure that you are using [partitioned views](create-use-views.md#partitioned-views).
+Learn how to create and use [views](create-use-views.md) and [external tables](create-use-external-tables.md) or set up [row-level security](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759).
+If you have [partitioned files](query-specific-files.md), make sure you use [partitioned views](create-use-views.md#partitioned-views).
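
A hedged sketch of a partitioned view that exposes the folder structure through the filepath function; the storage path and partition layout are hypothetical:

```sql
-- Assumes a hypothetical layout such as .../sales/year=2022/month=05/*.parquet.
-- Create the view in a user database, not in master.
CREATE VIEW dbo.PartitionedSales AS
SELECT
    *,
    result.filepath(1) AS sales_year,
    result.filepath(2) AS sales_month
FROM OPENROWSET(
    BULK 'https://myaccount.dfs.core.windows.net/container/sales/year=*/month=*/*.parquet',
    FORMAT = 'PARQUET'
) AS result;
```

Filtering the view on sales_year and sales_month lets serverless SQL pool read only the matching folders.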
### Copy and transform data (CETAS)
-Learn here how to [store query results to storage](create-external-table-as-select.md) using Create external table as select (CETAS) command.
+Learn how to [store query results to storage](create-external-table-as-select.md) by using the CETAS command.
virtual-machines Tutorial Automate Vm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md
Previously updated : 09/12/2019 Last updated : 05/13/2022
az keyvault certificate create \
### Prepare certificate for use with VM
-To use the certificate during the VM create process, obtain the ID of your certificate with [az keyvault secret list-versions](/cli/azure/keyvault/secret#az-keyvault-secret-list-versions). The VM needs the certificate in a certain format to inject it on boot, so convert the certificate with [az vm secret format](/cli/azure/vm). The following example assigns the output of these commands to variables for ease of use in the next steps:
+To use the certificate during the VM create process, obtain the ID of your certificate with [az keyvault secret list-versions](/cli/azure/keyvault/secret#az-keyvault-secret-list-versions). The VM needs the certificate in a certain format to inject it on boot, so convert the certificate with [az vm secret format](/cli/azure/vm/secret#az-vm-secret-format). The following example assigns the output of these commands to variables for ease of use in the next steps:
```azurecli-interactive secret=$(az keyvault secret list-versions \
virtual-machines Using Managed Disks Template Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/using-managed-disks-template-deployments.md
Title: Deploying disks with Azure Resource Manager templates description: Details how to use managed and unmanaged disks in Azure Resource Manager templates for Azure VMs. documentationcenter:- Last updated 06/01/2017-
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
Create a new variable group 'SDAF-MGMT' for the control plane environment using
| ARM_TENANT_ID | Enter the Tenant id for the service principal. | |
| AZURE_CONNECTION_NAME | Previously created connection name | |
| sap_fqdn | SAP Fully Qualified Domain Name, for example sap.contoso.net | Only needed if Private DNS isn't used. |
+| FENCING_SPN_ID | Enter the service principal application id for the fencing agent. | Required for highly available deployments |
+| FENCING_SPN_PWD | Enter the service principal password for the fencing agent. | Required for highly available deployments |
+| FENCING_SPN_TENANT | Enter the service principal tenant id for the fencing agent. | Required for highly available deployments |
Save the variables.
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
By default the SAP System deployment uses the credentials from the SAP Workload
> | - | -- | -- |
> | `azure_files_storage_account_id` | If provided, the Azure resource ID of the storage account for Azure Files | Optional |
+### Azure NetApp Files Support
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | - | --| -- | |
+> | `ANF_use_for_HANA_data` | Create Azure NetApp Files volume for HANA data | Optional | |
+> | `ANF_use_existing_data_volume` | Use existing Azure NetApp Files volume for HANA data | Optional | Use for pre-created volumes |
+> | `ANF_data_volume_name` | Azure NetApp Files volume name for HANA data | Optional | |
+> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data | Optional | default size 256 |
+> | `ANF_use_for_HANA_log` | Create Azure NetApp Files volume for HANA log | Optional | |
+> | `ANF_use_existing_log_volume` | Use existing Azure NetApp Files volume for HANA log | Optional | Use for pre-created volumes |
+> | `ANF_log_volume_name` | Azure NetApp Files volume name for HANA log | Optional | |
+> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log | Optional | default size 128 |
++
+## Oracle parameters
+
+When you deploy Oracle-based systems, these parameters need to be updated in the sap-parameters.yaml file, as shown in the example after the table.
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | - | --| -- | |
+> | `ora_release` | Release of Oracle, e.g. 19 | Mandatory | |
+> | `ora_version` | Version of Oracle, e.g. 19.0.0 | Mandatory | |
+> | `oracle_sbp_patch` | Oracle SBP patch file name, e.g. SAP19P_2202-70004508.ZIP | Mandatory | Must be part of the Bill of Materials |
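
A hedged sketch of how these entries might look in sap-parameters.yaml, reusing the example values from the table above:

```yaml
# Example values copied from the table above. Replace them with the release,
# version, and SBP patch file that match your Bill of Materials.
ora_release: "19"
ora_version: "19.0.0"
oracle_sbp_patch: "SAP19P_2202-70004508.ZIP"
```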
+
## Terraform parameters

The table below contains the Terraform parameters. These parameters need to be entered manually if you aren't using the deployment scripts.
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
The table below defines the parameters used for defining the Key Vault informati
> | `anf_subnet_name` | The name of the ANF subnet | Optional | |
> | `anf_subnet_arm_id` | The Azure resource identifier for the `ANF` subnet | Required | For existing environment deployments |
> | `anf_subnet_address_prefix` | The address range for the `ANF` subnet | Required | For new environment deployments |
-> | `transport_volume_size` | Defines the size (in GB) for the 'saptransport' volume | Optional |
## Other Parameters
-
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | | - | -- | - |
vpn-gateway Vpn Gateway P2s Advertise Custom Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-p2s-advertise-custom-routes.md
Title: 'Advertise custom routes for point-to-site VPN Gateway clients' description: Learn how to advertise custom routes to your VPN Gateway point-to-site clients. This article includes steps for VPN client forced tunneling.- - Previously updated : 07/21/2021 Last updated : 05/16/2022 - # Advertise custom routes for P2S VPN clients
-You may want to advertise custom routes to all of your point-to-site VPN clients. For example, when you have enabled storage endpoints in your VNet and want the remote users to be able to access these storage accounts over the VPN connection. You can advertise the IP address of the storage end point to all your remote users so that the traffic to the storage account goes over the VPN tunnel, and not the public Internet. You can also use custom routes in order to configure forced tunneling for VPN clients.
+You may want to advertise custom routes to all of your point-to-site VPN clients. For example, you might have enabled storage endpoints in your VNet and want remote users to be able to access these storage accounts over the VPN connection. You can advertise the IP address of the storage endpoint to all your remote users so that traffic to the storage account goes over the VPN tunnel, and not the public Internet. You can also use custom routes in order to configure [forced tunneling](#forced-tunneling) for VPN clients.
:::image type="content" source="./media/vpn-gateway-p2s-advertise-custom-routes/custom-routes.png" alt-text="Diagram of advertising custom routes.":::
-## <a name="advertise"></a>Advertise custom routes
+## <a name="portal"></a>Azure portal
+
+You can advertise custom routes using the Azure portal on the point-to-site configuration page. You can also view and modify/delete custom routes as needed using these steps. If you want to configure forced tunneling, see the [Forced tunneling](#forced-tunneling) section in this article.
++
+1. Go to the virtual network gateway.
+1. Select **Point-to-site configuration** in the left pane.
+1. On the Point-to-site configuration page, add the routes. Don't use any spaces.
+1. Select **Save** at the top of the page.
+
+## <a name="powershell"></a>PowerShell
To advertise custom routes, use the `Set-AzVirtualNetworkGateway` cmdlet. The following example shows you how to advertise the IP for the [Contoso storage account tables](https://contoso.table.core.windows.net).
To advertise custom routes, use the `Set-AzVirtualNetworkGateway cmdlet`. The fo
Pinging table.by4prdstr05a.store.core.windows.net [13.88.144.250] with 32 bytes of data: ```
-2. Run the following PowerShell commands:
+1. Run the following PowerShell commands:
```azurepowershell-interactive
$gw = Get-AzVirtualNetworkGateway -Name <name of gateway> -ResourceGroupName <name of resource group>
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -CustomRoute 13.88.144.250/32
```
-3. To add multiple custom routes, use a comma and spaces to separate the addresses. For example:
+1. To add multiple custom routes, use a comma and spaces to separate the addresses. For example:
```azurepowershell-interactive
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -CustomRoute x.x.x.x/xx , y.y.y.y/yy
```
-## <a name="forced-tunneling"></a>Advertise custom routes - forced tunneling
-
-You can direct all traffic to the VPN tunnel by advertising 0.0.0.0/1 and 128.0.0.0/1 as custom routes to the clients. The reason for breaking 0.0.0.0/0 into two smaller subnets is that these smaller prefixes are more specific than the default route that may already be configured on the local network adapter and as such will be preferred when routing traffic.
-
-> [!NOTE]
-> Internet connectivity is not provided through the VPN gateway. As a result, all traffic bound for the Internet is dropped.
->
-
-1. To enable forced tunneling, use the following commands:
-
- ```azurepowershell-interactive
- $gw = Get-AzVirtualNetworkGateway -Name <name of gateway> -ResourceGroupName <name of resource group>
- Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -CustomRoute 0.0.0.0/1 , 128.0.0.0/1
- ```
-
-## <a name="view"></a>View custom routes
+### <a name="view"></a>View custom routes
Use the following example to view custom routes:
Use the following example to view custom routes:
```azurepowershell-interactive
$gw = Get-AzVirtualNetworkGateway -Name <name of gateway> -ResourceGroupName <name of resource group>
$gw.CustomRoutes | Format-List
```
-## <a name="delete"></a>Delete custom routes
+
+### <a name="delete"></a>Delete custom routes
Use the following example to delete custom routes:
Use the following example to delete custom routes:
```azurepowershell-interactive
$gw = Get-AzVirtualNetworkGateway -Name <name of gateway> -ResourceGroupName <name of resource group>
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -CustomRoute @0
```
+
+## <a name="forced-tunneling"></a>Forced tunneling
+
+You can direct all traffic to the VPN tunnel by advertising 0.0.0.0/1 and 128.0.0.0/1 as custom routes to the clients. The reason for breaking 0.0.0.0/0 into two smaller subnets is that these smaller prefixes are more specific than the default route that may already be configured on the local network adapter and, as such, will be preferred when routing traffic.
+
+> [!NOTE]
+> Internet connectivity is not provided through the VPN gateway. As a result, all traffic bound for the Internet is dropped.
+>
+
+To enable forced tunneling, use the following commands:
+
+```azurepowershell-interactive
+$gw = Get-AzVirtualNetworkGateway -Name <name of gateway> -ResourceGroupName <name of resource group>
+Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -CustomRoute 0.0.0.0/1 , 128.0.0.0/1
+```
+ ## Next steps For more P2S routing information, see [About point-to-site routing](vpn-gateway-about-point-to-site-routing.md).