Updates from: 09/07/2022 01:09:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
Previously updated : 07/29/2022 Last updated : 09/05/2022
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZn
Access tokens and ID tokens are short-lived. After they expire, you must refresh them to continue to access resources. When you refresh the access token, Azure AD B2C returns a new token. The refreshed access token will have updated `nbf` (not before), `iat` (issued at), and `exp` (expiration) claim values. All other claim values will be the same as the originally issued access token.
-To refresh the toke, submit another POST request to the `/token` endpoint. This time, provide the `refresh_token` instead of the `code`:
+To refresh the token, submit another POST request to the `/token` endpoint. This time, provide the `refresh_token` instead of the `code`:
```http
POST https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/token HTTP/1.1
```
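As a sketch, the full refresh request looks like the following; every value below is a placeholder, and the body uses the standard OAuth 2.0 `refresh_token` grant parameters:

```http
POST https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/token HTTP/1.1
Host: {tenant}.b2clogin.com
Content-Type: application/x-www-form-urlencoded

grant_type=refresh_token
&client_id={application-id}
&scope={scope} offline_access
&refresh_token={refresh-token}
&redirect_uri={redirect-uri}
```

A successful response returns a new access token with refreshed `nbf`, `iat`, and `exp` claim values, as described above.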
active-directory Permissions Management Trial Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-trial-playbook.md
+
+ Title: Trial Playbook - Microsoft Entra Permissions Management
+description: How to get started with your Entra Permissions free trial
+Last updated : 09/01/2022
+# Trial playbook: Microsoft Entra Permissions Management
+
+Welcome to the Microsoft Entra Permissions Management trial playbook!
+
+This playbook is a simple guide to help you make the most of your free trial, including the Permissions Management Cloud Infrastructure Assessment, which helps you identify and remediate the most critical permission risks across your multicloud infrastructure. Using the suggested steps in this playbook from the Microsoft Identity team, you'll learn how Permissions Management can help you protect all your users and data.
+
+## What is Permissions Management?
+
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities including both workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions.
+
+Permissions Management helps your organization tackle cloud permissions by continuously discovering, remediating, and monitoring the activity of every unique user and workload identity operating in the cloud, alerting security and infrastructure teams to areas of unexpected or excessive risk.
+
+- Get granular cross-cloud visibility - Get a comprehensive view of every action performed by any identity on any resource.
+- Uncover permission risk - Assess permission risk by evaluating the gap between permissions granted and permissions used.
+- Enforce least privilege - Right-size permissions based on usage and activity and enforce permissions on-demand at cloud scale.
+- Monitor and detect anomalies - Detect anomalous permission usage and generate detailed forensic reports.
+
+![Diagram of Microsoft Entra Permissions Management.](media/permissions-management-trial-playbook/microsoft-entra-permissions-management-diagram.png)
++
+## Step 1: Set up Permissions Management
+
+Before you enable Permissions Management in your organization:
+- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must be eligible for, or have an active assignment to, the Global Administrator role as a user in that tenant.
+
+If the above points are met, continue with the following steps:
+
+1. [Enable Permissions Management on your Azure AD tenant](../cloud-infrastructure-entitlement-management/onboard-enable-tenant.md#how-to-enable-permissions-management-on-your-azure-ad-tenant)
+2. Use the **Data Collectors** dashboard in Permissions Management to configure data collection settings for your authorization system. [Configure data collection settings](../cloud-infrastructure-entitlement-management/onboard-enable-tenant.md#configure-data-collection-settings).
+
+ Note that for each cloud platform, you will have three options for onboarding:
+
+ **Option 1 (Recommended): Automatically manage** - this option allows subscriptions to be automatically detected and monitored without additional configuration.
+
+ **Option 2: Enter authorization systems** - you can specify particular subscriptions (up to 10 per collector) to manage and monitor with Microsoft Entra Permissions Management.
+
+ **Option 3: Select authorization systems** - this option detects all subscriptions that are accessible by the Cloud Infrastructure Entitlement Management application.
+
+ For information on how to onboard an AWS account, Azure subscription, or GCP project into Permissions Management, select one of the following articles and follow the instructions:
+ - [Onboard an AWS account](../cloud-infrastructure-entitlement-management/onboard-aws.md)
+ - [Onboard a Microsoft Azure subscription](../cloud-infrastructure-entitlement-management/onboard-azure.md)
+ - [Onboard a GCP project](../cloud-infrastructure-entitlement-management/onboard-gcp.md)
+3. [Enable or disable the controller after onboarding is complete](../cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md)
+4. [Add an account/subscription/project after onboarding is complete](../cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md)
+
+ **Actions to try:**
+
+ - [View roles/policies and requests for permission](../cloud-infrastructure-entitlement-management/ui-remediation.md#view-and-create-rolespolicies)
+ - [View information about roles/policies](../cloud-infrastructure-entitlement-management/ui-remediation.md#view-and-create-rolespolicies)
+ - [View information about active and completed tasks](../cloud-infrastructure-entitlement-management/ui-tasks.md)
+ - [Create a role/policy](../cloud-infrastructure-entitlement-management/how-to-create-role-policy.md)
+ - [Clone a role/policy](../cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md)
+ - [Modify a role/policy](../cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md)
+ - [Delete a role/policy](../cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md)
+ - [Attach and detach policies for Amazon Web Services (AWS) identities](../cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md)
+ - [Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities](../cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md)
+ - [Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities](../cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md)
+ - [Create or approve a request for permissions](../cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md) Request permissions on-demand for one-time use or on a schedule. These permissions will automatically be revoked at the end of the requested period.
+
+## Step 2: Discover & assess
+
+Improve your security posture by getting comprehensive and granular visibility to enforce the principle of least privilege access across your entire multicloud environment. The Permissions Management dashboard gives you an overview of your permission profile and locates where the riskiest identities and resources are across your digital estate.
+
+The dashboard uses the Permission Creep Index (PCI), a single, unified metric ranging from 0 to 100 that calculates the gap between permissions granted and permissions used over a specific period. The higher the gap, the higher the index and the larger the potential attack surface. The Permission Creep Index only considers high-risk actions, meaning any action that can cause data leakage, service disruption or degradation, or a security posture change. Permissions Management creates unique activity profiles for each identity and resource, which are used as a baseline to detect anomalous behaviors.
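+The exact weighting behind the index isn't spelled out here; purely as an illustration of the granted-versus-used gap idea (not the product's actual formula), consider this sketch with hypothetical numbers:
+
+```PowerShell
+# Hypothetical numbers; the real PCI weights high-risk actions and activity profiles.
+$granted = 250   # high-risk permissions granted to an identity
+$used    = 20    # high-risk permissions exercised in the evaluation period
+$gap     = ($granted - $used) / $granted
+"Simplified creep index: $([math]::Round($gap * 100)) / 100"
+```
+
+A large gap between what an identity can do and what it actually does is what pushes the index, and the potential attack surface, up.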
+
+1. [View risk metrics in your authorization system](../cloud-infrastructure-entitlement-management/ui-dashboard.md#view-metrics-related-to-avoidable-risk) in the Permissions Management Dashboard. This information is available for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
+ 1. View metrics related to avoidable risk - these metrics allow the Permissions Management administrator to identify areas where they can reduce risks related to the principle of least privilege. Information includes [the Permission Creep Index (PCI)](../cloud-infrastructure-entitlement-management/ui-dashboard.md#the-pci-heat-map) and the [Analytics Dashboard](../cloud-infrastructure-entitlement-management/usage-analytics-home.md).
+
+
+ 1. Understand the [components of the Permissions Management Dashboard.](../cloud-infrastructure-entitlement-management/ui-dashboard.md#components-of-the-permissions-management-dashboard)
+
+2. View data about the activity in your authorization system
+
+ 1. [View user data on the PCI heat map](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-user-data-on-the-pci-heat-map).
+ > [!NOTE]
+ > The higher the PCI, the higher the risk.
+
+ 2. [View information about users, roles, resources, and PCI trends](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-information-about-users-roles-resources-and-pci-trends)
+ 3. [View identity findings](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-identity-findings)
+ 4. [View resource findings](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-resource-findings)
+3. [Configure your settings for data collection](../cloud-infrastructure-entitlement-management/product-data-sources.md) - use the **Data Collectors** dashboard in Permissions Management to view and configure settings for collecting data from your authorization systems.
+4. [View organizational and personal information](../cloud-infrastructure-entitlement-management/product-account-settings.md) - the **Account settings** dashboard in Permissions Management allows you to view personal information, passwords, and account preferences.
+5. [Select group-based permissions settings](../cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md)
+6. [View information about identities, resources and tasks](../cloud-infrastructure-entitlement-management/usage-analytics-home.md) - the **Analytics** dashboard displays detailed information about:
+ 1. **Users**: Tracks assigned permissions and usage by users. For more information, see View analytic information about users.
+ 2. **Groups**: Tracks assigned permissions and usage of the group and the group members. For more information, see View analytic information about groups.
+ 3. **Active Resources**: Tracks resources that have been used in the last 90 days. For more information, see View analytic information about active resources.
+ 4. **Active Tasks**: Tracks tasks that have been performed in the last 90 days. For more information, see View analytic information about active tasks.
+ 5. **Access Keys**: Tracks the permission usage of access keys for a given user. For more information, see View analytic information about access keys.
+ 6. **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions for AWS only. For more information, see View analytic information about serverless functions.
+
+ System administrators can use this information to make decisions about granting permissions and reducing risk on unused permissions.
+
+## Step 3: Remediate & manage
+
+Right-size excessive or unused permissions in only a few clicks. Avoid errors caused by manual processes and implement automatic remediation of all unused permissions for a predetermined set of identities on a regular basis. You can also grant new permissions on-demand for just-in-time access to specific cloud resources.
+
+There are two facets to removing unused permissions: least privilege policy creation (remediation) and permissions-on-demand. With remediation, an administrator can create policies that remove unused permissions (also known as right-sizing permissions) to achieve least privilege across their multicloud environment.
+
+- [Manage roles/policies and permissions requests using the Remediation dashboard](../cloud-infrastructure-entitlement-management/ui-remediation.md).
+
+ The dashboard includes six subtabs:
+
+ - **Roles/Policies**: Use this subtab to perform Create Read Update Delete (CRUD) operations on roles/policies.
+ - **Role/Policy Name** - Displays the name of the role or the AWS policy.
+ - Note: An exclamation point (!) circled in red means the role or AWS policy has not been used.
+ - **Role Type** - Displays the type of role or AWS policy.
+ - **Permissions**: Use this subtab to perform Read Update Delete (RUD) on granted permissions.
+ - **Role/Policy Template**: Use this subtab to create a template for roles/policies.
+ - **Requests**: Use this subtab to view approved, pending, and processed Permission on Demand (POD) requests.
+ - **My Requests**: Use this subtab to manage the lifecycle of POD requests that you created or that need your approval.
+ - **Settings**: Use this subtab to select **Request Role/Policy Filters**, **Request Settings**, and **Auto-Approve** settings.
+
+**Best Practices for Remediation:**
+
+- **Creating activity-based roles/policies:** High-risk identities will be monitored and right-sized based on their historical activity. Leaving unused high-risk permissions assigned to identities creates unnecessary risk.
+- **Removing direct role assignments:** Permissions Management generates reports based on role assignments. In cases where high-risk roles are directly assigned, use the Remediation **Permissions** subtab to query those identities and remove the direct role assignments.
+- **Assigning read-only permissions:** Identities that are inactive or have high-risk permissions to production environments can be assigned read-only status. Access to production environments can be governed via Permissions On-demand.
+
+**Best Practices for Permissions On-demand:**
+
+- **Requesting Delete Permissions:** No user will have delete permissions unless they request them and are approved.
+- **Requesting Privileged Access:** High-privileged access is only granted through just-enough permissions and just-in-time access.
+- **Requesting Periodic Access:** Schedule recurring daily, weekly, or monthly permissions that are time-bound and revoked at the end of the period.
+- Manage users, roles, and their access levels with the **User management** dashboard.
+
+ **Actions to try:**
+
+ - [Manage users](../cloud-infrastructure-entitlement-management/ui-user-management.md#manage-users)
+ - [Manage groups](../cloud-infrastructure-entitlement-management/ui-user-management.md#manage-groups)
+ - [Select group-based permissions settings](../cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md)
+
+## Step 4: Monitor & alert
+
+Prevent data breaches caused by misuse and malicious exploitation of permissions with anomaly and outlier detection that alerts on any suspicious activity. Permissions Management continuously updates your Permission Creep Index and flags any incident, then immediately informs you with alerts via email. To further support rapid investigation and remediation, you can generate context-rich forensic reports around identities, actions, and resources.
+
+- Use queries to view information about user access with the **Audit** dashboard in Permissions Management. You can get an overview of queries a Permissions Management user has created to review how users access their authorization systems and accounts. The following options display at the top of the **Audit** dashboard:
+- A tab for each existing query. Select the tab to see details about the query.
+- **New Query**: Select the tab to create a new query.
+- **New tab (+)**: Select the tab to add a **New Query** tab.
+- **Saved Queries**: Select to view a list of saved queries.
+
+ **Actions to try:**
+
+ - [Use a query to view information](../cloud-infrastructure-entitlement-management/ui-audit-trail.md)
+ - [Create a custom query](../cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md)
+ - [Generate an on-demand report from a query](../cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md)
+ - [Filter and query user activity](../cloud-infrastructure-entitlement-management/product-audit-trail.md)
+
+Use the **Activity triggers** dashboard to view information and set alerts and triggers.
+
+- Set activity alerts and triggers
+
+ Our customizable, machine learning-powered anomaly and outlier detection alerts notify you of suspicious activity, such as deviations in usage profiles or abnormal access times. Alerts can cover permissions usage, access to resources, indicators of compromise, and insider threats, or track previous incidents.
+
+ **Actions to try**
+
+ - [View information about alerts and alert triggers](../cloud-infrastructure-entitlement-management/ui-triggers.md)
+ - [Create and view activity alerts and alert triggers](../cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md)
+ - [Create and view rule-based anomaly alerts and anomaly triggers](../cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md)
+ - [Create and view statistical anomalies and anomaly triggers](../cloud-infrastructure-entitlement-management/product-statistical-anomalies.md)
+ - [Create and view permission analytics triggers](../cloud-infrastructure-entitlement-management/product-permission-analytics.md)
+
+**Best Practices for Custom Alerts:**
+
+- Permission assignments done outside of approved administrators
+
+ Example: Any activity done by the root user in AWS:
+
+ ![Diagram, Any activity done by root user in AWS.](media/permissions-management-trial-playbook/custom-alerts-1.png)
+
+ Example: Alert for monitoring any direct Azure role assignment:
+
+ ![Diagram, Alert for monitoring any direct Azure role assignment done by anyone other than Admin user.](media/permissions-management-trial-playbook/custom-alerts-2.png)
+
+- Access to critical sensitive resources
+
+ Example: Alert for monitoring any action on Azure resources
+
+ ![Diagram, Alert for monitoring any action on Azure resources.](media/permissions-management-trial-playbook/custom-alerts-3.png)
+
+- Use of break-glass accounts, such as the root user in AWS or a global admin in Azure AD accessing subscriptions
+
+ Example: Break-glass users should be used for emergency access only.
+
+ ![Diagram, Example of break glass account users used for emergency access only.](media/permissions-management-trial-playbook/custom-alerts-4.png)
+
+- Create and view reports
+
+ To support rapid remediation, you can set up security reports to be delivered at custom intervals. Permissions Management has various system report types available that capture specific sets of data by cloud infrastructure (AWS, Azure, GCP), by account/subscription/project, and more. Reports are fully customizable and can be delivered via email at pre-configured intervals.
+
+ These reports enable you to:
+
+ - Make timely decisions.
+ - Analyze trends and system/user performance.
+ - Identify trends in data and high-risk areas so that management can address issues more quickly and improve their efficiency.
+ - Automate data analytics in an actionable way.
+ - Ensure compliance with audit requirements for periodic reviews of **who has access to what**.
+ - Look at views into **Separation of Duties** for security hygiene to determine who has admin permissions.
+ - See data for **identity governance** to ensure inactive users are decommissioned: employees who left the company, vendor accounts that were left behind, old consultant accounts, and users who, as part of the Joiner/Mover/Leaver process, moved to another role and no longer use their access. Consider this a fail-safe to ensure dormant accounts are removed.
+ - Identify over-permissioned access, and later use Remediation to pursue **Zero Trust and least privilege**.
+
+ **Example of** [**Permissions Management Report**](https://microsoft.sharepoint.com/:v:/t/MicrosoftEntraPermissionsManagementAssets/EQWmUsMsdkZEnFVv-M9ZoagBd4B6JUQ2o7zRTupYrfxbGA)
+
+ **Actions to try**
+ - [View system reports in the Reports dashboard](../cloud-infrastructure-entitlement-management/product-reports.md)
+ - [View a list and description of system reports](../cloud-infrastructure-entitlement-management/all-reports.md)
+ - [Generate and view a system report](../cloud-infrastructure-entitlement-management/report-view-system-report.md)
+ - [Create, view, and share a custom report](../cloud-infrastructure-entitlement-management/report-create-custom-report.md)
+ - [Generate and download the Permissions analytics report](../cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md)
+
+**Key Reports to Monitor:**
+
+- **Permissions Analytics Report:** Lists the key permission risks, including super identities, inactive identities, over-provisioned active identities, and more.
+- **Group Entitlements and Usage reports:** Provide guidance on cleaning up directly assigned permissions.
+- **Access Key Entitlements and Usage reports:** Identify high-risk service principals with old secrets that haven't been rotated every 90 days (a best practice) or decommissioned due to lack of use (as recommended by the Cloud Security Alliance). A simple age check is sketched after this list.
+
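+To make the 90-day rotation guidance concrete, here's a small, hypothetical age check; the key names and dates are sample data, and in practice the report identifies these credentials for you:
+
+```PowerShell
+# Hypothetical sample data; Permissions Management surfaces this in the report.
+$keys = @(
+    [pscustomobject]@{ Name = 'svc-build-key';  Created = (Get-Date).AddDays(-200) }
+    [pscustomobject]@{ Name = 'svc-deploy-key'; Created = (Get-Date).AddDays(-30) }
+)
+
+# Flag anything older than the 90-day rotation guidance.
+$keys | Where-Object { ((Get-Date) - $_.Created).Days -gt 90 } |
+    ForEach-Object { "Rotate: $($_.Name), age $(((Get-Date) - $_.Created).Days) days" }
+```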
+## Next Steps
+For more information about Permissions Management, see:
+
+**Microsoft Docs**: [Visit Docs](../cloud-infrastructure-entitlement-management/index.yml).
+
+**Datasheet:** <https://aka.ms/PermissionsManagementDataSheet>
+
+**Solution Brief:** <https://aka.ms/PermissionsManagementSolutionBrief>
+
+**White Paper:** <https://aka.ms/CIEMWhitePaper>
+
+**Infographic:** <https://aka.ms/PermissionRisksInfographic>
+
+**Security paper:** [2021 State of Cloud Permissions Risks](https://scistorageprod.azureedge.net/assets/2021%20State%20of%20Cloud%20Permission%20Risks.pdf?sv=2019-07-07&sr=b&sig=Sb17HibpUtJm2hYlp6GYlNngGiSY5GcIs8IfpKbRlWk%3D&se=2022-05-27T20%3A37%3A22Z&sp=r)
+
+**Permissions Management Glossary:** <https://aka.ms/PermissionsManagementGlossary>
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated : 08/05/2022 Last updated : 09/06/2022
In your Conditional Access policy, you can require that an [Intune app protectio
To apply this grant control, Conditional Access requires that the device is registered in Azure AD, which requires using a broker app. The broker app can be either Microsoft Authenticator for iOS or Microsoft Company Portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user is redirected to the app store to install the broker app.
-Applications must have the Intune SDK with policy assurance implemented and must meet certain other requirements to support this setting. Developers who are implementing applications with the Intune SDK can find more information on these requirements in the SDK documentation.
+Applications must have the Intune SDK with policy assurance implemented and must meet certain other requirements to support this setting. Developers who are implementing applications with the Intune SDK can find more information on these requirements in the [SDK documentation](/mem/intune/developer/app-sdk-get-started).
-The following client apps support this setting:
+The following client apps are confirmed to support this setting:
- Microsoft Cortana - Microsoft Edge
The following client apps support this setting:
- Microsoft Word - MultiLine for Intune - Nine Mail - Email and Calendar
+- Notate for Intune
+
+This list isn't all-encompassing. If your app isn't in this list, check with the application vendor to confirm support.
> [!NOTE] > Kaizala, Skype for Business, and Visio don't support the **Require app protection policy** grant. If you require these apps to work, use the **Require approved apps** grant exclusively. Using the "or" clause between the two grants will not work for these three applications.
active-directory Authentication Vs Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-vs-authorization.md
Previously updated : 05/22/2020 Last updated : 08/26/2022
# Authentication vs. authorization
-This article defines authentication and authorization. It also briefly covers how you can use the Microsoft identity platform to authenticate and authorize users in your web apps, web APIs, or apps that call protected web APIs. If you see a term you aren't familiar with, try our [glossary](developer-glossary.md) or our [Microsoft identity platform videos](identity-videos.md), which cover basic concepts.
+This article defines authentication and authorization. It also briefly covers multifactor authentication and how you can use the Microsoft identity platform to authenticate and authorize users in your web apps, web APIs, or apps that call protected web APIs. If you see a term you aren't familiar with, try our [glossary](developer-glossary.md) or our [Microsoft identity platform videos](identity-videos.md), which cover basic concepts.
## Authentication
-*Authentication* is the process of proving that you are who you say you are. It's sometimes shortened to *AuthN*. The Microsoft identity platform uses the [OpenID Connect](https://openid.net/connect/) protocol for handling authentication.
+*Authentication* is the process of proving that you are who you say you are, achieved by verifying the identity of a person or device. It's sometimes shortened to *AuthN*. The Microsoft identity platform uses the [OpenID Connect](https://openid.net/connect/) protocol for handling authentication.
## Authorization *Authorization* is the act of granting an authenticated party permission to do something. It specifies what data you're allowed to access and what you can do with that data. Authorization is sometimes shortened to *AuthZ*. The Microsoft identity platform uses the [OAuth 2.0](https://oauth.net/2/) protocol for handling authorization.
+## Multifactor authentication
+
+*Multifactor authentication* is the act of providing an additional factor of authentication to an account, often used to protect against brute-force attacks. It's sometimes shortened to *MFA* or *2FA*. The [Microsoft Authenticator](https://support.microsoft.com/account-billing/set-up-the-microsoft-authenticator-app-as-your-verification-method-33452159-6af9-438f-8f82-63ce94cf3d29) app can be used to handle two-factor authentication. For more information, see [multifactor authentication](../authentication/concept-mfa-howitworks.md).
+ ## Authentication and authorization using the Microsoft identity platform Creating apps that each maintain their own username and password information incurs a high administrative burden when adding or removing users across multiple apps. Instead, your apps can delegate that responsibility to a centralized identity provider.
Creating apps that each maintain their own username and password information inc
Azure Active Directory (Azure AD) is a centralized identity provider in the cloud. Delegating authentication and authorization to it enables scenarios such as: - Conditional Access policies that require a user to be in a specific location.-- The use of [multi-factor authentication](../authentication/concept-mfa-howitworks.md), which is sometimes called two-factor authentication or 2FA.
+- Multifactor authentication, which requires a user to have a specific device.
- Enabling a user to sign in once and then be automatically signed in to all of the web apps that share the same centralized directory. This capability is called *single sign-on (SSO)*. The Microsoft identity platform simplifies authorization and authentication for application developers by providing identity as a service. It supports industry-standard protocols and open-source libraries for different platforms to help you start coding quickly. It allows developers to build applications that sign in all Microsoft identities, get tokens to call [Microsoft Graph](https://developer.microsoft.com/graph/), access Microsoft APIs, or access other APIs that developers have built.
Here's a comparison of the protocols that the Microsoft identity platform uses:
For other topics that cover authentication and authorization basics: * To learn how access tokens, refresh tokens, and ID tokens are used in authorization and authentication, see [Security tokens](security-tokens.md).
-* To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](application-model.md).
+* To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](application-model.md).
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Stream Plan 2 | STREAM_P2 | ec156933-b85b-4c50-84ec-c9e5603709ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_P2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Plan 2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | | Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) | | Microsoft Teams Audio Conferencing select dial-out | Microsoft_Teams_Audio_Conferencing_select_dial_out | 1c27243e-fb4d-42b1-ae8c-fe25c9616588 | MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) |
-| Microsoft Teams (Free) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
-| Microsoft Teams Exploratory | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 |
-| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Teams_Room_Standard (92c6b761-01de-457a-9dd9-793a975238f7)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Azure Active Directory Premium Plan 1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Teams Room Standard (92c6b761-01de-457a-9dd9-793a975238f7)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
-| Microsoft Teams Rooms Standard without Audio Conferencing | MEETING_ROOM_NOAUDIOCONF | 61bec411-e46a-4dab-8f46-8b58ec845ffe | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
+| MICROSOFT TEAMS (FREE) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
+| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 |
+| Microsoft Teams Rooms Basic | Microsoft_Teams_Rooms_Basic | 6af4b3d6-14bb-4a2a-960c-6c902aad34f3 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Microsoft Teams Rooms Basic without Audio Conferencing | Microsoft_Teams_Rooms_Basic_without_Audio_Conferencing | 50509a35-f0bd-4c5e-89ac-22f0e16a00f8 | TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Microsoft Teams Rooms Pro | Microsoft_Teams_Rooms_Pro | 4cde982a-ede4-4409-9ae6-b003453c8ea6 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Microsoft Teams Rooms Pro without Audio Conferencing | Microsoft_Teams_Rooms_Pro_without_Audio_Conferencing | 21943e3a-2429-4f83-84c1-02735cd49e78 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| Microsoft Teams Trial | MS_TEAMS_IW | 74fbf1bb-47c6-4796-9623-77dc7371723b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | | Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Skype for Business PSTN Domestic Calling (120 Minutes)| MCOPSTN5 | 54a152dc-90de-4996-93d2-bc47e670fc06 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | DOMESTIC CALLING PLAN (54a152dc-90de-4996-93d2-bc47e670fc06) | | Skype for Business PSTN Usage Calling Plan | MCOPSTNPP | 06b48c5f-01d9-4b18-9015-03b52040f51a | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) | | Teams Phone with Calling Plan | MCOTEAMS_ESSENTIALS | ae2343d1-0999-43f6-ae18-d816516f6e78 | MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| TELSTRA Calling for O365 | MCOPSTNEAU2 | de3312e1-c7b0-46e6-a7c3-a515ff90bc86 | MCOPSTNEAU (7861360b-dc3b-4eba-a3fc-0d323a035746) | AUSTRALIA CALLING PLAN (7861360b-dc3b-4eba-a3fc-0d323a035746) | | Universal Print | UNIVERSAL_PRINT | 9f3d9c1d-25a5-4aaa-8e59-23a1e6450a67 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | | Visio Plan 1 | VISIO_PLAN1_DEPT | ca7f3140-d88c-455b-9a1c-7f0679e31a76 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OneDrive for business Basic (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>Visio web app (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) |
active-directory Road To The Cloud Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md
In enterprise-sized organizations, IAM transformation, or even transformation fr
* A plan to move apps that depend on AD and are part of the vision for the future state Azure AD environment is being executed. A plan to replace services that won't move (file, print, fax services) is in place.
-* On-premises workloads have been replaced with cloud alternatives such as Windows Virtual Desktop, Azure Files, Cloud Print. SQL is replaced by SQL MI. Azure AD Kerberos is being migrated to Azure AD.
+On-premises workloads have been replaced with cloud alternatives such as Windows Virtual Desktop, Azure Files, and Cloud Print. SQL is replaced by Azure SQL Managed Instance.
**State 5 100% cloud** - In this state, IAM capability is all provided by Azure AD and other Azure tools. This is the long-term aspiration for many organizations. In this state:
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
For more information about regex replace and capture groups, see [The Regular Ex
>[!NOTE] > As described in the Azure AD documentation, you can't modify a restricted claim by using a policy. The data source can't be changed, and no transformation is applied when you're generating these claims. The group claim is still a restricted claim, so you need to customize the groups by changing the name. If you select a restricted name for the name of your custom group claim, the claim will be ignored at runtime. >
-> You can also use the regex transform feature as a filter, because any groups that don't match the regex pattern will not be emitted in the resulting claim.
+> You can also use the regex transform feature as a filter, because any groups that don't match the regex pattern will not be emitted in the resulting claim.
+>
+> If the transform applied to the original groups claim results in a new custom claim, the original groups claim is omitted from the token. However, if the configured regex doesn't match any value in the original list, the custom claim won't be present and the original groups claim will be included in the token.
### Edit the group claim configuration
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later.
+- Azure AD Connect must be installed on a domain-joined server running Windows Server 2016 or later. Note that Windows Server 2022 isn't yet supported.
- The minimum .NET Framework version required is 4.6.2, and newer versions of .NET are also supported. - Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server Standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported.
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Previously updated : 10/23/2021 Last updated : 09/06/2022
# Assign users and groups to an application
-This article shows you how to assign users and groups to an enterprise application in Azure Active Directory (Azure AD) using PowerShell. When you assign a user to an application, the application appears in the user's [My Apps](https://myapps.microsoft.com/) portal for easy access. If the application exposes roles, you can also assign a specific role to the user.
+This article shows you how to assign users and groups to an enterprise application in Azure Active Directory (Azure AD) using PowerShell. When you assign a user to an application, the application appears in the user's [My Apps](https://myapps.microsoft.com/) portal for easy access. If the application exposes app roles, you can also assign a specific app role to the user.
When you assign a group to an application, only users in the group will have access. The assignment does not cascade to nested groups.
To assign users to an app using PowerShell, you need:
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. - The AzureAD module installed (use the command `Install-Module -Name AzureAD`). If you're prompted to install a NuGet module or the new Azure Active Directory V2 PowerShell module, type Y and press ENTER. - Azure Active Directory Premium P1 or P2 for group-based assignment. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).-- Optional: Completion of [Configure an app](add-application-portal-configure.md). ## Assign users, and groups, to an app using PowerShell
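As a minimal sketch, assuming the AzureAD module is installed and the user and app names are placeholders from your tenant:

```PowerShell
# Connect to the tenant first.
Connect-AzureAD

# Look up the user and the app's service principal (placeholder names).
$user = Get-AzureADUser -ObjectId "user@contoso.com"
$sp   = Get-AzureADServicePrincipal -Filter "displayName eq 'My Test App'"

# Assign the user to the app's default access. If the app exposes specific
# app roles, pass one of the Ids from $sp.AppRoles instead of the empty GUID.
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId -ResourceId $sp.ObjectId -Id ([Guid]::Empty)
```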
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Previously updated : 06/10/2022 Last updated : 09/02/2022
To configure the admin consent workflow, you need:
To enable the admin consent workflow and choose reviewers:
-1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites.
+1. Sign-in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites.
1. Search for and select **Azure Active Directory**. 1. Select **Enterprise applications**. 1. Under **Manage**, select **User settings**.
To configure the admin consent workflow programmatically, use the [Update adminC
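As a sketch of that Graph call (the reviewer entry is a placeholder; verify the exact method and property set against the Graph reference for the adminConsentRequestPolicy resource):

```http
PUT https://graph.microsoft.com/v1.0/policies/adminConsentRequestPolicy
Content-Type: application/json

{
    "isEnabled": true,
    "notifyReviewers": true,
    "remindersEnabled": true,
    "requestDurationInDays": 30,
    "reviewers": [
        { "query": "/users/{user-id}", "queryType": "MicrosoftGraph" }
    ]
}
```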
## Next steps [Grant tenant-wide admin consent to an application](grant-admin-consent.md)+
+[Review admin consent requests](review-admin-consent-requests.md)
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Previously updated : 09/23/2021 Last updated : 09/06/2022 #customer intent: As an admin, I want to disable the way a user signs in for an application so that no user can sign in to it in Azure Active Directory.
-# Disable how a user signs in for an application
+# Disable user sign-in for an application
+
+There may be situations while configuring or managing an application where you don't want tokens to be issued for it, or you may want to preemptively block an application that you don't want your employees to access. To accomplish this, you can disable user sign-in for the application, which prevents any tokens from being issued for it.
+
+In this article, you'll learn how to disable user sign-in for an application in Azure Active Directory through both the Azure portal and PowerShell. If you're looking to block specific users from accessing an application, use [user or group assignment](./assign-user-or-group-access-portal.md) instead.
+
-In this article, you disable how a user signs in to an application in Azure Active Directory.
## Prerequisites
To disable how a user signs in, you need:
Ensure you have installed the AzureAD module (use the command `Install-Module -Name AzureAD`). If you're prompted to install a NuGet module or the new Azure Active Directory V2 PowerShell module, type Y and press ENTER.
-If you know the AppId of an app that doesn't appear on the Enterprise apps list (for example, because you deleted the app or the service principal hasn't yet been created due to the app being pre-authorized by Microsoft), you can manually create the service principal for the app and then disable it by using [AzureAD PowerShell cmdlet](/powershell/module/azuread/New-AzureADServicePrincipal).
+If you know the AppId of an app that doesn't appear on the Enterprise apps list (for example, because you deleted the app or the service principal hasn't yet been created due to the app being pre-authorized by Microsoft), you can manually create the service principal for the app and then disable it by using the cmdlet below.
```PowerShell
# The AppId of the app to be disabled
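$appId = "{AppId}"

# Sketch only, assuming the AzureAD module and a connected session: disable the
# service principal if it exists; otherwise create it in a disabled state.
$servicePrincipal = Get-AzureADServicePrincipal -Filter "appId eq '$appId'"
if ($servicePrincipal) {
    Set-AzureADServicePrincipal -ObjectId $servicePrincipal.ObjectId -AccountEnabled $false
} else {
    New-AzureADServicePrincipal -AppId $appId -AccountEnabled $false
}
```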
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Previously updated : 10/23/2021 Last updated : 09/02/2022
# Grant tenant-wide admin consent to an application
- In this article, you'll learn how to grant tenant-wide admin consent to an application in Azure Active Directory (Azure AD).
+ In this article, you'll learn how to grant tenant-wide admin consent to an application in Azure Active Directory (Azure AD). To understand how individual users consent, see [Configure how end-users consent to applications](configure-user-consent.md).
-When you grant tenant-wide admin consent to an application, all users can sign in to the app. To restrict which users can sign in to an application, configure the app to require user assignment and then assign users or groups to the application.
+When you grant tenant-wide admin consent to an application, you give the application access on behalf of the whole organization to the permissions requested. Granting admin consent on behalf of an organization is a sensitive operation, potentially allowing the application's publisher access to significant portions of your organization's data, or the permission to do highly privileged operations. Examples of such operations might be role management, full access to all mailboxes or all sites, and full user impersonation.
+
+By default, granting tenant-wide admin consent to an application will allow all users to access the application unless otherwise restricted. To restrict which users can sign in to an application, configure the app to [require user assignment](application-properties.md#assignment-required) and then [assign users or groups to the application](assign-user-or-group-access-portal.md).
Tenant-wide admin consent to an app grants the app and the app's publisher access to your organization's data. Carefully review the permissions that the application is requesting before you grant consent. For more information on consenting to applications, see [Azure Active Directory consent framework](../develop/consent-framework.md).
-Granting tenant-wide admin consent may revoke any permissions which had previously been granted tenant-wide. Permissions which have previously been granted by users on their own behalf will not be affected.
+Granting tenant-wide admin consent may revoke any permissions which had previously been granted tenant-wide for that application. Permissions which have previously been granted by users on their own behalf will not be affected.
## Prerequisites
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 08/01/2022 Last updated : 09/06/2022
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## August 2022
+
+### Updated articles
+
+- [Hide an enterprise application](hide-application-from-user-portal.md)
+ ## July 2022 ### New articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Protect against consent phishing](protect-against-consent-phishing.md) - [Request to publish your application in the Azure AD application gallery](v2-howto-app-gallery-listing.md)-
-## May 2022
-
-### New articles
--- [My Apps portal overview](myapps-overview.md)-
-### Updated articles
--- [Tutorial: Configure Datawiza with Azure AD for secure hybrid access](datawiza-with-azure-ad.md)-- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)-- [Tutorial: Migrate Okta federation to Azure AD-managed authentication](migrate-okta-federation-to-azure-active-directory.md)-- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
The plug-in supports the following versions of Jira and Confluence:
* Jira Core and Software: 6.0 to 8.22.1 * Jira Service Desk: 3.0.0 to 4.22.1 * JIRA also supports 5.2. For more details, see [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md)
-* Confluence: 5.0 to 7.17.0
+* Confluence: 5.0 to 5.10
+* Confluence: 6.0.1 to 6.15.9
+* Confluence: 7.0.1 to 7.17.0
## Installation
No. The plug-in supports only on-premises versions of Jira and Confluence.
The plug-in supports these versions:
-* Jira Core and Software: 6.0 to 7.12
-* Jira Service Desk: 3.0.0 to 3.5.0
+* Jira Core and Software: 6.0 to 8.22.1
+* Jira Service Desk: 3.0.0 to 4.22.1
* JIRA also supports 5.2. For more details, see [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md) * Confluence: 5.0 to 5.10
-* Confluence: 6.0.1
-* Confluence: 6.1.1
-* Confluence: 6.2.1
-* Confluence: 6.3.4
-* Confluence: 6.4.0
-* Confluence: 6.5.0
-* Confluence: 6.6.2
-* Confluence: 6.7.0
-* Confluence: 6.8.1
-* Confluence: 6.9.0
-* Confluence: 6.10.0
-* Confluence: 6.11.0
-* Confluence: 6.12.0
+* Confluence: 6.0.1 to 6.15.9
+* Confluence: 7.0.1 to 7.17.0
### Is the plug-in free or paid?
active-directory Saml Toolkit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saml-toolkit-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Azure AD SAML Toolkit you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Azure AD SAML Toolkit you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
The following four attestation types are currently available to be configured in
* **ID token hint**: The sample app and tutorial use the ID token hint. When this option is configured, the relying party app will need to provide claims that should be included in the verifiable credential in the Request Service API issuance request. Where the relying party app gets the claims from is up to the app, but they can come from the current sign-in session, from backend CRM systems, or even from self-asserted user input. To configure this option, see this [how-to guide](how-to-use-quickstart.md)
-* **Verifiable credentials**: The end result of an issuance flow is to produce a verifiable credential but you may also ask the user to Present a verifiable credential in order to issue one. The rules definition is able to take specific claims from the presented verifiable credential and include those claims in the newly issued verifiable credential from your organization.
+* **Verifiable credentials**: The end result of an issuance flow is to produce a verifiable credential, but you may also ask the user to present a verifiable credential in order to issue one. The rules definition is able to take specific claims from the presented verifiable credential and include those claims in the newly issued verifiable credential from your organization. To configure this option, see this [how-to guide](how-to-use-quickstart-presentation.md)
* **Self-attested claims**: When this option is selected, the user can type information directly into Authenticator. At this time, strings are the only supported input for self-attested claims. To configure this option, see this [how-to guide](how-to-use-quickstart-selfissued.md)
active-directory How To Use Quickstart Presentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-presentation.md
+
+ Title: Issue verifiable credentials by presenting claims from an existing verifiable credential
+description: Learn how to use a quickstart to create custom credentials from claims in another verifiable credential
+documentationCenter: ''
+++++ Last updated : 07/06/2022++
+#Customer intent: As a verifiable credentials administrator, I want to create a verifiable credential for self-asserted claims scenario.
++
+# Issue verifiable credentials by presenting claims from an existing verifiable credential
++
+A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [presentations attestation](rules-and-display-definitions-model.md#verifiablepresentationattestation-type) type produces an issuance flow where the user presents another verifiable credential from the wallet during issuance, and claim values for the new credential are taken from the presented credential. An example is presenting your VerifiedEmployee credential to get a visitor's pass credential.
+
+## Create a custom credential with the presentations attestation type
+
+In the Azure portal, when you select **Add credential**, you get the option to launch two quickstarts. Select **custom credential**, and then select **Next**.
++
+On the **Create a new credential** page, enter the JSON code for the display and the rules definitions. In the **Credential name** box, give the credential a name. This name is just an internal name for the credential in the portal. The type name of the credential is defined in the `vc.type` property in the rules definition. To create the credential, select **Create**.
++
+## Sample JSON display definitions
+
+The JSON display definition is nearly the same, regardless of attestation type. You only have to adjust the labels according to the claims that your verifiable credential has. The expected JSON for the display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, add multiple entries with a comma as separator.
+
+```json
+{
+ "locale": "en-US",
+ "card": {
+ "backgroundColor": "#000000",
+ "description": "Use your verified credential to prove to anyone that you know all about verifiable credentials.",
+ "issuedBy": "Microsoft",
+ "textColor": "#ffffff",
+ "title": "Verified Credential Expert",
+ "logo": {
+ "description": "Verified Credential Expert Logo",
+ "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png"
+ }
+ },
+ "consent": {
+ "instructions": "Present your True Identity card to issue your VC",
+ "title": "Do you want to get your Verified Credential?"
+ },
+ "claims": [
+ {
+ "claim": "vc.credentialSubject.firstName",
+ "label": "First name",
+ "type": "String"
+ },
+ {
+ "claim": "vc.credentialSubject.lastName",
+ "label": "Last name",
+ "type": "String"
+ }
+ ]
+}
+```
+
+## Sample JSON rules definitions
+
+The JSON attestation definition should contain the **presentations** attestation type. The **inputClaim** in the mapping section defines which claims should be captured from the credential the user presents. They need to have the prefix `$.vc.credentialSubject`. The **outputClaim** defines the name of the claims in the credential being issued.
+
+The following rules definition will ask the user to present the **True Identity** credential during issuance. This credential comes from the [public demo application](https://woodgroveemployee.azurewebsites.net/).
+
+```json
+{
+ "attestations": {
+ "presentations": [
+ {
+ "mapping": [
+ {
+ "outputClaim": "firstName",
+ "required": true,
+ "inputClaim": "$.vc.credentialSubject.firstName",
+ "indexed": false
+ },
+ {
+ "outputClaim": "lastName",
+ "required": true,
+ "inputClaim": "$.vc.credentialSubject.lastName",
+ "indexed": false
+ }
+ ],
+ "required": false,
+ "credentialType": "TrueIdentity",
+ "contracts": [
+ "https://verifiedid.did.msidentity.com/v1.0/tenants/3c32ed40-8a10-465b-8ba4-0b1e86882668/verifiableCredentials/contracts/M2MzMmVkNDAtOGExMC00NjViLThiYTQtMGIxZTg2ODgyNjY4dHJ1ZSBpZGVudGl0eSBwcm9k/manifest"
+ ],
+ "trustedIssuers": [
+ "did:ion:EiDXOEH-YmaP2ZvxoCI-lA5zT1i5ogjgH6foIc2LFC83nQ:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljS2V5cyI6W3siaWQiOiJzaWdfODEwYmQ1Y2EiLCJwdWJsaWNLZXlKd2siOnsiY3J2Ijoic2VjcDI1NmsxIiwia3R5IjoiRUMiLCJ4IjoiRUZwd051UDMyMmJVM1dQMUR0Smd4NjdMMENVVjFNeE5peHFQVk1IMkw5USIsInkiOiJfZlNUYmlqSUpqcHNxTDE2Y0lFdnh4ZjNNYVlNWThNYnFFcTA2NnlWOWxzIn0sInB1cnBvc2VzIjpbImF1dGhlbnRpY2F0aW9uIiwiYXNzZXJ0aW9uTWV0aG9kIl0sInR5cGUiOiJFY2RzYVNlY3AyNTZrMVZlcmlmaWNhdGlvbktleTIwMTkifV0sInNlcnZpY2VzIjpbeyJpZCI6ImxpbmtlZGRvbWFpbnMiLCJzZXJ2aWNlRW5kcG9pbnQiOnsib3JpZ2lucyI6WyJodHRwczovL2RpZC53b29kZ3JvdmVkZW1vLmNvbS8iXX0sInR5cGUiOiJMaW5rZWREb21haW5zIn0seyJpZCI6Imh1YiIsInNlcnZpY2VFbmRwb2ludCI6eyJpbnN0YW5jZXMiOlsiaHR0cHM6Ly9iZXRhLmh1Yi5tc2lkZW50aXR5LmNvbS92MS4wLzNjMzJlZDQwLThhMTAtNDY1Yi04YmE0LTBiMWU4Njg4MjY2OCJdfSwidHlwZSI6IklkZW50aXR5SHViIn1dfX1dLCJ1cGRhdGVDb21taXRtZW50IjoiRWlCUlNqWlFUYjRzOXJzZnp0T2F3OWVpeDg3N1l5d2JYc2lnaFlMb2xTSV9KZyJ9LCJzdWZmaXhEYXRhIjp7ImRlbHRhSGFzaCI6IkVpQXZDTkJoODlYZTVkdUk1dE1wU2ZyZ0k2aVNMMmV2QS0tTmJfUElmdFhfOGciLCJyZWNvdmVyeUNvbW1pdG1lbnQiOiJFaUN2RFdOTFhzcE1sbGJfbTFJal96ZV9SaWNKOWdFLUM1b2dlN1NnZTc5cy1BIn19"
+ ]
+ }
+ ]
+ },
+ "validityInterval": 2592001,
+ "vc": {
+ "type": [
+ "VerifiedCredentialExpert"
+ ]
+ }
+}
+```
+
+| Property | Type | Description |
+| -- | -- | -- |
+|`credentialType`| string | The credential type being requested during issuance. `TrueIdentity` in the above example. |
+|`contracts` | string (array) | A list of manifest URL(s) of credentials being requested. In the above example, the manifest URL is the manifest for `True Identity`. |
+| `trustedIssuers` | string (array) | A list of allowed issuer DIDs for the credential being requested. In the above example, the DID is the DID of the `True Identity` issuer. |
+
+
+## Authenticator experience during issuance
+
+During issuance, Authenticator prompts the user to select a matching credential. If the user has multiple matching credentials in the wallet, the user must select which one to present.
++
+## Configure the samples to issue your custom credential
+
+To configure your sample code to issue and verify your custom credential, you need:
+
+- Your tenant's issuer decentralized identifier (DID)
+- The credential type
+- The manifest URL to your credential
+
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**. Then you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
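+For orientation, the following is a hedged sketch of roughly what that payload looks like; all values are placeholders, and the authoritative shape is defined in the [issuance request payload](issuance-request-api.md#issuance-request-payload) reference:
+
+```json
+{
+  "authority": "did:ion:<your-issuer-DID>",
+  "registration": {
+    "clientName": "<your-app-name>"
+  },
+  "type": "VerifiedCredentialExpert",
+  "manifest": "https://verifiedid.did.msidentity.com/v1.0/tenants/<tenant-id>/verifiableCredentials/contracts/<contract-id>/manifest"
+}
+```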
++
+## Next steps
+
+See the [Rules and display definitions reference](rules-and-display-definitions-model.md).
active-directory How To Use Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-## Prerequisites
-
-To use the Microsoft Entra Verified ID quickstart, you need only to complete the verifiable credentials onboarding process.
-
-## What is the quickstart?
-
-Entra Verified ID now come with quickstarts in the Azure portal for creating custom credentials. When you use the quickstart, you don't need to edit and upload rules and display files to Azure Storage. Instead, you enter all details in the Azure portal and create the credential on a single page.
-
->[!NOTE]
->When you work with custom credentials, you provide display definitions and rules definitions in JSON documents. These definitions are stored with the credential details.
+A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [idTokenHint attestation](rules-and-display-definitions-model.md#idtokenhintattestation-type) produces an issuance flow where the relying party application passes claim values in the [issuance request payload](issuance-request-api.md#issuance-request-payload). It is the relying party application's responsibility to ensure that required claim values are passed in the request. How the claim values are gathered is up to the application.
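+As an illustration, here's a trimmed sketch of an issuance request that passes claim values through the payload. The claim names are hypothetical and must match the claims mapping in your rules definition; see the [issuance request payload](issuance-request-api.md#issuance-request-payload) reference for the full shape:
+
+```json
+{
+  "authority": "did:ion:<your-issuer-DID>",
+  "type": "VerifiedCredentialExpert",
+  "manifest": "<manifest-URL-of-your-credential>",
+  "claims": {
+    "given_name": "Megan",
+    "family_name": "Bowen"
+  }
+}
+```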
## Create a custom credential
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
+
+ Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.
+++ Last updated : 08/29/2022++
+# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
+
+The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires IP address planning and could lead to address exhaustion and difficulties in scaling your clusters as your application demands grow.
+
+With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
+
+> [!NOTE]
+> - Azure CNI Overlay is currently available only in the West Central US region.
+> - Azure CNI Overlay does not currently support _v5 VM SKUs.
+
+## Overview of overlay networking
+
+In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
+
+A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.
++
+Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.
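+As an illustration, a minimal Service manifest of type `LoadBalancer` might look like the following sketch. The app name and ports are placeholders; the annotation shown keeps the load balancer internal to the VNet rather than exposing a public IP:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-app            # placeholder application name
+  annotations:
+    # Keep the load balancer internal to the VNet instead of public
+    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
+spec:
+  type: LoadBalancer
+  selector:
+    app: my-app
+  ports:
+  - port: 80              # port exposed on the VNet
+    targetPort: 8080      # port the pod's container listens on
+```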
+
+Outbound (egress) connectivity to the internet for overlay pods can be provided using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).
+
+Ingress connectivity to the cluster can be achieved using an ingress controller such as Nginx or [HTTP application routing](./http-application-routing.md).
+
+## Difference between Kubenet and Azure CNI Overlay
+
+Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations. The following table provides a detailed comparison between Kubenet and Azure CNI Overlay. If you don't want to assign VNet IP addresses to pods due to IP shortage, Azure CNI Overlay is the recommended solution.
+
+| Area | Azure CNI Overlay | Kubenet |
+| -- | :--: | -- |
+| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
+| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
+| Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency |
+| Kubernetes Network Policies | Azure Network Policies, Calico | Calico |
+| OS platforms supported | Linux only | Linux only |
+
+## IP address planning
+
+* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so ensure that you have a subnet big enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
+
+* **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion. For example, a `/16` pod CIDR contains 2^8 = 256 `/24` blocks, so it can support a cluster of up to 256 nodes.
+The following are additional factors to consider when planning the pod address space:
+ * Pod CIDR space must not overlap with the cluster subnet range.
+ * Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
+ * The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
+
+* **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range should also not overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks.
+
+* **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the kubernetes.default.svc.cluster.local address.
+
+## Maximum pods per node
+
+You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default for Azure CNI Overlay is 30. The maximum value that you can specify in Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies to the nodes in that node pool only.
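+For example, a sketch of setting a custom value when adding a node pool (the pool name and value are placeholders):
+
+```azurecli-interactive
+az aks nodepool add \
+    --cluster-name myOverlayCluster \
+    --resource-group myResourceGroup \
+    --nodepool-name userpool \
+    --max-pods 100
+```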
+
+## Choosing a network model to use
+
+Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
+
+Use overlay networking when:
+
+* You would like to scale to a large number of Pods but have limited IP address space in your VNet.
+* Most of the pod communication is within the cluster.
+* You don't need advanced AKS features, such as virtual nodes.
+
+Use the traditional VNet option when:
+
+* You have available IP address space.
+* Most of the pod communication is to resources outside of the cluster.
+* Resources outside the cluster need to reach pods directly.
+* You need AKS advanced features, such as virtual nodes.
+
+## Limitations with Azure CNI Overlay
+
+The overlay solution has the following limitations today:
+
+* Only available for Linux and not for Windows.
+* You can't deploy multiple overlay clusters in the same subnet.
+* Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay.
+* You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
+* v5 VM SKUs are not currently supported.
+
+## Steps to set up overlay clusters
++
+The following example walks through the steps to create a new virtual network with a subnet for the cluster nodes and an AKS cluster that uses Azure CNI Overlay. Be sure to replace the variables with your own values.
+
+First, opt into the feature by running the following command:
+
+```azurecli-interactive
+az feature register --namespace Microsoft.ContainerService --name AzureOverlayPreview
+```
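+It can take a few minutes for the registration to complete. As a sketch of the usual preview-feature flow, you can check the status and then refresh the resource provider registration:
+
+```azurecli-interactive
+# Check the registration status; it shows "Registered" when complete
+az feature show --namespace Microsoft.ContainerService --name AzureOverlayPreview
+
+# Refresh the resource provider registration once the feature is registered
+az provider register --namespace Microsoft.ContainerService
+```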
+
+Create a virtual network with a subnet for the cluster nodes.
+
+```azurecli-interactive
+resourceGroup="myResourceGroup"
+vnet="myVirtualNetwork"
+location="westcentralus"
+
+# Create the resource group
+az group create --name $resourceGroup --location $location
+
+# Create a VNet and a subnet for the cluster nodes
+az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefix 10.10.0.0/16 -o none
+```
+
+Create a cluster with Azure CNI Overlay. Use `--network-plugin-mode` to specify that this is an overlay cluster. If you don't specify a pod CIDR, AKS assigns a default space of 10.244.0.0/16.
+
+```azurecli-interactive
+clusterName="myOverlayCluster"
+subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
+
+az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet
+```
+
+## Frequently asked questions
+
+* *How do pods and cluster nodes communicate with each other?*
+
+ Pods and nodes talk to each other directly without any SNAT requirements.
++
+* *Can I configure the size of the address space assigned to each node?*
+
+ No, this is fixed at `/24` today and can't be changed.
++
+* *Can I add more private pod CIDRs to a cluster after the cluster has been created?*
+
+ No, a private pod CIDR can only be specified at the time of cluster creation.
++
+* *What are the max nodes and pods per cluster supported by Azure CNI Overlay?*
+
+ The max scale in terms of nodes and pods per cluster is the same as the max limits supported by AKS today.
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
+
+ Title: Abort an Azure Kubernetes Service (AKS) long running operation
+description: Learn how to terminate a long running operation on an Azure Kubernetes Service cluster at the node pool or cluster level.
++ Last updated : 09/06/2022+++
+# Terminate a long running operation on an Azure Kubernetes Service (AKS) cluster
+
+Sometimes deployment or other processes running within pods on nodes in a cluster can run longer than expected for various reasons. While it's important to allow those processes to gracefully terminate when they're no longer needed, there are circumstances where you need to release control of node pools and clusters with long-running operations by using an *abort* command.
+
+AKS now supports aborting a long running operation, allowing you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
+
+The abort operation supports the following scenarios:
+
+- If a long-running operation is stuck, suspected to be in a bad state, or failing, it can be aborted provided it's the last running operation on the managed cluster or agent pool.
+- An operation that was triggered in error can be aborted as long as the operation doesn't reach a terminal state first.
+
+## Before you begin
+
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, start with reviewing our guidance on how to design, secure, and operate an AKS cluster to support your production-ready workloads. For more information, see [AKS architecture guidance](/azure/architecture/reference-architectures/containers/aks-start-here).
+
+## Abort a long running operation
+
+### [Azure REST API](#tab/azure-rest)
+
+You can use the Azure REST API [Abort](/rest/api/aks/managed-clusters) operation to stop an operation against the Managed Cluster.
+
+The following example aborts the last running operation on a specified agent pool.
+
+```rest
+/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/agentPools/{agentPoolName}/abort
+```
+
+The following example aborts the last running operation on a specified managed cluster.
+
+```rest
+/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/abort
+```
+
+In the response, an HTTP status code of 204 is returned.
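+For example, a hedged sketch of invoking the abort operation with `az rest`; the HTTP method is POST, and the `api-version` value shown is an assumption to be checked against the current AKS REST API reference:
+
+```azurecli-interactive
+# Abort the last running operation on an agent pool (api-version is an assumption)
+az rest --method post \
+    --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/agentPools/{agentPoolName}/abort?api-version=2022-09-02-preview"
+```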
+
+### [Azure CLI](#tab/azure-cli)
+
+You can use the [az aks nodepool](/cli/azure/aks/nodepool) command with the `operation-abort` subcommand to abort an operation on a node pool, or the `az aks operation-abort` command to abort an operation on a managed cluster.
+
+The following example terminates an operation on a node pool, specified by its name, the cluster name, and the resource group that holds the cluster.
+
+```azurecli-interactive
+az aks nodepool operation-abort \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --nodepool-name myNodePool
+
+The following example terminates an operation against a specified managed cluster by its name and the resource group that holds the cluster.
+
+```azurecli-interactive
+az aks operation-abort --name myAKSCluster --resource-group myResourceGroup
+```
+
+In the response, an HTTP status code of 204 is returned.
+++
+The provisioning state on the managed cluster or agent pool should be **Canceled**. Use the REST API [Get Managed Clusters](/rest/api/aks/managed-clusters/get) or [Get Agent Pools](/rest/api/aks/agent-pools/get) to verify the operation. The provisioning state should update to **Canceled** within a few seconds of the abort request being accepted. The operation status of the last running operation ID on the managed cluster or agent pool, which can be retrieved by performing a GET operation against it, should show a status of **Canceling**.
+
+## Next steps
+
+Learn more about [Container insights](../azure-monitor/containers/container-insights-overview.md) to understand how it helps you monitor the performance and health of your Kubernetes cluster and container workloads.
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Last updated 05/16/2022
In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) is defined when you create an AKS cluster, which creates a [system node pool][use-system-pool]. To support applications that have different compute or storage demands, you can create additional *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and tunnelfront. User node pools serve the primary purpose of hosting your application pods. However, application pods can be scheduled on system node pools if you wish to only have one pool in your AKS cluster. User node pools are where you place your application-specific pods. For example, use these additional user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage. > [!NOTE]
-> This feature enables higher control over how to create and manage multiple node pools. As a result, separate commands are required for create/update/delete. Previously cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only option to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and require use of the `az aks nodepool` command set to execute operations on an individual node pool.
+> This feature enables higher control over how to create and manage multiple node pools. As a result, separate commands are required for create/update/delete. Previously cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and require use of the `az aks nodepool` command set to execute operations on an individual node pool.
This article shows you how to create and manage multiple node pools in an AKS cluster.
The following limitations apply when you create and manage AKS clusters that sup
## Create an AKS cluster
-> [!Important]
+> [!IMPORTANT]
> If you run a single system node pool for your AKS cluster in a production environment, we recommend you use at least three nodes for the node pool. To get started, create an AKS cluster with a single node pool. The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *eastus* region. An AKS cluster named *myAKSCluster* is then created using the [az aks create][az-aks-create] command.
The following example output shows that *mynodepool* has been successfully creat
> [!TIP] > If no *VmSize* is specified when you add a node pool, the default size is *Standard_D2s_v3* for Windows node pools and *Standard_DS2_v2* for Linux node pools. If no *OrchestratorVersion* is specified, it defaults to the same version as the control plane.
-### Add an ARM64 node pool (preview)
-
-The ARM64 processor provides low power compute for your Kubernetes workloads. To create an ARM64 node pool, you will need to choose an [ARM capable instance SKU][arm-sku-vm].
--
-#### Install the `aks-preview` Azure CLI
-
-You also need the *aks-preview* Azure CLI extension version 0.5.23 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+### Add an ARM64 node pool
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-#### Register the `AKSARM64Preview` preview feature
-
-To use the feature, you must also enable the `AKSARM64Preview` feature flag on your subscription.
-
-Register the `AKSARM64Preview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKSARM64Preview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKSARM64Preview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+The ARM64 processor provides low power compute for your Kubernetes workloads. To create an ARM64 node pool, you will need to choose a [Dpsv5][arm-sku-vm1], [Dplsv5][arm-sku-vm2], or [Epsv5][arm-sku-vm3] series Virtual Machine.
Use the `az aks nodepool add` command to add an ARM64 node pool.
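The following sketch adds an ARM64 node pool to an existing cluster; the pool name, node count, and `Standard_D2pds_v5` size are illustrative, and any ARM64-capable size from the series above works:

```azurecli-interactive
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --nodepool-name armpool \
    --node-count 3 \
    --node-vm-size Standard_D2pds_v5
```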
Associating a node pool with an existing capacity reservation group can be done
```azurecli-interactive az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG ```+ Associating a system node pool with an existing capacity reservation group can be done using [az aks create][az-aks-create] command. If the capacity reservation group specified doesn't exist, then a warning is issued and the cluster gets created without any capacity reservation group association. ```azurecli-interactive az aks create -g MyRG --cluster-name MyMC --capacityReservationGroup myCRG ```+ Deleting a node pool command will implicitly dissociate a node pool from any associated capacity reservation group, before that node pool is deleted. ```azurecli-interactive az aks nodepool delete -g MyRG --cluster-name MyMC -n myAP ```+ Deleting a cluster command implicitly dissociates all node pools in a cluster from their associated capacity reservation groups. ```azurecli-interactive
az group delete --name myResourceGroup2 --yes --no-wait
## Next steps
-Learn more about [system node pools][use-system-pool].
+* Learn more about [system node pools][use-system-pool].
-In this article, you learned how to create and manage multiple node pools in an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
+* In this article, you learned how to create and manage multiple node pools in an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
-To create and use Windows Server container node pools, see [Create a Windows Server container in AKS][aks-quickstart-windows-cli].
+* To create and use Windows Server container node pools, see [Create a Windows Server container in AKS][aks-quickstart-windows-cli].
-Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your AKS applications.
+* Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your AKS applications.
<!-- EXTERNAL LINKS -->
-[arm-vm-sku]: https://azure.microsoft.com/updates/public-preview-arm64based-azure-vms-can-deliver-up-to-50-better-priceperformance/
+ [kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-taint]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#taint
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[capacity-reservation-groups]:/azure/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set <!-- INTERNAL LINKS -->
+[arm-sku-vm1]: ../virtual-machines/dpsv5-dpdsv5-series.md
+[arm-sku-vm2]: ../virtual-machines/dplsv5-dpldsv5-series.md
+[arm-sku-vm3]: ../virtual-machines/epsv5-epdsv5-series.md
[aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-create]: /cli/azure/aks#az_aks_create
aks Virtual Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes.md
description: Overview of how using virtual node with Azure Kubernetes Services (AKS) Previously updated : 02/17/2021 Last updated : 09/06/2022
This article gives you an overview of the region availability and networking req
All regions, where ACI supports VNET SKUs, are supported for virtual nodes deployments. For more details, see [Resource availability for Azure Container Instances in Azure regions](../container-instances/container-instances-region-availability.md).
-For available CPU and Memory SKUs in each region, please check the [Azure Container Instances Resource availability for Azure Container Instances in Azure regions - Linux container groups](../container-instances/container-instances-region-availability.md#linux-container-groups)
+For available CPU and memory SKUs in each region, please check the [Azure Container Instances Resource availability for Azure Container Instances in Azure regions - Linux container groups](../container-instances/container-instances-region-availability.md#linux-container-groups)
## Network requirements
Virtual Nodes functionality is heavily dependent on ACI's feature set. In additi
* Using api server authorized ip ranges for AKS. * Volume mounting Azure Files share support [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-volume.md) * Using IPv6 is not supported.
+* Virtual nodes don't support the [Container hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) feature.
## Next steps
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Linux containers:
- Mapping `/mounts`, `mounts/foo/bar`, `/`, and `/mounts/foo.bar/` to custom-mounted storage is not supported (you can only use /mounts/pathname for mounting custom storage to your web app.) - Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation. - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts. -- Only Azure Files [SMB](../storage/files/files-smb-protocol.md) are supported. Azure Files [NFS](../storage/files/files-nfs-protocol.md) is not currently supported for Linux App Services. ::: zone-end
The following features are supported for Linux containers:
- FTP/FTPS access to mounted storage not supported (use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)). - Mapping `[C-Z]:\`, `[C-Z]:\home`, `/`, and `/home` to custom-mounted storage is not supported. - Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation.-- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
+- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
> [!NOTE] > Ensure ports 80 and 445 are open when using Azure Files with VNET integration.
The following features are supported for Linux containers:
- Don't map the custom storage mount to `/tmp` or its subdirectories as this may cause timeout during app startup. - Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation. - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
+- Only Azure Files [SMB](../storage/files/files-smb-protocol.md) shares are supported. Azure Files [NFS](../storage/files/files-nfs-protocol.md) is not currently supported for Linux App Services.
> [!NOTE] > When VNET integration is used, ensure the following ports are open:
To validate that the Azure Storage is mounted successfully for the app:
- [Configure a custom container](configure-custom-container.md?pivots=platform-linux). - [Video: How to mount Azure Storage as a local share](https://www.youtube.com/watch?v=OJkvpWYr57Y).
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
A benefit of using an App Service Environment instead of a multi-tenant service
App Service Environment v3 differs from earlier versions in the following ways: - There are no networking dependencies on the customer's virtual network. You can secure all inbound and outbound traffic and route outbound traffic as you want. -- You can deploy an App Service Environment v3 that's enabled for zone redundancy. You set zone redundancy only during creation and only in regions where all App Service Environment v3 dependencies are zone redundant.
+- You can deploy an App Service Environment v3 that's enabled for zone redundancy. You set zone redundancy only during creation and only in regions where all App Service Environment v3 dependencies are zone redundant. In this case, each App Service Plan on the App Service Environment will need to have a minimum of three instances so that they can be spread across zones. For more information, see [Migrate App Service Environment to availability zone support](../../availability-zones/migrate-app-service-environment.md).
- You can deploy an App Service Environment v3 on a dedicated host group. Host group deployments aren't zone redundant. - Scaling is much faster than with an App Service Environment v2. Although scaling still isn't immediate, as in the multi-tenant service, it's a lot faster. - Front-end scaling adjustments are no longer required. App Service Environment v3 front ends automatically scale to meet your needs and are deployed on better hosts.
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Gateway-required virtual network integration is built on top of point-to-site VP
### Access on-premises resources
-Apps can access on-premises resources by integrating with virtual networks that have site-to-site connections. If you use gateway-required virtual network integration, update your on-premises VPN gateway routes with your point-to-site address blocks. When the site-to-site VPN is first set up, the scripts used to configure it should set up routes properly. If you add the point-to-site addresses after you create your site-to-site VPN, you need to update the routes manually. Details on how to do that vary per gateway and aren't described here. BGP routes won't be propagated automatically.
+Apps can access on-premises resources by integrating with virtual networks that have site-to-site connections. If you use gateway-required virtual network integration, update your on-premises VPN gateway routes with your point-to-site address blocks. When the site-to-site VPN is first set up, the scripts used to configure it should set up routes properly. If you add the point-to-site addresses after you create your site-to-site VPN, you need to update the routes manually. Details on how to do that vary per gateway and aren't described here.
+
+BGP routes from on-premises won't be propagated automatically into App Service. You need to manually propagate them on the point-to-site configuration using the steps in this document [Advertise custom routes for P2S VPN clients](../vpn-gateway/vpn-gateway-p2s-advertise-custom-routes.md).
No extra configuration is required for the regional virtual network integration feature to reach through your virtual network to on-premises resources. You simply need to connect your virtual network to on-premises resources by using ExpressRoute or a site-to-site VPN.
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
When you're using a restricted Key Vault, use the following steps to configure A
> [!TIP] > Steps 1-3 are not required if your Key Vault has a Private Endpoint enabled. The application gateway can access the Key Vault using the private IP address.
+> [!NOTE]
+> If you use Private Endpoints to access Key Vault, you must link the privatelink.vaultcore.azure.net private DNS zone, containing the corresponding record for the referenced Key Vault, to the virtual network containing Application Gateway. Custom DNS servers may continue to be used on the virtual network instead of the Azure DNS provided resolvers, but the private DNS zone needs to remain linked to the virtual network as well.
+ 1. In the Azure portal, in your Key Vault, select **Networking**. 1. On the **Firewalls and virtual networks** tab, select **Selected networks**. 1. For **Virtual networks**, select **+ Add existing virtual networks**, and then add the virtual network and subnet for your Application Gateway instance. During the process, also configure the `Microsoft.KeyVault` service endpoint by selecting its checkbox.
applied-ai-services Compose Custom Models V2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v2-1.md
Use the programming language code of your choice to create a composed model that
* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/createComposedModel.js).
-* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_compose_model.py)
+* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
applied-ai-services Compose Custom Models V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v3.md
Training with labels leads to better performance in some scenarios. To train wit
|Language |Method| |--|--|
-|**C#**|[**StartBuildModel**](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentmodeladministrationclient.startbuildmodel?view=azure-dotnet#azure-ai-formrecognizer-documentanalysis-documentmodeladministrationclient-startbuildmodel&preserve-view=true)|
-|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.beginbuildmodel?view=azure-java-preview&preserve-view=true)|
+|**C#**|**StartBuildModel**|
+|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.documentanalysis.administration.documentmodeladministrationclient.beginbuildmodel)|
|**JavaScript** | [**beginBuildModel**](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-beginbuildmodel&preserve-view=true)| | **Python** | [**begin_build_model**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.documentmodeladministrationclient?view=azure-python#azure-ai-formrecognizer-aio-documentmodeladministrationclient-begin-build-model&preserve-view=true)
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
The following resources are supported by Form Recognizer **v3.0** :
| Feature | Resources | |-|-| |_**Custom model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[Java SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[JavaScript SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[Python SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
-| _**Composed model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentmodeladministrationclient.startcreatecomposedmodel?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.begincreatecomposedmodel?view=azure-java-stable&preserve-view=true)</li><li>[JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
+| _**Composed model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</li><li>[JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
The following resources are supported by Form Recognizer v2.1:
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
This table provides links to the build mode programming language SDK references
|Programming language | SDK reference | Code sample | |||| | C#/.NET | [DocumentBuildMode Struct](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentbuildmode?view=azure-dotnet&preserve-view=true#properties) | [Sample_BuildCustomModelAsync.cs](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/tests/samples/Sample_BuildCustomModelAsync.cs)
-|Java| [DocumentBuildMode Class](/java/api/com.azure.ai.formrecognizer.administration.models.documentbuildmode?view=azure-java-preview&preserve-view=true#fields) | [BuildModel.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/BuildModel.java)|
+|Java| DocumentBuildMode Class | [BuildModel.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/BuildModel.java)|
|JavaScript | [DocumentBuildMode type](/javascript/api/@azure/ai-form-recognizer/documentbuildmode?view=azure-node-latest&preserve-view=true)| [buildModel.js](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js)|
-|Python | [DocumentBuildMode Enum](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentbuildmode?view=azure-python&preserve-view=true#fields)| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
+|Python | DocumentBuildMode Enum| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
## Compare model features
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
Form Recognizer SDK supports the following languages and platforms:
| Programming language/SDK | Package| Azure SDK client-library |Supported API version| Platform support | |:-:|:-|:-| :-|--|
-|[C#/4.0.0-beta.5](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-csharp#set-up)| [NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5) | [Azure SDK for .NET](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0-beta.5/https://docsupdatetracker.net/index.html)|[2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) |[Windows, macOS, Linux, Docker](/dotnet.microsoft.com/download)|
+|[C#/4.0.0-beta.5](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-csharp#set-up)| [NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5) | [Azure SDK for .NET](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0-beta.5/https://docsupdatetracker.net/index.html)|[2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
|[Java/4.0.0-beta.6](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-java#set-up) |[Maven](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar) | [Azure SDK for Java](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0-beta.6/index.html)|[2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer)|[Windows, macOS, Linux](/java/openjdk/install)|
|[JavaScript/4.0.0-beta.6](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-javascript#set-up)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)| [Azure SDK for JavaScript](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0-beta.6/index.html) | [2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
|[Python/3.2.0b6](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-python#set-up) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)| [Azure SDK for Python](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0b6/index.html)| [2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)|
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
The `BuildModelOperation` and `CopyModelOperation` now correctly populate the `P
#### Feature updates
-* The `get_words()` method has been added to the `DocumentLine` model. *See* our [How to get words contained in a Document line](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_get_words_on_document_line.py) sample on GitHub.
+* The `get_words()` method has been added to the `DocumentLine` model. *See* our [How to get words contained in a Document line](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_get_words_on_document_line.py) sample on GitHub.
#### Breaking changes
applied-ai-services Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/cost-management.md
Previously updated : 07/06/2021 Last updated : 09/06/2022
Azure Metrics Advisor monitors the performance of your organization's growth eng
## Key points about cost management and pricing -- You will be charged for the number of **distinct time series** analyzed during a month. If one data point is analyzed for a time series, it will be calculated as well.-- The number of distinct time series is **irrespective** of its granularity. An hourly time series and a daily time series will be charged at the same price.
+- You will be charged for the number of **distinct [time series](glossary.md#time-series)** analyzed during a month. If one data point is analyzed for a time series, it will be calculated as well.
+- The number of distinct time series is **irrespective** of its [granularity](glossary.md#granularity). An hourly time series and a daily time series will be charged at the same price.
+- The number of distinct time series is **highly related** to the data schema (choice of timestamp, dimension, and measure) defined during onboarding. Don't choose **timestamp or any IDs** as a [dimension](glossary.md#dimension); doing so causes a dimension explosion and introduces unexpected cost.
- You will be charged based on the tiered pricing structure listed below. The first day of next month will initialize a new statistic window. - The more time series you onboard to the service for analysis, the lower price you pay for each time series.
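To make the counting rule concrete, consider a hypothetical metric with a `city` dimension of 30 values and a `category` dimension of 10 values, reporting 2 measures: it produces 30 × 10 × 2 = 600 distinct time series in the month, and the bill is the same whether those series are hourly or daily.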
Based on the tiered pricing model described above, 66,000 analyzed time series p
- [Configurations for different data sources](data-feeds-from-different-sources.md) - [Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md) -
automanage Arm Deploy Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/arm-deploy-arc.md
This ARM template will create a configuration profile assignment for your specif
The `configurationProfile` value can be one of the following values: * "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction" * "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesDevTest"
+* "/subscriptions/[sub ID]/resourceGroups/resourceGroupName/providers/Microsoft.Automanage/configurationProfiles/customProfileName (for custom profiles)
Follow these steps to deploy the ARM template: 1. Save this ARM template as `azuredeploy.json`.
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites -- An Azure Automation account. For instructions, see [Create an Azure Automation account](./quickstarts/create-account-portal.md).
+- An Azure Automation account. For instructions, see [Create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).
- The user-assigned managed identity and the target Azure resources that your runbook manages using that identity can be in different Azure subscriptions.
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md
This article covers authentication scenarios supported by Azure Automation and t
When you start Azure Automation for the first time, you must create at least one Automation account. Automation accounts allow you to isolate your Automation resources, runbooks, assets, and configurations from the resources of other accounts. You can use Automation accounts to separate resources into separate logical environments or delegated responsibilities. For example, you might use one account for development, another for production, and another for your on-premises environment. Or you might dedicate an Automation account to manage operating system updates across all of your machines with [Update Management](update-management/overview.md).
-An Azure Automation account is different from your Microsoft account or accounts created in your Azure subscription. For an introduction to creating an Automation account, see [Create an Automation account](./quickstarts/create-account-portal.md).
+An Azure Automation account is different from your Microsoft account or accounts created in your Azure subscription. For an introduction to creating an Automation account, see [Create an Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).
## Automation resources
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/enable-managed-identity-for-automation.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites -- An Azure Automation account. For instructions, see [Create an Azure Automation account](./quickstarts/create-account-portal.md).
+- An Azure Automation account. For instructions, see [Create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).
- The latest version of Az PowerShell modules Az.Accounts, Az.Resources, Az.Automation, Az.KeyVault.
automation Manage Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-office-365.md
Use of Office 365 within Azure Automation requires Microsoft Azure Active Direct
## Create an Azure Automation account
-To complete the steps in this article, you need an account in Azure Automation. See [Create an Azure Automation account](./quickstarts/create-account-portal.md).
+To complete the steps in this article, you need an account in Azure Automation. See [Create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).
## Add MSOnline and MSOnlineExt as assets
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
You can review the prices associated with Azure Automation on the [pricing](http
## Next steps > [!div class="nextstepaction"]
-> [Create an Automation account](./quickstarts/create-account-portal.md)
+> [Create an Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal)
automation Enable Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/enable-managed-identity.md
This Quickstart shows you how to enable managed identities for an Azure Automati
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An Azure Automation account. For instructions, see [Create an Automation account](create-account-portal.md).
+- An Azure Automation account. For instructions, see [Create an Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).
- A user-assigned managed identity. For instructions, see [Create a user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity). The user-assigned managed identity and the target Azure resources that your runbook manages using that identity must be in the same Azure subscription.
availability-zones Migrate App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service-environment.md
This guide describes how to migrate an App Service Environment from non-availabi
Azure App Service Environment can be deployed across [Availability Zones (AZ)](../availability-zones/az-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
-When you configure to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across all three zones in the selected region. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones.
+When you configure the App Service Environment to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across all three zones in the selected region. This means that the minimum App Service plan instance count will always be three. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones.
## Prerequisites - You configure availability zones when you create your App Service Environment.
- - All App Service plans created in that App Service Environment will automatically be zone redundant.
+ - All App Service plans created in that App Service Environment will need a minimum of three instances, and those instances will automatically be zone redundant.
- You can only specify availability zones when creating a **new** App Service Environment. A pre-existing App Service Environment can't be converted to use availability zones. - Availability zones are only supported in a [subset of regions](../app-service/environment/overview.md#regions).
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 8/17/2022 Last updated : 9/6/2022
In addition to the generally available data collection listed above, Azure Monit
| Azure service | Current support | Other extensions installed | More information |
| :--- | :--- | :--- | :--- |
-| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Sign-up link](https://aka.ms/AMAgent) |
+| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](/azure/defender-for-cloud/release-notes#auto-deployment-of-azure-monitor-agent-preview) |
| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows DNS logs: Preview</li><li>Linux Syslog CEF: Preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Windows DNS logs](https://aka.ms/AMAgent)</li><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/AMAgent)</li><li>No sign-up needed for Windows Forwarding Event (WEF) and Windows Security Events</li></ul> |
| [Change Tracking](../../automation/change-tracking/overview.md) (part of Defender) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/AMAgent) |
| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
The following tables list the operating systems that Azure Monitor Agent and the
| Windows Server 2008 R2 SP1 | X | X | X |
| Windows Server 2008 R2 | | | X |
| Windows Server 2008 SP2 | | X | |
-| Windows 11 client OS | X<sup>2</sup> | | |
+| Windows 11 Client Enterprise and Pro | X<sup>2</sup>, <sup>3</sup> | | |
| Windows 10 1803 (RS4) and higher | X<sup>2</sup> | | |
| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only<sup>1</sup>) | X | X | X |
| Windows 8 Enterprise and Pro<br>(Server scenarios only<sup>1</sup>) | | X | |
| Windows 7 SP1<br>(Server scenarios only<sup>1</sup>) | | X | |
| Azure Stack HCI | | X | |
-<sup>1</sup> Running the OS on server hardware, for example, machines that are always connected, always turned on, and not running other workloads (PC, office, browser)<br>
-<sup>2</sup> Using the Azure Monitor agent [client installer (Public preview)](./azure-monitor-agent-windows-client.md)
+<sup>1</sup> Running the OS on server hardware, for example, machines that are always connected, always turned on, and not running other workloads (PC, office, browser).<br>
+<sup>2</sup> Using the Azure Monitor agent [client installer (Public preview)](./azure-monitor-agent-windows-client.md).<br>
+<sup>3</sup> Also supported on Arm64-based machines.
#### Linux

| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>|
|:---|:---:|:---:|:---:|
+| AlmaLinux 8.5 | X<sup>3</sup> | | |
| AlmaLinux 8 | X | X | |
| Amazon Linux 2017.09 | | X | |
| Amazon Linux 2 | | X | |
| CentOS Linux 8 | X | X | |
-| CentOS Linux 7 | X | X | X |
+| CentOS Linux 7 | X<sup>3</sup> | X | X |
| CentOS Linux 6 | | X | |
| CentOS Linux 6.5+ | | X | X |
-| Debian 11 | X | | |
+| CBL-Mariner 2.0 | X | | |
+| Debian 11 | X<sup>3</sup> | | |
| Debian 10 | X | X | |
| Debian 9 | X | X | X |
| Debian 8 | | X | |
The following tables list the operating systems that Azure Monitor Agent and the
| Oracle Linux 7 | X | X | X |
| Oracle Linux 6 | | X | |
| Oracle Linux 6.4+ | | X | X |
+| Red Hat Enterprise Linux Server 8.6 | X<sup>3</sup> | | |
| Red Hat Enterprise Linux Server 8 | X | X | |
| Red Hat Enterprise Linux Server 7 | X | X | X |
| Red Hat Enterprise Linux Server 6 | | X | |
| Red Hat Enterprise Linux Server 6.7+ | | X | X |
| Rocky Linux 8 | X | X | |
+| SUSE Linux Enterprise Server 15 SP4 | X<sup>3</sup> | | |
| SUSE Linux Enterprise Server 15 SP2 | X | | |
| SUSE Linux Enterprise Server 15 SP1 | X | X | |
| SUSE Linux Enterprise Server 15 | X | X | |
| SUSE Linux Enterprise Server 12 | X | X | X |
| Ubuntu 22.04 LTS | X | | |
-| Ubuntu 20.04 LTS | X | X | X |
-| Ubuntu 18.04 LTS | X | X | X |
+| Ubuntu 20.04 LTS | X<sup>3</sup> | X | X |
+| Ubuntu 18.04 LTS | X<sup>3</sup> | X | X |
| Ubuntu 16.04 LTS | X | X | X |
| Ubuntu 14.04 LTS | | X | X |

<sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br>
<sup>2</sup> Requires Python 2 to be installed on the machine and aliased to the `python` command.<br>
+<sup>3</sup> Also supported on Arm64-based machines.
## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
See [this article](alerts-types.md) for detailed information about each alert ty
|[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.| ## Out-of-the-box alert rules (preview)
-If you don't have alert rules defined for the selected resource, you can [enable recommended out-of-the-box alert rules in the Azure portal](alerts-page.md#alert-rule-recommendations-preview).
+If you don't have alert rules defined for the selected resource, you can [enable recommended out-of-the-box alert rules in the Azure portal](alerts-manage-alert-rules.md#enable-recommended-alert-rules-in-the-azure-portal-preview).
> [!NOTE] > The alert rule recommendations feature is currently in preview and is only enabled for VMs.
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
The `useTrackEvent` Hook is used to track any custom event that an application m
- Application Insights instance (which can be obtained from the `useAppInsightsContext` Hook). - Name for the event. - Event data object that encapsulates the changes that has to be tracked.-- skipFirstRun (optional) flag to skip calling the `trackEvent` call on initialization. Default value is set to `true`.
+- skipFirstRun (optional) flag to skip calling the `trackEvent` call on initialization. Default value is set to `true` to mimic more closely the way the non-hook version works. With `useEffect` hooks, the effect is triggered on each value update _including_ the initial setting of the value, which would start tracking too early and record potentially unwanted events.
```javascript import React, { useState, useEffect } from "react";
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
If you'd like to follow along with the guidance in this article, certain pre-req
## Deploy Azure resources
-Please follow the guidance to deploy the sample application from its [GitHub repository.](https://github.com/solliancenet/appinsights-azurecafe).
+Please follow the guidance to deploy the sample application from its [GitHub repository](https://github.com/gitopsbook/sample-app-deployment).
In order to provide globally unique names to some resources, a 5 character suffix has been assigned. Please make note of this suffix for use later on in this article.
azure-monitor Web App Extension Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/web-app-extension-release-notes.md
Learn more about [Azure Web App Extension for Application Insights](azure-web-ap
## Release notes
+### 2.8.44
+
+- .NET/.NET Core: Upgraded the [ApplicationInsights .NET SDK to 2.20.1-redfield](https://github.com/microsoft/ApplicationInsights-dotnet/tree/autoinstrumentation/2.20.1).
+
+### 2.8.43
+
+- Separated the .NET/.NET Core, Java, and Node.js packages into different App Service Windows site extensions.
+ ### 2.8.42 - JAVA extension: Upgraded to [Java Agent 3.2.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0) from 2.5.1.
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
Supported data types:
## Using Private links Customer-managed storage accounts are used to ingest Custom logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
+> [!IMPORTANT]
+> Collection of IIS logs is not supported with private link.
+ ### Using a customer-managed storage account over a Private Link #### Workspace requirements When connecting to Azure Monitor over a private link, Log Analytics agents are only able to send logs to workspaces accessible over a private link. This requirement means you should:
azure-monitor Profiler Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-azure-functions.md
In this article, you'll use the Azure portal to:
> [!NOTE] > You can enable the Application Insights Profiler for Azure Functions apps on the **App Service** plan.
-## Pre-requisites
+## Prerequisites
- [An Azure Functions app](../../azure-functions/functions-create-function-app-portal.md). Verify your Functions app is on the **App Service** plan.
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
When you configure Bring Your Own Storage (BYOS), artifacts are uploaded into a
For example, if your Application Insights resource is in West US 2, your Storage Account must be also in West US 2.
-* Grant the `Storage Blob Data Contributor` role to the Azure AD application named `Diagnostic Services Trusted Storage Access` via the [Access Control (IAM)](/role-based-access-control/role-assignments-portal.md) page in your storage account.
+* Grant the `Storage Blob Data Contributor` role to the Azure AD application named `Diagnostic Services Trusted Storage Access` via the [Access Control (IAM)](../../role-based-access-control/role-assignments-portal.md) page in your storage account.
* If Private Link is enabled, allow connection to our Trusted Microsoft Service from your virtual network. ## Enable BYOS
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/learn-bicep.md
Title: Learn modules for Bicep description: Provides an overview of the Learn modules for Bicep. Previously updated : 12/03/2021 Last updated : 09/05/2022 # Learn modules for Bicep Ready to see how Bicep can help simplify and accelerate your deployments to Azure? Check out the many hands-on courses. > [!TIP]
-> Want to learn Bicep live from subject matter experts? [Learn Live with our experts every Tuesday (Pacific time) beginning March 8, 2022.](/events/learntv/learnlive-iac-and-bicep/)
+> Want to learn Bicep live from subject matter experts? [Follow on-demand Learn Live sessions with our experts.](/events/learntv/learnlive-iac-and-bicep/)
## Get started
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
description: Describes how to define parameters in a Bicep file.
Previously updated : 04/20/2022 Last updated : 09/06/2022 # Parameters in Bicep
Resource Manager resolves parameter values before starting the deployment operat
Each parameter must be set to one of the [data types](data-types.md).
+You are limited to 256 parameters. For more information, see [Template limits](../templates/best-practices.md#template-limits).
+
+For parameter best practices, see [Parameters](./best-practices.md#parameters).
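For a quick illustration, a parameter declaration with decorators and a default value looks like this minimal sketch (the names here are arbitrary):

```bicep
@description('Name of the storage account. Must be globally unique.')
@minLength(3)
@maxLength(24)
param storageAccountName string

@description('Deployment location; defaults to the resource group location.')
param location string = resourceGroup().location
```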
+ ### Training resources If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/learn/modules/build-reusable-bicep-templates-parameters).
You might use this decorator to track information about the parameter that doesn
source: 'database' contact: 'Web team' })
-param settings object
+param settings object
``` ## Use parameter
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Previously updated : 01/07/2022 Last updated : 09/06/2022
The deployment script resource is only available in the regions where Azure Cont
> [!NOTE] > Retry logic for Azure sign in is now built in to the wrapper script. If you grant permissions in the same template as your deployment scripts, the deployment script service retries sign in for 10 minutes with 10-second interval until the managed identity role assignment is replicated.
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Deployment script](../bicep/deployment-script-bicep.md).
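For context, a minimal `Microsoft.Resources/deploymentScripts` resource looks roughly like the following sketch; the resource name, script content, and PowerShell version are arbitrary examples (JSON doesn't allow comments, so treat every value here as a placeholder):

```json
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "sayHello",
  "location": "[resourceGroup().location]",
  "kind": "AzurePowerShell",
  "properties": {
    "azPowerShellVersion": "8.3",
    "scriptContent": "Write-Output 'Hello from a deployment script.'",
    "retentionInterval": "P1D"
  }
}
```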
+ ### Training resources To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts).
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameters.md
Title: Parameters in templates description: Describes how to define parameters in an Azure Resource Manager template (ARM template). Previously updated : 01/19/2022 Last updated : 09/06/2022 # Parameters in ARM templates
Each parameter must be set to one of the [data types](data-types.md).
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [parameters](../bicep/parameters.md).
+You are limited to 256 parameters. For more information, see [Template limits](./best-practices.md#template-limits).
+
+For parameter best practices, see [Parameters](./best-practices.md#parameters).
+ ## Minimal declaration At a minimum, every parameter needs a name and type.
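For example, a minimal declaration (the parameter name is arbitrary) looks like:

```json
"parameters": {
  "demoParam": {
    "type": "string"
  }
}
```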
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
The trial account option is not available on the Azure Government cloud. For oth
## A paid (unlimited) account
-You can later create a paid account where you're not limited by the quota. Two types of paid accounts are available to you: Azure Resource Manager (ARM) (currently in preview) and classic (generally available). The main difference between the two is account management platform. While classic accounts are built on the API Management, ARM-based accounts management is built on Azure, which enables apply access control to all services with role-based access control (Azure RBAC) natively.
+You can later create a paid account where you're not limited by the quota. Two types of paid accounts are available to you: Azure Resource Manager (ARM) and classic. The main difference between the two is account management platform. While classic accounts are built on the API Management, ARM-based accounts management is built on Azure, which enables apply access control to all services with role-based access control (Azure RBAC) natively.
With the paid option, you pay for indexed minutes, for more information, see [Azure Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
After you complete these steps, you'll have a production-ready environment for c
- [VMware HCX Connector](install-vmware-hcx.md) has been installed. -- If you plan to use VMware HCX Enterprise, make sure you've enabled the [VMware HCX Enterprise](https://cloud.vmware.com/community/2019/08/08/introducing-hcx-enterprise/) add-on through a [support request](https://portal.azure.com/#create/Microsoft.Support). It's a free 12-month trial in Azure VMware Solution.
+- If you plan to use VMware HCX Enterprise, make sure you've enabled the [VMware HCX Enterprise](https://cloud.vmware.com/community/2019/08/08/introducing-hcx-enterprise/) add-on through a [support request](https://portal.azure.com/#create/Microsoft.Support). VMware HCX Enterprise edition is available and supported on Azure VMware Solution, at no additional cost.
- If you plan to [enable VMware HCX MON](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html), make sure you have:
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
With this capability, you have the following features:
- DDoS Security protection against network traffic in and out of the Internet. - HCX Migration support over the Public Internet.
+>[!IMPORTANT]
+>You can configure up to 64 total Public IP addresses across these network blocks. If you want to configure more than 64 Public IP addresses, please submit a support ticket stating how many addresses you need.
+ ## Prerequisites - Azure VMware Solution private cloud - DNS Server configured on the NSX-T Datacenter
A No NAT rule can be used to exclude certain matches from performing Network Add
### Inbound Internet Access for VMs A Destination Network Translation Service (DNAT) is used to expose a VM on a specific Public IP address and/or a specific port. This service provides inbound internet access to your workload VMs.
-**Log in VMware NSX-T**
+**Log in to VMware NSX-T**
1. From your Azure VMware Solution private cloud, select **VMware credentials**. 2. Locate your NSX-T URL and credentials. 3. Log in to **VMware NSX-T**. **Configure the DNAT rule**
- 1. Name the rule.
- 1. Select **DNAT** as the action.
- 1. Enter the reserved Public IP in the destination match. This IP is from the range of Public IPs reserved from the Azure VMware Solution Portal.
- 1. Enter the VM Private IP in the translated IP.
- 1. Select **SAVE**.
- 1. Optionally, configure the Translated Port or source IP for more specific matches.
+1. Name the rule.
+1. Select **DNAT** as the action.
+1. Enter the reserved Public IP in the destination match. This IP is from the range of Public IPs reserved from the Azure VMware Solution Portal.
+1. Enter the VM Private IP in the translated IP.
+1. Select **SAVE**.
+1. Optionally, configure the Translated Port or source IP for more specific matches.
The VM is now exposed to the internet on the specific Public IP and/or specific ports.
azure-vmware Enable Vmware Cds With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-vmware-cds-with-azure.md
The following diagram shows typical architecture for Cloud Director services wit
VMware Cloud Director supports multi-tenancy by using organizations. A single organization can have multiple organization virtual data centers (VDC). Each organization's VDC can have its own dedicated Tier-1 router (Edge Gateway), which is further connected with the provider's managed shared Tier-0 router.
+[Learn more about the CDS on Azure VMware Solution reference architecture](https://cloudsolutions.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/cloud-director-service-reference-architecture-for-azure-vmware-solution.pdf)
+ ## Connect tenants and their organization virtual datacenters to Azure vNet based resources To provide access to vNET based Azure resources, each tenant can have their own dedicated Azure vNET with Azure VPN gateway. A site-to-site VPN between the customer organization VDC and the Azure vNET is established. To achieve this connectivity, the provider will provide a public IP to the organization VDC. The organization VDC's administrator can configure IPSEC VPN connectivity from the Cloud Director service portal.
For more information about VMware Cloud Director Availability, see [VMware Cloud
**Answer**: This offering is supported in all Azure regions where Azure VMware Solution is available except for Brazil South and South Africa. Ensure that the region you wish to connect to VMware Cloud Director service is within a 150-milliseconds round trip time for latency with VMware Cloud Director service.
+**Question**: How do I configure VMware Cloud Director service on Microsoft Azure VMware Solution?
+
+**Answer**: [Learn how to configure CDS on Azure VMware Solution](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-602DE9DD-E7F6-4114-BD89-347F9720A831.html)
+ ## Next steps
-[VMware Cloud Director service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/https://docsupdatetracker.net/index.html)
+
+[VMware Cloud Director Service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/index.html)
+[Migration to Azure VMware Solution with Cloud Director service](https://cloudsolutions.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/migration-to-azure-vmware-solution-with-cloud-director-service.pdf)
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
+## Server capabilities
+
+The following table shows supported server-side capabilities available in Azure Communication Services:
+
+|Capability | Supported |
+|---|---|
+| [Manage ACS call recording](../../voice-video-calling/call-recording.md) | ❌ |
+| [Azure Metrics](../../metrics.md) | ✔️ |
+| [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
+| [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ |
++
+## Teams capabilities
+
+The following table shows supported Teams capabilities:
+
+|Capability | Supported |
+|---|---|
+| [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
+| [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
+| [Teams meeting attendance report](https://support.microsoft.com/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ |
## Next steps
communication-services Teams Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md
Teams meeting organizers can also configure the Teams meeting options to adjust
| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | If set to "Everyone", Teams external users can bypass lobby. Otherwise, Teams external users have to wait in the lobby until an authenticated user admits them.| ✔️ |
| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable |
| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
-| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Not applicable to external users | ✔️ |
+| [Choose co-organizers](https://support.microsoft.com/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Not applicable to external users | ✔️ |
| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ |
-|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
-|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If external user is attendee, then this option controls whether external user can send local audio |✔️|
-|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If external user is attendee, then this option controls whether external user can send local video |✔️|
+|[Manage what attendees see](https://support.microsoft.com/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
+|[Allow mic for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If external user is attendee, then this option controls whether external user can send local audio |✔️|
+|[Allow camera for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If external user is attendee, then this option controls whether external user can send local video |✔️|
|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️|
|Allow meeting chat|If enabled, external users can use the chat associated with the Teams meeting.|✔️|
|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, external users can use reactions in the Teams meeting |❌|
|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable|
-|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
+|[Provide CART Captions](https://support.microsoft.com/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
## Next steps
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features that are currently available in
| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ |
| | Set / update scaling mode | ✔️ |
| | Render remote video stream | ✔️ |
+
+Support for streaming, timeouts, platforms, and browsers is shared with [Communication Services calling SDK overview](./../voice-video-calling/calling-sdk-features.md).
+
+## Detailed Teams capabilities
+
+The following list presents the set of Teams capabilities, which are currently available in the Azure Communication Services Calling SDK for JavaScript.
+
+|Group of features | Teams capability | JS |
+|---|---|---|
+| Core Capabilities | Placing a call honors Teams external access configuration | ✔️ |
+| | Placing a call honors Teams guest access configuration | ✔️ |
+| | Joining Teams meeting honors configuration for automatic people admit in the Lobby | ✔️ |
+| | Actions available in the Teams meeting are defined by assigned role | ✔️ |
+| Mid call control | Receive forwarded call | ✔️ |
+| | Receive simultaneous ringing | ✔️ |
+| | Play music on hold | ❌ |
+| | Park a call | ❌ |
+| | Transfer a call to a person | ✔️ |
+| | Transfer a call to a call | ✔️ |
+| | Transfer a call to Voicemail | ❌ |
+| | Merge ongoing calls | ❌ |
+| | Place a call on behalf of the user | ❌ |
+| | Start call recording | ❌ |
+| | Start call transcription | ❌ |
+| | Start live captions | ❌ |
+| | Receive information of call being recorded | ✔️ |
+| PSTN | Make an Emergency call | ✔️ |
+| | Place a call honors location-based routing | ❌ |
+| | Support for survivable branch appliance | ❌ |
+| Phone system | Receive a call from Teams auto attendant | ✔️ |
+| | Transfer a call to Teams auto attendant | ✔️ |
+| | Receive a call from Teams call queue (only conference mode) | ✔️ |
+| | Transfer a call from Teams call queue (only conference mode) | ✔️ |
+| Compliance | Place a call honors information barriers | ✔️ |
+| | Support for compliance recording | ✔️ |
+| Meeting | [Include participant in Teams meeting attendance report](https://support.microsoft.com/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌ |
++
+## Teams meeting options
+
+Teams meeting organizers can configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for Teams users:
+
+|Option name|Description| Supported |
+|---|---|---|
+| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams user can bypass the lobby, if Teams meeting organizer set value to include "people in my organization" for single tenant meetings and "people in trusted organizations" for cross-tenant meetings. Otherwise, Teams users have to wait in the lobby until an authenticated user admits them.| ✔️ |
+| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable |
+| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
+| [Choose co-organizers](https://support.microsoft.com/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ |
+| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ |
+|[Manage what attendees see](https://support.microsoft.com/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
+|[Allow mic for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local audio |✔️|
+|[Allow camera for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️|
+|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️|
+|Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️|
+|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services doesn't support reactions. |❌|
+|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable|
+|[Provide CART Captions](https://support.microsoft.com/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
| | See together mode video stream | ❌ |
| | See Large gallery view | ❌ |
| | Receive video stream from Teams media bot | ❌ |
communication-services Azure Ad Api Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/azure-ad-api-permissions.md
None.
- Application admin - Cloud application admin
-Find more details in [Azure Active Directory documentation](/azure/active-directory/roles/permissions-reference.md).
+Find more details in [Azure Active Directory documentation](/azure/active-directory/roles/permissions-reference).
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
Learn more:
[Included CA Certificate List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT) >[!IMPORTANT]
->Azure Communication Services direct routing supports only TLS 1.2 (or a later version), make sure that the cipher suites you're using on an SBC are supported by Azure Front Door. Microsoft 365 and Azure Front Door have slight differences in cipher suite support. For details, see [What are the current cipher suites supported by Azure Front Door?](/azure/frontdoor/concept-end-to-end-tls#supported-cipher-suites).
+>Azure Communication Services direct routing supports only TLS 1.2 (or a later version). To avoid any service impact, ensure that your SBCs are configured to support TLS 1.2 and can connect using one of the following cipher suites:
+>- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ECDHE-RSA-AES256-GCM-SHA384)
+>- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ECDHE-RSA-AES128-GCM-SHA256)
+>- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (ECDHE-RSA-AES256-SHA384)
+>- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (ECDHE-RSA-AES128-SHA256)
SBC pairing works on the Communication Services resource level. It means you can pair many SBCs to a single Communication Services resource. Still, you cannot pair a single SBC to more than one Communication Services resource. Unique SBC FQDNs are required for pairing to different resources.
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
Title: Connect to IBM MQ server
description: Connect to an MQ server on premises or in Azure from a workflow using Azure Logic Apps. ms.suite: integration--++ Last updated 03/14/2022
connectors Connectors Run 3270 Apps Ibm Mainframe Create Api 3270 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-run-3270-apps-ibm-mainframe-create-api-3270.md
Title: Connect to 3270 apps on IBM mainframes
description: Integrate and automate 3270 screen-driven apps with Azure by using Azure Logic Apps and IBM 3270 connector ms.suite: integration--++ Last updated 02/03/2021
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
The following settings are available when configuring ingress:
| Property | Description | Values | Required |
|---|---|---|---|
-| `external` | When enabled, the environment is assigned a public IP and fully qualified domain name (FQDN) for external ingress and an internal IP and FQDN for internal ingress. When disabled, only an internal IP/FQDN is created. |`true` for external visibility, `false` for internal visibility (default) | Yes |
+| `external` | When `true`, your ingress IP address and app fully qualified domain name (FQDN) are visible either externally from the internet or internally from a VNET, depending on whether the app environment is configured with an external or internal endpoint. When `false`, the app is visible only from within the app environment. | `true` for visibility from the internet or a VNET, depending on the app environment's endpoint; `false` for visibility within the app environment only (default) | Yes |
| `targetPort` | The port your container listens to for incoming requests. | Set this value to the port number that your container uses. Your application ingress endpoint is always exposed on port `443`. | Yes | | `transport` | You can use either HTTP/1.1 or HTTP/2, or you can set it to automatically detect the transport type. | `http` for HTTP/1, `http2` for HTTP/2, `auto` to automatically detect the transport type (default) | No | | `allowInsecure` | Allows insecure traffic to your container app. | `false` (default), `true`<br><br>If set to `true`, HTTP requests to port 80 aren't automatically redirected to port 443 using HTTPS, allowing insecure connections. | No |
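Pulling these properties together, the ingress section of a container app resource (`properties.configuration.ingress` in an ARM template) might look like the following sketch; the port is an arbitrary example:

```json
"ingress": {
  "external": true,
  "targetPort": 8080,
  "transport": "auto",
  "allowInsecure": false
}
```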
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
az cloud set -n AzureCloud
az login az account set --subscription <your subscription ID> ```
-7. Enable the RBAC capability on your existing API for MongoDB database account.
-Get your existing capabilities. Capabilities are account features. Some are optional and some can't be changed.
-```powershell
-az cosmosdb show -n <account_name> -g <azure_resource_group>
-```
-You should see a capability section similar to this
-```powershell
-"capabilities": [
- {
- "name": "EnableMongo"
- },
- {
- "name": "DisableRateLimitingResponses"
- }
-```
-Copy the existing capabilities and add the RBAC capability (EnableMongoRoleBasedAccessControl) to the list:
-```powershell
-az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongoRoleBasedAccessControl, EnableMongo, DisableRateLimitingResponses
-```
-If you prefer a new database account instead, create a new database account with the RBAC capability set to true. Your subscription must be allow-listed in order to create an account with the EnableMongoRoleBasedAccessControl capability.
+7. Enable the RBAC capability on your existing API for MongoDB database account. You'll need to [add the capability](how-to-configure-capabilities.md) "EnableMongoRoleBasedAccessControl" to your database account.
+If you prefer a new database account instead, create a new database account with the RBAC capability set to true.
```powershell
az cosmosdb create -n <account_name> -g <azure_resource_group> --kind MongoDB --capabilities EnableMongoRoleBasedAccessControl
```
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-java.md
Title: Use the Table API and Java to build an app - Azure Cosmos DB description: This quickstart shows how to use the Azure Cosmos DB Table API to create an application with the Azure portal and Java--++ ms.devlang: java
cosmos-db Create Table Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-nodejs.md
Title: 'Quickstart: Table API with Node.js - Azure Cosmos DB' description: This quickstart shows how to use the Azure Cosmos DB Table API to create an application with the Azure portal and Node.js--++ ms.devlang: javascript
cosmos-db Dotnet Standard Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/dotnet-standard-sdk.md
Title: Azure Cosmos DB Table API .NET Standard SDK & Resources description: Learn all about the Azure Cosmos DB Table API and the .NET Standard SDK including release dates, retirement dates, and changes made between each version.--++ ms.devlang: csharp
cosmos-db How To Use C Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-c-plus.md
ms.devlang: cpp Last updated 10/07/2019--++ # How to use Azure Table storage and Azure Cosmos DB Table API with C++ [!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-java.md
ms.devlang: Java Last updated 12/10/2020--++
cosmos-db How To Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-nodejs.md
ms.devlang: javascript Last updated 07/23/2020--++ # How to use Azure Table storage or the Azure Cosmos DB Table API from Node.js
cosmos-db How To Use Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-php.md
Title: Use Azure Storage Table service or Azure Cosmos DB Table API from PHP description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API from PHP.--++ ms.devlang: php
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-python.md
Title: 'Quickstart: Table API with Python - Azure Cosmos DB' description: This quickstart shows how to access the Azure Cosmos DB Table API from a Python application using the Azure Data Tables SDK-+ ms.devlang: python Last updated 03/23/2021-+
cosmos-db How To Use Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-ruby.md
ms.devlang: ruby Last updated 07/23/2020--++ # How to use Azure Table Storage and the Azure Cosmos DB Table API with Ruby
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/introduction.md
Title: Introduction to the Azure Cosmos DB Table API description: Learn how you can use Azure Cosmos DB to store and query massive volumes of key-value data with low latency by using the Azure Tables API.--++
cosmos-db Table Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-support.md
Last updated 11/03/2021--++ ms.devlang: cpp, csharp, java, javascript, php, python, ruby
cosmos-db Tutorial Global Distribution Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-global-distribution-table.md
Title: Azure Cosmos DB global distribution tutorial for Table API description: Learn how global distribution works in Azure Cosmos DB Table API accounts and how to configure the preferred list of regions--++
cosmos-db Tutorial Query Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query-table.md
Title: How to query table data in Azure Cosmos DB? description: Learn how to query data stored in the Azure Cosmos DB Table API account by using OData filters and LINQ queries--++
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 08/19/2022 Last updated : 09/05/2022 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
To write data into a lookup field using alternate key columns, follow this guidance:
:::image type="content" source="./media/connector-dynamics-crm-office-365/connector-dynamics-lookup-field-column-mapping-alternate-key-2.png" alt-text="Screenshot shows mapping columns to lookup fields via alternate keys step 2.":::

> [!Note]
-> Currently this is only supported in mapping data flows.
+> Currently this is only supported when you use inline mode in the sink transformation of mapping data flows.
## Mapping data flow properties
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
The following table shows features and corresponding SKUs.
| Active traffic monitoring & always on detection | Yes | Yes |
| Automatic attack mitigation | Yes | Yes |
| Availability guarantee | Not available | Yes |
+| Cost protection | Not available | Yes |
| Application based mitigation policies | Not available | Yes |
| Metrics & alerts | Not available | Yes |
| Mitigation reports | Not available | Yes |
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
This procedure describes how to add a Defender for IoT plan for OT networks to an Azure subscription.
1. Select the **I accept the terms** option, and then select **Save**.
-Your OT networks plan will be shown under the associated subscription in the **Plans** grid.
+Your OT networks plan will be shown under the associated subscription in the **Plans** grid. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md).
+ ## Add a Defender for IoT plan for Enterprise IoT networks to an Azure subscription
For more information, see:
- [Welcome to Microsoft Defender for IoT for organizations](overview.md) - [Microsoft Defender for IoT architecture](architecture.md)
+- [Move existing sensors to a different subscription](how-to-manage-subscriptions.md#move-existing-sensors-to-a-different-subscription)
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
This article describes how to create and manage users of sensors and the on-premises management console.
Features are also available to track user activity and enable Active Directory sign in.
-By default, each sensor and on-premises management console is installed with a *cyberx, support* and *cyberx_host* user. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for security analysts and read-only users.
+By default, each sensor and on-premises management console is installed with the *cyberx* and *support* users. Sensors are also installed with the *cyberx_host* user. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for security analysts and read-only users.
## Role-based permissions

The following user roles are available:
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
When onboarding or editing your Defender for IoT plan, you'll need to know how many devices you want to monitor.
[!INCLUDE [devices-inventoried](includes/devices-inventoried.md)]
-**To calculate the number of devices you need to monitor**:
+#### Calculate the number of devices you need to monitor
We recommend making an initial estimate of your committed devices when onboarding your Defender for IoT plan.
-1. Collect the total number of devices in your network.
+**For OT devices**:
-1. Remove any devices that are *not* considered as committed devices by Defender for IoT.
+1. Collect the total number of devices at each site in your network, and add them together.
- If you are also a Defender for Endpoint customer, you can identify devices managed by Defender for Endpoint in the Defender for Endpoint **Device inventory** page. In the **Endpoints** tab, filter for devices by **Onboarding status**. For more information, see [Defender for Endpoint Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery).
+1. Remove any devices that are [*not* considered as committed devices by Defender for IoT](#defender-for-iot-committed-devices).
After you've set up your network sensor and have full visibility into all devices, you can [Edit a plan](#edit-a-plan-for-ot-networks) to update the number of committed devices as needed.
+**For Enterprise IoT devices**:
+
+In the **Device inventory** page in the **Defender for Endpoint** portal:
+
+1. Add the total number of discovered **network devices** to the total number of discovered **IoT devices**.
+
+ For example:
+
+ :::image type="content" source="media/how-to-manage-subscriptions/eiot-calculate-devices.png" alt-text="Screenshot of network device and IoT devices in the device inventory in Microsoft Defender for Endpoint.":::
+
+ For more information, see the [Defender for Endpoint Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery).
+
+1. Remove any devices that are [*not* considered as committed devices by Defender for IoT](#defender-for-iot-committed-devices).
+
+1. Round up your total to a multiple of 100.
+
+    For example: In the device inventory, you have 473 network devices and 1206 IoT devices. Added together, the total is 1679 devices, which rounds up to a multiple of 100 as 1700. Use 1700 as the estimated number of committed devices.
+
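+For reference, the estimate above can be expressed as a calculation like the following (an illustrative C# sketch; the device counts are the example numbers from this section, not values from your environment):
+
+```csharp
+using System;
+
+int networkDevices = 473;   // discovered network devices in the Defender for Endpoint inventory
+int iotDevices = 1206;      // discovered IoT devices
+
+int total = networkDevices + iotDevices;                // 1679
+int committed = (int)Math.Ceiling(total / 100.0) * 100; // round up to a multiple of 100 -> 1700
+
+Console.WriteLine($"Estimated committed devices: {committed}");
+```
+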
+To edit the number of committed Enterprise IoT devices after you've onboarded a plan, you will need to cancel the plan and onboard a new plan in Defender for Endpoint. For more information, see the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+ ## Onboard a Defender for IoT plan for OT networks
+
+This procedure describes how to add a Defender for IoT plan for OT networks to an Azure subscription.
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
# Azure Digital Twins APIs and SDKs
-This article gives an overview of the Azure Digital Twins APIs available, and the methods for interacting with them. You can either use the REST APIs directly with their associated Swaggers (through a tool like [Postman](how-to-use-postman.md)), or through an SDK.
+This article gives an overview of the Azure Digital Twins APIs available, and the methods for interacting with them. You can either use the REST APIs directly with their associated Swaggers (through a tool like [Postman](how-to-use-postman-with-digital-twins.md)), or through an SDK.
Azure Digital Twins comes equipped with control plane APIs, data plane APIs, and SDKs for managing your instance and its elements.
* The control plane APIs are [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md) APIs, and cover resource management operations like creating and deleting your instance.
The available helper classes are:
The following list provides more detail and general guidelines for using the APIs and SDKs.
-* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [Make API requests with Postman](how-to-use-postman.md).
+* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [Call the Azure Digital Twins APIs with Postman](how-to-use-postman-with-digital-twins.md).
* To use the SDK, instantiate the `DigitalTwinsClient` class. The constructor requires credentials that can be obtained with different kinds of authentication methods in the `Azure.Identity` package. For more on `Azure.Identity`, see its [namespace documentation](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true).
* You may find the `InteractiveBrowserCredential` useful while getting started, but there are several other options, including credentials for [managed identity](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true), which you'll likely use to authenticate [Azure functions set up with MSI](../app-service/overview-managed-identity.md?tabs=dotnet) against Azure Digital Twins. For more about `InteractiveBrowserCredential`, see its [class documentation](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true).
* Requests to the Azure Digital Twins APIs require a user or service principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance exists. To prevent malicious scanning of Azure Digital Twins endpoints, requests with access tokens from outside the originating tenant will be returned a "404 Sub-Domain not found" error message. This error will be returned even if the user or service principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration. For information on how to achieve access across multiple tenants, see [Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
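For example, a minimal client instantiation might look like the following (a sketch assuming the .NET SDK packages `Azure.DigitalTwins.Core` and `Azure.Identity`; the tenant ID, client ID, and host name are placeholders):

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// InteractiveBrowserCredential is convenient while getting started;
// swap in another Azure.Identity credential type for production scenarios.
var credential = new InteractiveBrowserCredential("<your-tenant-ID>", "<your-client-ID>");

var client = new DigitalTwinsClient(new Uri("https://<your-instance-host-name>"), credential);
```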
From here, you can view the metrics for your instance and create custom views.
## Next steps
-See how to make direct requests to the APIs using Postman:
-* [Make API requests with Postman](how-to-use-postman.md)
+See how to make direct requests to the Azure Digital Twins APIs using Postman:
+* [Call the Azure Digital Twins APIs with Postman](how-to-use-postman-with-digital-twins.md)
Or, practice using the .NET SDK by creating a client app with this tutorial:
* [Code a client app](tutorial-code.md)
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md
The following list describes the levels at which you can scope access to Azure D
### Troubleshoot permissions
-If a user attempts to perform an action not allowed by their role, they may receive an error from the service request reading `403 (Forbidden)`. For more information and troubleshooting steps, see [Troubleshoot failed service request: Error 403 (Forbidden)](troubleshoot-error-403.md).
+If a user attempts to perform an action not allowed by their role, they may receive an error from the service request reading `403 (Forbidden)`. For more information and troubleshooting steps, see [Troubleshoot Azure Digital Twins failed service request: Error 403 (Forbidden)](troubleshoot-error-403-digital-twins.md).
## Managed identity for accessing other resources
digital-twins How To Send Twin To Twin Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-send-twin-to-twin-events.md
Now, your function can receive events through your Event Grid topic. The data fl
The last step is to verify that the flow is working, by updating a twin and checking that related twins are updated according to the logic in your Azure function.
-To kick off the process, update the twin that's the source of the event flow. You can use the [Azure CLI](/cli/azure/dt/twin#az-dt-twin-update), [Azure Digital Twins SDK](how-to-manage-twin.md#update-a-digital-twin), or [Azure Digital Twins REST APIs](how-to-use-postman.md?tabs=data-plane) to make the update.
+To kick off the process, update the twin that's the source of the event flow. You can use the [Azure CLI](/cli/azure/dt/twin#az-dt-twin-update), [Azure Digital Twins SDK](how-to-manage-twin.md#update-a-digital-twin), or [Azure Digital Twins REST APIs](how-to-use-postman-with-digital-twins.md?tabs=data-plane) to make the update.
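For example, an update made with the CLI might look like the following (a sketch; the instance name, twin ID, and `Temperature` property are hypothetical):

```azurecli-interactive
az dt twin update --dt-name <your-instance-name> --twin-id <source-twin-ID> --json-patch '{"op": "replace", "path": "/Temperature", "value": 50}'
```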
Next, query your Azure Digital Twins instance for the related twin. You can use the [Azure CLI](/cli/azure/dt/twin#az-dt-twin-query), or the [Azure Digital Twins REST APIs and SDK](how-to-query-graph.md#run-queries-with-the-api). Verify that the twin received the data and updated as expected.
digital-twins How To Use Postman With Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman-with-digital-twins.md
+
+ Title: Call the Azure Digital Twins APIs with Postman
+
+description: Learn how to authorize, configure, and use Postman to call the Azure Digital Twins APIs.
+++++ Last updated : 09/06/2022+++
+# How to send requests to the Azure Digital Twins APIs using Postman
+
+[Postman](https://www.getpostman.com/) is a REST testing tool that provides key HTTP request functionalities in a desktop and plugin-based GUI. You can use it to craft HTTP requests and submit them to the [Azure Digital Twins REST APIs](concepts-apis-sdks.md). This article describes how to configure the [Postman REST client](https://www.getpostman.com/) to interact with the Azure Digital Twins APIs. This information is specific to the Azure Digital Twins service.
+
+This article contains information about the following steps:
+
+1. Use the Azure CLI to [get a bearer token](#get-bearer-token) that you will use to make API requests in Postman.
+1. Set up a [Postman collection](#about-postman-collections) and configure the Postman REST client to use your bearer token to authenticate. When setting up the collection, you can choose either of these options:
+ 1. [Import a pre-built collection of Azure Digital Twins API requests](#import-collection-of-azure-digital-twins-apis).
+ 1. [Create your own collection from scratch](#create-your-own-collection).
+1. [Add requests to your configured collection](#add-an-individual-request) and send them to the Azure Digital Twins APIs.
+
+Azure Digital Twins has two sets of APIs that you can work with: data plane and control plane. For more about the difference between these API sets, see [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md). This article contains information for both API sets.
+
+## Prerequisites
+
+To proceed with using Postman to access the Azure Digital Twins APIs, you need to set up an Azure Digital Twins instance and download Postman. The rest of this section walks you through these steps.
+
+### Set up Azure Digital Twins instance
++
+### Download Postman
+
+Next, [download the desktop version of the Postman client](https://www.getpostman.com/apps).
+
+## Get bearer token
+
+Now that you've set up Postman and your Azure Digital Twins instance, you'll need to get a bearer token that Postman requests can use to authorize against the Azure Digital Twins APIs.
+
+There are several possible ways to obtain this token. This article uses the [Azure CLI](/cli/azure/install-azure-cli) to sign into your Azure account and obtain a token that way.
+
+If you have the [Azure CLI installed locally](/cli/azure/install-azure-cli), you can start a command prompt on your machine to run the following commands.
+Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window in your browser and run the commands there.
+
+1. First, make sure you're logged into Azure with the appropriate credentials, by running this command:
+
+ ```azurecli-interactive
+ az login
+ ```
+
+2. Next, use the [az account get-access-token](/cli/azure/account#az-account-get-access-token) command to get a bearer token with access to the Azure Digital Twins service. In this command, you'll pass in the resource ID for the Azure Digital Twins service endpoint, in order to get an access token that can access Azure Digital Twins resources.
+
+ The required context for the token depends on which set of APIs you're using, so use the tabs below to select between [data plane](concepts-apis-sdks.md#overview-data-plane-apis) and [control plane](concepts-apis-sdks.md#overview-control-plane-apis) APIs.
+
+ # [Data plane](#tab/data-plane)
+
+ To get a token to use with the data plane APIs, use the following static value for the token context: `0b07f429-9f4b-4714-9392-cc5e8e80c8b0`. This is the resource ID for the Azure Digital Twins service endpoint.
+
+ ```azurecli-interactive
+ az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0
+ ```
+
+ # [Control plane](#tab/control-plane)
+
+ To get a token to use with the control plane APIs, use the following value for the token context: `https://management.azure.com/`.
+
+ ```azurecli-interactive
+ az account get-access-token --resource https://management.azure.com/
+ ```
+
+
+ >[!NOTE]
+ > If you need to access your Azure Digital Twins instance using a service principal or user account that belongs to a different Azure Active Directory tenant from the instance, you'll need to request a token from the Azure Digital Twins instance's "home" tenant. For more information on this process, see [Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
+
+3. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/console-access-token.png" alt-text="Screenshot of the console showing the result of the az account get-access-token command. The accessToken field and its sample value is highlighted.":::
+
+>[!TIP]
+>This token is valid for at least five minutes and a maximum of 60 minutes. If you run out of time allotted for the current token, you can repeat the steps in this section to get a new one.
+
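+If you only want the raw token string, you can filter the CLI output directly. The following is a minimal sketch using the data plane resource ID shown above; swap in the control plane resource URI if you're targeting those APIs:
+
+```azurecli-interactive
+az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0 --query accessToken --output tsv
+```
+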
+Next, you'll set up Postman to use this token to make API requests to Azure Digital Twins.
+
+## About Postman collections
+
+Requests in Postman are saved in *collections* (groups of requests). When you create a collection to group your requests, you can apply common settings to many requests at once. This can greatly simplify authorization if you plan to create more than one request against the Azure Digital Twins APIs, as you only have to configure these details once for the entire collection.
+
+When working with Azure Digital Twins, you can get started by importing a [pre-built collection of all the Azure Digital Twins requests](#import-collection-of-azure-digital-twins-apis). You may want to do this if you're exploring the APIs and want to quickly set up a project with request examples.
+
+Alternatively, you can also choose to start from scratch, by [creating your own empty collection](#create-your-own-collection) and populating it with individual requests that call only the APIs you need.
+
+The following sections describe both of these processes. The rest of the article takes place in your local Postman application, so go ahead and open the Postman application on your computer now.
+
+## Import collection of Azure Digital Twins APIs
+
+A quick way to get started with Azure Digital Twins in Postman is to import a pre-built collection of requests for the Azure Digital Twins APIs.
+
+### Download the collection file
+
+The first step in importing the API set is to download a collection. Choose the tab below for your choice of data plane or control plane to see the pre-built collection options.
+
+# [Data plane](#tab/data-plane)
+
+There are currently two Azure Digital Twins data plane collections available for you to choose from:
+* [Azure Digital Twins Postman Collection](https://github.com/microsoft/azure-digital-twins-postman-samples): This collection provides a simple getting started experience for Azure Digital Twins in Postman. The requests include sample data, so you can run them with minimal edits required. Choose this collection if you want a digestible set of key API requests containing sample information.
+ - To find the collection, navigate to the repo link and open the file named *postman_collection.json*.
+* [Azure Digital Twins data plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins): This repo contains complete Swagger files for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request, but with empty data bodies rather than sample data. Choose this collection if you want to have access to every API call and fill in all the data yourself. You should also use this collection if you need a specific version of the APIs (like one that supports a preview feature).
+ - To find the collection, navigate to the repo link and choose the folder for your preferred spec version. From here, open the file called *digitaltwins.json*.
+
+# [Control plane](#tab/control-plane)
+
+The collection currently available for control plane is the [Azure Digital Twins control plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins). This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request.
+
+To find the collection, navigate to the repo link and choose the folder for your preferred spec version. From here, open the file called *digitaltwins.json*.
+++
+Here's how to download your chosen collection to your machine so that you can import it into Postman.
+1. Use the links above to open the collection file in GitHub in your browser.
+1. Select the **Raw** button to open the raw text of the file.
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There is a highlight around the Raw button." lightbox="media/how-to-use-postman-with-digital-twins/swagger-raw.png":::
+1. Copy the text from the window, and paste it into a new file on your machine.
+1. Save the file with a .json extension (the file name can be whatever you want, as long as you can remember it to find the file later).
+
+### Import the collection
+
+Next, import the collection into Postman.
+
+1. From the main Postman window, select the **Import** button.
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-import-collection.png" alt-text="Screenshot of a newly opened Postman window. The 'Import' button is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-import-collection.png":::
+
+1. In the **Import** window that follows, select **Upload Files** and navigate to the collection file on your machine that you created earlier. Select Open.
+1. Select the **Import** button to confirm.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-import-collection-2.png" alt-text="Screenshot of Postman's 'Import' window, showing the file to import as a collection and the Import button.":::
+
+The newly imported collection can now be seen from your main Postman view, in the Collections tab.
++
+Next, continue on to the next section to add a bearer token to the collection for authorization and connect it to your Azure Digital Twins instance.
+
+### Configure authorization
+
+Next, edit the collection you've created to configure some access details. Highlight the collection you've created and select the **View more actions** icon to pull up a menu. Select **Edit**.
++
+Follow these steps to add a bearer token to the collection for authorization. This is where you'll use the token value you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
+
+1. In the edit dialog for your collection, make sure you're on the **Authorization** tab.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-authorization-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Authorization' tab." lightbox="media/how-to-use-postman-with-digital-twins/postman-authorization-imported.png":::
+
+1. Set the Type to **OAuth 2.0**, paste your access token into the Access Token box, and select **Save**.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-paste-token-imported.png" alt-text="Screenshot of Postman edit dialog for the imported collection, on the 'Authorization' tab. Type is 'OAuth 2.0', and Access Token box is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-paste-token-imported.png":::
+
+### Additional configuration
+
+# [Data plane](#tab/data-plane)
+
+If you're making a [data plane](concepts-apis-sdks.md#overview-data-plane-apis) collection, help the collection connect easily to your Azure Digital Twins resources by setting some variables provided with the collections. When many requests in a collection require the same value (like the host name of your Azure Digital Twins instance), you can store the value in a variable that applies to every request in the collection. Both of the downloadable collections for Azure Digital Twins come with pre-created variables that you can set at the collection level.
+
+1. Still in the edit dialog for your collection, move to the **Variables** tab.
+
+1. Use your instance's **host name** from the [Prerequisites section](#prerequisites) to set the CURRENT VALUE field of the relevant variable. Select **Save**.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-variables-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Variables' tab. The 'CURRENT VALUE' field is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-variables-imported.png":::
+
+1. If your collection has additional variables, fill and save those values as well.
+
+When you're finished with the above steps, you're done configuring the collection. You can close the editing tab for the collection if you want.
+
+# [Control plane](#tab/control-plane)
+
+If you're making a [control plane](concepts-apis-sdks.md#overview-control-plane-apis) collection, you've done everything that you need to configure the collection. You can close the editing tab for the collection if you want, and proceed to the next section.
+
+
+
+### Explore requests
+
+Next, explore the requests inside the Azure Digital Twins API collection. You can expand the collection to view the pre-created requests (sorted by category of operation).
+
+Different requests require different information about your instance and its data. To see all the information required to craft a particular request, look up the request details in the [Azure Digital Twins REST API reference documentation](/rest/api/azure-digitaltwins/).
+
+You can edit the details of a request in the Postman collection using these steps:
+
+1. Select it from the list to pull up its editable details.
+
+1. Fill in values for the variables listed in the **Params** tab under **Path Variables**.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-request-details-imported.png" alt-text="Screenshot of Postman. The collection is expanded to show a request. The 'Path Variables' section is highlighted in the request details." lightbox="media/how-to-use-postman-with-digital-twins/postman-request-details-imported.png":::
+
+1. Provide any necessary **Headers** or **Body** details in the respective tabs.
+
+Once all the required details are provided, you can run the request with the **Send** button.
+
+You can also add your own requests to the collection, using the process described in the [Add an individual request](#add-an-individual-request) section below.
+
+## Create your own collection
+
+Instead of importing the existing collection of all Azure Digital Twins APIs, you can also create your own collection from scratch. You can then populate it with individual requests using the [Azure Digital Twins REST API reference documentation](/rest/api/azure-digitaltwins/).
+
+### Create a Postman collection
+
+1. To create a collection, select the **New** button in the main Postman window.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-new.png" alt-text="Screenshot of the main Postman window. The 'New' button is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-new.png":::
+
+ Choose a type of **Collection**.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-new-collection-2.png" alt-text="Screenshot of the 'Create New' dialog in Postman. The 'Collection' option is highlighted.":::
+
+1. This will open a tab for filling the details of the new collection. Select the Edit icon next to the collection's default name (**New Collection**) to replace it with your own choice of name.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-new-collection-3.png" alt-text="Screenshot of the new collection's edit dialog in Postman. The Edit icon next to the name 'New Collection' is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-new-collection-3.png":::
+
+Next, continue on to the next section to add a bearer token to the collection for authorization.
+
+### Configure authorization
+
+Follow these steps to add a bearer token to the collection for authorization. This is where you'll use the token value you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
+
+1. Still in the edit dialog for your new collection, move to the **Authorization** tab.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-authorization-custom.png" alt-text="Screenshot of the new collection's edit dialog in Postman, showing the 'Authorization' tab." lightbox="media/how-to-use-postman-with-digital-twins/postman-authorization-custom.png":::
+
+1. Set the Type to **OAuth 2.0**, paste your access token into the Access Token box, and select **Save**.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-paste-token-custom.png" alt-text="Screenshot of the Postman edit dialog for the new collection, on the 'Authorization' tab. Type is 'OAuth 2.0', and Access Token box is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-paste-token-custom.png":::
+
+When you're finished with the above steps, you're done configuring the collection. You can close the edit tab for the new collection if you want.
+
+The new collection can be seen from your main Postman view, in the Collections tab.
++
+## Add an individual request
+
+Now that your collection is set up, you can add your own requests to the Azure Digital Twins APIs.
+
+1. To create a request, use the **New** button again.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-new.png" alt-text="Screenshot of the main Postman window. The 'New' button is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-new.png":::
+
+ Choose a type of **Request**.
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-new-request-2.png" alt-text="Screenshot of the 'Create New' dialog in Postman. The 'Request' option is highlighted.":::
+
+1. This action opens the SAVE REQUEST window, where you can enter a name for your request, give it an optional description, and choose the collection that it's a part of. Fill in the details and save the request to the collection you created earlier.
+
+ :::row:::
+ :::column:::
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-save-request.png" alt-text="Screenshot of 'Save request' window in Postman showing the fields described. The 'Save to Azure Digital Twins collection' button is highlighted.":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::row-end:::
+
+You can now view your request under the collection, and select it to pull up its editable details.
++
+### Set request details
+
+To make a Postman request to one of the Azure Digital Twins APIs, you'll need the URL of the API and information about what details it requires. You can find this information in the [Azure Digital Twins REST API reference documentation](/rest/api/azure-digitaltwins/).
+
+To proceed with an example query, this article will use the Query API (and its [reference documentation](/rest/api/digital-twins/dataplane/query/querytwins)) to query for all the digital twins in an instance.
+
+1. Get the request URL and type from the reference documentation. For the Query API, this is currently *POST* `https://digitaltwins-host-name/query?api-version=2020-10-31`.
+1. In Postman, set the type for the request and enter the request URL, filling in placeholders in the URL as required. This is where you will use your instance's host name from the [Prerequisites section](#prerequisites).
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-request-url.png" alt-text="Screenshot of the new request's details in Postman. The query URL from the reference documentation has been filled into the request URL box." lightbox="media/how-to-use-postman-with-digital-twins/postman-request-url.png":::
+
+1. Check that the parameters shown for the request in the **Params** tab match those described in the reference documentation. For this request in Postman, the `api-version` parameter was automatically filled when the request URL was entered in the previous step. For the Query API, this is the only required parameter, so this step is done.
+1. In the **Authorization** tab, set the Type to **Inherit auth from parent**. This indicates that this request will use the authorization you set up earlier for the entire collection.
+1. Check that the headers shown for the request in the **Headers** tab match those described in the reference documentation. For this request, several headers have been automatically filled. For the Query API, none of the header options are required, so this step is done.
+1. Check that the body shown for the request in the **Body** tab matches the needs described in the reference documentation. For the Query API, a JSON body is required to provide the query text. Here is an example body for this request that queries for all the digital twins in the instance:
+
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-request-body.png" alt-text="Screenshot of the new request's details in Postman, on the Body tab. It contains a raw JSON body with a query of 'SELECT * FROM DIGITALTWINS'." lightbox="media/how-to-use-postman-with-digital-twins/postman-request-body.png":::
+
+ For more information about crafting Azure Digital Twins queries, see [Query the twin graph](how-to-query-graph.md).
+
+1. Check the reference documentation for any other fields that may be required for your type of request. For the Query API, all requirements have now been met in the Postman request, so this step is done.
+1. Use the **Send** button to send your completed request.
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-request-send.png" alt-text="Screenshot of Postman showing the details of the new request. The Send button is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-request-send.png":::
+
+After sending the request, the response details will appear in the Postman window below the request. You can view the response's status code and any body text.
++
+You can also compare the response to the expected response data given in the reference documentation, to verify the result or learn more about any errors that arise.
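+
+For reference, the complete example request from this section corresponds to raw HTTP like the following (a sketch; the host name and bearer token are placeholders for your own values):
+
+```http
+POST https://<your-instance-host-name>/query?api-version=2020-10-31 HTTP/1.1
+Authorization: Bearer <your-token-value>
+Content-Type: application/json
+
+{
+    "query": "SELECT * FROM DIGITALTWINS"
+}
+```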
+
+## Next steps
+
+To learn more about the Digital Twins APIs, read [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md), or view the [reference documentation for the REST APIs](/rest/api/azure-digitaltwins/).
digital-twins Troubleshoot Error 403 Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-403-digital-twins.md
+
+ Title: "Troubleshoot Azure Digital Twins: Error 403 (Forbidden)"
+
+description: Learn how to diagnose and resolve error 403 (Forbidden) failed service requests from Azure Digital Twins.
++++ Last updated : 09/06/2022++
+# Troubleshoot Azure Digital Twins failed service request: Error 403 (Forbidden)
+
+This article describes causes and resolution steps for receiving a 403 error from service requests to Azure Digital Twins. This information is specific to the Azure Digital Twins service.
+
+## Symptoms
+
+This error may occur on many types of service requests that require authentication. The effect is that the API request fails, returning an error status of `403 (Forbidden)`.
+
+## Causes
+
+### Cause #1
+
+Most often, this error indicates that your Azure role-based access control (Azure RBAC) permissions for the service aren't set up correctly. Many actions for an Azure Digital Twins instance require you to have the Azure Digital Twins Data Owner role on the instance you are trying to manage.
+
+### Cause #2
+
+If you're using a client app to communicate with Azure Digital Twins that's authenticating with an [app registration](./how-to-create-app-registration.md), this error may happen because your app registration doesn't have permissions set up for the Azure Digital Twins service.
+
+The app registration must have access permissions configured for the Azure Digital Twins APIs. Then, when your client app authenticates against the app registration, it will be granted the permissions that the app registration has configured.
+
+## Solutions
+
+### Solution #1
+
+The first solution is to verify that your Azure user has the Azure Digital Twins Data Owner role on the instance you're trying to manage. If you don't have this role, set it up.
+
+This role is different from...
+* the former name for this role during preview, Azure Digital Twins Owner (Preview). In this case, the role is the same, but the name has changed.
+* the Owner role on the entire Azure subscription. Azure Digital Twins Data Owner is a role within Azure Digital Twins and is scoped to this individual Azure Digital Twins instance.
+* the Owner role in Azure Digital Twins. These are two distinct Azure Digital Twins management roles, and Azure Digital Twins Data Owner is the role that should be used for management.
+
+#### Check current setup
++
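+As a quick check from the command line, you can list the role assignments on your instance and look for your user with the Azure Digital Twins Data Owner role (a minimal sketch, assuming the Azure CLI with the azure-iot extension installed):
+
+```azurecli-interactive
+az dt role-assignment list --dt-name <your-Azure-Digital-Twins-instance>
+```
+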
+#### Fix issues
+
+If you don't have this role assignment, someone with an Owner role in your Azure subscription should run the following command to give your Azure user the Azure Digital Twins Data Owner role on the Azure Digital Twins instance.
+
+If you're an Owner on the subscription, you can run this command yourself. If you're not, contact an Owner to run this command on your behalf.
+
+```azurecli-interactive
+az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<your-Azure-AD-email>" --role "Azure Digital Twins Data Owner"
+```
+
+For more information about this role requirement and the assignment process, see [Set up your user's access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions).
+
+If you have this role assignment already and you're using an Azure AD app registration to authenticate a client app, you can continue to the next solution if this solution didn't resolve the 403 issue.
+
+### Solution #2
+
+If you're using an Azure AD app registration to authenticate a client app, the second possible solution is to verify that the app registration has permissions configured for the Azure Digital Twins service. If these aren't configured, set them up.
+
+#### Check current setup
+
+To check whether the permissions have been configured correctly, navigate to the [Azure AD app registration overview page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal. You can get to this page yourself by searching for *app registrations* in the portal search bar.
+
+Switch to the **All applications** tab to see all the app registrations that have been created in your subscription.
+
+You should see the app registration you created in the list. Select it to open up its details.
++
+First, verify that the Azure Digital Twins permissions settings were properly set on the registration: Select **Manifest** from the menu bar to view the app registration's manifest code. Scroll to the bottom of the code window and look for these fields under `requiredResourceAccess`. The values should match the ones in the screenshot below:
++
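+The relevant manifest section looks something like the following sketch (the `resourceAppId` is the static Azure Digital Twins resource ID; the `resourceAccess` `id` is a placeholder for the Read/Write permission GUID you'll see in your own manifest):
+
+```json
+"requiredResourceAccess": [
+    {
+        "resourceAppId": "0b07f429-9f4b-4714-9392-cc5e8e80c8b0",
+        "resourceAccess": [
+            {
+                "id": "<Read-Write-permission-GUID>",
+                "type": "Scope"
+            }
+        ]
+    }
+]
+```
+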
+Next, select **API permissions** from the menu bar to verify that this app registration contains Read/Write permissions for Azure Digital Twins. You should see an entry like this:
++
+#### Fix issues
+
+If any of this appears differently than described, follow the instructions on how to set up an app registration in [Create an app registration with Azure Digital Twins access](./how-to-create-app-registration.md).
+
+## Next steps
+
+Read the setup steps for creating and authenticating a new Azure Digital Twins instance:
+* [Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md)
+
+Read more about security and permissions on Azure Digital Twins:
+* [Security for Azure Digital Twins solutions](concepts-security.md)
digital-twins Troubleshoot Error 404 Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-404-digital-twins.md
+
+ Title: "Troubleshoot Azure Digital Twins: Error 404 (Sub-Domain not found)"
+
+description: Learn how to diagnose and resolve error 404 (Sub-Domain not found) failed service requests from Azure Digital Twins.
++++ Last updated : 09/06/2022++
+# Troubleshoot Azure Digital Twins failed service request: Error 404 (Sub-Domain not found)
+
+This article describes causes and resolution steps for receiving a 404 error from service requests to Azure Digital Twins. This information is specific to the Azure Digital Twins service.
+
+## Symptoms
+
+This error may occur when accessing an Azure Digital Twins instance using a service principal or user account that belongs to a different [Azure Active Directory (Azure AD) tenant](../active-directory/develop/quickstart-create-new-tenant.md) from the instance. The correct [roles](concepts-security.md) seem to be assigned to the identity, but API requests fail with an error status of `404 Sub-Domain not found`.
+
+## Causes
+
+### Cause #1
+
+Azure Digital Twins requires that all authenticating users belong to the same Azure AD tenant as the Azure Digital Twins instance.
++
+## Solutions
+
+### Solution #1
+
+You can resolve this issue by having each federated identity from another tenant request a token from the Azure Digital Twins instance's "home" tenant.
++
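+For example, with the Azure CLI, a federated identity can sign in against the home tenant and request the token there (a sketch; the tenant ID is a placeholder, and the resource ID is the static Azure Digital Twins value used elsewhere in these articles):
+
+```azurecli-interactive
+az login --tenant <home-tenant-ID> --allow-no-subscriptions
+az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0
+```
+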
+### Solution #2
+
+If you're using the `DefaultAzureCredential` class in your code and you continue encountering this issue after getting a token, you can specify the home tenant in the `DefaultAzureCredential` options to clarify the tenant even when authentication defaults down to another type.
++
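+A minimal C# sketch of this approach, assuming the `Azure.Identity` and `Azure.DigitalTwins.Core` packages (the tenant ID and host name are placeholders):
+
+```csharp
+using System;
+using Azure.DigitalTwins.Core;
+using Azure.Identity;
+
+// Pin the instance's home tenant for the credential types that might
+// otherwise fall back to a different tenant.
+var options = new DefaultAzureCredentialOptions
+{
+    InteractiveBrowserTenantId = "<home-tenant-ID>",
+    SharedTokenCacheTenantId = "<home-tenant-ID>",
+    VisualStudioTenantId = "<home-tenant-ID>"
+};
+
+var client = new DigitalTwinsClient(
+    new Uri("https://<your-instance-host-name>"),
+    new DefaultAzureCredential(options));
+```
+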
+## Next steps
+
+Read more about security and permissions on Azure Digital Twins:
+* [Security for Azure Digital Twins solutions](concepts-security.md)
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-filtering.md
FOR_EACH filter IN (a, b, c)
See the [Limitations](#limitations) section for the current limitations of this operator.

## StringBeginsWith
-The **StringBeginsWith** operator evaluates to true if the **key** value **begins with** any of the specified **filter** values. In the following example, it checks whether the value of the `key1` attribute in the `data` section begins with `event` or `grid`. For example, `event hubs` begins with `event`.
+The **StringBeginsWith** operator evaluates to true if the **key** value **begins with** any of the specified **filter** values. In the following example, it checks whether the value of the `key1` attribute in the `data` section begins with `event` or `message`. For example, `event hubs` begins with `event`.
```json
"advancedFilters": [{
    "operatorType": "StringBeginsWith",
    "key": "data.key1",
    "values": [
        "event",
        "message"
    ]
}]
```
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/system-topics.md
In the past, a system topic was implicit and wasn't exposed for simplicity. Syst
- Set up alerts on publish and delivery failures

> [!NOTE]
-> Azure Event Grid creates a system topic resource in the same Azure subscription that has the event source. For example, if you create a system topic for a storage account *ContosoStorage* in an Azure subscription *ContosoSubscription*, Event Grid creates the system topic in the *ContosoSubscription*. It's not possible to create a system topic in an Azure subscription that's different from the event source's Azure subscription.
+> - Only one Azure Event Grid system topic is allowed per source (like Subscription, Resource Group, etc.).
+> - A resource group is required for a subscription-level Event Grid system topic, and it can't be changed unless the system topic is deleted or moved to another subscription.
+> - Azure Event Grid creates a system topic resource in the same Azure subscription that has the event source. For example, if you create a system topic for a storage account *ContosoStorage* in an Azure subscription *ContosoSubscription*, Event Grid creates the system topic in the *ContosoSubscription*. It's not possible to create a system topic in an Azure subscription that's different from the event source's Azure subscription.
## Lifecycle of system topics

You can create a system topic in two ways:
event-hubs Event Hubs Management Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-management-libraries.md
Title: Management libraries - Azure Event Hubs| Microsoft Docs description: This article provides information on the library that you can use to manage Azure Event Hubs namespaces and entities from .NET. Previously updated : 09/23/2021 Last updated : 09/06/2022 ms.devlang: csharp
You can use the Azure Event Hubs management libraries to dynamically provision E
## Prerequisites
-To get started using the Event Hubs management libraries, you must authenticate with Azure Active Directory (AAD). AAD requires that you authenticate as a service principal, which provides access to your Azure resources. For information about creating a service principal, see one of these articles:
+To get started using the Event Hubs management libraries, you must authenticate with Azure Active Directory (Azure AD). Azure AD requires that you authenticate as a service principal, which provides access to your Azure resources. For information about creating a service principal, see one of these articles:
* [Use the Azure portal to create Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md) * [Use Azure PowerShell to create a service principal to access resources](../active-directory/develop/howto-authenticate-service-principal-powershell.md) * [Use Azure CLI to create a service principal to access resources](/cli/azure/create-an-azure-service-principal-azure-cli)
-These tutorials provide you with an `AppId` (Client ID), `TenantId`, and `ClientSecret` (authentication key), all of which are used for authentication by the management libraries. You must have **Owner** permissions for the resource group on which you want to run.
+These tutorials provide you with an `AppId` (Client ID), `TenantId`, and `ClientSecret` (authentication key), all of which are used for authentication by the management libraries. The Azure AD application must be added to the **Azure Event Hubs Data Owner** role at the resource group level.
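+
+For example, the role assignment could be created with the Azure CLI (a sketch; the app ID, subscription ID, and resource group name are placeholders):
+
+```azurecli-interactive
+az role assignment create --assignee "<AppId>" --role "Azure Event Hubs Data Owner" --scope "/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>"
+```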
-## Programming pattern
+## Sample code
The pattern to manipulate any Event Hubs resource follows a common protocol:
-1. Obtain a token from AAD using the `Microsoft.IdentityModel.Clients.ActiveDirectory` library.
- ```csharp
- var context = new AuthenticationContext($"https://login.microsoftonline.com/{tenantId}");
-
- var result = await context.AcquireTokenAsync(
- "https://management.core.windows.net/",
- new ClientCredential(clientId, clientSecret)
- );
- ```
-
+1. Obtain a token from Azure AD using the `Microsoft.Identity.Client` library.
1. Create the `EventHubManagementClient` object.
- ```csharp
- var creds = new TokenCredentials(token);
- var ehClient = new EventHubManagementClient(creds)
- {
- SubscriptionId = SettingsCache["SubscriptionId"]
- };
- ```
-
-1. Set the `CreateOrUpdate` parameters to your specified values.
- ```csharp
- var ehParams = new EventHubCreateOrUpdateParameters()
- {
- Location = SettingsCache["DataCenterLocation"]
- };
- ```
-
-1. Execute the call.
- ```csharp
- await ehClient.EventHubs.CreateOrUpdateAsync(resourceGroupName, namespaceName, EventHubName, ehParams);
- ```
+1. Then, use the client object to create an Event Hubs namespace and an event hub.
+
+Here's the sample code to create an Event Hubs namespace and an event hub.
+
+```csharp
+
+namespace event_hub_dotnet_management
+{
+ using System;
+ using System.Threading.Tasks;
+ using Microsoft.Azure.Management.EventHub;
+ using Microsoft.Azure.Management.EventHub.Models;
+ using Microsoft.Identity.Client;
+ using Microsoft.Rest;
++
+ public static class EventHubManagementSample
+ {
+ private static string resourceGroupName = "<YOUR EXISTING RESOURCE GROUP NAME>";
+ private static string namespaceName = "<EVENT HUBS NAMESPACE TO BE CREATED>";
+ private const string eventHubName = "<EVENT HUB TO BE CREATED>";
+ private const string location = "<REGION>"; //for example: "eastus"
+
+ public static async Task Main()
+ {
+ // get a token from Azure AD
+ var token = await GetToken();
+
+ // create an EventHubManagementClient
+ var creds = new TokenCredentials(token);
+ var ehClient = new EventHubManagementClient(creds)
+ {
+ SubscriptionId = "<AZURE SUBSCRIPTION ID>"
+ };
+
+ // create an Event Hubs namespace using the EventHubManagementClient
+ await CreateNamespace(ehClient);
+
+ // create an event hub using the EventHubManagementClient
+ await CreateEventHub(ehClient);
+
+ Console.WriteLine("Press a key to exit.");
+ Console.ReadLine();
+ }
+
+ // Get an authentication token from Azure AD first
+ private static async Task<string> GetToken()
+ {
+ try
+ {
+ Console.WriteLine("Acquiring token...");
+
+ var tenantId = "<AZURE TENANT ID>";
+
+ // use the Azure AD app that's a member of Azure Event Hubs Data Owner role at the resource group level
+ var clientId = "<AZURE APPLICATION'S CLIENT ID>";
+ var clientSecret = "<CLIENT SECRET>";
+
+ IConfidentialClientApplication app;
+
+ app = ConfidentialClientApplicationBuilder.Create(clientId)
+ .WithClientSecret(clientSecret)
+ .WithAuthority($"https://login.microsoftonline.com/{tenantId}")
+ .Build();
+
+ var result = await app.AcquireTokenForClient(new[] { $"https://management.core.windows.net/.default" })
+ .ExecuteAsync()
+ .ConfigureAwait(false);
+
+ // If the token isn't a valid string, throw an error.
+ if (string.IsNullOrEmpty(result.AccessToken))
+ {
+ throw new Exception("Token result is empty!");
+ }
+
+ return result.AccessToken;
+ }
+ catch (Exception e)
+ {
+ Console.WriteLine("Could not get a new token...");
+ Console.WriteLine(e.Message);
+                throw; // rethrow without resetting the stack trace
+ }
+ }
+
+ // Create an Event Hubs namespace
+ private static async Task CreateNamespace(EventHubManagementClient ehClient)
+ {
+ try
+ {
+ Console.WriteLine("Creating namespace...");
+ await ehClient.Namespaces.CreateOrUpdateAsync(resourceGroupName, namespaceName, new EHNamespace { Location = location });
+ Console.WriteLine("Created namespace successfully.");
+ }
+ catch (Exception e)
+ {
+ Console.WriteLine("Could not create a namespace...");
+ Console.WriteLine(e.Message);
+ }
+ }
++
+ // Create an event hub
+ private static async Task CreateEventHub(EventHubManagementClient ehClient)
+ {
+ try
+ {
+ Console.WriteLine("Creating Event Hub...");
+ await ehClient.EventHubs.CreateOrUpdateAsync(resourceGroupName, namespaceName, eventHubName, new Eventhub());
+ Console.WriteLine("Created Event Hub successfully.");
+ }
+ catch (Exception e)
+ {
+ Console.WriteLine("Could not create an Event Hub...");
+ Console.WriteLine(e.Message);
+ }
+ }
+ }
+}
+```
## Next steps

* [.NET Management sample](https://github.com/Azure-Samples/event-hubs-dotnet-management/)
event-hubs Explore Captured Avro Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/explore-captured-avro-files.md
Event Hubs Capture is the easiest way to get data into Azure. Using Azure Data L
[support request]: https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade [Azure Storage Explorer]: https://github.com/microsoft/AzureStorageExplorer/releases [Avro Tools]: https://downloads.apache.org/avro/stable/java/
-[Java]: https://avro.apache.org/docs/current/gettingstartedjava.html
-[Python]: https://avro.apache.org/docs/current/gettingstartedpython.html
+[Java]: https://avro.apache.org/docs/1.11.1/getting-started-java/
+[Python]: https://avro.apache.org/docs/1.11.1/getting-started-python/
[Event Hubs overview]: ./event-hubs-about.md [HDInsight: Address files in Azure storage]: ../hdinsight/hdinsight-hadoop-use-blob-storage.md [Azure Databricks: Azure Blob Storage]:https://docs.databricks.com/spark/latest/data-sources/azure/azure-storage.html
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | Interxion |
| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo |
| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite, Megaport, PacketFabric, Zayo |
+| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | |
| **Doha2** | [Ooredoo](https://www.ooredoo.qa/portal/OoredooQatar/b2b-data-centre) | 3 | Qatar Central | Supported | |
| **Dubai** | [PCCS](https://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | Supported | Etisalat UAE |
| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom |
The following table shows connectivity locations and the service providers for e
| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix |
| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Megaport, NextDC |
| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | Supported | Cox Business Cloud Port, CenturyLink Cloud Connect, Megaport, Zayo |
+| **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | Supported | |
| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | Supported | Tata Communications |
| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada, Equinix, Megaport, Telus |
| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Megaport, Transtelco |
| **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | Supported | |
| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Equinix |
| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | Supported | CenturyLink Cloud Connect, Megaport, Zayo |
+| **Santiago** | [EdgeConnex SCL](https://www.edgeconnex.com/locations/south-america/santiago/) | 3 | n/a | Supported | |
| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, InterCloud, Level 3 Communications, Neutrona Networks, Orange, Tata Communications, Telefonica, UOLDIVEO |
| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers, Tivit |
| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Telus, Zayo |
The following table shows connectivity locations and the service providers for e
| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ |
| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport, NextDC |
| **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom, Chunghwa Telecom, FarEasTone |
+| **Tel Aviv** | Bezeq International | 2 | n/a | Supported | |
| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon |
| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO, China Unicom Global, Colt, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |
| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | |
frontdoor Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/billing.md
Previously updated : 08/25/2022 Last updated : 09/06/2022
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Title: Configure import settings in the FHIR service - Azure Health Data Services description: This article describes how to configure import settings in the FHIR service.-+ Last updated 06/06/2022-+ # Configure bulk-import settings (Preview)
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
Metric category|Metric name|Metric description|
> [!TIP] >
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
> [!IMPORTANT] >
healthcare-apis How To Use Iot Jsonpath Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-jsonpath-content-mappings.md
Previously updated : 09/02/2022 Last updated : 09/06/2022
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
```json
{
- "Body": {
- "heartRate": "78"
+ "Body": {
+ "heartRate": "78"
}, "Properties": { "iothub-creation-time-utc" : "2021-02-01T22:46:01.8750000Z"
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
```json
{
- "templateType": "IotJsonPathContentTemplate",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@Body.heartRate)]"
- "values": [
+ "templateType": "CollectionContent",
+ "template": [
{
- "required": "true",
- "valueExpression": "$.Body.heartRate",
- "valueName": "hr"
+ "templateType": "IotJsonPathContentTemplate",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@Body.heartRate)]",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.Body.heartRate",
+ "valueName": "hr"
+ }
+ ]
}
- ]
- }
-}
+ }
+ ]
+}
```
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
```json
{
- "Body": {
- "systolic": "123",
- "diastolic" : "87"
- },
- "Properties": {
- "iothub-creation-time-utc" : "2021-02-01T22:46:01.8750000Z"
- },
- "SystemProperties": {
- "iothub-connection-device-id" : "device123"
+ "Body": {
+ "systolic": "123",
+ "diastolic" : "87"
+ },
+ "Properties": {
+ "iothub-creation-time-utc" : "2021-02-01T22:46:01.8750000Z"
+ },
+ "SystemProperties": {
+ "iothub-connection-device-id" : "device123"
+ }
+}
+
+```
+
+*Template*
+
+```json
+
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "IotJsonPathContentTemplate",
+ "template": {
+ "typeName": "bloodpressure",
+ "typeMatchExpression": "$..[?(@Body.systolic && @Body.diastolic)]",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.Body.systolic",
+ "valueName": "systolic"
+ },
+ {
+ "required": "true",
+ "valueExpression": "$.Body.diastolic",
+ "valueName": "diastolic"
+ }
+ ]
+ }
}
+ ]
}
```
+> [!TIP]
+> The above IotJsonPathContentTemplate examples will work separately with your MedTech service device mapping, or you can combine them into a single MedTech service device mapping as shown below. Additionally, IotJsonPathContentTemplates can be combined with other template types such as [JsonPathContentTemplate mappings](how-to-use-jsonpath-content-mappings.md) to create and tune your MedTech service device mapping to meet your individual needs and scenarios.
+
*Template*

```json
{
- "templateType": "IotJsonPathContentTemplate",
- "template": {
- "typeName": "bloodpressure",
- "typeMatchExpression": "$..[?(@Body.systolic && @Body.diastolic)]",
- "values": [
- {
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "IotJsonPathContentTemplate",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@Body.heartRate)]",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.Body.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ },
+ {
+ "templateType": "IotJsonPathContentTemplate",
+ "template": {
+ "typeName": "bloodpressure",
+ "typeMatchExpression": "$..[?(@Body.systolic && @Body.diastolic)]",
+ "values": [
+ {
"required": "true", "valueExpression": "$.Body.systolic", "valueName": "systolic"
- },
- {
+ },
+ {
"required": "true", "valueExpression": "$.Body.diastolic", "valueName": "diastolic"
+ }
+ ]
}
- ]
- }
+ }
+ ]
}
```
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
In this article, you learned how to use IotJsonPathContentTemplate mappings with the MedTech service device mapping. To learn how to use the MedTech service FHIR destination mapping, see

>[!div class="nextstepaction"]
->[How to use FHIR destination mapping](how-to-use-fhir-mappings.md)
+>[How to use the FHIR destination mapping](how-to-use-fhir-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-develop Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/libraries-sdks.md
The IoT Plug and Play libraries and SDKs enable developers to build IoT solution
| .NET - IoT Hub service | [NuGet 1.27.1](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/iot-hub/Samples/service/PnpServiceSamples) | N/A | [Reference](/dotnet/api/microsoft.azure.devices) |
| Java - IoT Hub service | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client/1.26.0) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | N/A | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) |
| Node - IoT Hub service | [npm 1.13.0](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | N/A | [Reference](/javascript/api/azure-iothub/) |
-| Python - Digital Twins service | [pip 2.2.3](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Interact with IoT Hub Digital Twins API](tutorial-service.md) | N/A |
-| Node - Digital Twins service | [npm 1.13.0](https://www.npmjs.com/package/azure-iot-digitaltwins-service) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples/javascript) | [Interact with IoT Hub Digital Twins API](tutorial-service.md) | N/A |
+| Python - IoT Hub service | [pip 2.2.3](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-hub-python) | [Samples](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) | N/A | [Reference](/python/api/azure-iot-hub/) |
## Next steps
iot-develop Tutorial Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-service.md
Title: Tutorial - Interact with an IoT Plug and Play device connected to your Azure IoT solution | Microsoft Docs description: Tutorial - Use C#, JavaScript, Java, or Python to connect to and interact with an IoT Plug and Play device that's connected to your Azure IoT solution.--++ Last updated 09/21/2020
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
To update and run the provisioning sample with your device information:
3. Open a command prompt and go to the directory where the sample file, _provision_symmetric_key.py_, is located.

   ```cmd
- cd azure-iot-sdk-python\azure-iot-device\samples\async-hub-scenarios
+ cd azure-iot-sdk-python\samples\async-hub-scenarios
   ```

4. In the command prompt, run the following commands to set environment variables used by the sample:
To update and run the provisioning sample with your device information:
7. You should now see something similar to the following output. Some example wind speed telemetry messages are also sent to the hub as a test. ```output
- D:\azure-iot-sdk-python\azure-iot-device\samples\async-hub-scenarios>python provision_symmetric_key.py
+ D:\azure-iot-sdk-python\samples\async-hub-scenarios>python provision_symmetric_key.py
RegistrationStage(RequestAndResponseOperation): Op will transition into polling after interval 2. Setting timer. The complete registration result is python-device-008
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
You won't need the Git Bash prompt for the rest of this quickstart. However, you
6. Copy the device certificate and private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.

   ```bash
- cp device-cert.pem ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
- cp device-key.pem ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
+ cp device-cert.pem ./azure-iot-sdk-python/samples/async-hub-scenarios
+ cp device-key.pem ./azure-iot-sdk-python/samples/async-hub-scenarios
   ```

You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
In this section, you'll use your Windows command prompt.
1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.

   ```cmd
- cd ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
+ cd ./azure-iot-sdk-python/samples/async-hub-scenarios
   ```

This sample uses six environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
In this section, you'll use your Windows command prompt.
1. Run the sample. The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.

   ```cmd
- $ python azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios/provision_x509.py
+ $ python azure-iot-sdk-python/samples/async-hub-scenarios/provision_x509.py
RegistrationStage(RequestAndResponseOperation): Op will transition into polling after interval 2. Setting timer. The complete registration result is my-x509-device
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
Only the IoT Edge runtime is supported for production deployments, but the follo
| - | - | - | - |
| IoT EdgeHub dev tool | iotedgehubdev | Windows, Linux, macOS | Simulating a device to debug modules. |
| IoT Edge dev container | iotedgedev | Windows, Linux, macOS | Developing without installing dependencies. |
-| IoT Edge runtime in a container | iotedgec | Windows, Linux, macOS, ARM | Testing on a device that may not support the runtime. |
### IoT EdgeHub dev tool
The Azure IoT Edge dev container is a Docker container that has all the dependen
For more information, see [Azure IoT Edge dev container](https://github.com/Azure/iotedgedev/wiki/quickstart-with-iot-edge-dev-container).
-### IoT Edge device container
-
-The IoT Edge device container is a complete IoT Edge device, ready to be launched on any machine with a container engine. The device container includes the IoT Edge runtime and a container engine itself. Each instance of the container is a fully functional self-provisioning IoT Edge device. The device container supports remote debugging of modules, as long as there is a network route to the module. The device container is good for quickly creating large numbers of IoT Edge devices to test at-scale scenarios or Azure Pipelines. It also supports deployment to kubernetes via helm.
-
-For more information, see [Azure IoT Edge device container](https://github.com/toolboc/azure-iot-edge-device-container).
-
## DevOps tools

When you're ready to develop at-scale solutions for extensive production scenarios, take advantage of modern DevOps principles including automation, monitoring, and streamlined software engineering processes. IoT Edge has extensions to support DevOps tools including Azure DevOps, Azure DevOps Projects, and Jenkins. If you want to customize an existing pipeline or use a different DevOps tool like CircleCI or TravisCI, you can do so with the CLI features included in the IoT Edge dev tool.
iot-edge How To Connect Usb Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-usb-devices.md
+
+ Title: How to connect a USB device to Azure IoT Edge for Linux on Windows | Microsoft Docs
+description: How to connect a USB device using USB over IP to the Azure IoT Edge for Linux on Windows (EFLOW) virtual machine.
+++++ Last updated : 07/25/2022+++
+# How to connect a USB device to Azure IoT Edge for Linux on Windows
+
+In some scenarios, your workloads need to get data or communicate with USB devices. Because Azure IoT Edge for Linux on Windows (EFLOW) runs as a virtual machine, you need to connect these devices to the virtual machine. This article guides you through the steps necessary to connect a USB device to the EFLOW virtual machine using the USB/IP open-source project named [usbipd-win](https://github.com/dorssel/usbipd-win).
+
+Setting up the USB/IP project on your Windows machine enables common developer USB scenarios like flashing an Arduino, connecting a USB serial device, or accessing a smartcard reader directly from the EFLOW virtual machine.
+
+> [!WARNING]
+> *USB over IP* provides a generic mechanism for redirecting USB devices using the network between the Windows host OS and the EFLOW virtual machine. Some devices that are sensitive to network latency might experience issues. Additionally, some devices might not function as expected due to driver compatibility issues. Ensure that your devices work as expected before deploying to production. For more information about USB/IP tested devices, see [USBIP-Win - Wiki - Tested Devices](https://github.com/dorssel/usbipd-win/wiki/Tested-Devices).
+
+## Prerequisites
+
+- Azure IoT Edge for Linux on Windows 1.3.1 update or higher. For more information about EFLOW release notes, see [EFLOW Releases](https://aka.ms/AzEFLOW-Releases).
+- A machine with an x64/x86 processor is required; *usbipd-win* doesn't support ARM64.
+
+> [!NOTE]
+> To check your Azure IoT Edge for Linux on Windows version, go to _Add or Remove Programs_ and then search for _Azure IoT Edge_. The installed version is listed under _Azure IoT Edge_. If you need to update to the latest version, see [Azure IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows-updates.md).
+
+## Install the usbipd-win project
+
+Support for connecting USB devices isn't natively available with EFLOW. You'll need to install the open-source [usbipd-win](https://github.com/dorssel/usbipd-win) project using the following steps:
+
+1. Go to the [latest release page for the usbipd-win](https://github.com/dorssel/usbipd-win/releases) project.
+1. Choose and download the _usbipd-win_x.y.z.msi_ file. (You may get a warning asking you to confirm that you trust the downloaded installer).
+1. Run the downloaded _usbipd-win_x.y.z.msi_ installer file.
+
+> [!NOTE]
+> Alternatively, you can also install the usbipd-win project using [Windows Package Manager](/windows/package-manager/winget/) (_winget_). If you have already installed _winget_, use the command: `winget install --interactive --exact dorssel.usbipd-win` to install usbipd-win. If you don't use the `--interactive` parameter, _winget_ may immediately restart your computer if needed to install the drivers.
+
+The usbipd-win installation adds:
+
+- A service called `usbipd` (USBIP Device Host). You can check the status of this service using the *Services* app in Windows.
+- A command-line tool, `usbipd`. The location of this tool is added to the PATH environment variable.
+- A firewall rule called `usbipd` to allow all local subnets to connect to the service. You can modify this firewall rule to fine-tune access control.
+
+At this point, a service is running on Windows to share USB devices, and the necessary tools are installed in the EFLOW virtual machine to attach to shared devices.
+
+> [!WARNING]
+> If you have an open PowerShell session, make sure to close it and open a new one to load the `usbipd` command line tool.
+
+## Attach a USB device to the EFLOW VM
+
+The following steps provide a sample EFLOW PowerShell cmdlet to attach a USB device to the EFLOW VM. If you want to manually execute the needed commands, see [How to use usbip-win](https://github.com/dorssel/usbipd-win).
+
+> [!IMPORTANT]
+> The following functions are samples that are not meant to be used in production deployments. For production use, ensure you validate the functionality and create your own functions based on these samples. The sample functions are subject to change and deletion.
+
+1. Go to [EFLOW-Util](https://github.com/Azure/iotedge-eflow/tree/main/eflow-util/eflow-usbip) and download the EFLOW-USBIP sample PowerShell module.
+
+1. Open an elevated PowerShell session by starting with **Run as Administrator**.
+
+1. Import the downloaded EFLOW-USBIP module.
+ ```powershell
+ Import-Module "<path-to-module>/EflowUtil-Usbip.psm1"
+ ```
+
+1. List all of the USB devices connected to Windows.
+ ```powershell
+ Get-EflowUSBDevices
+ ```
+
+1. List all the network interfaces and get the Windows host OS IP address.
+ ```powershell
+ ipconfig
+ ```
+
+1. Select the *bus ID* of the device you'd like to attach to the EFLOW VM.
+ ```powershell
+ Add-EflowUSBDevices -busid <busid> -hostIp <host-ip>
+ ```
+
+1. Check that the device was correctly attached to the EFLOW VM.
+ ```powershell
+ Invoke-EflowVmCommand "lsusb"
+ ```
+
+1. Once you're finished using the device in EFLOW, you can either physically disconnect the USB device or run this command from an elevated PowerShell session.
+ ```powershell
+ Remove-EflowUSBDevices -busid <busid>
+ ```
+> [!IMPORTANT]
+> The attachment from the EFLOW VM to the USB device does not persist across reboots. To attach the USB device after reboot, you may need to create a bash script that runs during startup and connects the device using the `usbip` bash command. For more information about how to attach the device on the EFLOW VM side, see [Add-EflowUSBDevices](https://github.com/Azure/iotedge-eflow/tree/main/eflow-util/eflow-usbip/EflowUtil.psm1).
+
+To learn more about how USB over IP works, see [Connecting USB devices to WSL](https://devblogs.microsoft.com/commandline/connecting-usb-devices-to-wsl/#how-it-works) and the [usbipd-win repo on GitHub](https://github.com/dorssel/usbipd-win/wiki).
+
+## Next steps
+
+Follow the steps in [How to develop IoT Edge modules with Linux containers using IoT Edge for Linux on Windows](./tutorial-develop-for-linux-on-windows.md) to develop and debug a module with IoT Edge for Linux on Windows.
iot-edge How To Share Windows Folder To Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-share-windows-folder-to-vm.md
+
+ Title: Share a Windows folder with Azure IoT Edge for Linux on Windows | Microsoft Docs
+description: How to share a Windows folder with the Azure IoT Edge for Linux on Windows virtual machine
+++++ Last updated : 07/28/2022+++
+# Share a Windows folder with Azure IoT Edge for Linux on Windows
++
+The Azure IoT Edge for Linux on Windows (EFLOW) virtual machine is isolated from the Windows host OS and the virtual machine doesn't have access to the host file system. By default, the EFLOW virtual machine has its own file system and has no access to the folders or files on the host computer. The *EFLOW file and folder sharing mechanism* provides a way to share Windows files and folders to the CBL-Mariner Linux EFLOW VM.
+
+This article shows you how to enable the folder sharing between the Windows host OS and the EFLOW virtual machine.
+
+## Prerequisites
+- Azure IoT Edge for Linux on Windows 1.3.1.30082 update or higher. For more information about EFLOW release notes, see [EFLOW Releases](https://aka.ms/AzEFLOW-Releases).
+- A machine with an x64/x86 processor.
+- Windows 11 Sun Valley 2 (build 22621) or higher. To get the Windows SV2 update, you must be part of the Windows Insider Program. For more information, see [Getting started with the Windows Insider Program](https://insider.windows.com/en-us/getting-started). After installation, you can verify your build version by running `winver` at the command prompt.
+
+>[!NOTE]
+>We plan to include support for Windows 10 21H2 (version 19044) version, Windows Server 2019/2022, and ARM64 processors in the upcoming months.
+
+If you don't have an EFLOW device ready, you should create one before continuing with this guide. Follow the steps in [Create and provision an IoT Edge for Linux on Windows device using symmetric keys](how-to-provision-single-device-linux-on-windows-symmetric.md) to install, deploy and provision EFLOW.
+
+## How it works
+
+The Azure IoT Edge for Linux on Windows file and folder sharing mechanism is implemented using [virtiofs](https://virtio-fs.gitlab.io/) technology. *Virtiofs* is a shared file system that lets virtual machines access a directory tree on the host OS. Unlike other approaches, it's designed to offer local file system semantics and performance. *Virtiofs* isn't a network file system repurposed for virtualization. Instead, it takes advantage of the virtual machine's co-location with the hypervisor to avoid the overhead associated with network file systems.
+
+![Windows folder shared with the EFLOW virtual machine using Virtio-FS technology](media/how-to-share-windows-folder-to-vm/folder-sharing-virtiofs.png)
+
+Only Windows folders can be shared with the EFLOW Linux VM, not the other way around. Also, for security reasons, when setting up the folder sharing mechanism, the user must provide a _root folder_, and all the shared folders must be under that _root folder_.
+
+Before adding or removing shared folders, let's define four concepts:
+
+- **Root folder**: Windows folder that is the root path containing subfolders to be shared to the EFLOW VM. The root folder isn't shared to the EFLOW VM. Only the subfolders under the root folder are shared to the EFLOW VM.
+- **Shared folder**: A Windows folder that's under the _root folder_ and is shared with the EFLOW VM. All the content inside this folder is shared with the EFLOW VM.
+- **Mounting point**: Path inside the EFLOW VM where the Windows folder content is placed.
+- **Mounting option**: *Read-only* or *read and write* access. Controls the file access of the mounted folder inside the EFLOW VM.
+
+## Add shared folders
+The following steps provide example EFLOW PowerShell commands to share one or more Windows host OS folders with the EFLOW virtual machine.
+
+1. Start by creating a new root shared folder. Go to **File Explorer** and choose a location for the *root folder* and create the folder.
+
+ For example, create a *root folder* under _C:\Shared_ named **EFLOW-Shared**.
+
+ ![Windows root folder](media/how-to-share-windows-folder-to-vm/root-folder.png)
+
+1. Create one or more *shared folders* to be shared with the EFLOW virtual machine. Shared folders should be created under the *root folder* from the previous step.
+
+ For example, create two folders: one named **Read-Access** and one named **Read-Write-Access**.
+
+ ![Windows shared folders](media/how-to-share-windows-folder-to-vm/shared-folders.png)
+
+1. Within the _Read-Access_ shared folder, create a sample file that we'll later read inside the EFLOW virtual machine.
+
+ For example, using a text editor, create a file named _Hello-World.txt_ within the _Read-Access_ folder and save some text in the file.
+
+1. Using a text editor, create the shared folder configuration file. This file contains the information about the folders to be shared with the EFLOW VM including the mounting points and the mounting options. For more information about the JSON configuration file, see [PowerShell functions for IoT Edge for Linux on Windows](reference-iot-edge-for-linux-on-windows-functions.md).
+
+ For example, using the previous scenario, we'll share the two *shared folders* we created under the *root folder*.
+ - _Read-Access_ shared folder will be mounted in the EFLOW virtual machine under the path _/tmp/host-read-access_ with *read-only* access.
+ - _Read-Write-Access_ shared folder will be mounted in the EFLOW virtual machine under the path _/tmp/host-read-write-access_ with *read and write* access.
+
+ Create the JSON configuration file named **sharedFolders.json** within the *root folder* *EFLOW-Shared* with the following contents:
+
+ ```json
+ [
+ {
+ "sharedFolderRoot": "C:\\Shared\\EFLOW-Shared",
+ "sharedFolders": [
+ {
+ "hostFolderPath": "Read-Access",
+ "readOnly": true,
+ "targetFolderOnGuest": "/tmp/host-read-access"
+ },
+ {
+ "hostFolderPath": "Read-Write-Access",
+ "readOnly": false,
+ "targetFolderOnGuest": "/tmp/host-read-write-access"
+ }
+ ]
+ }
+ ]
+ ```
+
+1. Open an elevated _PowerShell_ session by starting with **Run as Administrator**.
+
+1. Create the shared folder assignation using the configuration file (_sharedFolders.json_) previously created.
+
+ ```powershell
+ Add-EflowVmSharedFolder -sharedFoldersJsonPath "C:\Shared\EFLOW-Shared\sharedFolders.json"
+ ```
+
+1. Once the cmdlet finishes, the EFLOW virtual machine should have access to the shared folders. Connect to the EFLOW virtual machine and check that the folders are correctly shared.
+ ```powershell
+ Connect-EflowVm
+ ```
+
+1. Go to the _Read-Access_ shared folder (mounted under _/tmp/host-read-access_) and check the content of the _Hello-World.txt_ file.
+
+ >[!NOTE]
+ >By default, all shared folders are shared under *root* ownership. To access the folder, you should log in as root using `sudo su` or change the folder ownership to *iotedge-user* using the `chown` command.
+
+ ```bash
+ sudo su
+ cd /tmp/host-read-access
+ cat Hello-World.txt
+ ```
+If everything was successful, you should be able to see the contents of the _Hello-World.txt_ file within the EFLOW virtual machine. Verify write access by creating a file inside _/tmp/host-read-write-access_ and then checking the contents of the newly created file inside the _Read-Write-Access_ Windows host folder.
+
+## Check shared folders
+The following steps provide example EFLOW PowerShell commands to check the Windows shared folders and options (access permissions and mounting point) with the EFLOW virtual machine.
+
+1. Open an elevated PowerShell session by starting with **Run as Administrator**.
+
+1. List the information of the Windows shared folders under the *root folder*.
+ For example, using the scenario in the previous section, we can list the information of both _Read-Access_ and _Read-Write-Access_ shared folders.
+ ```powershell
+ Get-EflowVmSharedFolder -sharedfolderRoot "C:\Shared\EFLOW-Shared" -hostFolderPath @("Read-Access", "Read-Write-Access")
+ ```
+
+For more information about the `Get-EflowVmSharedFolder` cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](reference-iot-edge-for-linux-on-windows-functions.md).
++
+## Remove shared folders
+The following steps provide example EFLOW PowerShell commands to stop sharing a Windows shared folder with the EFLOW virtual machine.
+
+1. Open an elevated PowerShell session by starting with **Run as Administrator**.
+
+1. Stop sharing the folder named _Read-Access_ under the **Root folder** with the EFLOW virtual machine.
+ ```powershell
+ Remove-EflowVmSharedFolder -sharedfolderRoot "C:\Shared\EFLOW-Shared" -hostFolderPath "Read-Access"
+ ```
+
+For more information about the `Remove-EflowVmSharedFolder` cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](reference-iot-edge-for-linux-on-windows-functions.md).
+
+## Next steps
+Follow the steps in [Common issues and resolutions for Azure IoT Edge for Linux on Windows](troubleshoot-iot-edge-for-linux-on-windows-common-errors.md) to troubleshoot any issues encountered when setting up IoT Edge for Linux on Windows.
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
description: Reference information for Azure IoT Edge for Linux on Windows Power
Previously updated : 07/05/2022 Last updated : 07/28/2022
The commands described in this article are from the `AzureEFLOW.psm1` file, whic
If you don't have the **AzureEflow** folder in your PowerShell directory, use the following steps to download and install Azure IoT Edge for Linux on Windows:
+<!-- 1.1 -->
1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows.

   ```powershell
If you don't have the **AzureEflow** folder in your PowerShell directory, use th
   $ProgressPreference = 'SilentlyContinue'
   Invoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
   ```
+<!-- end iotedge-2018-06 -->
+
+<!-- iotedge-2020-11 -->
+1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows.
+
+ * **X64/AMD64**
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+ Invoke-WebRequest "https://aka.ms/AzEFLOWMSI-CR-X64" -OutFile $msiPath
+ ```
+
+ * **ARM64**
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+ Invoke-WebRequest "https://aka.ms/AzEFLOWMSI-CR-ARM64" -OutFile $msiPath
+ ```
+<!-- end iotedge-2020-11 -->
+ 1. Install IoT Edge for Linux on Windows on your device.
If you don't have the **AzureEflow** folder in your PowerShell directory, use th
You can specify custom installation and VHDX directories by adding `INSTALLDIR="<FULLY_QUALIFIED_PATH>"` and `VHDXDIR="<FULLY_QUALIFIED_PATH>"` parameters to the install command.
-1. Set the execution policy on the target device to `AllSigned` if it is not already.
+1. Set the execution policy on the target device to at least `AllSigned`.
   ```powershell
   Set-ExecutionPolicy -ExecutionPolicy AllSigned -Force
   ```
It returns an object that contains four properties:
For more information, use the command `Get-Help Add-EflowVmEndpoint -full`.
+<!-- iotedge-2020-11 -->
+## Add-EflowVmSharedFolder
+
+The **Add-EflowVmSharedFolder** command allows sharing one or more Windows host OS folders with the EFLOW virtual machine.
+
+| Parameter | Accepted values | Comments |
+| | | -- |
+| sharedFoldersJsonPath | String | Path to the **Shared Folders** JSON configuration file. |
+
+The JSON configuration file must have the following structure:
+
+- **sharedFolderRoot**: Path to the Windows root folder that contains all the folders to be shared with the EFLOW virtual machine.
+- **hostFolderPath**: Relative path (to the parent root folder) of the folder to be shared with the EFLOW VM.
+- **readOnly**: Defines whether the shared folder is writable or read-only from the EFLOW virtual machine. Values: **false** or **true**.
+- **targetFolderOnGuest**: Folder path inside the EFLOW virtual machine where the Windows host OS folder will be mounted.
+
+```json
+[
+ {
+ "sharedFolderRoot": "<shared-folder-root-windows-path>",
+ "sharedFolders": [
+ { "hostFolderPath": "<path-shared-folder>",
+ "readOnly": "<read-only>",
+ "targetFolderOnGuest": "<linux-mounting-point>"
+ }
+ ]
+ }
+]
+```
+For more information, use the command `Get-Help Add-EflowVmSharedFolder -full`.
+
+<!-- end iotedge-2020-11 -->
+
## Connect-EflowVm

The **Connect-EflowVm** command connects to the virtual machine using SSH. The only account allowed to SSH to the virtual machine is the user that created it.
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| memoryInMB | Integer **even** value between 1024 and the maximum amount of free memory of the device | Memory allocated for the VM.<br><br>**Default value**: 1024 MB. |
| vmDiskSize | Between 21 GB and 2 TB | Maximum logical disk size of the dynamically expanding virtual hard disk.<br><br>**Default value**: 29 GB. <br><br>**Note**: Either _vmDiskSize_ or _vmDataSize_ can be used, but not both together. |
| vmDataSize | Between 2 GB and 2 TB | Maximum data partition size of the resulting hard disk, in GB.<br><br>**Default value**: 10 GB. <br><br>**Note**: Either _vmDiskSize_ or _vmDataSize_ can be used, but not both together. |
-| vmLogSize | **Small** or **Large** | Specificy the log partition size. Small = 1GB, Large = 6GB.<br><br>**Default value**: Small. |
+| vmLogSize | **Small** or **Large** | Specify the log partition size. Small = 1GB, Large = 6GB.<br><br>**Default value**: Small. |
| vswitchName | Name of the virtual switch | Name of the virtual switch assigned to the EFLOW VM. |
| vswitchType | **Internal** or **External** | Type of the virtual switch assigned to the EFLOW VM. |
| ip4Address | IPv4 address in the range of the DHCP server scope | Static IPv4 address of the EFLOW VM. |
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| gpuPassthroughType | **DirectDeviceAssignment**, **ParaVirtualization**, or none (CPU only) | GPU passthrough type |
| gpuCount | Integer value between 1 and the number of the device's GPU cores | Number of GPU devices for the VM. <br><br>**Note**: If using ParaVirtualization, make sure to set gpuCount = 1 |
| customSsh | None | Determines whether user wants to use their custom OpenSSH.Client installation. If present, ssh.exe must be available to the EFLOW PSM |
+| sharedFoldersJsonPath | String | Path to the **Shared Folders** JSON configuration file. |
:::moniker-end
<!-- end iotedge-2020-11 -->
The **Get-EflowVmName** command returns the virtual machine's current hostname.
For more information, use the command `Get-Help Get-EflowVmName -full`.
+<!-- iotedge-2020-11 -->
+## Get-EflowVmSharedFolder
+
+The **Get-EflowVmSharedFolder** command returns the information about one or more Windows host OS folders shared with the EFLOW virtual machine.
+
+| Parameter | Accepted values | Comments |
+| | | -- |
+| sharedfolderRoot | String | Path to the Windows host OS shared root folder.|
+| hostFolderPath | String or List | Relative path or paths (to the root folder) of the Windows host OS shared folder or folders.|
+
+It returns a list of objects that contains three properties:
+- **hostFolderPath**: Relative path (to the parent root folder) of the folder shared with the EFLOW VM.
+- **readOnly**: Defines whether the shared folder is writable or read-only from the EFLOW virtual machine. Values: **false** or **true**.
+- **targetFolderOnGuest**: Folder path inside the EFLOW virtual machine where the Windows folder is mounted.
+
+For more information, use the command `Get-Help Get-EflowVmSharedFolder -full`.
+<!-- end iotedge-2020-11 -->
+
## Get-EflowVmTelemetryOption

The **Get-EflowVmTelemetryOption** command displays the status of the telemetry (either **Optional** or **Required**) inside the virtual machine.
The **Remove-EflowVmEndpoint** command removes an existing network endpoint atta
For more information, use the command `Get-Help Remove-EflowVmEndpoint -full`.
+<!-- iotedge-2020-11 -->
+## Remove-EflowVmSharedFolder
+
+The **Remove-EflowVmSharedFolder** command stops sharing the Windows host OS folder to the EFLOW virtual machine. This command takes two parameters.
+
+| Parameter | Accepted values | Comments |
+| | | -- |
+| sharedfolderRoot | String | Path to the Windows host OS shared root folder.|
+| hostFolderPath | String or List | Relative path or paths (to the root folder) of the Windows host OS shared folder or folders.|
+
+For more information, use the command `Get-Help Remove-EflowVmSharedFolder -full`.
+<!-- end iotedge-2020-11 -->
## Set-EflowVM
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| IoT Edge release | Available in EFLOW branch | Release date | Highlights |
|--|--|--|--|
| 1.4 | Continuous release (CR) <br> Long-term support (LTS) | TBA | |
-| 1.3 | Continuous release (CR) | TBA | |
-| 1.2 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | January 2022 | **Public Preview** |
+| 1.3 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.3.1.02092) | September 2022 | [Azure IoT Edge 1.3.0](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0)<br/> [CBL-Mariner 2.0](https://microsoft.github.io/CBL-Mariner/announcing-mariner-2.0/)<br/> [USB passthrough using USB-Over-IP](https://aka.ms/AzEFLOW-USBIP)<br/>[File/Folder sharing between Windows OS and the EFLOW VM](https://aka.ms/AzEFLOW-FolderSharing) |
+| 1.2 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | January 2022 | [Public Preview](https://techcommunity.microsoft.com/t5/internet-of-things-blog/azure-iot-edge-for-linux-on-windows-eflow-continuous-release/ba-p/3169590) |
| 1.1 | [Long-term support (LTS)](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2106.0) | June 2021 | [Long-term support plan and supported systems updates](support.md) | ## Next steps
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
IoT Hub device twin example:
"deviceProperties": { "manufacturer": "contoso", "model": "virtual-vacuum-v1",
- "interfaceId": "dtmi:azure:iot:deviceUpdateModel;1",
+ "interfaceId": "dtmi:azure:iot:deviceUpdate;1",
"aduVer": "DU;agent/0.8.0-rc1-public-preview", "doVer": "DU;lib/v0.6.0+20211001.174458.c8c4051,DU;agent/v0.6.0+20211001.174418.c8c4051" },
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
The Azure IoT service SDKs contain code to facilitate building applications that
| .NET | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices) |
| Java | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) |
| Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | [Reference](/javascript/api/azure-iothub/) |
-| Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Reference](/python/api/azure-iot-hub) |
+| Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-hub-python) | [Samples](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) | [Reference](/python/api/azure-iot-hub) |
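
As a quick orientation to the relocated Python service SDK, here's a minimal sketch (not from the article) that reads a device twin with the `azure-iot-hub` package; the connection string and device ID are hypothetical placeholders.

```python
# A minimal sketch using the azure-iot-hub service SDK to read a device twin.
# The connection string and device ID are hypothetical placeholders.
from azure.iot.hub import IoTHubRegistryManager

# Service connection string for your hub (for example, the iothubowner policy).
registry_manager = IoTHubRegistryManager.from_connection_string(
    "<iothub-service-connection-string>"
)

# Fetch the twin for a device and print a couple of its properties.
twin = registry_manager.get_twin("<device-id>")
print(twin.device_id, twin.status)
```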
## Azure IoT Hub management SDKs
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
result = iothub_job_manager.create_import_export_job(JobProperties(
## SDK samples

- [.NET SDK sample](https://aka.ms/iothubmsicsharpsample)
- [Java SDK sample](https://aka.ms/iothubmsijavasample)
-- [Python SDK sample](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples)
+- [Python SDK sample](https://github.com/Azure/azure-iot-hub-python/tree/main/samples)
- Node.js SDK samples: [bulk device import](https://aka.ms/iothubmsinodesampleimport), [bulk device export](https://aka.ms/iothubmsinodesampleexport)

## Next steps
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
In order to ensure a client/IoT Hub connection stays alive, both the service and
|Java | 230 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-java/blob/main/device/iot-device-client/src/main/java/com/microsoft/azure/sdk/iot/device/ClientOptions.java#L64) |
|C | 240 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/Iothub_sdk_options.md#mqtt-transport) |
|C# | 300 seconds* | [Yes](/dotnet/api/microsoft.azure.devices.client.transport.mqtt.mqtttransportsettings.keepaliveinseconds) |
-|Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L339) |
+|Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L343) |
> *The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds but in reality the SDK sends a ping request four times per keep-alive duration set. This means the SDK sends a keep-alive ping every 75 seconds.
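
If the Python default of 60 seconds doesn't suit your scenario, the keep-alive can be overridden when the client is created, as the configuration link in the table indicates. A minimal sketch, assuming the `azure-iot-device` v2 SDK and a placeholder connection string:

```python
# A minimal sketch: override the default 60-second MQTT keep-alive when
# creating a Python v2 device client. The connection string is a placeholder.
from azure.iot.device import IoTHubDeviceClient

client = IoTHubDeviceClient.create_from_connection_string(
    "<device-connection-string>",
    keep_alive=120,  # keep-alive ping interval, in seconds
)
client.connect()
# ... send telemetry, receive messages, and so on ...
client.shutdown()
```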
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-tls-support.md
For IoT Hubs not configured for TLS 1.2 enforcement, TLS 1.2 still works with th
* `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`
* `TLS_RSA_WITH_AES_256_CBC_SHA`
* `TLS_RSA_WITH_AES_128_CBC_SHA`
-* `TLS_RSA_WITH_3DES_EDE_CBC_SHA`
+* `TLS_RSA_WITH_3DES_EDE_CBC_SHA` **(This cipher will be deprecated on 10/01/2022 and will no longer be used for TLS handshakes)**
A client can suggest a list of higher cipher suites to use during `ClientHello`. However, some of them might not be supported by IoT Hub (for example, `ECDHE-ECDSA-AES256-GCM-SHA384`). In this case, IoT Hub will try to follow the preference of the client, but eventually negotiate down the cipher suite with `ServerHello`.
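
To observe this negotiation from the client side, you can open a TLS connection to your hub's endpoint and print what was agreed. A minimal sketch using Python's standard `ssl` module; the hostname is a hypothetical placeholder for your own hub:

```python
# A minimal sketch: connect to an IoT Hub endpoint on the MQTT TLS port (8883)
# and print the negotiated protocol version and cipher suite.
import socket
import ssl

hostname = "<your-hub>.azure-devices.net"  # hypothetical placeholder
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # require TLS 1.2 or later

with socket.create_connection((hostname, 8883)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())  # for example, 'TLSv1.2'
        print(tls.cipher())   # (cipher name, protocol, secret bits)
```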
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/best-practices.md
tags: azure-key-vault
Previously updated : 01/29/2021 Last updated : 09/04/2022 # Customer intent: As a developer who's using Key Vault, I want to know the best practices so I can implement them.
Make sure you take regular backups of your vault. Backups should be performed wh
- Turn on [soft-delete](soft-delete-overview.md).
- Turn on purge protection if you want to guard against force deletion of the secrets and key vault even after soft-delete is turned on.
+## Multitenant solutions and Key Vault
+
+A multitenant solution is built on an architecture where components are used to serve multiple customers or tenants. Multitenant solutions are often used to support software as a service (SaaS) solutions. If you're building a multitenant solution that includes Key Vault, review [Multitenancy and Azure Key Vault](/azure/architecture/guide/multitenant/service/key-vault).
+
## Learn more

- [Best practices for secrets management in Key Vault](../secrets/secrets-best-practices.md)
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
description: Learn how virtual network service endpoints for Azure Key Vault all
Previously updated : 01/02/2019 Last updated : 09/06/2022
Here's a list of trusted services that are allowed to access a key vault if the
| Azure Database for PostgreSQL Single server | [Data encryption for Azure Database for PostgreSQL Single server](../../postgresql/howto-data-encryption-cli.md) |
| Azure Databricks | [Fast, easy, and collaborative Apache Spark-based analytics service](/azure/databricks/scenarios/what-is-azure-databricks) |
| Azure Disk Encryption volume encryption service | Allow access to BitLocker Key (Windows VM) or DM Passphrase (Linux VM), and Key Encryption Key, during virtual machine deployment. This enables [Azure Disk Encryption](../../security/fundamentals/encryption-overview.md). |
+| Azure Disk Storage | When configured with a Disk Encryption Set (DES). For more information, see [Server-side encryption of Azure Disk Storage using customer-managed keys](../../virtual-machines/disk-encryption.md#customer-managed-keys).|
| Azure Event Hubs | [Allow access to a key vault for customer-managed keys scenario](../../event-hubs/configure-customer-managed-key.md) |
| Azure Front Door Classic | [Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-key-vault-and-certificate) |
| Azure Front Door Standard/Premium | [Using Key Vault certificates for HTTPS](../../frontdoor/standard-premium/how-to-configure-https-custom-domain.md#prepare-your-key-vault-and-certificate) |
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
Previously updated : 06/20/2022 Last updated : 08/30/2022 # Service limits in Azure Load Testing Preview
-This section lists basic quotas and limits for Azure Load Testing Preview.
+Azure uses limits and quotas to prevent budget overruns due to fraud, and to honor Azure capacity constraints. Consider these limits as you scale for production workloads. In this article, you learn about:
+
+- Default limits on Azure resources related to Azure Load Testing Preview.
+- Requesting quota increases.
> [!IMPORTANT]
> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Limits
+## Default resource quotas
+
+In this section, you learn about the default and maximum quota limits.
+
+### Test engine instances
+
+The following limits apply on a per-region, per-subscription basis.
+
+| Resource | Limit |
+|||
+| Concurrent engine instances | 100 |
+| Engine instances per test run | 45 |
+
+### Test runs
-|Resource |Limit |
+The following limits apply on a per-region, per-subscription basis.
+
+| Resource | Limit |
+|||
+| Concurrent test runs | 25 |
+| Test duration | 3 hours |
+
+### Data retention
+
+When you run a load test, Azure Load Testing stores both client-side and [server-side metrics](./how-to-monitor-server-side-metrics.md) for the test run. Azure Load Testing has a per-test-run limit on the retention period for this data:
+
+| Resource | Limit |
|||
-|Maximum concurrent engine instances that can be utilized per region per subscription | 100 |
-|Maximum concurrent test runs per region per subscription | 25 |
+| Server-side metrics | 90 days |
+| Client-side metrics | 365 days |
+
+The test run associated with the load test isn't removed.
-## Increase quotas
+## Request quota increases
-You can increase the default limits and quotas by requesting the increase through an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+To raise the limit or quota above the default limit, [open an online customer support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) at no charge.
1. Select **create a support ticket**.
machine-learning Concept Causal Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-causal-inference.md
Title: Make data-driven policies and influence decision making
+ Title: Make data-driven policies and influence decision-making
-description: Make data-driven decisions and policies with the Responsible AI dashboard's integration of the Causal Analysis tool EconML.
+description: Make data-driven decisions and policies with the Responsible AI dashboard's integration of the causal analysis tool EconML.
Last updated 08/17/2022
-# Make data-driven policies and influence decision making (preview)
+# Make data-driven policies and influence decision-making (preview)
-While machine learning models are powerful in identifying patterns in data and making predictions, they offer little support for estimating how the real-world outcome changes in the presence of an intervention. Practitioners have become increasingly focused on using historical data to inform their future decisions and business interventions. For example, how would the revenue be affected if a corporation pursues a new pricing strategy? Would a new medication improve a patientΓÇÖs condition, all else equal?
+Machine learning models are powerful in identifying patterns in data and making predictions. But they offer little support for estimating how the real-world outcome changes in the presence of an intervention.
+Practitioners have become increasingly focused on using historical data to inform their future decisions and business interventions. For example, how would the revenue be affected if a corporation pursued a new pricing strategy? Would a new medication improve a patient's condition, all else equal?
-The Causal Inference component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) addresses these questions by estimating the effect of a feature on an outcome of interest on average, across a population or a cohort, and on an individual level. It also helps to construct promising interventions by simulating different feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow decision-makers to apply new policies and affect real-world change.
+The *causal inference* component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) addresses these questions by estimating the effect of a feature on an outcome of interest on average, across a population or a cohort, and on an individual level. It also helps construct promising interventions by simulating feature responses to various interventions and creating rules to determine which population cohorts would benefit from an intervention. Collectively, these functionalities allow decision-makers to apply new policies and effect real-world change.
-The capabilities of this component are founded by the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via [double machine learning](https://econml.azurewebsites.net/spec/estimation/dml.html) technique.
+The capabilities of this component come from the [EconML](https://github.com/Microsoft/EconML) package. It estimates heterogeneous treatment effects from observational data via the [double machine learning](https://econml.azurewebsites.net/spec/estimation/dml.html) technique.
-Use Causal Inference when you need to:
+Use causal inference when you need to:
- Identify the features that have the most direct effect on your outcome of interest.
- Decide what overall treatment policy to take to maximize real-world impact on an outcome of interest.
- Understand how individuals with certain feature values would respond to a particular treatment policy.

## How are causal inference insights generated?

>[!NOTE]
-> Only historic data is required to generate causal insights. The causal effects computed based on the treatment features are purely a data property. Hence, a trained model is optional when computing the causal effects.
+> Only historical data is required to generate causal insights. The causal effects computed based on the treatment features are purely a data property. So, a trained model is optional when you're computing the causal effects.
+
+Double machine learning is a method for estimating heterogeneous treatment effects when all potential confounders/controls (factors that simultaneously had a direct effect on the treatment decision in the collected data and the observed outcome) are observed but either of the following problems exists:
-Double Machine Learning is a method for estimating (heterogeneous) treatment effects when all potential confounders/controls (factors that simultaneously had a direct effect on the treatment decision in the collected data and the observed outcome) are observed but are either too many (high-dimensional) for classical statistical approaches to be applicable or their effect on the treatment and outcome can't be satisfactorily modeled by parametric functions (non-parametric). Both latter problems can be addressed via machine learning techniques (to see an example, check out [Chernozhukov2016](https://econml.azurewebsites.net/spec/references.html#chernozhukov2016)).
+- There are too many for classical statistical approaches to be applicable. That is, they're *high-dimensional*.
+- Their effect on the treatment and outcome can't be satisfactorily modeled by parametric functions. That is, they're *non-parametric*.
-The method reduces the problem by first estimating two predictive tasks:
+You can use machine learning techniques to address both problems. For an example, see [Chernozhukov2016](https://econml.azurewebsites.net/spec/references.html#chernozhukov2016).
+
+Double machine learning reduces the problem by first estimating two predictive tasks:
- Predicting the outcome from the controls
- Predicting the treatment from the controls
-Then the method combines these two predictive models in a final stage estimation to create a model of the heterogeneous treatment effect. The approach allows for arbitrary machine learning algorithms to be used for the two predictive tasks while maintaining many favorable statistical properties related to the final model (for example, small mean squared error, asymptotic normality, and construction of confidence intervals).
+Then the method combines these two predictive models in a final-stage estimation to create a model of the heterogeneous treatment effect. This approach allows for arbitrary machine learning algorithms to be used for the two predictive tasks while maintaining many favorable statistical properties related to the final model. These properties include small mean squared error, asymptotic normality, and construction of confidence intervals.
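
To make the two-stage recipe concrete, here's a minimal sketch that uses the [EconML](https://github.com/Microsoft/EconML) package the component is built on. The synthetic data, column layout, and first-stage model choices are illustrative assumptions, not part of the dashboard itself:

```python
# A minimal double machine learning sketch with EconML (hypothetical data).
import numpy as np
from econml.dml import LinearDML
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
W = rng.normal(size=(n, 5))                            # observed confounders/controls
X = rng.normal(size=(n, 2))                            # features driving effect heterogeneity
T = (W[:, 0] + rng.normal(size=n) > 0).astype(int)     # binary treatment
y = 2.0 * T * X[:, 0] + W[:, 1] + rng.normal(size=n)   # outcome

est = LinearDML(
    model_y=RandomForestRegressor(),    # first stage: predict the outcome from the controls
    model_t=RandomForestClassifier(),   # first stage: predict the treatment from the controls
    discrete_treatment=True,
)
est.fit(y, T, X=X, W=W)                 # final stage: model the heterogeneous effect

cate = est.effect(X)                            # per-point treatment effects
lb, ub = est.effect_interval(X, alpha=0.05)     # confidence intervals from the final model
```

The two `RandomForest` models stand in for the two predictive tasks; any scikit-learn-compatible learners could be substituted while keeping the favorable statistical properties of the final stage.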
## What other tools does Microsoft provide for causal inference?
-[Project Azua](https://www.microsoft.com/research/project/project_azua/) provides a novel framework focusing on end-to-end causal inference. AzuaΓÇÖs technology DECI (deep end-to-end causal inference) is a single model that can simultaneously do causal discovery and causal inference. We only require the user to provide data, and the model can output the causal relationships among all different variables. By itself, this can provide insights into the data and enables metrics such as individual treatment effect (ITE), average treatment effect (ATE), and conditional average treatment effect (CATE) to be calculated, which can then be used to make optimal decisions. The framework is scalable for large data, both in terms of the number of variables and the number of data points; it can also handle missing data entries with mixed statistical types.
+- [Project Azua](https://www.microsoft.com/research/project/project_azua/) provides a novel framework that focuses on end-to-end causal inference.
+
+ Azua's DECI (deep end-to-end causal inference) technology is a single model that can simultaneously do causal discovery and causal inference. The user provides data, and the model can output the causal relationships among all variables.
+
+ By itself, this approach can provide insights into the data. It enables the calculation of metrics such as individual treatment effect (ITE), average treatment effect (ATE), and conditional average treatment effect (CATE). You can then use these calculations to make optimal decisions.
+
+ The framework is scalable for large data, in terms of both the number of variables and the number of data points. It can also handle missing data entries with mixed statistical types.
+
+- [EconML](https://www.microsoft.com/research/project/econml/) powers the back end of the Responsible AI dashboard's causal inference component. It's a Python package that applies machine learning techniques to estimate individualized causal responses from observational or experimental data.
+
+ The suite of estimation methods in EconML represents the latest advances in causal machine learning. By incorporating individual machine learning steps into interpretable causal models, these methods improve the reliability of what-if predictions and make causal analysis quicker and easier for a broad set of users.
+
+- [DoWhy](https://py-why.github.io/dowhy/) is a Python library that aims to spark causal thinking and analysis. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible.
-[EconML](https://www.microsoft.com/research/project/econml/) (powering the backend of the Responsible AI dashboard's causal inference component) is a Python package that applies the power of machine learning techniques to estimate individualized causal responses from observational or experimental data. The suite of estimation methods provided in EconML represents the latest advances in causal machine learning. By incorporating individual machine learning steps into interpretable causal models, these methods improve the reliability of what-if predictions and make causal analysis quicker and easier for a broad set of users.
+ The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method. It makes inference more robust and accessible to non-experts.
-[DoWhy](https://py-why.github.io/dowhy/) is a Python library that aims to spark causal thinking and analysis. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, front-door, instrumental variable, and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
+ DoWhy supports estimation of the average causal effect for back-door, front-door, instrumental variable, and other identification methods. It also supports estimation of the CATE through an integration with the EconML library.
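
Here's a minimal sketch of DoWhy's four-step interface (model, identify, estimate, refute). The synthetic dataset and column names are hypothetical:

```python
# A minimal DoWhy sketch: model causal assumptions, identify, estimate, refute.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 1000
w = rng.normal(size=n)
treatment = (w + rng.normal(size=n) > 0).astype(int)
outcome = 2.0 * treatment + w + rng.normal(size=n)
df = pd.DataFrame({"treatment": treatment, "outcome": outcome, "w": w})

# 1. Model the causal assumptions explicitly.
model = CausalModel(data=df, treatment="treatment", outcome="outcome",
                    common_causes=["w"])

# 2. Identify the target estimand under those assumptions.
estimand = model.identify_effect()

# 3. Estimate the average causal effect (back-door adjustment here).
estimate = model.estimate_effect(estimand,
                                 method_name="backdoor.linear_regression")

# 4. Refute: test how robust the estimate is to a random common cause.
refutation = model.refute_estimate(estimand, estimate,
                                   method_name="random_common_cause")
print(estimate.value, refutation)
```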
## Next steps

-- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
- Explore the [supported causal inference visualizations](how-to-responsible-ai-dashboard.md#causal-analysis) of the Responsible AI dashboard.
- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Counterfactual Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-counterfactual-analysis.md
Title: Counterfactuals analysis and what-if
-description: Generate diverse counterfactual examples with feature perturbations to see minimal changes required to achieve desired prediction with the Responsible AI dashboard's integration of DiceML.
+description: Generate diverse counterfactual examples with feature perturbations to see minimal changes required to achieve desired prediction with the Responsible AI dashboard's integration of DiCE machine learning.
# Counterfactuals analysis and what-if (preview)
-What-if counterfactuals address the question of "what would the model predict if the action input is changed", enabling understanding and debugging of a machine learning model in terms of how it reacts to input (feature) changes. Compared with approximating a machine learning model or ranking features by their predictive importance (which standard interpretability techniques do), counterfactual analysis "interrogates" a model to determine what changes to a particular datapoint would flip the model decision. Such an analysis helps in disentangling the impact of different correlated features in isolation or for acquiring a more nuanced understanding of how much of a feature change is needed to see a model decision flip for classification models and decision change for regression models.
+What-if counterfactuals address the question of what the model would predict if you changed the action input. They enable understanding and debugging of a machine learning model in terms of how it reacts to input (feature) changes.
-The Counterfactual Analysis and what-if component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) consists of two functionalities:
+Standard interpretability techniques approximate a machine learning model or rank features by their predictive importance. By contrast, counterfactual analysis "interrogates" a model to determine what changes to a particular data point would flip the model decision.
-- Generating a set of examples with minimal changes to a given point such that they change the model's prediction (showing the closest data points with opposite model predictions)
-- Enabling users to generate their own what-if perturbations to understand how the model reacts to features' changes.
-
-One of the top differentiators of the Responsible AI dashboard's counterfactual analysis component is the fact that you can identify which features to vary and their permissible ranges for valid and logical counterfactual examples.
+Such an analysis helps in disentangling the impact of correlated features in isolation. It also helps you get a more nuanced understanding of how much of a feature change is needed to see a model decision flip for classification models and a decision change for regression models.
+The *counterfactual analysis and what-if* component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) has two functions:
+- Generate a set of examples with minimal changes to a particular point such that they change the model's prediction (showing the closest data points with opposite model predictions).
+- Enable users to generate their own what-if perturbations to understand how the model reacts to feature changes.
-The capabilities of this component are founded by the [DiCE](https://github.com/interpretml/DiCE) package.
+One of the top differentiators of the Responsible AI dashboard's counterfactual analysis component is the fact that you can identify which features to vary and their permissible ranges for valid and logical counterfactual examples.
+The capabilities of this component come from the [DiCE](https://github.com/interpretml/DiCE) package.
-Use What-If Counterfactuals when you need to:
+Use what-if counterfactuals when you need to:
-- Examine fairness and reliability criteria as a decision evaluator (by perturbing sensitive attributes such as gender, ethnicity, etc., and observing whether model predictions change).
+- Examine fairness and reliability criteria as a decision evaluator by perturbing sensitive attributes such as gender and ethnicity, and then observing whether model predictions change.
- Debug specific input instances in depth.
-- Provide solutions to end users and determine what they can do to get a desirable outcome from the model next time.
+- Provide solutions to users and determine what they can do to get a desirable outcome from the model.
## How are counterfactual examples generated?

To generate counterfactuals, DiCE implements a few model-agnostic techniques. These methods apply to any opaque-box classifier or regressor. They're based on sampling nearby points to an input point, while optimizing a loss function based on proximity (and optionally, sparsity, diversity, and feasibility). Currently supported methods are:

-- [Randomized Search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#1.-Independent-random-sampling-of-features): Samples points randomly near the given query point and returns counterfactuals as those points whose predicted label is the desired class.
-- [Genetic Search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#2.-Genetic-Algorithm): Samples points using a genetic algorithm, given the combined objective of optimizing proximity to the given query point, changing as few features as possible, and diversity among the counterfactuals generated.
-- [KD Tree Search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#3.-Querying-a-KD-Tree) (For counterfactuals from a given training dataset): This algorithm returns counterfactuals from the training dataset. It constructs a KD tree over the training data points based on a distance function and then returns the closest points to a given query point that yields the desired predicted label.
+- [Randomized search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#1.-Independent-random-sampling-of-features): This method samples points randomly near a query point and returns counterfactuals as points whose predicted label is the desired class.
+- [Genetic search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#2.-Genetic-Algorithm): This method samples points by using a genetic algorithm, given the combined objective of optimizing proximity to the query point, changing as few features as possible, and seeking diversity among the generated counterfactuals.
+- [KD tree search](http://interpret.ml/DiCE/notebooks/DiCE_model_agnostic_CFs.html#3.-Querying-a-KD-Tree): This algorithm returns counterfactuals from the training dataset. It constructs a KD tree over the training data points based on a distance function and then returns the closest points to a particular query point that yields the desired predicted label.
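
Here's a minimal sketch of generating counterfactuals directly with the [DiCE](https://github.com/interpretml/DiCE) package, including the `features_to_vary` and `permitted_range` controls mentioned earlier. The model, data, and column names are illustrative assumptions:

```python
# A minimal DiCE sketch using the randomized-search method (hypothetical data).
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data with a binary outcome.
df = pd.DataFrame({
    "age": [25, 40, 35, 50, 23, 60],
    "hours_per_week": [40, 45, 60, 30, 20, 50],
    "approved": [0, 1, 1, 0, 0, 1],
})
model = RandomForestClassifier().fit(df[["age", "hours_per_week"]], df["approved"])

d = dice_ml.Data(dataframe=df, continuous_features=["age", "hours_per_week"],
                 outcome_name="approved")
m = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(d, m, method="random")   # randomized search

# Ask for three counterfactuals that flip the prediction, varying only
# hours_per_week within a permitted range.
query = df[["age", "hours_per_week"]].iloc[[0]]
cfs = explainer.generate_counterfactuals(
    query, total_CFs=3, desired_class="opposite",
    features_to_vary=["hours_per_week"],
    permitted_range={"hours_per_week": [20, 80]},
)
cfs.visualize_as_dataframe()
```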
## Next steps
machine-learning Concept Data Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-analysis.md
Title: Understand your datasets
-description: Perform exploratory data analysis to understand feature biases and imbalances with the Responsible AI dashboard's Data Explorer.
+description: Perform exploratory data analysis to understand feature biases and imbalances by using the Responsible AI dashboard's data explorer.
# Understand your datasets (preview)
-Machine learning models "learn" from historical decisions and actions captured in training data. As a result, their performance in real-world scenarios is heavily influenced by the data they're trained on. When feature distribution in a dataset is skewed, it can cause a model to incorrectly predict data points belonging to an underrepresented group or to be optimized along an inappropriate metric. For example, while training a housing price prediction AI, the training set was representing 75% of newer houses that have less than median prices. As a result, it was much less accurate in successfully identifying more expensive historic houses. The fix was to add older and expensive houses to the training data and augment the features to include insights about the historic value of the house. Upon incorporating that data augmentation, results improved.
+Machine learning models "learn" from historical decisions and actions captured in training data. As a result, their performance in real-world scenarios is heavily influenced by the data they're trained on. When feature distribution in a dataset is skewed, it can cause a model to incorrectly predict data points that belong to an underrepresented group or to be optimized along an inappropriate metric.
-The Data Explorer component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. This enables you to identify issues of over- and under-representation and to see how data is clustered in the dataset. Data visualizations consist of aggregate plots or individual data points.
+For example, consider training an AI system to predict house prices. Newer houses with less-than-median prices made up 75 percent of the training set, so the model was much less accurate in identifying more expensive historic houses. The fix was to add older, expensive houses to the training data and to augment the features with insights about historical value. That data augmentation improved results.
-## When to use data explorer?
+The data explorer component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. It helps you identify issues of overrepresentation and underrepresentation and see how data is clustered in the dataset. Data visualizations consist of aggregate plots or individual data points.
-Use Data Explorer when you need to:
+## When to use the data explorer
+
+Use the data explorer when you need to:
- Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts). - Understand the distribution of your dataset across different cohorts and feature groups.-- Determine whether your findings related to fairness, error analysis and causality (derived from other dashboard components) are a result of your datasetΓÇÖs distribution.-- Decide in which areas to collect more data to mitigate errors arising from representation issues, label noise, feature noise, label bias, etc.
+- Determine whether your findings related to fairness, error analysis, and causality (derived from other dashboard components) are a result of your dataset's distribution.
+- Decide in which areas to collect more data to mitigate errors that come from representation issues, label noise, feature noise, label bias, and similar factors.
## Next steps

-- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
- Explore the [supported data explorer visualizations](how-to-responsible-ai-dashboard.md#data-explorer) of the Responsible AI dashboard.
-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Error Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-error-analysis.md
Title: Assess errors in ML models
+ Title: Assess errors in machine learning models
-description: Assess model error distributions in different cohorts of your dataset with the Responsible AI dashboard's integration of Error Analysis.
+description: Assess model error distributions in different cohorts of your dataset with the Responsible AI dashboard's integration of error analysis.
Last updated 08/17/2022
-# Assess errors in ML models (preview)
+# Assess errors in machine learning models (preview)
-One of the most apparent challenges with current model debugging practices is using aggregate metrics to score models on a benchmark dataset. Model accuracy may not be uniform across subgroups of data, and there might exist input cohorts for which the model fails more often. The direct consequences of these failures are a lack of reliability and safety, appearance of fairness issues, and a loss of trust in machine learning altogether.
+One of the biggest challenges with current model-debugging practices is using aggregate metrics to score models on a benchmark dataset. Model accuracy might not be uniform across subgroups of data, and there might be input cohorts for which the model fails more often. The direct consequences of these failures are a lack of reliability and safety, the appearance of fairness issues, and a loss of trust in machine learning altogether.
-Error Analysis moves away from aggregate accuracy metrics, exposes the distribution of errors to developers in a transparent way, and enables them to identify & diagnose errors efficiently.
+Error analysis moves away from aggregate accuracy metrics. It exposes the distribution of errors to developers in a transparent way, and it enables them to identify and diagnose errors efficiently.
-The Error Analysis component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) provides machine learning practitioners with a deeper understanding of model failure distribution and assists them with quickly identifying erroneous cohorts of data. It contributes to the "identify" stage of the model lifecycle workflow through a decision tree that reveals cohorts with high error rates and a heatmap that visualizes how input features impact the error rate across cohorts. Discrepancies in error might occur when the system underperforms for specific demographic groups or infrequently observed input cohorts in the training data.
+The error analysis component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) provides machine learning practitioners with a deeper understanding of model failure distribution and helps them quickly identify erroneous cohorts of data. This component identifies the cohorts of data with a higher error rate versus the overall benchmark error rate. It contributes to the identification stage of the model lifecycle workflow through:
-The capabilities of this component are founded by [Error Analysis](https://erroranalysis.ai/)) package, generating model error profiles.
+- A decision tree that reveals cohorts with high error rates.
+- A heatmap that visualizes how input features affect the error rate across cohorts.
-Use Error Analysis when you need to:
+Discrepancies in errors might occur when the system underperforms for specific demographic groups or infrequently observed input cohorts in the training data.
-- Gain a deep understanding of how model failures are distributed across a given dataset and across several input and feature dimensions.
-- Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps.
+The capabilities of this component come from the [Error Analysis](https://erroranalysis.ai/) package, which generates model error profiles.
-## How are error analyses generated?
+Use error analysis when you need to:
-Error Analysis identifies the cohorts of data with a higher error rate versus the overall benchmark error rate. The dashboard allows for error exploration by using either a decision tree or a heatmap guided by errors.
+- Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions.
+- Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps.
## Error tree
-Often, error patterns may be complex and involve more than one or two features. Therefore, it may be difficult for developers to explore all possible combinations of features to discover hidden data pockets with critical failure. To alleviate the burden, the binary tree visualization automatically partitions the benchmark data into interpretable subgroups, which have unexpectedly high or low error rates. In other words, the tree uses the input features to maximally separate model error from success. For each node defining a data subgroup, users can investigate the following information:
+Often, error patterns are complex and involve more than one or two features. Developers might have difficulty exploring all possible combinations of features to discover hidden data pockets with critical failures.
+
+To alleviate the burden, the binary tree visualization automatically partitions the benchmark data into interpretable subgroups that have unexpectedly high or low error rates. In other words, the tree uses the input features to maximally separate model error from success. For each node that defines a data subgroup, users can investigate the following information:
+
+- **Error rate**: A portion of instances in the node for which the model is incorrect. It's shown through the intensity of the red color.
+- **Error coverage**: A portion of all errors that fall into the node. It's shown through the fill rate of the node.
+- **Data representation**: The number of instances in each node of the error tree. It's shown through the thickness of the incoming edge to the node, along with the total number of instances in the node.
- **Error rate**: a portion of instances in the node for which the model is incorrect. This is shown through the intensity of the red color.
-- **Error coverage**: a portion of all errors that fall into the node. This is shown through the fill rate of the node.
-- **Data representation**: number of instances in each node of the error tree. This is shown through the thickness of the incoming edge to the node along with the actual total number of instances in the node.
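
For readers who want to reproduce this view outside the dashboard, here's a hedged sketch that uses the `ErrorAnalysisDashboard` widget from the `raiwidgets` package, which wraps the same Error Analysis engine. The model and dataset are illustrative, and the exact constructor arguments may vary by package version:

```python
# A sketch of exploring error cohorts interactively (hypothetical model/data).
from raiwidgets import ErrorAnalysisDashboard
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier().fit(X_train, y_train)

# Launches an interactive view that includes the error tree and heatmap.
ErrorAnalysisDashboard(model=model, dataset=X_test, true_y=y_test,
                       features=list(data.feature_names))
```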
+## Error heatmap
-## Error Heatmap
+The view slices the data based on a one-dimensional or two-dimensional grid of input features. Users can choose the input features of interest for analysis.
-The view slices the data based on a one- or two-dimensional grid of input features. Users can choose the input features of interest for analysis. The heatmap visualizes cells with higher error with a darker red color to bring the user's attention to regions with high error discrepancy. This is beneficial especially when the error themes are different in different partitions, which happen frequently in practice. In this error identification view, the analysis is highly guided by the users and their knowledge or hypotheses of what features might be most important for understanding failure.
+The heatmap visualizes cells with high error by using a darker red color to bring the user's attention to those regions. This feature is especially beneficial when the error themes are different across partitions, which happens often in practice. In this error identification view, the analysis is highly guided by the users and their knowledge or hypotheses of what features might be most important for understanding failures.
## Next steps

-- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
-- Explore the [supported Error Analysis visualizations](how-to-responsible-ai-dashboard.md#error-analysis).
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Explore the [supported error analysis visualizations](how-to-responsible-ai-dashboard.md#error-analysis).
- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-fairness-ml.md
# Model performance and fairness (preview)
-This article describes methods you can use for understanding your model performance and fairness in Azure Machine Learning.
+This article describes methods that you can use to understand your model performance and fairness in Azure Machine Learning.
## What is machine learning fairness?
-Artificial intelligence and machine learning systems can display unfair behavior. One way to define unfair behavior is by its harm, or impact on people. There are many types of harm that AI systems can give rise to. To learn more, [NeurIPS 2017 keynote by Kate Crawford](https://www.youtube.com/watch?v=fMym_BKWQzk).
+Artificial intelligence and machine learning systems can display unfair behavior. One way to define unfair behavior is by its harm, or its impact on people. AI systems can give rise to many types of harm. To learn more, see the [NeurIPS 2017 keynote by Kate Crawford](https://www.youtube.com/watch?v=fMym_BKWQzk).
Two common types of AI-caused harms are:

-- Harm of allocation: An AI system extends or withholds opportunities, resources, or information for certain groups. Examples include hiring, school admissions, and lending where a model might be much better at picking good candidates among a specific group of people than among other groups.
+- **Harm of allocation**: An AI system extends or withholds opportunities, resources, or information for certain groups. Examples include hiring, school admissions, and lending, where a model might be better at picking good candidates among a specific group of people than among other groups.
-- Harm of quality-of-service: An AI system doesn't work as well for one group of people as it does for another. As an example, a voice recognition system might fail to work as well for women as it does for men.
+- **Harm of quality-of-service**: An AI system doesn't work as well for one group of people as it does for another. For example, a voice recognition system might fail to work as well for women as it does for men.
-To reduce unfair behavior in AI systems, you have to assess and mitigate these harms. The model overview component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the "identify" stage of the model lifecycle by generating various model performance metrics for your entire dataset, your identified cohorts of data, and across subgroups identified in terms of **sensitive features** or sensitive attributes.
+To reduce unfair behavior in AI systems, you have to assess and mitigate these harms. The *model overview* component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the identification stage of the model lifecycle by generating model performance metrics for your entire dataset and your identified cohorts of data. It generates these metrics across subgroups identified in terms of sensitive features or sensitive attributes.
>[!NOTE]
-> Fairness is a socio-technical challenge. Many aspects of fairness, such as justice and due process, are not captured in quantitative fairness metrics. Also, many quantitative fairness metrics can't all be satisfied simultaneously. The goal of the Fairlearn open-source package is to enable humans to assess the different impact and mitigation strategies. Ultimately, it is up to the human users building artificial intelligence and machine learning models to make trade-offs that are appropriate to their scenario.
+> Fairness is a socio-technical challenge. Quantitative fairness metrics don't capture many aspects of fairness, such as justice and due process. Also, many quantitative fairness metrics can't all be satisfied simultaneously.
+>
+> The goal of the Fairlearn open-source package is to enable humans to assess the impact and mitigation strategies. Ultimately, it's up to the humans who build AI and machine learning models to make trade-offs that are appropriate for their scenarios.
-In this component of the Responsible AI dashboard, fairness is conceptualized through an approach known as **group fairness**, which asks: Which groups of individuals are at risk for experiencing harm? The term **sensitive features** suggests that the system designer should be sensitive to these features when assessing group fairness.
+In this component of the Responsible AI dashboard, fairness is conceptualized through an approach known as *group fairness*. This approach asks: "Which groups of individuals are at risk for experiencing harm?" The term *sensitive features* suggests that the system designer should be sensitive to these features when assessing group fairness.
-During the assessment phase, fairness is quantified through disparity metrics. **Disparity metrics** can evaluate and compare model behavior across different groups either as ratios or as differences. The Responsible AI dashboard supports two classes of disparity metrics:
+During the assessment phase, fairness is quantified through *disparity metrics*. These metrics can evaluate and compare model behavior across groups either as ratios or as differences. The Responsible AI dashboard supports two classes of disparity metrics:
-- Disparity in model performance: These sets of metrics calculate the disparity (difference) in the values of the selected performance metric across different subgroups of data. Some examples include:
+- **Disparity in model performance**: These sets of metrics calculate the disparity (difference) in the values of the selected performance metric across subgroups of data. Here are a few examples:
- - disparity in accuracy rate
- - disparity in error rate
- - disparity in precision
- - disparity in recall
- - disparity in MAE
- - many others
+ - Disparity in accuracy rate
+ - Disparity in error rate
+ - Disparity in precision
+ - Disparity in recall
+ - Disparity in mean absolute error (MAE)
-- Disparity in selection rate: This metric contains the difference in selection rate (favorable prediction) among different subgroups. An example of this is disparity in loan approval rate. Selection rate means the fraction of data points in each class classified as 1 (in binary classification) or distribution of prediction values (in regression).
+- **Disparity in selection rate**: This metric contains the difference in selection rate (favorable prediction) among subgroups. An example of this is disparity in loan approval rate. Selection rate means the fraction of data points in each class classified as 1 (in binary classification) or distribution of prediction values (in regression).
-The fairness assessment capabilities of this component are founded by the [Fairlearn](https://fairlearn.org/) package, providing a collection of model fairness assessment metrics and unfairness mitigation algorithms.
+The fairness assessment capabilities of this component come from the [Fairlearn](https://fairlearn.org/) package. Fairlearn provides a collection of model fairness assessment metrics and unfairness mitigation algorithms.
>[!NOTE]
-> A fairness assessment is not a purely technical exercise. The Fairlearn open-source package can help you assess the fairness of a model, but it will not perform the assessment for you. The Fairlearn open-source package helps identify quantitative metrics to assess fairness, but developers must also perform a qualitative analysis to evaluate the fairness of their own models. The sensitive features noted above is an example of this kind of qualitative analysis.
+> A fairness assessment is not a purely technical exercise. The Fairlearn open-source package can identify quantitative metrics to help you assess the fairness of a model, but it won't perform the assessment for you. You must perform a qualitative analysis to evaluate the fairness of your own models. The sensitive features noted earlier are an example of this kind of qualitative analysis.
-## Mitigate unfairness in machine learning models
+## Parity constraints for mitigating unfairness
-Upon understanding your model's fairness issues, you can use [Fairlearn](https://fairlearn.org/)'s mitigation algorithms to mitigate your observed fairness issues.
+After you understand your model's fairness issues, you can use the mitigation algorithms in the [Fairlearn](https://fairlearn.org/) open-source package to mitigate those issues. These algorithms support a set of constraints on the predictor's behavior called *parity constraints* or criteria.
-The Fairlearn open-source package includes various unfairness mitigation algorithms. These algorithms support a set of constraints on the predictor's behavior called **parity constraints** or criteria. Parity constraints require some aspects of the predictor behavior to be comparable across the groups that sensitive features define (for example, different races). The mitigation algorithms in the Fairlearn open-source package use such parity constraints to mitigate the observed fairness issues.
+Parity constraints require some aspects of the predictor's behavior to be comparable across the groups that sensitive features define (for example, different races). The mitigation algorithms in the Fairlearn open-source package use such parity constraints to mitigate the observed fairness issues.
>[!NOTE]
-> Mitigating unfairness in a model means reducing the unfairness, but this technical mitigation cannot eliminate this unfairness completely. The unfairness mitigation algorithms in the Fairlearn open-source package can provide suggested mitigation strategies to help reduce unfairness in a machine learning model, but they are not solutions to eliminate unfairness completely. There may be other parity constraints or criteria that should be considered for each particular developer's machine learning model. Developers using Azure Machine Learning must determine for themselves if the mitigation sufficiently eliminates any unfairness in their intended use and deployment of machine learning models.
+> The unfairness mitigation algorithms in the Fairlearn open-source package can provide suggested mitigation strategies to reduce unfairness in a machine learning model, but those strategies don't eliminate unfairness. Developers might need to consider other parity constraints or criteria for their machine learning models. Developers who use Azure Machine Learning must determine for themselves if the mitigation sufficiently reduces unfairness in their intended use and deployment of machine learning models.
-The Fairlearn open-source package supports the following types of parity constraints:
+The Fairlearn package supports the following types of parity constraints:
|Parity constraint | Purpose |Machine learning task |
||||
-|Demographic parity | Mitigate allocation harms | Binary classification, Regression |
+|Demographic parity | Mitigate allocation harms | Binary classification, regression |
|Equalized odds | Diagnose allocation and quality-of-service harms | Binary classification | |Equal opportunity | Diagnose allocation and quality-of-service harms | Binary classification | |Bounded group loss | Mitigate quality-of-service harms | Regression |
-### Mitigation algorithms
+## Mitigation algorithms
-The Fairlearn open-source package provides postprocessing and reduction unfairness mitigation algorithms:
+The Fairlearn open-source package provides two types of unfairness mitigation algorithms:
-- Reduction: These algorithms take a standard black-box machine learning estimator (for example, a LightGBM model) and generate a set of retrained models using a sequence of reweighted training datasets. For example, applicants of a certain gender might be up-weighted or down-weighted to retrain models and reduce disparities across different gender groups. Users can then pick a model that provides the best trade-off between accuracy (or other performance metric) and disparity, which generally would need to be based on business rules and cost calculations.
-- Post-processing: These algorithms take an existing classifier and the sensitive feature as input. Then, they derive a transformation of the classifier's prediction to enforce the specified fairness constraints. The biggest advantage of threshold optimization is its simplicity and flexibility as it doesn't need to retrain the model.
+- **Reduction**: These algorithms take a standard black-box machine learning estimator (for example, a LightGBM model) and generate a set of retrained models by using a sequence of reweighted training datasets.
-| Algorithm | Description | Machine learning task | Sensitive features | Supported parity constraints | Algorithm Type |
+ For example, applicants of a certain gender might be upweighted or downweighted to retrain models and reduce disparities across gender groups. Users can then pick a model that provides the best trade-off between accuracy (or another performance metric) and disparity, based on their business rules and cost calculations.
+- **Post-processing**: These algorithms take an existing classifier and a sensitive feature as input. They then derive a transformation of the classifier's prediction to enforce the specified fairness constraints. The biggest advantage of one post-processing algorithm, threshold optimization, is its simplicity and flexibility because it doesn't need to retrain the model.
+
+| Algorithm | Description | Machine learning task | Sensitive features | Supported parity constraints | Algorithm type |
| | | | | | |
-| `ExponentiatedGradient` | Black-box approach to fair classification described in [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453) | Binary classification | Categorical | Demographic parity, equalized odds| Reduction |
-| `GridSearch` | Black-box approach described in [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453)| Binary classification | Binary | Demographic parity, equalized odds | Reduction |
-| `GridSearch` | Black-box approach that implements a grid-search variant of Fair Regression with the algorithm for bounded group loss described in [Fair Regression: Quantitative Definitions and Reduction-based Algorithms](https://arxiv.org/abs/1905.12843) | Regression | Binary | Bounded group loss| Reduction |
-| `ThresholdOptimizer` | Postprocessing algorithm based on the paper [Equality of Opportunity in Supervised Learning](https://arxiv.org/abs/1610.02413). This technique takes as input an existing classifier and the sensitive feature, and derives a monotone transformation of the classifier's prediction to enforce the specified parity constraints. | Binary classification | Categorical | Demographic parity, equalized odds| Post-processing |
+| `ExponentiatedGradient` | Black-box approach to fair classification described in [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453). | Binary classification | Categorical | Demographic parity, equalized odds| Reduction |
+| `GridSearch` | Black-box approach described in [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453).| Binary classification | Binary | Demographic parity, equalized odds | Reduction |
+| `GridSearch` | Black-box approach that implements a grid-search variant of fair regression with the algorithm for bounded group loss described in [Fair Regression: Quantitative Definitions and Reduction-based Algorithms](https://arxiv.org/abs/1905.12843). | Regression | Binary | Bounded group loss| Reduction |
+| `ThresholdOptimizer` | Postprocessing algorithm based on the paper [Equality of Opportunity in Supervised Learning](https://arxiv.org/abs/1610.02413). This technique takes as input an existing classifier and a sensitive feature. Then, it derives a monotone transformation of the classifier's prediction to enforce the specified parity constraints. | Binary classification | Categorical | Demographic parity, equalized odds| Post-processing |
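
Here's a minimal sketch of both mitigation approaches from the table, using hypothetical data and a logistic regression base model:

```python
# A sketch of Fairlearn's reduction and post-processing mitigations.
import numpy as np
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.postprocessing import ThresholdOptimizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
sensitive = rng.choice(["A", "B"], size=200)
y = (X[:, 0] + (sensitive == "A") * 0.5 + rng.normal(size=200) > 0).astype(int)

# Reduction: retrain under a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
y_pred_reduced = mitigator.predict(X)

# Post-processing: adjust decision thresholds of an existing classifier;
# the underlying model's parameters are not retrained.
base = LogisticRegression().fit(X, y)
postproc = ThresholdOptimizer(estimator=base,
                              constraints="demographic_parity",
                              prefit=True)
postproc.fit(X, y, sensitive_features=sensitive)
y_pred_post = postproc.predict(X, sensitive_features=sensitive)
```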
## Next steps

-- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
- Explore the [supported model overview and fairness assessment visualizations](how-to-responsible-ai-dashboard.md#model-overview) of the Responsible AI dashboard.
- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
-- Learn how to use the different components by checking out the [Fairlearn's GitHub](https://github.com/fairlearn/fairlearn/), [user guide](https://fairlearn.github.io/main/user_guide/index.html), [examples](https://fairlearn.github.io/main/auto_examples/index.html), and [sample notebooks](https://github.com/fairlearn/fairlearn/tree/master/notebooks).
+- Learn how to use the components by checking out Fairlearn's [GitHub repository](https://github.com/fairlearn/fairlearn/), [user guide](https://fairlearn.github.io/main/user_guide/index.html), [examples](https://fairlearn.github.io/main/auto_examples/index.html), and [sample notebooks](https://github.com/fairlearn/fairlearn/tree/master/notebooks).
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
Azure Private Link enables you to connect to your workspace using a private endp
## Prerequisites

* You must have an existing virtual network to create the private endpoint in.
+
+ > [!IMPORTANT]
+ > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, a conflict occurs if you plan to connect your on-premises network to the VNet and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+
* [Disable network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md) before adding the private endpoint.

## Limitations
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
You can set up other applications, such as RStudio, when creating a compute inst
1. Select **Add application** under the **Custom application setup (RStudio Workbench, etc.)** section

   :::image type="content" source="media/how-to-create-manage-compute-instance/custom-service-setup.png" alt-text="Screenshot showing Custom Service Setup.":::
-
-> [!NOTE]
-> Custom applications are currently not supported in private link workspaces.
### Setup RStudio Workbench
To use RStudio open source, set up a custom application as follows:
1. Select **Custom Application** on the **Application** dropdown
1. Configure the **Application name** you would like to use.
1. Set up the application to run on **Target port** `8787` - the docker image for RStudio open source listed below needs to run on this Target port.
+
+ > [!TIP]
+ > Using ports 8704-8993 is also supported.
1. Set up the application to be accessed on **Published port** `8787` - you can configure the application to be accessed on a different Published port if you wish.
+
+ > [!TIP]
+ > Using ports 8704-8993 is also supported.
1. Point the **Docker image** to `ghcr.io/azure/rocker-rstudio-ml-verse:latest`.
1. Use **Bind mounts** to add access to the files in your default storage account:
   * Specify **/home/azureuser/cloudfiles** for **Host path**.
   * Specify **/home/azureuser/cloudfiles** for the **Container path**.
   * Select **Add** to add this mounting. Because the files are mounted, changes you make to them will be available in other compute instances and applications.
-3. Select **Create** to set up RStudio as a custom application on your compute instance.
+1. Select **Create** to set up RStudio as a custom application on your compute instance.
:::image type="content" source="media/how-to-create-manage-compute-instance/rstudio-open-source.png" alt-text="Screenshot shows form to set up RStudio as a custom application" lightbox="media/how-to-create-manage-compute-instance/rstudio-open-source.png":::
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
However, in order to load that model in a notebook in your custom local Conda en
## Next steps
-* Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
+* Learn more about [how and where to deploy a model](/azure/machine-learning/v1/how-to-deploy-and-where).
* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
+ An existing virtual network and subnet to use with your compute resources.
+ > [!IMPORTANT]
+ > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, a conflict occurs if you plan to connect your on-premises network to the VNet and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+
+ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):

    - "Microsoft.Network/virtualNetworks/join/action" on the virtual network resource.
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
## Curated environments
+
+> [!IMPORTANT]
+> Items marked (preview) in this article are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+### Azure Curated Environment for PyTorch (preview)
+
+**Description**: The Azure Curated Environment for PyTorch is optimized for large, distributed deep learning workloads. It comes pre-packaged with the best of Microsoft technologies for accelerated training, such as ONNX Runtime Training (ORT), DeepSpeed, and MSCCL.
+
+The following configurations are supported:
+
+| Environment Name | OS | GPU Version| Python Version | PyTorch Version | ORT-training Version | DeepSpeed Version | torch-ort Version |
+| | | | | | | | |
+| AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu | Ubuntu 20.04 | cu113 | 3.8 | 1.11.0 | 1.11.1 | 0.7.1 | 1.11.0 |
+| AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu | Ubuntu 20.04 | cu115 | 3.8 | 1.11.0 | 1.11.1 | 0.7.1 | 1.11.0 |
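
As a hedged illustration, the following sketch submits a training job that references one of these curated environments through the Azure Machine Learning Python SDK v2. The workspace details, compute target, and training script are assumptions:

```python
# A sketch of running a job on a curated environment (hypothetical workspace).
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(),
                     subscription_id="<subscription-id>",
                     resource_group_name="<resource-group>",
                     workspace_name="<workspace>")

job = command(
    code="./src",                      # folder containing train.py (assumed)
    command="python train.py",
    environment="AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu@latest",
    compute="gpu-cluster",             # hypothetical GPU compute target
)
ml_client.create_or_update(job)
```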
+
### PyTorch

**Name**: AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Previously updated : 04/06/2022 Last updated : 09/06/2022
To create a virtual network, use the following steps:
>
> The workspace and other dependency services will go into the training subnet. They can still be used by resources in other subnets, such as the scoring subnet.
- 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.17.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the __172.16.0.0/16 value__.
+ 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.16.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the __172.16.0.0/16 value__.
> [!IMPORTANT]
- > We do not recommend using an address in the 172.17.0.1/16 range if you plan on using Azure Kubernetes Services for deployment with this cluster. The Docker bridge in Azure Kubernetes Services uses 172.17.0.1/16 as its default. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, if you plan to connect your on premises network to the VNet, and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+ > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, a conflict occurs if you plan to connect your on-premises network to the VNet and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
1. Select the __Default__ subnet and then select __Remove subnet__.
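
As a hedged companion to the portal steps, the following Python sketch creates a virtual network with the non-conflicting 172.16.0.0/16 range and the training and scoring subnets via the azure-mgmt-network SDK. The resource names, region, and subnet prefixes are assumptions:

```python
# A sketch of creating the tutorial's virtual network programmatically.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(
    DefaultAzureCredential(), "<subscription-id>")

poller = network_client.virtual_networks.begin_create_or_update(
    "<resource-group>",
    "<vnet-name>",
    {
        "location": "<region>",
        # 172.16.0.0/16 avoids the Docker bridge default of 172.17.0.0/16.
        "address_space": {"address_prefixes": ["172.16.0.0/16"]},
        "subnets": [
            {"name": "training", "address_prefix": "172.16.0.0/24"},
            {"name": "scoring", "address_prefix": "172.16.1.0/24"},
        ],
    },
)
vnet = poller.result()
```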
machine-learning How To Cicd Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-cicd-data-ingestion.md
steps:
artifact: di-notebooks
```
-The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipeline execution screen.
+The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipelines execution screen.
If the linting and unit testing is successful, the pipeline will copy the source code to the artifact repository to be used by the subsequent deployment steps.
The values in the JSON file are default values configured in the pipeline defini
The Continuous Delivery process takes the artifacts and deploys them to the first target environment. It makes sure that the solution works by running tests. If successful, it continues to the next environment.
-The CD Azure Pipeline consists of multiple stages representing the environments. Each stage contains [deployments](/azure/devops/pipelines/process/deployment-jobs) and [jobs](/azure/devops/pipelines/process/phases?tabs=yaml) that perform the following steps:
+The CD Azure Pipelines pipeline consists of multiple stages representing the environments. Each stage contains [deployments](/azure/devops/pipelines/process/deployment-jobs) and [jobs](/azure/devops/pipelines/process/phases?tabs=yaml) that perform the following steps:
* Deploy a Python Notebook to Azure Databricks workspace
* Deploy an Azure Data Factory pipeline
stages:
* [Source Control in Azure Data Factory](/azure/data-factory/source-control)
* [Continuous integration and delivery in Azure Data Factory](/azure/data-factory/continuous-integration-delivery)
-* [DevOps for Azure Databricks](https://marketplace.visualstudio.com/items?itemName=riserrad.azdo-databricks)
+* [DevOps for Azure Databricks](https://marketplace.visualstudio.com/items?itemName=riserrad.azdo-databricks)
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
This article outlines how to register SAP S/4HANA, and how to authenticate and i
\* *Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
->[!NOTE]
->The supported version of SAP S4/HANA is 6.0.
-
When scanning SAP S/4HANA source, Microsoft Purview supports:

- Extracting technical metadata including:
remote-rendering Deploy To Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/quickstarts/deploy-to-hololens.md
This quickstart covers how to deploy and run the quickstart sample app for Unity to a HoloLens 2.
-In this quickstart you will learn how to:
+In this quickstart you'll learn how to:
> [!div class="checklist"] >
In this quickstart you will learn how to:
## Prerequisites
-In this quickstart we will deploy the sample project from [Quickstart: Render a model with Unity](render-model.md).
-
+In this quickstart, we'll deploy the sample project from [Quickstart: Render a model with Unity](render-model.md).
Make sure your credentials are saved properly with the scene and you can connect to a session from within the Unity editor.
+The HoloLens 2 must be in developer mode and paired with the desktop machine. Refer to [using the device portal](/windows/mixed-reality/develop/advanced-concepts/using-the-windows-device-portal#setting-up-hololens-to-use-windows-device-portal) for further instructions.
+
## Build the sample project

1. Open *File > Build Settings*.
Make sure your credentials are saved properly with the scene and you can connect
1. Set *Build Type* to **D3D Project**\
   ![Build settings](./media/unity-build-settings.png)
1. Select **Switch to Platform**
-1. When pressing **Build** (or 'Build And Run'), you will be asked to select some folder where the solution should be stored
+1. When pressing **Build** (or 'Build And Run'), you'll be asked to select a folder where the solution should be stored
1. Open the generated **Quickstart.sln** with Visual Studio
1. Change the configuration to **Release** and **ARM64**
1. Switch the debugger mode to **Remote Machine**\
If you want to launch the sample a second time later, you can also find it from
## Next steps
-In the next quickstart, we will take a look at converting a custom model.
+In the next quickstart, we'll take a look at converting a custom model.
> [!div class="nextstepaction"] > [Quickstart: Convert a model for rendering](convert-model.md)
role-based-access-control Conditions Custom Security Attributes Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes-example.md
If you have a similar scenario, follow these steps to see if you could potential
To use this solution, you must have:
-- Multiple built-in or custom role assignments that have [storage blob data actions](../storage/common/storage-auth-abac-attributes.md). These include the following built-in roles:
+- Multiple built-in or custom role assignments that have [storage blob data actions](../storage/blobs/storage-auth-abac-attributes.md). These include the following built-in roles:
- [Storage Blob Data Contributor](built-in-roles.md#storage-blob-data-contributor)
- [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner)
role-based-access-control Conditions Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes.md
For more information about conditions, see [What is Azure attribute-based access
## Azure PowerShell
-You can also use Azure PowerShell to add role assignment conditions. The following commands show how to add conditions. For information, see [Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview)](../storage/common/storage-auth-abac-powershell.md).
+You can also use Azure PowerShell to add role assignment conditions. The following commands show how to add conditions. For information, see [Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview)](../storage/blobs/storage-auth-abac-powershell.md).
### Add a condition
You can also use Azure PowerShell to add role assignment conditions. The followi
## Azure CLI
-You can also use Azure CLI to add role assignments conditions. The following commands show how to add conditions. For information, see [Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview)](../storage/common/storage-auth-abac-cli.md).
+You can also use Azure CLI to add role assignment conditions. The following commands show how to add conditions. For information, see [Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview)](../storage/blobs/storage-auth-abac-cli.md).
### Add a condition
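As a rough sketch, a condition can be attached when the role assignment is created (hypothetical names throughout; the condition string uses the ABAC expression format shown in the linked tutorial):

```azurecli
# Hypothetical example: assign Storage Blob Data Reader with an attached condition.
az role assignment create \
    --assignee "<user-or-service-principal-id>" \
    --role "Storage Blob Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
    --condition "<condition-expression>" \
    --condition-version "2.0"
```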
You can also use Azure CLI to add role assignments conditions. The following com
- [What are custom security attributes in Azure AD? (Preview)](../active-directory/fundamentals/custom-security-attributes-overview.md)
- [Azure role assignment condition format and syntax (preview)](conditions-format.md)
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md?toc=/azure/role-based-access-control/toc.json)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md?toc=/azure/role-based-access-control/toc.json)
role-based-access-control Conditions Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-format.md
Currently, conditions can be added to built-in or custom role assignments that h
- [Storage Queue Data Message Sender](built-in-roles.md#storage-queue-data-message-sender)
- [Storage Queue Data Reader](built-in-roles.md#storage-queue-data-reader)
-For a list of the blob storage actions you can use in conditions, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/common/storage-auth-abac-attributes.md).
+For a list of the blob storage actions you can use in conditions, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md).
## Attributes
Depending on the selected actions, the attribute might be found in different pla
For a list of the blob storage or queue storage attributes you can use in conditions, see:
-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/common/storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md)
#### Principal attributes
This section lists the function operators that are available to construct condit
> | **Operator** | `Exists` |
> | **Description** | Checks if the specified attribute exists. |
> | **Examples** | `Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]` |
-> | **Attributes support** | [Encryption scope name](../storage/common/storage-auth-abac-attributes.md#encryption-scope-name)<br/>[Snapshot](../storage/common/storage-auth-abac-attributes.md#snapshot)<br/>[Version ID](../storage/common/storage-auth-abac-attributes.md#version-id) |
+> | **Attributes support** | [Encryption scope name](../storage/blobs/storage-auth-abac-attributes.md#encryption-scope-name)<br/>[Snapshot](../storage/blobs/storage-auth-abac-attributes.md#snapshot)<br/>[Version ID](../storage/blobs/storage-auth-abac-attributes.md#version-id) |
## Logical operators
a AND (b OR c)
## Next steps
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md)
-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/common/storage-auth-abac-attributes.md)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md)
- [Add or edit Azure role assignment conditions using the Azure portal (preview)](conditions-role-assignments-portal.md)
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
There are several scenarios where you might want to add a condition to your role
- Read access to blobs with the tag Program=Alpine and a path of logs
- Read access to blobs with the tag Project=Baker and the user has a matching attribute Project=Baker
-For more information about how to create these examples, see [Examples of Azure role assignment conditions](../storage/common/storage-auth-abac-examples.md).
+For more information about how to create these examples, see [Examples of Azure role assignment conditions](../storage/blobs/storage-auth-abac-examples.md).
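+As a concrete illustration, a condition for the Project=Baker example above typically takes a shape like the following (a sketch of the ABAC condition expression format, showing only the resource-tag half; a matching principal attribute would add an `@Principal` comparison):
+
+```
+(
+ (
+  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
+ )
+ OR
+ (
+  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Baker'
+ )
+)
+```
+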
## Where can conditions be added?

Currently, conditions can be added to built-in or custom role assignments that have [blob storage or queue storage data actions](conditions-format.md#actions). Conditions are added at the same scope as the role assignment. Just like role assignments, you must have `Microsoft.Authorization/roleAssignments/write` permissions to add a condition.
-Here are some of the [blob storage attributes](../storage/common/storage-auth-abac-attributes.md#azure-blob-storage-attributes) you can use in your conditions.
+Here are some of the [blob storage attributes](../storage/blobs/storage-auth-abac-attributes.md#azure-blob-storage-attributes) you can use in your conditions.
- Account name
- Blob index tags
Here's a list of the primary features of conditions:
| Feature | Status | Date |
| --- | --- | --- |
-| Use the following [attributes](../storage/common/storage-auth-abac-attributes.md#azure-blob-storage-attributes) in a condition: Account name, Blob prefix, Encryption scope name, Is Current Version, Is hierarchical namespace enabled, Snapshot, Version ID | Preview | May 2022 |
+| Use the following [attributes](../storage/blobs/storage-auth-abac-attributes.md#azure-blob-storage-attributes) in a condition: Account name, Blob prefix, Encryption scope name, Is Current Version, Is hierarchical namespace enabled, Snapshot, Version ID | Preview | May 2022 |
| Use [custom security attributes on a principal in a condition](conditions-format.md#principal-attributes) | Preview | November 2021 |
| Add conditions to blob storage data role assignments | Preview | May 2021 |
| Use attributes on a resource in a condition | Preview | May 2021 |
Here are the known issues with conditions:
## Next steps
- [FAQ for Azure role assignment conditions (preview)](conditions-faq.md)
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md)
-- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/common/storage-auth-abac-portal.md)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)
role-based-access-control Conditions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-prerequisites.md
For more information about custom security attributes, see:
## Next steps
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md)
-- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/common/storage-auth-abac-portal.md)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)
role-based-access-control Conditions Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-cli.md
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md)
-- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview)](../storage/common/storage-auth-abac-cli.md)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview)](../storage/blobs/storage-auth-abac-cli.md)
- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-portal.md
For information about the prerequisites to add or edit role assignment condition
## Step 1: Determine the condition you need
-To determine the conditions you need, review the examples in [Example Azure role assignment conditions](../storage/common/storage-auth-abac-examples.md).
+To determine the conditions you need, review the examples in [Example Azure role assignment conditions](../storage/blobs/storage-auth-abac-examples.md).
-Currently, conditions can be added to built-in or custom role assignments that have [storage blob data actions](../storage/common/storage-auth-abac-attributes.md). These include the following built-in roles:
+Currently, conditions can be added to built-in or custom role assignments that have [storage blob data actions](../storage/blobs/storage-auth-abac-attributes.md). These include the following built-in roles:
- [Storage Blob Data Contributor](built-in-roles.md#storage-blob-data-contributor)
- [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner)
Once you have the Add role assignment condition page open, you can review the ba
## Next steps
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md)
-- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/common/storage-auth-abac-portal.md)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)
- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-powershell.md
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md)
-- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview)](../storage/common/storage-auth-abac-powershell.md)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview)](../storage/blobs/storage-auth-abac-powershell.md)
- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-rest.md
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md)
-- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/common/storage-auth-abac-portal.md)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)
- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-template.md
az deployment group create --resource-group example-group --template-file rbac-t
## Next steps
-- [Example Azure role assignment conditions (preview)](../storage/common/storage-auth-abac-examples.md)
+- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
- [Assign Azure roles using Azure Resource Manager templates](role-assignments-template.md)
role-based-access-control Conditions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-troubleshoot.md
The previously selected attribute no longer applies to the currently selected ac
**Solution 1**
-In the **Add action** section, select an action that applies to the selected attribute. For a list of storage actions that each storage attribute supports, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/common/storage-auth-abac-attributes.md).
+In the **Add action** section, select an action that applies to the selected attribute. For a list of storage actions that each storage attribute supports, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md).
**Solution 2**
-In the **Build expression** section, select an attribute that applies to the currently selected actions. For a list of storage attributes that each storage action supports, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/common/storage-auth-abac-attributes.md).
+In the **Build expression** section, select an attribute that applies to the currently selected actions. For a list of storage attributes that each storage action supports, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md).
### Symptom - Attribute does not apply in this context warning
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
The following recommended playbooks, and other similar playbooks are available t
- **Notification playbooks** are triggered when an alert or incident is created and send a notification to a configured destination:
- - [Post a message in a Microsoft Teams channel](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Post-Message-Teams)
+ - [Post a message in a Microsoft Teams channel](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Teams/Playbooks/Post-Message-Teams)
- [Send an Outlook email notification](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification)
- [Post a message in a Slack channel](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Post-Message-Slack)
sentinel Connect Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-aws.md
Permissions policies that must be applied to the [Microsoft Sentinel role you cr
For more information, see [Monitor the health of your data connectors](monitor-data-connector-health.md).
+Learn how to [troubleshoot Amazon Web Services S3 connector issues](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/troubleshoot-amazon-web-services-s3-connector-issues/ba-p/3608072).
# [CloudTrail connector (legacy)](#tab/ct)
service-bus-messaging Entity Suspend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/entity-suspend.md
Title: Azure Service Bus - suspend messaging entities description: This article explains how to temporarily suspend and reactivate Azure Service Bus message entities (queues, topics, and subscriptions). Previously updated : 09/28/2021 Last updated : 09/06/2022 # Suspend and reactivate messaging entities (disable)
-Queues, topics, and subscriptions can be temporarily suspended. Suspension puts the entity into a disabled state in which all messages are maintained in storage. However, messages cannot be removed or added, and the respective protocol operations yield errors.
+Queues, topics, and subscriptions can be temporarily suspended. Suspension puts the entity into a disabled state in which all messages are maintained in storage. However, messages can't be removed or added, and the respective protocol operations yield errors.
You may want to suspend an entity for urgent administrative reasons. For example, a faulty receiver takes messages off the queue, fails processing, and yet incorrectly completes the messages and removes them. In this case, you may want to disable the queue for receives until you correct and deploy the code.
-A suspension or reactivation can be performed either by the user or by the system. The system only suspends entities because of grave administrative reasons such as hitting the subscription spending limit. System-disabled entities cannot be reactivated by the user, but are restored when the cause of the suspension has been addressed.
+A suspension or reactivation can be performed either by the user or by the system. The system only suspends entities because of grave administrative reasons such as hitting the subscription spending limit. System-disabled entities can't be reactivated by the user, but are restored when the cause of the suspension has been addressed.
## Queue status

The states that can be set for a **queue** are:
The states that can be set for a **queue** are:
- **Active**: The queue is active. You can send messages to and receive messages from the queue.
- **Disabled**: The queue is suspended. It's equivalent to setting both **SendDisabled** and **ReceiveDisabled**.
- **SendDisabled**: You can't send messages to the queue, but you can receive messages from it. You'll get an exception if you try to send messages to the queue.
-- **ReceiveDisabled**: You can send messages to the queue, but you can't receive messages from it. You'll get an exception if you try to receive messages to the queue.
+- **ReceiveDisabled**: You can send messages to the queue, but you can't receive messages from it. You'll get an exception if you try to receive messages from the queue.
### Change the queue status in the Azure portal:
The states that can be set for a **queue** are:
:::image type="content" source="./media/entity-suspend/entity-state-change.png" alt-text="Set state of the queue":::
-You can also disable the send and receive operations using the Service Bus [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) APIs in the .NET SDK, or using an Azure Resource Manager template through Azure CLI or Azure PowerShell.
+You can also disable the send and receive operations using an Azure Resource Manager template through Azure CLI or Azure PowerShell.
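+Beyond the ARM-template route, the CLI can also flip the status directly. A minimal sketch, assuming the `--status` parameter on `az servicebus queue update` (names are placeholders):
+
+```azurecli
+# Suspend receives on the queue; sends keep working.
+az servicebus queue update \
+    --resource-group <resource-group> \
+    --namespace-name <namespace> \
+    --name <queue-name> \
+    --status ReceiveDisabled
+
+# Reactivate the queue.
+az servicebus queue update \
+    --resource-group <resource-group> \
+    --namespace-name <namespace> \
+    --name <queue-name> \
+    --status Active
+```
+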
### Change the queue status using Azure PowerShell

The PowerShell command to disable a queue is shown in the following example. The reactivation command is equivalent, setting `Status` to **Active**.
You can change topic status in the Azure portal. Select the current status of th
The states that can be set for a **topic** are:
- **Active**: The topic is active. You can send messages to the topic.
-- **Disabled**: The topic is suspended. You can't send messages to the topic.
+- **Disabled**: The topic is suspended. You can't send messages to the topic. Setting **Disabled** is equivalent to setting **SendDisabled** for a topic.
- **SendDisabled**: Same effect as **Disabled**. You can't send messages to the topic. You'll get an exception if you try to send messages to the topic.

## Subscription status
You can change subscription status in the Azure portal. Select the current statu
The states that can be set for a **subscription** are:
- **Active**: The subscription is active. You can receive messages from the subscription.
-- **Disabled**: The subscription is suspended. You can't receive messages from the subscription.
-- **ReceiveDisabled**: Same effect as **Disabled**. You can't receive messages from the subscription. You'll get an exception if you try to receive messages to the subscription.
+- **Disabled**: The subscription is suspended. You can't receive messages from the subscription. Setting **Disabled** on a subscription is equivalent to setting **ReceiveDisabled**. You'll get an exception if you try to receive messages from the subscription.
+- **ReceiveDisabled**: Same effect as **Disabled**. You can't receive messages from the subscription. You'll get an exception if you try to receive messages from the subscription.
+
+The following table describes the behavior based on the status you set on a topic and its subscription.
| Topic status | Subscription status | Behavior |
| --- | --- | --- |
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
IIS | Make sure you:<br/><br/> - Don't have a pre-existing default website <br/>
NIC type | VMXNET3 (when deployed as a VMware VM)
IP address type | Static
Ports | 443 used for control channel orchestration<br/>9443 for data transport
+IP address | Make sure that the configuration server and process server have a static IPv4 address and don't have NAT configured.
> [!NOTE]
> The operating system has to be installed with the English locale. Converting the locale after installation could result in issues.
spring-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/disaster-recovery.md
- Title: Azure Spring Apps geo-disaster recovery | Microsoft Docs
-description: Learn how to protect your Spring application from regional outages
--- Previously updated : 10/24/2019----
-# Azure Spring Apps disaster recovery
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ✔️ Java ✔️ C#
-
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-
-This article explains some strategies you can use to protect your applications in Azure Spring Apps from experiencing downtime. Any region or data center may experience downtime caused by regional disasters, but careful planning can mitigate impact on your customers.
-
-## Plan your application deployment
-
-Applications in Azure Spring Apps run in a specific region. Azure operates in multiple geographies around the world. An Azure geography is a defined area of the world that contains at least one Azure Region. An Azure region is an area within a geography, containing one or more data centers. Each Azure region is paired with another region within the same geography, together making a regional pair. Azure serializes platform updates (planned maintenance) across regional pairs, ensuring that only one region in each pair is updated at a time. In the event of an outage affecting multiple regions, at least one region in each pair will be prioritized for recovery.
-
-Ensuring high availability and protection from disasters requires that you deploy your Spring applications to multiple regions. Azure provides a list of [paired regions](../availability-zones/cross-region-replication-azure.md) so that you can plan your Spring app deployments to regional pairs. We recommend that you consider three key factors when designing your architecture: region availability, Azure paired regions, and service availability.
-
-* Region availability: Choose a geographic area close to your users to minimize network lag and transmission time.
-* Azure paired regions: Choose paired regions within your chosen geographic area to ensure coordinated platform updates and prioritized recovery efforts if needed.
-* Service availability: Decide whether your paired regions should run hot/hot, hot/warm, or hot/cold.
-
-## Use Azure Traffic Manager to route traffic
-
-[Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) provides DNS-based traffic load-balancing and can distribute network traffic across multiple regions. Use Azure Traffic Manager to direct customers to the closest Azure Spring Apps service instance to them. For best performance and redundancy, direct all application traffic through Azure Traffic Manager before sending it to your Azure Spring Apps service.
-
-If you have applications in Azure Spring Apps running in multiple regions, use Azure Traffic Manager to control the flow of traffic to your applications in each region. Define an Azure Traffic Manager endpoint for each service using the service IP. Customers should connect to an Azure Traffic Manager DNS name pointing to the Azure Spring Apps service. Azure Traffic Manager load balances traffic across the defined endpoints. If a disaster strikes a data center, Azure Traffic Manager will direct traffic from that region to its pair, ensuring service continuity.
-
-## Create Azure Traffic Manager for Azure Spring Apps
-
-1. Create Azure Spring Apps in two different regions.
-You will need two service instances of Azure Spring Apps deployed in two different regions (East US and West Europe). Launch an existing application in Azure Spring Apps using the Azure portal to create two service instances. Each will serve as primary and fail-over endpoint for Traffic.
-
-**Two service instances info:**
-
-| Service Name | Location | Application |
-|--|--|--|
-| service-sample-a | East US | gateway / auth-service / account-service |
-| service-sample-b | West Europe | gateway / auth-service / account-service |
-
-2. Set up Custom Domain for Service
-Follow [Custom Domain Document](./tutorial-custom-domain.md) to set up custom domain for these two existing service instances. After successful set up, both service instances will bind to custom domain: bcdr-test.contoso.com
-
-3. Create a traffic manager and two endpoints: [Create a Traffic Manager profile using the Azure portal](../traffic-manager/quickstart-create-traffic-manager-profile.md).
-
-Here is the traffic manager profile:
-* Traffic Manager DNS Name: `http://asa-bcdr.trafficmanager.net`
-* Endpoint Profiles:
-
-| Profile | Type | Target | Priority | Custom Header Settings |
-|--|--|--|--|--|
-| Endpoint A Profile | External Endpoint | service-sample-a.azuremicroservices.io | 1 | host: bcdr-test.contoso.com |
-| Endpoint B Profile | External Endpoint | service-sample-b.azuremicroservices.io | 2 | host: bcdr-test.contoso.com |
-
-4. Create a CNAME record in DNS Zone: bcdr-test.contoso.com CNAME asa-bcdr.trafficmanager.net.
-
-5. Now, the environment is completely set up. Customers should be able to access the app via: bcdr-test.contoso.com
-
-## Next steps
-
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
Azure Spring Apps intelligently schedules your applications on the underlying Ku
### In which regions is Azure Spring Apps Basic/Standard tier available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, Switzerland North, China East 2 (Mooncake), and China North 2 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
+East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, Switzerland North, China East 2 (Mooncake), China North 2 (Mooncake), and China North 3 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
### In which regions is Azure Spring Apps Enterprise tier available?
spring-apps How To Enable Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-availability-zone.md
- Title: Create an Azure Spring Apps instance with availability zone enabled-
-description: How to create an Azure Spring Apps instance with availability zone enabled.
---- Previously updated : 04/14/2022--
-# Create Azure Spring Apps instance with availability zone enabled
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
--
-**This article applies to:** ✔️ Standard tier ✔️ Enterprise tier
-
-> [!NOTE]
-> This feature is not available in Basic tier.
-
-This article explains availability zones in Azure Spring Apps, and how to enable them.
-
-In Microsoft Azure, [Availability Zones (AZ)](../availability-zones/az-overview.md) are unique physical locations within an Azure region. Each zone is made up of one or more data centers that are equipped with independent power, cooling, and networking. Availability zones protect your applications and data from data center failures.
-
-When an Azure Spring Apps service instance is created with availability zone enabled, Azure Spring Apps will automatically distribute fundamental resources across logical sections of underlying Azure infrastructure. This distribution provides a higher level of availability to protect against a hardware failure or a planned maintenance event.
-
-## How to create an instance in Azure Spring Apps with availability zone enabled
-
->[!NOTE]
-> You can only enable availability zone when creating your instance. You can't enable or disable availability zone after creation of the service instance.
-
-You can enable availability zone in Azure Spring Apps using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure portal](https://portal.azure.com).
-
-### [Azure CLI](#tab/azure-cli)
-
-To create a service in Azure Spring Apps with availability zone enabled using the Azure CLI, include the `--zone-redundant` parameter when you create your service in Azure Spring Apps.
-
-```azurecli
-az spring create \
- --resource-group <your-resource-group-name> \
- --name <your-Azure-Spring-Apps-instance-name> \
- --location <location> \
- --zone-redundant true
-```
-
-### [Azure portal](#tab/portal)
-
-To create a service in Azure Spring Apps with availability zone enabled using the Azure portal, enable the Zone Redundant option when creating the instance.
-
-![Image of where to enable availability zone using the portal.](media/spring-cloud-availability-zone/availability-zone-portal.png)
---
-## Region availability
-
-Azure Spring Apps currently supports availability zones in the following regions:
--- Australia East-- Brazil South-- Canada Central-- Central US-- East US-- East US 2-- France Central-- Germany West Central-- North Europe-- Japan East-- Korea Central-- South Africa North-- South Central US-- Southeast Asia-- UK South-- West Europe-- West US 2-- West US 3-
-> [!NOTE]
-> The following regions could only be created with availability zone enabled by using Azure CLI, and Azure Portal will coming soon.
->
-> - Canada Central
-> - Germany West Central
-> - Japan East
-> - Korea Central
-> - South Africa North
-> - Southeast Asia
-> - West US 3
-
-## Pricing
-
-There's no extra cost for enabling the availability zone.
-
-## Next steps
--- [Plan for disaster recovery](disaster-recovery.md)
spring-apps How To Enable Redundancy And Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-redundancy-and-disaster-recovery.md
+
+ Title: Enable redundancy and disaster recovery for Azure Spring Apps
+description: Learn how to protect your Spring Apps application from zonal and regional outages
++++ Last updated : 07/12/2022+++
+# Enable redundancy and disaster recovery for Azure Spring Apps
+
+**Zone redundancy applies to:** ✔️ Standard tier ✔️ Enterprise tier
+
+**Customer-managed disaster recovery applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article describes the resiliency strategy for Azure Spring Apps and explains how to configure zone redundancy and customer-managed geo-disaster recovery.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Availability zones
+
+Availability zones are unique physical locations within a Microsoft Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there's always more than one zone in each zone-enabled region. The physical separation of availability zones within a region protects your applications and data from datacenter failures. For more information, see [Regions and availability zones](../availability-zones/az-overview.md).
+
+When you create an Azure Spring Apps service instance with zone redundancy enabled, Azure Spring Apps automatically distributes fundamental resources across logical sections of underlying Azure infrastructure. The underlying compute resource distributes VMs across all availability zones so that compute capacity remains available during a zone outage. The underlying storage resource replicates data across availability zones to keep it available even if there are datacenter failures. This distribution provides a higher level of availability and protects against hardware failures or planned maintenance events.
+
+## Limitations and region availability
+
+Azure Spring Apps currently supports availability zones in the following regions:
+
+- Australia East
+- Brazil South
+- Canada Central
+- Central US
+- East Asia
+- East US
+- East US 2
+- France Central
+- Germany West Central
+- North Europe
+- Japan East
+- Korea Central
+- South Africa North
+- South Central US
+- Southeast Asia
+- UK South
+- West Europe
+- West US 2
+- West US 3
+
+The following limitations apply when you create an Azure Spring Apps service instance with zone redundancy enabled:
+
+- Zone redundancy isn't available in Basic tier.
+- You can enable zone redundancy only when you create a new Azure Spring Apps service instance.
+- If you use your own resource in Azure Spring Apps, such as your own persistent storage, make sure to enable zone redundancy for that resource. For more information, see [How to enable your own persistent storage in Azure Spring Apps](how-to-custom-persistent-storage.md).
+- Zone redundancy ensures that underlying VM nodes are distributed evenly across all availability zones, but it doesn't guarantee even distribution of app instances. If an app instance fails because the zone hosting it goes down, Azure Spring Apps creates a new app instance for that app on nodes in other availability zones.
+- Zone redundancy isn't a geo-disaster recovery solution. To protect your service from regional outages, see the [Customer-managed geo-disaster recovery](#customer-managed-geo-disaster-recovery) section later in this article.
+
+## Create an Azure Spring Apps instance with zone redundancy enabled
+
+> [!NOTE]
+> You can enable zone redundancy only when creating your Azure Spring Apps service instance. You can't change the zone redundancy property after creation.
+
+You can enable zone redundancy in Azure Spring Apps using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure portal](https://portal.azure.com).
+
+### [Azure CLI](#tab/azure-cli)
+
+To create a service in Azure Spring Apps with zone redundancy enabled using the Azure CLI, include the `--zone-redundant` parameter when you create your service, as shown in the following example:
+
+```azurecli
+az spring create \
+ --resource-group <your-resource-group-name> \
+ --name <your-Azure-Spring-Apps-instance-name> \
+ --location <location> \
+ --zone-redundant true
+```
+
+### [Azure portal](#tab/portal)
+
+To create a service in Azure Spring Apps with zone redundancy enabled using the Azure portal, select the **Zone Redundant** option when you create the instance.
++++
+## Verify the Zone Redundant property setting
+
+You can verify the zone redundancy property setting in an Azure Spring Apps instance using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure portal](https://portal.azure.com).
+
+### [Azure CLI](#tab/azure-cli)
+
+To verify the zone redundancy property setting using the Azure CLI, use the following command to show the details of the Azure Spring Apps instance, including the zone redundancy property.
+
+```azurecli
+az spring show \
+ --resource-group <your-resource-group-name> \
+ --name <your-Azure-Spring-Apps-instance-name>
+```
+
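+To read just the flag, add a `--query` filter (assuming the setting surfaces as `properties.zoneRedundant` in the response):
+
+```azurecli
+az spring show \
+    --resource-group <your-resource-group-name> \
+    --name <your-Azure-Spring-Apps-instance-name> \
+    --query properties.zoneRedundant
+```
+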
+### [Azure portal](#tab/portal)
+
+To verify the zone redundancy property of an Azure Spring Apps instance using the Azure portal, check the setting on the service instance **Overview** page.
++++
+## Pricing
+
+There's no additional cost associated with enabling zone redundancy. You only need to pay for Standard or Enterprise tier, which is required to enable zone redundancy.
+
+## Customer-managed geo-disaster recovery
+
+The Azure Spring Apps service doesn't provide geo-disaster recovery, but careful planning can help protect you from experiencing downtime.
+
+### Plan your application deployment
+
+To plan your application, it's helpful to understand the following information about Azure regions and geographies:
+
+- Applications hosted in Azure Spring Apps run in a specific region.
+- Azure operates in multiple geographies around the world.
+- An Azure geography is a defined area of the world that contains at least one Azure region.
+- An Azure region is an area within a geography containing one or more data centers.
+
+Most Azure regions are paired with another region within the same geography, together making a regional pair. Azure serializes platform updates (planned maintenance) across regional pairs, ensuring that only one region in each pair is updated at a time. In the event of an outage affecting multiple regions, at least one region in each pair is prioritized for recovery.
+
+To ensure high availability and protection from disasters, deploy your applications hosted in Azure Spring Apps to multiple regions. Azure provides a list of paired regions so that you can plan your app deployments accordingly. For more information, see [Cross-region replication in Azure: Business continuity and disaster recovery](../availability-zones/cross-region-replication-azure.md).
+
+Consider the following three key factors when you design your architecture:
+
+- Region availability. To minimize network lag and transmission time, choose a region that supports Azure Spring Apps zone redundancy, or a geographic area close to your users.
+- Azure paired regions. To ensure coordinated platform updates and prioritized recovery efforts if needed, choose paired regions within your chosen geographic area.
+- Service availability. Decide whether your paired regions should run hot/hot, hot/warm, or hot/cold.
+
+### Use Azure Traffic Manager to route traffic
+
+Azure Traffic Manager provides DNS-based traffic load-balancing and can distribute network traffic across multiple regions. Use Azure Traffic Manager to direct customers to the closest Azure Spring Apps service instance. For best performance and redundancy, direct all application traffic through Azure Traffic Manager before sending it to your Azure Spring Apps service instance. For more information, see [What is Traffic Manager?](../traffic-manager/traffic-manager-overview.md)
+
+If you have applications in Azure Spring Apps running in multiple regions, Azure Traffic Manager can control the flow of traffic to your applications in each region. Define an Azure Traffic Manager endpoint for each service instance using the instance IP. You should connect to an Azure Traffic Manager DNS name pointing to the Azure Spring Apps service instance. Azure Traffic Manager load balances traffic across the defined endpoints. If a disaster strikes a data center, Azure Traffic Manager directs traffic from that region to its pair, ensuring service continuity.
+
+Use the following steps to create an Azure Traffic Manager instance for Azure Spring Apps instances:
+
+1. Create Azure Spring Apps instances in two different regions. For example, create service instances in East US and West Europe, as shown in the following table. Each instance serves as a primary and fail-over endpoint for traffic.
+
+ | Service name | Location | Application |
+ ||-||
+ | service-sample-a | East US | gateway / auth-service / account-service |
+ | service-sample-b | West Europe | gateway / auth-service / account-service |
+
+1. Set up a custom domain for the service instances. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md). After successful setup, both service instances will bind to the same custom domain, such as `bcdr-test.contoso.com`.
+
+1. Create a traffic manager and two endpoints. For instructions, see [Create a Traffic Manager profile using the Azure portal](../traffic-manager/quickstart-create-traffic-manager-profile.md), which produces the following Traffic Manager profile:
+
+    - Traffic Manager DNS Name: `asa-bcdr.trafficmanager.net`
+ - Endpoint Profiles:
+
+ | Profile | Type | Target | Priority | Custom header settings |
+ |--|-||-|-|
+ | Endpoint A Profile | External Endpoint | `service-sample-a.azuremicroservices.io` | 1 | `host: bcdr-test.contoso.com` |
+ | Endpoint B Profile | External Endpoint | `service-sample-b.azuremicroservices.io` | 2 | `host: bcdr-test.contoso.com` |
+
+1. Create a CNAME record in a DNS Zone similar to the following example: `bcdr-test.contoso.com CNAME asa-bcdr.trafficmanager.net`.
+
+The environment is now set up. If you used the example values in the linked articles, you should be able to access the app using `https://bcdr-test.contoso.com`.
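+The portal steps above can also be scripted. Here's a rough Azure CLI sketch, assuming the `az network traffic-manager` command group (including its `--custom-headers` option) and reusing the example names above:
+
+```azurecli
+# Create a priority-routed Traffic Manager profile.
+az network traffic-manager profile create \
+    --resource-group <resource-group> \
+    --name asa-bcdr \
+    --routing-method Priority \
+    --unique-dns-name asa-bcdr
+
+# Primary endpoint: the East US service instance.
+az network traffic-manager endpoint create \
+    --resource-group <resource-group> \
+    --profile-name asa-bcdr \
+    --name endpoint-a \
+    --type externalEndpoints \
+    --target service-sample-a.azuremicroservices.io \
+    --priority 1 \
+    --custom-headers host=bcdr-test.contoso.com
+
+# Fail-over endpoint: the West Europe service instance.
+az network traffic-manager endpoint create \
+    --resource-group <resource-group> \
+    --profile-name asa-bcdr \
+    --name endpoint-b \
+    --type externalEndpoints \
+    --target service-sample-b.azuremicroservices.io \
+    --priority 2 \
+    --custom-headers host=bcdr-test.contoso.com
+```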
+
+## Next steps
+
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
Title: "Quickstart - Deploy your first application to Azure Spring Apps"
-description: In this quickstart, we deploy an application to Azure Spring Apps.
+description: Describes how to deploy an application to Azure Spring Apps.
Previously updated : 10/18/2021 Last updated : 08/22/2022
-zone_pivot_groups: programming-languages-spring-apps
# Quickstart: Deploy your first application to Azure Spring Apps
zone_pivot_groups: programming-languages-spring-apps
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier

This quickstart explains how to deploy a small application to run on Azure Spring Apps.
->[!NOTE]
-> Steeltoe support for Azure Spring Apps is currently offered as a public preview. Public preview offerings allow customers to experiment with new features prior to their official release. Public preview features and services aren't meant for production use. For more information about support during previews, see the [FAQ](https://azure.microsoft.com/support/faq/) or file a [Support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+The application code used in this quickstart is a simple app. When you've completed this example, the application will be accessible online, and you can manage it through the Azure portal.
-By following this quickstart, you'll learn how to:
-> [!div class="checklist"]
-> * Generate a basic Steeltoe .NET Core project
-> * Provision an Azure Spring Apps service instance
-> * Build and deploy the app with a public endpoint
-> * Stream logs in real time
-
-The application code used in this quickstart is a simple app built with a .NET Core Web API project template. When you've completed this example, the application will be accessible online and can be managed via the Azure portal and the Azure CLI.
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1). The Azure Spring Apps service supports .NET Core 3.1 and later versions.
-* [Azure CLI version 2.0.67 or later](/cli/azure/install-azure-cli).
-* [Git](https://git-scm.com/).
-
-## Install Azure CLI extension
-
-Verify that your Azure CLI version is 2.0.67 or later:
-
-```azurecli
-az --version
-```
-
-Install the Azure Spring Apps extension for the Azure CLI using the following command:
-
-```azurecli
-az extension add --name spring
-```
-
-## Sign in to Azure
-
-1. Sign in to the Azure CLI:
-
- ```azurecli
- az login
- ```
-
-1. If you have more than one subscription, choose the one you want to use for this quickstart.
-
- ```azurecli
- az account list -o table
- ```
-
- ```azurecli
- az account set --subscription <Name or ID of a subscription from the last step>
- ```
-
-## Generate a Steeltoe .NET Core project
-
-In Visual Studio, create an ASP.NET Core Web application named as "hello-world" with API project template. Please notice there will be an auto-generated WeatherForecastController that will be our test endpoint later on.
-
-1. Create a folder for the project source code and generate the project.
+This quickstart explains how to:
- ```console
- mkdir source-code
- ```
+> [!div class="checklist"]
+> - Generate a basic Spring project.
+> - Provision a service instance.
+> - Build and deploy an app with a public endpoint.
+> - Clean up the resources.
- ```console
- cd source-code
- ```
+At the end of this quickstart, you'll have a working Spring app running on Azure Spring Apps.
- ```dotnetcli
- dotnet new webapi -n hello-world --framework netcoreapp3.1
- ```
+## [Azure CLI](#tab/Azure-CLI)
-1. Navigate into the project directory.
+## Prerequisites
- ```console
- cd hello-world
- ```
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-1. Edit the *appSettings.json* file to add the following settings:
-
- ```json
- "spring": {
- "application": {
- "name": "hello-world"
- }
- },
- "eureka": {
- "client": {
- "shouldFetchRegistry": true,
- "shouldRegisterWithEureka": true
- }
- }
- ```
+## Provision an instance of Azure Spring Apps
-1. Also in *appsettings.json*, change the log level for the `Microsoft` category from `Warning` to `Information`. This change ensures that logs will be produced when you view streaming logs in a later step.
-
- The *appsettings.json* file now looks similar to the following example:
-
- ```json
- {
- "Logging": {
- "LogLevel": {
- "Default": "Information",
- "Microsoft": "Information",
- "Microsoft.Hosting.Lifetime": "Information"
- }
- },
- "AllowedHosts": "*",
- "spring": {
- "application": {
- "name": "hello-world"
- }
- },
- "eureka": {
- "client": {
- "shouldFetchRegistry": true,
- "shouldRegisterWithEureka": true
- }
- }
- }
- ```
+Use the following steps to provision a service instance.
-1. Add dependencies and a `Zip` task to the *.csproj* file:
+1. Select **Try It** and sign in to your Azure account in [Azure Cloud Shell](/azure/cloud-shell/overview).
- ```xml
- <ItemGroup>
- <PackageReference Include="Steeltoe.Discovery.ClientCore" Version="3.1.0" />
- <PackageReference Include="Microsoft.Azure.SpringCloud.Client" Version="2.0.0-preview.1" />
- </ItemGroup>
- <Target Name="Publish-Zip" AfterTargets="Publish">
- <ZipDirectory SourceDirectory="$(PublishDir)" DestinationFile="$(MSBuildProjectDirectory)/deploy.zip" Overwrite="true" />
- </Target>
+ ```azurecli-interactive
+ az account show
```
- The packages are for Steeltoe Service Discovery and the Azure Spring Apps client library. The `Zip` task is for deployment to Azure. When you run the `dotnet publish` command, it generates the binaries in the *publish* folder, and this task zips the *publish* folder into a *.zip* file that you upload to Azure.
+1. Azure Cloud Shell workspaces are temporary. On initial start, the shell prompts you to associate an [Azure Storage](/azure/storage/common/storage-introduction) instance with your subscription to persist files across sessions.
-1. In the *Program.cs* file, add a `using` directive and code that uses the Azure Spring Apps client library:
+ :::image type="content" source="media/quickstart/azure-storage-subscription.png" alt-text="Screenshot of Azure Storage subscription." lightbox="media/quickstart/azure-storage-subscription.png":::
- ```csharp
- using Microsoft.Azure.SpringCloud.Client;
- ```
+1. After you sign in successfully, use the following command to display a list of your subscriptions.
- ```csharp
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .UseAzureSpringCloudService()
- .ConfigureWebHostDefaults(webBuilder =>
- {
- webBuilder.UseStartup<Startup>();
- });
+ ```azurecli-interactive
+ az account list --output table
```
-1. In the *Startup.cs* file, add a `using` directive and code that uses the Steeltoe Service Discovery at the end of the `ConfigureServices` method:
+1. Use the following command to choose and link to your subscription.
- ```csharp
- using Steeltoe.Discovery.Client;
+ ```azurecli-interactive
+ az account set --subscription <subscription-id>
```
- ```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- // Template code not shown.
+1. Use the following command to create a resource group.
- services.AddDiscoveryClient(Configuration);
- }
+ ```azurecli-interactive
+ az group create \
+ --resource-group <name-of-resource-group> \
+ --location eastus
```
-1. Build the project to make sure there are no compile errors.
+1. Use the following command to create an Azure Spring Apps service instance.
- ```dotnetcli
- dotnet build
+ ```azurecli-interactive
+ az spring create \
+ --resource-group <name-of-resource-group> \
+ --name <service-instance-name>
```
-## Provision a service instance
-
-The following procedure creates an instance of Azure Spring Apps using the Azure portal.
-
-1. Open the [Azure portal](https://portal.azure.com/).
-
-1. From the top search box, search for **Azure Spring Apps**.
-
-1. Select **Azure Spring Apps** from the results.
-
- :::image type="content" source="media/quickstart/spring-apps-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps service in search results." lightbox="media/quickstart/spring-apps-start.png":::
-
-1. On the Azure Spring Apps page, select **Create**.
-
- :::image type="content" source="media/quickstart/spring-apps-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted." lightbox="media/quickstart/spring-apps-create.png":::
-
-1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
-
- * **Subscription**: Select the subscription you want to be billed for this resource.
- * **Resource group**: Create a new resource group. The name you enter here will be used in later steps as **\<resource group name\>**.
- * **Service Details/Name**: Specify the **\<service instance name\>**. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
- * **Region**: Select the region for your service instance.
+1. When prompted, choose **Y** to install the Azure Spring Apps extension and run the command.
- :::image type="content" source="media/quickstart/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page." lightbox="media/quickstart/portal-start.png":::
-
-1. Select **Review and create**.
+## Create an app in your Azure Spring Apps instance
-1. Select **Create**.
+Use the following command to create an app named *hellospring* with a public endpoint assigned.
-## Build and deploy the app
+```azurecli-interactive
+az spring app create \
+ --resource-group <name-of-resource-group> \
+ --service <service-instance-name> \
+ --name hellospring \
+ --assign-endpoint true
+```
-The following procedure builds and deploys the project that you created earlier.
+## Clone and build the Spring Boot sample project
-1. Make sure the command prompt is still in the project folder.
+Use the following steps to clone the Spring Boot sample project.
-1. Run the following command to build the project, publish the binaries, and store the binaries in a *.zip* file in the project folder.
+1. Use the following command to clone the [Spring Boot sample project](https://github.com/spring-guides/gs-spring-boot.git) from GitHub.
- ```dotnetcorecli
- dotnet publish -c release -o ./publish
+ ```azurecli-interactive
+ git clone https://github.com/spring-guides/gs-spring-boot.git
```
-1. Create an app in your Azure Spring Apps instance with a public endpoint assigned. Use the same application name "hello-world" that you specified in *appsettings.json*.
+1. Use the following command to move to the project folder.
- ```azurecli
- az spring app create -n hello-world -s <service instance name> -g <resource group name> --assign-endpoint --runtime-version NetCore_31
+ ```azurecli-interactive
+ cd gs-spring-boot/complete
```
-1. Deploy the *.zip* file to the app.
+1. Use the following [Maven](https://maven.apache.org/what-is-maven.html) command to build the project.
- ```azurecli
- az spring app deploy -n hello-world -s <service instance name> -g <resource group name> --runtime-version NetCore_31 --main-entry hello-world.dll --artifact-path ./deploy.zip
+ ```azurecli-interactive
+ mvn clean package -DskipTests
```
- The `--main-entry` option identifies the *.dll* file that contains the application's entry point. After the service uploads the *.zip* file, it extracts all the files and folders and tries to execute the entry point in the *.dll* file specified by `--main-entry`.
-
- It takes a few minutes to finish deploying the application. To confirm that it has deployed, go to the **Apps** section in the Azure portal.
-
-## Test the app
-
-Once deployment has completed, access the app at the following URL:
-
-```url
-https://<service instance name>-hello-world.azuremicroservices.io/weatherforecast
-```
-
-The app returns JSON data similar to the following example:
+## Deploy the local app to Azure Spring Apps
-```json
-[{"date":"2020-09-08T21:01:50.0198835+00:00","temperatureC":14,"temperatureF":57,"summary":"Bracing"},{"date":"2020-09-09T21:01:50.0200697+00:00","temperatureC":-14,"temperatureF":7,"summary":"Bracing"},{"date":"2020-09-10T21:01:50.0200715+00:00","temperatureC":27,"temperatureF":80,"summary":"Freezing"},{"date":"2020-09-11T21:01:50.0200717+00:00","temperatureC":18,"temperatureF":64,"summary":"Chilly"},{"date":"2020-09-12T21:01:50.0200719+00:00","temperatureC":16,"temperatureF":60,"summary":"Chilly"}]
-```
-
-## Stream logs in real time
-
-Use the following command to get real-time logs from the App.
-
-```azurecli
-az spring app logs -n hello-world -s <service instance name> -g <resource group name> --lines 100 -f
-```
+Use the following command to deploy the *.jar* file built for the app (*target/spring-boot-complete-0.0.1-SNAPSHOT.jar*).
-Logs appear in the output:
-
-```output
-[Azure Spring Apps] The following environment variables are loaded:
-2020-09-08 20:58:42,432 INFO supervisord started with pid 1
-2020-09-08 20:58:43,435 INFO spawned: 'event-gather_00' with pid 9
-2020-09-08 20:58:43,436 INFO spawned: 'dotnet-app_00' with pid 10
-2020-09-08 20:58:43 [Warning] No managed processes are running. Wait for 30 seconds...
-2020-09-08 20:58:44,843 INFO success: event-gather_00 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
-2020-09-08 20:58:44,843 INFO success: dotnet-app_00 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
-←[40m←[32minfo←[39m←[22m←[49m: Steeltoe.Discovery.Eureka.DiscoveryClient[0]
- Starting HeartBeat
-info: Microsoft.Hosting.Lifetime[0]
- Now listening on: http://[::]:1025
-info: Microsoft.Hosting.Lifetime[0]
- Application started. Press Ctrl+C to shut down.
-info: Microsoft.Hosting.Lifetime[0]
- Hosting environment: Production
-info: Microsoft.Hosting.Lifetime[0]
- Content root path: /netcorepublish/6e4db42a-b160-4b83-a771-c91adec18c60
-2020-09-08 21:00:13 [Information] [10] Start listening...
-info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
- Request starting HTTP/1.1 GET http://asa-svc-hello-world.azuremicroservices.io/weatherforecast
-info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
- Executing endpoint 'hello_world.Controllers.WeatherForecastController.Get (hello-world)'
-info: Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker[3]
- Route matched with {action = "Get", controller = "WeatherForecast"}. Executing controller action with signature System.Collections.Generic.IEnumerable`1[hello_world.WeatherForecast] Get() on controller hello_world.Controllers.WeatherForecastController (hello-world).
-info: Microsoft.AspNetCore.Mvc.Infrastructure.ObjectResultExecutor[1]
- Executing ObjectResult, writing value of type 'hello_world.WeatherForecast[]'.
-info: Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker[2]
- Executed action hello_world.Controllers.WeatherForecastController.Get (hello-world) in 1.8902ms
-info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
- Executed endpoint 'hello_world.Controllers.WeatherForecastController.Get (hello-world)'
-info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
- Request finished in 4.2591ms 200 application/json; charset=utf-8
+```azurecli-interactive
+az spring app deploy \
+ --resource-group <name-of-resource-group> \
+ --service <service-instance-name> \
+ --name hellospring \
+ --artifact-path target/spring-boot-complete-0.0.1-SNAPSHOT.jar
```
-> [!TIP]
-> Use `az spring app logs -h` to explore more parameters and log stream functionalities.
-
-For advanced log analytics features, visit **Logs** tab in the menu on the [Azure portal](https://portal.azure.com/). Logs here have a latency of a few minutes.
--
+Deploying the application can take a few minutes.
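One way to confirm the deployment is to query the app and its public URL. The following is a minimal sketch, assuming the resource names used earlier in this quickstart; the `properties.url` query path reflects the JSON returned by the command:

```azurecli-interactive
# Show the deployed app's public URL (assumes the names used above)
az spring app show \
    --resource-group <name-of-resource-group> \
    --service <service-instance-name> \
    --name hellospring \
    --query properties.url \
    --output tsv
```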
-
-This quickstart explains how to deploy a small application to Azure Spring Apps.
-
-The application code used in this tutorial is a simple app built with Spring Initializr. When you've completed this example, the application will be accessible online and can be managed via the Azure portal.
-
-This quickstart explains how to:
-
-> [!div class="checklist"]
-> * Generate a basic Spring project
-> * Provision a service instance
-> * Build and deploy the app with a public endpoint
-> * Stream logs in real time
+## [IntelliJ](#tab/IntelliJ)
## Prerequisites
-To complete this quickstart:
-
-* [Install JDK 8 or JDK 11](/java/azure/jdk/)
-* [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and the Azure Spring Apps extension with the command: `az extension add --name spring`
-* (Optional) [Install IntelliJ IDEA](https://www.jetbrains.com/idea/)
-* (Optional) [Install the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/) and [sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in)
-* (Optional) [Install Maven](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html). If you use the Azure Cloud Shell, this installation isn't needed.
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [IntelliJ IDEA](https://www.jetbrains.com/idea/).
+- [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
## Generate a Spring project
-Start with [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. This link uses the following URL to provide default settings for you.
+Use the following steps to create the project:
+
+1. Use [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. The following URL provides default settings for you.
```url
-https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
+https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
```
-The following image shows the recommended Initializr set up for this sample project.
-This example uses Java version 8. If you want to use Java version 11, change the option under **Project Metadata**.
+The following image shows the recommended Initializr settings for the *hellospring* sample project.
+This example uses Java version 11. To use a different Java version, change the Java version setting under **Project Metadata**.
-1. Select **Generate** when all the dependencies are set.
-1. Download and unpack the package, then create a web controller for a simple web application by adding the file *src/main/java/com/example/hellospring/HelloController.java* with the following contents:
+
+1. When all dependencies are set, select **Generate**.
+1. Download and unpack the package, and then create a web controller for your web application by adding the file *src/main/java/com/example/hellospring/HelloController.java* with the following contents:
   ```java
   package com.example.hellospring;
This example uses Java version 8. If you want to use Java version 11, change th
## Provision an instance of Azure Spring Apps
-The following procedure creates an instance of Azure Spring Apps using the Azure portal.
+Use the following steps to create an instance of Azure Spring Apps using the Azure portal.
1. In a new tab, open the [Azure portal](https://portal.azure.com/).
The following procedure creates an instance of Azure Spring Apps using the Azure
1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:

 - **Subscription**: Select the subscription you want to be billed for this resource.
- - **Resource group**: Creating new resource groups for new resources is a best practice. You will use this resource group in later steps as **\<resource group name\>**.
- - **Service Details/Name**: Specify the **\<service instance name\>**. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
- - **Location**: Select the region for your service instance.
+ - **Resource group**: Creating new resource groups for new resources is a best practice.
+ - **Service Name**: Specify the service instance name. You'll use this name later in this article where the *\<service-instance-name\>* placeholder appears. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+ - **Region**: Select the region for your service instance.
   :::image type="content" source="media/quickstart/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page." lightbox="media/quickstart/portal-start.png":::

1. Select **Review and create**.
-## Build and deploy the app
-
-#### [CLI](#tab/Azure-CLI)
-The following procedure builds and deploys the application using the Azure CLI. Execute the following command at the root of the project.
-
-1. Sign in to Azure and choose your subscription.
-
- ```azurecli
- az login
- ```
-
- If you have more than one subscription, use the following command to list the subscriptions you have access to, then choose the one you want to use for this quickstart.
+## Import the project
- ```azurecli
- az account list -o table
- ```
+Use the following steps to import the project.
- Use the following command to set the default subscription to use with the Azure CLI commands in this quickstart.
+1. Open IntelliJ IDEA, and then select **Open**.
+1. In the **Open File or Project** dialog box, select the *hellospring* folder.
- ```azurecli
- az account set --subscription <Name or ID of a subscription from the last step>
- ```
+ :::image type="content" source="media/quickstart/intellij-new-project.png" alt-text="Screenshot of IntelliJ IDEA showing Open File or Project dialog box." lightbox="media/quickstart/intellij-new-project.png":::
-1. Build the project using Maven:
+## Build and deploy your app
- ```console
- mvn clean package -DskipTests
- ```
-
-1. Create the app with a public endpoint assigned. If you selected Java version 11 when generating the Spring project, include the `--runtime-version=Java_11` switch.
-
- ```azurecli
- az spring app create -n hellospring -s <service instance name> -g <resource group name> --assign-endpoint true
- ```
-
-1. Deploy the Jar file for the app (`target\hellospring-0.0.1-SNAPSHOT.jar` on Windows):
-
- ```azurecli
- az spring app deploy -n hellospring -s <service instance name> -g <resource group name> --artifact-path <jar file path>/hellospring-0.0.1-SNAPSHOT.jar
- ```
-
-1. It takes a few minutes to finish deploying the application. To confirm that it has deployed, go to the **Apps** section in the Azure portal. You should see the status of the application.
-
-#### [IntelliJ](#tab/IntelliJ)
-
-The following procedure uses the IntelliJ plug-in for Azure Spring Apps to deploy the sample app in IntelliJ IDEA.
-
-### Import project
-
-1. Open the IntelliJ **Welcome** dialog, then select **Open** to open the import wizard.
-1. Select the **hellospring** folder.
-
- :::image type="content" source="media/spring-cloud-quickstart-java/intellij-new-project.png" alt-text="Screenshot of IntelliJ IDEA showing Open File or Project dialog box.":::
+> [!NOTE]
+> To run the project locally, add `spring.config.import=optional:configserver:` to the project's *application.properties* file.
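To try the app on your machine before deploying, you can run it with the Spring Boot Maven plugin. This is a minimal sketch, assuming the Maven wrapper generated by Spring Initializr and the property change from the note above:

```azurecli-interactive
# Run the app locally at http://localhost:8080 (assumes ./mvnw from Spring Initializr
# and spring.config.import=optional:configserver: in application.properties)
./mvnw spring-boot:run
```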
-### Deploy the app
+Use the following steps to build and deploy your app.
-In order to deploy to Azure, you must sign in with your Azure account, then choose your subscription. For sign-in details, see [Installation and sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in).
+1. If you haven't already installed the Azure Toolkit for IntelliJ, follow the steps in [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
-1. Right-click your project in IntelliJ project explorer, then select **Azure** -> **Deploy to Azure Spring Apps**.
+1. Right-click your project in the IntelliJ Project window, and then select **Azure** -> **Deploy to Azure Spring Apps**.
- :::image type="content" source="media/spring-cloud-quickstart-java/intellij-deploy-azure-1.png" alt-text="Screenshot of IntelliJ IDEA menu showing Deploy to Azure Spring Apps option." lightbox="media/spring-cloud-quickstart-java/intellij-deploy-azure-1.png":::
+ :::image type="content" source="media/quickstart/intellij-deploy-azure.png" alt-text="Screenshot of IntelliJ IDEA menu showing Deploy to Azure Spring Apps option." lightbox="media/quickstart/intellij-deploy-azure.png":::
-1. Accept the name for the app in the **Name** field. **Name** refers to the configuration, not the app name. Users don't usually need to change it.
+1. Accept the name for the app in the **Name** field. **Name** refers to the configuration, not the app name. You don't usually need to change it.
1. In the **Artifact** textbox, select **Maven:com.example:hellospring-0.0.1-SNAPSHOT**.
-1. In the **Subscription** textbox, verify your subscription is correct.
-1. In the **Service** textbox, select the instance of Azure Spring Apps that you created in [Provision an instance of Azure Spring Apps](./quickstart-provision-service-instance.md).
-1. In the **App** textbox, select **+** to create a new app.
+1. In the **Subscription** textbox, verify that your subscription is correct.
+1. In the **Service** textbox, select the instance of Azure Spring Apps that you created in [Provision an instance of Azure Spring Apps](#provision-an-instance-of-azure-spring-apps-1).
+1. In the **App** textbox, select the plus sign (**+**) to create a new app.
- :::image type="content" source="media/spring-cloud-quickstart-java/intellij-create-new-app.png" alt-text="Screenshot of IntelliJ IDEA showing Deploy Azure Spring Apps dialog box.":::
+ :::image type="content" source="media/quickstart/intellij-create-new-app.png" alt-text="Screenshot of IntelliJ IDEA showing Deploy Azure Spring Apps dialog box." lightbox="media/quickstart/intellij-create-new-app.png":::
-1. In the **App name:** textbox, enter *hellospring*, then check the **More settings** check box.
-1. Select the **Enable** button next to **Public endpoint**. The button will change to *Disable \<to be enabled\>*.
-1. If you used Java 11, select **Java 11** in **Runtime**.
+1. In the **App name:** textbox under **App Basics**, enter *hellospring*, and then select the **More settings** check box.
+1. Select the **Enable** button next to **Public endpoint**. The button changes to **Disable \<to be enabled\>**.
+1. If you're using Java 11, select **Java 11** for the **Runtime** option.
1. Select **OK**.
- :::image type="content" source="media/spring-cloud-quickstart-java/intellij-create-new-app-2.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with public endpoint Disable button highlighted.":::
-
-1. Under **Before launch**, select the **Run Maven Goal 'hellospring:package'** line, then select the pencil to edit the command line.
-
- :::image type="content" source="media/spring-cloud-quickstart-java/intellij-edit-maven-goal.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with Maven Goal edit button highlighted.":::
-
-1. In the **Command line** textbox, enter *-DskipTests* after *package*, then select **OK**.
-
- :::image type="content" source="media/spring-cloud-quickstart-java/intellij-maven-goal-command-line.png" alt-text="Screenshot of IntelliJ IDEA Select Maven Goal dialog box with Command Line value highlighted.":::
-
-1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog. The plug-in will run the command `mvn package -DskipTests` on the `hellospring` app and deploy the jar generated by the `package` command.
-
-#### [Visual Studio Code](#tab/VS-Code)
-
-To deploy a simple Spring Boot web app to Azure Spring Apps, follow the steps in [Build and Deploy Java Spring Boot Apps to Azure Spring Apps with Visual Studio Code](https://code.visualstudio.com/docs/java/java-spring-cloud#_download-and-test-the-spring-boot-app).
---
-Once deployment has completed, you can access the app at `https://<service instance name>-hellospring.azuremicroservices.io/`.
-
+ :::image type="content" source="media/quickstart/intellij-more-settings.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with public endpoint Disable button highlighted." lightbox="media/quickstart/intellij-more-settings.png":::
-## Streaming logs in real time
-
-#### [CLI](#tab/Azure-CLI)
-
-Use the following command to get real-time logs from the App.
-
-```azurecli
-az spring app logs -n hellospring -s <service instance name> -g <resource group name> --lines 100 -f
-```
+1. Under **Before launch**, select **Run Maven Goal 'hellospring:package'**, and then select the pencil icon to edit the command line.
-Logs appear in the results:
+ :::image type="content" source="media/quickstart/intellij-edit-maven-goal.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with Maven Goal edit button highlighted." lightbox="media/quickstart/intellij-edit-maven-goal.png":::
+1. In the **Command line** textbox, enter *-DskipTests* after *package*, and then select **OK**.
->[!TIP]
-> Use `az spring app logs -h` to explore more parameters and log stream functionalities.
+ :::image type="content" source="media/quickstart/intellij-maven-goal-command-line.png" alt-text="Screenshot of IntelliJ IDEA Select Maven Goal dialog box with Command Line value highlighted." lightbox="media/quickstart/intellij-maven-goal-command-line.png":::
-#### [IntelliJ](#tab/IntelliJ)
+1. To start the deployment, select the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog box. The plug-in runs the command `mvn package -DskipTests` on the `hellospring` app and deploys the *.jar* file generated by the `package` command.
-1. Select **Azure Explorer**, then **Spring Cloud**.
-1. Right-click the running app.
-1. Select **Streaming Logs** from the drop-down list.
-1. Select instance.
+## [Visual Studio Code](#tab/VS-Code)
- :::image type="content" source="media/spring-cloud-quickstart-java/intellij-get-streaming-logs.png" alt-text="Screenshot of IntelliJ IDEA showing Select instance dialog box." lightbox="media/spring-cloud-quickstart-java/intellij-get-streaming-logs.png":::
+## Deploy a Spring Boot web app to Azure Spring Apps with Visual Studio Code
-1. The streaming log will be visible in the output window.
-
- :::image type="content" source="media/spring-cloud-quickstart-java/intellij-streaming-logs-output.png" alt-text="Screenshot of IntelliJ IDEA showing streaming log output." lightbox="media/spring-cloud-quickstart-java/intellij-streaming-logs-output.png":::
-
-#### [Visual Studio Code](#tab/VS-Code)
-
-To get real-time application logs with Visual Studio Code, follow the steps in [Stream your application logs](https://code.visualstudio.com/docs/java/java-spring-cloud#_stream-your-application-logs).
+To deploy a Spring Boot web app to Azure Spring Apps, follow the steps in [Java on Azure Spring Apps](https://code.visualstudio.com/docs/java/java-spring-apps).
-For advanced logs analytics features, visit the **Logs** tab in the menu on the [Azure portal](https://portal.azure.com/). Logs here have a latency of a few minutes.
--
+Once deployment has completed, you can access the app at `https://<service instance name>-hellospring.azuremicroservices.io/`.
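As a quick smoke test, you can request the endpoint from the command line. This sketch assumes the default endpoint pattern shown above:

```azurecli-interactive
# Expect the greeting returned by HelloController (replace the placeholder)
curl https://<service-instance-name>-hellospring.azuremicroservices.io/
```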
## Clean up resources
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following commands to delete the resource group:
-```azurecli
+```azurecli-interactive
echo "Enter the Resource Group name:" && read resourceGroupName && az group delete --name $resourceGroupName &&
echo "Press [ENTER] to continue ..."
## Next steps
-In this quickstart, you learned how to:
+In this quickstart, you learned how to generate a basic Spring project, provision a service instance, build and deploy an app with a public endpoint, and clean up the resources.
-> [!div class="checklist"]
-> * Generate a basic Spring project
-> * Provision a service instance
-> * Build and deploy the app with a public endpoint
-> * Stream logs in real time
+You also have access to powerful logs, metrics, and distributed tracing capability from the Azure portal. For more information, see [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md).
To learn how to use more Azure Spring capabilities, advance to the quickstart series that deploys a sample application to Azure Spring Apps:
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
Azure Active Directory (Azure AD) authorizes access rights to secured resources
An Azure AD security principal may be a user, a group, an application service principal, or a [managed identity for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). The RBAC roles that are assigned to a security principal determine the permissions that the principal has. To learn more about assigning Azure roles for blob access, see [Assign an Azure role for access to blob data](../blobs/assign-azure-role-data-access.md).
-In some cases you may need to enable fine-grained access to blob resources or to simplify permissions when you have a large number of role assignments for a storage resource. You can use Azure attribute-based access control (Azure ABAC) to configure conditions on role assignments. You can use conditions with a [custom role](../../role-based-access-control/custom-roles.md) or select built-in roles. For more information about configuring conditions for Azure storage resources with ABAC, see [Authorize access to blobs using Azure role assignment conditions (preview)](../common/storage-auth-abac.md). For details about supported conditions for blob data operations, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../common/storage-auth-abac-attributes.md).
+In some cases, you may need to enable fine-grained access to blob resources or to simplify permissions when you have a large number of role assignments for a storage resource. You can use Azure attribute-based access control (Azure ABAC) to configure conditions on role assignments. You can use conditions with a [custom role](../../role-based-access-control/custom-roles.md) or select built-in roles. For more information about configuring conditions for Azure storage resources with ABAC, see [Authorize access to blobs using Azure role assignment conditions (preview)](../blobs/storage-auth-abac.md). For details about supported conditions for blob data operations, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../blobs/storage-auth-abac-attributes.md).
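As an illustration, the following sketch assigns the built-in **Storage Blob Data Reader** role with a condition that limits read access to a single container. The principal, scope, and container name are placeholders, and the condition follows the format described in the articles linked above:

```azurecli-interactive
# Hypothetical example: grant blob read access only within one container
az role assignment create \
    --role "Storage Blob Data Reader" \
    --assignee "<principal-object-id>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
    --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'))" \
    --condition-version "2.0"
```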
### Resource scope
storage Immutable Policy Configure Container Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-container-scope.md
Immutable storage for Azure Blob Storage enables users to store business-critica
An immutability policy may be scoped either to an individual blob version or to a container. This article describes how to configure a container-level immutability policy. To learn how to configure version-level immutability policies, see [Configure immutability policies for blob versions](immutable-policy-configure-version-scope.md).
+> [!NOTE]
+> Immutability policies are not supported in accounts that have the Network File System (NFS) 3.0 protocol or the SSH File Transfer Protocol (SFTP) enabled on them.
+
## Configure a retention policy on a container

To configure a time-based retention policy on a container, use the Azure portal, PowerShell, or Azure CLI. You can configure a container-level retention policy for between 1 and 146,000 days.
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-version-scope.md
Immutable storage for Azure Blob Storage enables users to store business-critica
An immutability policy may be scoped either to an individual blob version or to a container. This article describes how to configure a version-level immutability policy. To learn how to configure container-level immutability policies, see [Configure immutability policies for containers](immutable-policy-configure-container-scope.md).
+> [!NOTE]
+> Immutability policies are not supported in accounts that have the Network File System (NFS) 3.0 protocol or the SSH File Transfer Protocol (SFTP) enabled on them.
+
Configuring a version-level immutability policy is a two-step process:

1. First, enable support for version-level immutability on a new storage account or on a new or existing container. See [Enable support for version-level immutability](#enable-support-for-version-level-immutability) for details.
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
Immutability policies are supported for both new and existing storage accounts.
| Legal hold | Version-level scope | General-purpose v2<br />Premium block blob | No |
| Legal hold | Container-level scope | General-purpose v2<br />Premium block blob<br />General-purpose v1 (legacy)<sup>1</sup><br> Blob storage (legacy) | Yes |
+> [!NOTE]
+> Immutability policies are not supported in accounts that have the Network File System (NFS) 3.0 protocol or the SSH File Transfer Protocol (SFTP) enabled on them.
+
<sup>1</sup> Microsoft recommends upgrading general-purpose v1 accounts to general-purpose v2 so that you can take advantage of more features. For information on upgrading an existing general-purpose v1 storage account, see [Upgrade a storage account](../common/storage-account-upgrade.md).

### Access tiers
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
+
+ Title: Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
+
+description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) in Azure Storage.
+++++ Last updated : 09/01/2022+++++
+# Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
+
+> [!IMPORTANT]
+> Azure ABAC and Azure role assignment conditions are currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article describes the supported attribute dictionaries that can be used in conditions on Azure role assignments for each Azure Storage [DataAction](../../role-based-access-control/role-definitions.md#dataactions). For the list of Blob service operations that are affected by a specific permission or DataAction, see [Permissions for Blob service operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-blob-service-operations).
+
+To understand the role assignment condition format, see [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md).
+
+## Suboperations
+
+Multiple Storage service operations can be associated with a single permission or DataAction. However, operations that share a permission might support different parameters. *Suboperations* enable you to differentiate between service operations that require the same permission but support a different set of attributes for conditions. By using a suboperation, you can specify one condition for the subset of operations that support a given parameter, and another condition for operations with the same action that don't support it.
+
+For example, the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` action is required for over a dozen different service operations. Some of these operations accept blob index tags as a request parameter, while others don't. For operations that accept blob index tags as a parameter, you can use blob index tags in a Request condition. However, if such a condition is defined on the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` action, all operations that don't accept tags as a request parameter can't evaluate the condition and fail the authorization access check.
+
+In this case, the optional suboperation `Blob.Write.WithTagHeaders` can be used to apply a condition to only those operations that support blob index tags as a request parameter.
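For example, a role assignment condition that targets this suboperation might look like the following sketch. All names are placeholders; the backslashes before `$` only prevent the shell from expanding the attribute markers:

```azurecli-interactive
# Hypothetical condition: writes that carry blob index tag headers must set
# the case-sensitive tag key 'Project' to 'Cascade'; other writes are unaffected
az role assignment create \
    --role "Storage Blob Data Contributor" \
    --assignee "<principal-object-id>" \
    --scope "<storage-account-resource-id>" \
    --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_sensitive\$>] StringEquals 'Cascade'))" \
    --condition-version "2.0"
```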
+
+> [!NOTE]
+> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, metadata isn't supported in conditions; you must use blob index tags instead. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md).
+
+In this preview, storage accounts support the following suboperations:
+
+> [!div class="mx-tableFixed"]
+> | Display name | DataAction | Suboperation |
+> | : | : | : |
+> | [List blobs](#list-blobs) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | `Blob.List` |
+> | [Read a blob](#read-a-blob) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | **NOT** `Blob.List` |
+> | [Read content from a blob with tag conditions](#read-content-from-a-blob-with-tag-conditions) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | `Blob.Read.WithTagConditions` (deprecated) |
+> | [Sets the access tier on a blob](#sets-the-access-tier-on-a-blob) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | `Blob.Write.Tier` |
+> | [Write to a blob with blob index tags](#write-to-a-blob-with-blob-index-tags) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` <br/> `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | `Blob.Write.WithTagHeaders` |
+
+## Azure Blob storage actions and suboperations
+
+This section lists the supported Azure Blob storage actions and suboperations you can target for conditions.
+
+### List blobs
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | List blobs |
+> | **Description** | List blobs operation. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
+> | **Suboperation** | `Blob.List` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name) |
+> | **Request attributes** | [Blob prefix](#blob-prefix) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})`<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path) |
+
+### Read a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Read a blob |
+> | **Description** | All blob read operations excluding list. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
+> | **Suboperation** | NOT `Blob.List` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})`<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path) |
+
+### Read content from a blob with tag conditions
+
+> [!IMPORTANT]
+> Although `Read content from a blob with tag conditions` is currently supported for compatibility with conditions implemented during the ABAC feature preview, that suboperation has been deprecated, and Microsoft recommends using the ["Read a blob"](#read-a-blob) action instead.
+>
+> When configuring ABAC conditions in the Azure portal, you might see "DEPRECATED: Read content from a blob with tag conditions". Remove that operation and replace it with the "Read a blob" operation.
+>
+> If you're authoring your own condition to restrict read access by tag conditions, see [Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag).
+
+### Read blob index tags
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Read blob index tags |
+> | **Description** | DataAction for reading blob index tags. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+
+### Find blobs by tags
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Find blobs by tags |
+> | **Description** | DataAction for finding blobs by index tags. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Write to a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Write to a blob |
+> | **Description** | DataAction for writing to blobs. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
+
+### Sets the access tier on a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Sets the access tier on a blob |
+> | **Description** | DataAction for setting the access tier on a blob. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` |
+> | **Suboperation** | `Blob.Write.Tier` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.Tier'})` |
+
+### Write to a blob with blob index tags
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Write to a blob with blob index tags |
+> | **Description** | REST operations: Put Blob, Put Block List, Copy Blob and Copy Blob From URL. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write`<br/>`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` |
+> | **Suboperation** | `Blob.Write.WithTagHeaders` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | [Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})`<br/>`!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})`<br/>[Example: New blobs must include a blob index tag](storage-auth-abac-examples.md#example-new-blobs-must-include-a-blob-index-tag) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+
+### Create a blob or snapshot, or append data
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Create a blob or snapshot, or append data |
+> | **Description** | DataAction for creating blobs. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Encryption scope name](#encryption-scope-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
+
+### Write blob index tags
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Write blob index tags |
+> | **Description** | DataAction for writing blob index tags. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
+> | **Request attributes** | [Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys)<br/>[Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})`<br/>[Example: Existing blobs must have blob index tag keys](storage-auth-abac-examples.md#example-existing-blobs-must-have-blob-index-tag-keys) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+
+### Write Blob legal hold and immutability policy
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Write Blob legal hold and immutability policy |
+> | **Description** | DataAction for writing Blob legal hold and immutability policy. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/immutableStorage/runAsSuperUser/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Delete a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Delete a blob |
+> | **Description** | DataAction for deleting blobs. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
+
+### Delete a version of a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Delete a version of a blob |
+> | **Description** | DataAction for deleting a version of a blob. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | [Version ID](#version-id) |
+> | **Principal attributes support** | True |
+> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action'})`<br/>[Example: Delete old blob versions](storage-auth-abac-examples.md#example-delete-old-blob-versions) |
+
+### Permanently delete a blob overriding soft-delete
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Permanently delete a blob overriding soft-delete |
+> | **Description** | DataAction for permanently deleting a blob overriding soft-delete. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/permanentDelete/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
+> | **Principal attributes support** | True |
+
+### Modify permissions of a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Modify permissions of a blob |
+> | **Description** | DataAction for modifying permissions of a blob. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Change ownership of a blob
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Change ownership of a blob |
+> | **Description** | DataAction for changing ownership of a blob. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/manageOwnership/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Rename a file or a directory
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Rename a file or a directory |
+> | **Description** | DataAction for renaming files or directories. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### All data operations for accounts with hierarchical namespace enabled
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | All data operations for accounts with hierarchical namespace enabled |
+> | **Description** | DataAction for all data operations on storage accounts with hierarchical namespace enabled.<br/>If your role definition includes the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` action, you should target this action in your condition. Targeting this action ensures the condition will still work as expected if hierarchical namespace is enabled for a storage account. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` |
+> | **Suboperation** | |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+> | **Examples** | [Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers)<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path)<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path)<br/>[Example: Write blobs in named containers with a path](storage-auth-abac-examples.md#example-write-blobs-in-named-containers-with-a-path)<br/>[Example: Read only current blob versions](storage-auth-abac-examples.md#example-read-only-current-blob-versions)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots)<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+## Azure Queue storage actions
+
+This section lists the supported Azure Queue storage actions you can target for conditions.
+
+### Peek messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Peek messages |
+> | **Description** | DataAction for peeking messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/read` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Put a message
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Put a message |
+> | **Description** | DataAction for putting a message. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/add/action` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Put or update a message
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Put or update a message |
+> | **Description** | DataAction for putting or updating a message. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/write` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Clear messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Clear messages |
+> | **Description** | DataAction for clearing messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/delete` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Get or delete messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Get or delete messages |
+> | **Description** | DataAction for getting or deleting messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/process/action` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+## Azure Blob storage attributes
+
+This section lists the Azure Blob storage attributes you can use in your condition expressions depending on the action you target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for your condition because the attributes must be available across the selected actions.
+
+> [!NOTE]
+> Attributes and values listed are considered case-insensitive, unless stated otherwise.
+
+### Account name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Account name |
+> | **Description** | Name of a storage account. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts:name] StringEquals 'sampleaccount'`<br/>[Example: Read or write blobs in named storage account with specific encryption scope](storage-auth-abac-examples.md#example-read-or-write-blobs-in-named-storage-account-with-specific-encryption-scope) |
+
+### Blob index tags [Keys]
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Blob index tags [Keys] |
+> | **Description** | Index tags on a blob resource.<br/>Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check the key in blob index tags. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&` |
+> | **Attribute source** | Resource<br/>Request |
+> | **Attribute type** | StringList |
+> | **Is key case sensitive** | True |
+> | **Hierarchical namespace support** | False |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}`<br/>[Example: Existing blobs must have blob index tag keys](storage-auth-abac-examples.md#example-existing-blobs-must-have-blob-index-tag-keys) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+### Blob index tags [Values in key]
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Blob index tags [Values in key] |
+> | **Description** | Index tags on a blob resource.<br/>Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check both the key (case-sensitive) and value in blob index tags. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags` |
+> | **Attribute source** | Resource<br/>Request |
+> | **Attribute type** | String |
+> | **Is key case sensitive** | True |
+> | **Hierarchical namespace support** | False |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:`*keyname*`<$key_case_sensitive$>]`<br/>`@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'`<br/>[Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+### Blob path
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Blob path |
+> | **Description** | Path of a virtual directory, blob, folder or file resource.<br/>Use when you want to check the blob name or folders in a blob path. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'`<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path) |
+
+> [!NOTE]
+> When specifying conditions for the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path` attribute, the values shouldn't include the container name or a preceding slash (`/`) character. Use the path characters without any URL encoding.
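+>
+> For example, for a blob at the hypothetical URL `https://myaccount.blob.core.windows.net/container1/readonly/file.txt`, the path value to test against is `readonly/file.txt`, not `/container1/readonly/file.txt`:
+>
+> ```
+> @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringEquals 'readonly/file.txt'
+> ```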
+
+### Blob prefix
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Blob prefix |
+> | **Description** | Allowed prefix of blobs to be listed.<br/>Path of a virtual directory or folder resource. Use when you want to check the folders in a blob path. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix` |
+> | **Attribute source** | Request |
+> | **Attribute type** | String |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix] StringStartsWith 'readonly/'`<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path) |
+
+> [!NOTE]
+> When specifying conditions for the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix` attribute, the values shouldn't include the container name or a preceding slash (`/`) character. Use the path characters without any URL encoding.
+
+### Container name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Container name |
+> | **Description** | Name of a storage container or file system.<br/>Use when you want to check the container name. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
+
+### Encryption scope name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Encryption scope name |
+> | **Description** | Name of the encryption scope used to encrypt data.<br/>Available only for storage accounts where hierarchical namespace is not enabled. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/encryptionScopes:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Exists support** | True |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}`<br/>[Example: Read blobs with specific encryption scopes](storage-auth-abac-examples.md#example-read-blobs-with-specific-encryption-scopes) |
+> | **Learn more** | [Create and manage encryption scopes](../blobs/encryption-scope-manage.md) |
+
+### Is Current Version
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Is Current Version |
+> | **Description** | Identifies whether the resource is the current version of the blob, as opposed to a snapshot or a specific blob version. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | Boolean |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true`<br/>[Example: Read only current blob versions](storage-auth-abac-examples.md#example-read-only-current-blob-versions)<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version) |
+
+### Is hierarchical namespace enabled
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Is hierarchical namespace enabled |
+> | **Description** | Whether hierarchical namespace is enabled on the storage account.<br/>Applicable only at resource group scope or above. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts:isHnsEnabled` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | Boolean |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true`<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+### Snapshot
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Snapshot |
+> | **Description** | The Snapshot identifier for the Blob snapshot.<br/>Available for storage accounts where hierarchical namespace is not enabled and currently in preview for storage accounts where hierarchical namespace is enabled. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot` |
+> | **Attribute source** | Request |
+> | **Attribute type** | DateTime |
+> | **Exists support** | True |
+> | **Hierarchical namespace support** | False |
+> | **Examples** | `Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]`<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |
+> | **Learn more** | [Blob snapshots](../blobs/snapshots-overview.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+### Version ID
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Version ID |
+> | **Description** | The version ID of the versioned Blob.<br/>Available only for storage accounts where hierarchical namespace is not enabled. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId` |
+> | **Attribute source** | Request |
+> | **Attribute type** | DateTime |
+> | **Exists support** | True |
+> | **Hierarchical namespace support** | False |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'`<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+
+## Azure Queue storage attributes
+
+This section lists the Azure Queue storage attributes you can use in your condition expressions depending on the action you target.
+
+### Queue name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Queue name |
+> | **Description** | Name of a storage queue. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/queueServices/queues:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
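+
+The table above doesn't include an example expression, so here's an illustrative sketch that restricts message reads to a hypothetical queue named `queue-example`; the action name follows the standard Azure Storage data action format:
+
+```
+(
+ (
+  !(ActionMatches{'Microsoft.Storage/storageAccounts/queueServices/queues/messages/read'})
+ )
+ OR
+ (
+  @Resource[Microsoft.Storage/storageAccounts/queueServices/queues:name] StringEquals 'queue-example'
+ )
+)
+```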
+
+## See also
+
+- [Example Azure role assignment conditions (preview)](storage-auth-abac-examples.md)
+- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)
+- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-cli.md
+
+ Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview) - Azure ABAC"
+
+description: Add a role assignment condition to restrict access to blobs using Azure CLI and Azure attribute-based access control (Azure ABAC).
+++++++ Last updated : 09/01/2022+
+#Customer intent:
+++
+# Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview)
+
+> [!IMPORTANT]
+> Azure ABAC and Azure role assignment conditions are currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Add a condition to a role assignment
+> - Restrict access to blobs based on a blob index tag
+
+## Prerequisites
+
+For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
+
+## Condition
+
+In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role assignment so that Chandra can only read files with the tag Project=Cascade.
+
+![Diagram of role assignment with a condition.](./media/shared/condition-role-assignment-rg.png)
+
+If Chandra tries to read a blob without the tag Project=Cascade, access is not allowed.
+
+![Diagram showing read access to blobs with Project=Cascade tag.](./media/shared/condition-access.png)
+
+Here is what the condition looks like in code:
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
+ AND NOT
+ SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ )
+)
+```
+
+## Step 1: Sign in to Azure
+
+1. Use the [az login](/cli/azure/reference-index#az-login) command and follow the instructions that appear to sign in to your directory as [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../role-based-access-control/built-in-roles.md#owner).
+
+ ```azurecli
+ az login
+ ```
+
+1. Use [az account show](/cli/azure/account#az-account-show) to get the ID of your subscription.
+
+ ```azurecli
+ az account show
+ ```
+
+1. Determine the subscription ID and initialize the variable (or capture it directly, as shown in the sketch below).
+
+ ```azurecli
+ subscriptionId="<subscriptionId>"
+ ```
+
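+Alternatively, here's a minimal sketch that captures the subscription ID directly into the variable, so you don't have to copy it by hand:
+
+```azurecli
+# Query just the subscription ID from the active account
+subscriptionId=$(az account show --query id --output tsv)
+```
+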
+## Step 2: Create a user
+
+1. Use [az ad user create](/cli/azure/ad/user#az-ad-user-create) to create a user or find an existing user. This tutorial uses Chandra as the example; a sketch of this step follows the list.
+
+1. Initialize the variable for the object ID of the user.
+
+ ```azurecli
+ userObjectId="<userObjectId>"
+ ```
+
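+For reference, here's a minimal sketch of creating the example user and capturing its object ID. The display name, user principal name, and password are hypothetical placeholders:
+
+```azurecli
+# Create the example user (replace the hypothetical placeholder values)
+az ad user create --display-name "Chandra" --user-principal-name "chandra@contoso.com" --password "<strongPassword>"
+
+# Capture the object ID (recent Azure CLI versions return it as 'id';
+# older versions return 'objectId')
+userObjectId=$(az ad user show --id "chandra@contoso.com" --query id --output tsv)
+```
+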
+## Step 3: Set up storage
+
+You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the storage account access key. This article shows how to authorize Blob storage operations using Azure AD. For more information, see [Quickstart: Create, download, and list blobs with Azure CLI](../blobs/storage-quickstart-blobs-cli.md).
+
+1. Use [az storage account](/cli/azure/storage/account) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support). A consolidated sketch of this setup follows the list.
+
+1. Use [az storage container](/cli/azure/storage/container) to create a new blob container within the storage account and set the Public access level to **Private (no anonymous access)**.
+
+1. Use [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) to upload a text file to the container.
+
+1. Add the following blob index tag to the text file. For more information, see [Use blob index tags (preview) to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
+
+ > [!NOTE]
+ > Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions.
+
+ | Key | Value |
+ | | |
+ | Project | Cascade |
+
+1. Upload a second text file to the container.
+
+1. Add the following blob index tag to the second text file.
+
+ | Key | Value |
+ | | |
+ | Project | Baker |
+
+1. Initialize the following variables with the names you used.
+
+ ```azurecli
+ resourceGroup="<resourceGroup>"
+ storageAccountName="<storageAccountName>"
+ containerName="<containerName>"
+ blobNameCascade="<blobNameCascade>"
+ blobNameBaker="<blobNameBaker>"
+ ```
+
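+For reference, here's a consolidated sketch of this setup with hypothetical resource names. The `--tags` parameter of `az storage blob upload` sets blob index tags; it's available in recent Azure CLI versions:
+
+```azurecli
+# Create a storage account that can support blob index tags
+az storage account create --name "<storageAccountName>" --resource-group "<resourceGroup>" --kind StorageV2
+
+# Create a private container (no anonymous access)
+az storage container create --account-name "<storageAccountName>" --name "<containerName>" --public-access off --auth-mode login
+
+# Upload two text files, each with a different blob index tag
+az storage blob upload --account-name "<storageAccountName>" --container-name "<containerName>" --name "<blobNameCascade>" --file "./cascade.txt" --tags "Project=Cascade" --auth-mode login
+az storage blob upload --account-name "<storageAccountName>" --container-name "<containerName>" --name "<blobNameBaker>" --file "./baker.txt" --tags "Project=Baker" --auth-mode login
+```
+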
+## Step 4: Assign a role with a condition
+
+1. Initialize the [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role variables.
+
+ ```azurecli
+ roleDefinitionName="Storage Blob Data Reader"
+ roleDefinitionId="2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
+ ```
+
+1. Initialize the scope for the resource group.
+
+ ```azurecli
+ scope="/subscriptions/$subscriptionId/resourceGroups/$resourceGroup"
+ ```
+
+1. Initialize the condition.
+
+ ```azurecli
+ condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_sensitive\$>] StringEquals 'Cascade'))"
+ ```
+
+ In Bash, if history expansion is enabled, you might see the message `bash: !: event not found` because of the exclamation point (!). In this case, you can disable history expansion with the command `set +H`. To re-enable history expansion, use `set -H`.
+
+ In Bash, a dollar sign ($) has special meaning for expansion. If your condition includes a dollar sign ($), you might need to prefix it with a backslash (\\). For example, this condition uses dollar signs to delineate the tag key name. For more information about rules for quotation marks in Bash, see [Double Quotes](https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html).
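+
+   A quick sketch of these shell adjustments (assumes the `condition` variable from the previous step):
+
+   ```azurecli
+   # If the assignment above failed with "event not found", disable history
+   # expansion and re-run the condition assignment
+   set +H
+   # Confirm the escaped dollar signs survived as literal $ characters
+   echo "$condition"
+   # Re-enable history expansion when you're done
+   set -H
+   ```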
+
+1. Initialize the condition version and description.
+
+ ```azurecli
+ conditionVersion="2.0"
+ description="Read access to blobs with the tag Project=Cascade"
+ ```
+
+1. Use [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role with a condition to the user at a resource group scope.
+
+ ```azurecli
+ az role assignment create --assignee-object-id $userObjectId --scope $scope --role $roleDefinitionId --description "$description" --condition "$condition" --condition-version $conditionVersion
+ ```
+
+ Here's an example of the output:
+
+ ```azurecli
+ {
+ "canDelegate": null,
+ "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'))",
+ "conditionVersion": "2.0",
+ "description": "Read access to blobs with the tag Project=Cascade",
+ "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}",
+ "name": "{roleAssignmentId}",
+ "principalId": "{userObjectId}",
+ "principalType": "User",
+ "resourceGroup": "{resourceGroup}",
+ "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
+ "scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}",
+ "type": "Microsoft.Authorization/roleAssignments"
+ }
+ ```
+
+## Step 5: (Optional) View the condition in the Azure portal
+
+1. In the Azure portal, open the resource group.
+
+1. Click **Access control (IAM)**.
+
+1. On the Role assignments tab, find the role assignment.
+
+1. In the **Condition** column, click **View/Edit** to view the condition.
+
+ ![Screenshot of Add role assignment condition in the Azure portal.](./media/shared/condition-view.png)
+
+## Step 6: Test the condition
+
+1. Open a new command window.
+
+1. Use [az login](/cli/azure/reference-index#az-login) to sign in as Chandra.
+
+ ```azurecli
+ az login
+ ```
+
+1. Initialize the following variables with the names you used.
+
+ ```azurecli
+ storageAccountName="<storageAccountName>"
+ containerName="<containerName>"
+ blobNameBaker="<blobNameBaker>"
+ blobNameCascade="<blobNameCascade>"
+ ```
+
+1. Use [az storage blob show](/cli/azure/storage/blob#az-storage-blob-show) to try to read the properties of the file for the Baker project.
+
+ ```azurecli
+ az storage blob show --account-name $storageAccountName --container-name $containerName --name $blobNameBaker --auth-mode login
+ ```
+
+ Here's an example of the output. Notice that you **can't** read the file because of the condition you added.
+
+ ```azurecli
+ You do not have the required permissions needed to perform this operation.
+ Depending on your operation, you may need to be assigned one of the following roles:
+ "Storage Blob Data Contributor"
+ "Storage Blob Data Reader"
+ "Storage Queue Data Contributor"
+ "Storage Queue Data Reader"
+
+ If you want to use the old authentication method and allow querying for the right account key, please use the "--auth-mode" parameter and "key" value.
+ ```
+
+1. Read the properties of the file for the Cascade project.
+
+ ```azurecli
+ az storage blob show --account-name $storageAccountName --container-name $containerName --name $blobNameCascade --auth-mode login
+ ```
+
+ Here's an example of the output. Notice that you can read the properties of the file because it has the tag Project=Cascade.
+
+ ```azurecli
+ {
+ "container": "<containerName>",
+ "content": "",
+ "deleted": false,
+ "encryptedMetadata": null,
+ "encryptionKeySha256": null,
+ "encryptionScope": null,
+ "isAppendBlobSealed": null,
+ "isCurrentVersion": null,
+ "lastAccessedOn": null,
+ "metadata": {},
+ "name": "<blobNameCascade>",
+ "objectReplicationDestinationPolicy": null,
+ "objectReplicationSourceProperties": [],
+ "properties": {
+ "appendBlobCommittedBlockCount": null,
+ "blobTier": "Hot",
+ "blobTierChangeTime": null,
+ "blobTierInferred": true,
+ "blobType": "BlockBlob",
+ "contentLength": 7,
+ "contentRange": null,
+
+ ...
+
+ }
+ ```
+
+## Step 7: (Optional) Edit the condition
+
+1. In the other command window, use [az role assignment list](/cli/azure/role/assignment#az-role-assignment-list) to get the role assignment you added.
+
+ ```azurecli
+ az role assignment list --assignee $userObjectId --resource-group $resourceGroup
+ ```
+
+ The output will be similar to the following:
+
+ ```azurecli
+ [
+ {
+ "canDelegate": null,
+ "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'))",
+ "conditionVersion": "2.0",
+ "description": "Read access to blobs with the tag Project=Cascade",
+ "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}",
+ "name": "{roleAssignmentId}",
+ "principalId": "{userObjectId}",
+ "principalName": "chandra@contoso.com",
+ "principalType": "User",
+ "resourceGroup": "{resourceGroup}",
+ "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
+ "roleDefinitionName": "Storage Blob Data Reader",
+ "scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}",
+ "type": "Microsoft.Authorization/roleAssignments"
+ }
+ ]
+ ```
+
+1. Create a JSON file with the following format and update the `condition` and `description` properties.
+
+ ```json
+ {
+ "canDelegate": null,
+ "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade' OR @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Baker'))",
+ "conditionVersion": "2.0",
+ "description": "Read access to blobs with the tag Project=Cascade or Project=Baker",
+ "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}",
+ "name": "{roleAssignmentId}",
+ "principalId": "{userObjectId}",
+ "principalName": "chandra@contoso.com",
+ "principalType": "User",
+ "resourceGroup": "{resourceGroup}",
+ "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
+ "roleDefinitionName": "Storage Blob Data Reader",
+ "scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}",
+ "type": "Microsoft.Authorization/roleAssignments"
+ }
+ ```
+
+1. Use [az role assignment update](/cli/azure/role/assignment#az-role-assignment-update) to update the condition for the role assignment.
+
+ ```azurecli
+ az role assignment update --role-assignment "./path/roleassignment.json"
+ ```
+
+## Step 8: Clean up resources
+
+1. Use [az role assignment delete](/cli/azure/role/assignment#az-role-assignment-delete) to remove the role assignment and condition you added.
+
+ ```azurecli
+ az role assignment delete --assignee $userObjectId --role "$roleDefinitionName" --resource-group $resourceGroup
+ ```
+
+1. Delete the storage account you created. A sketch of these cleanup steps follows the list.
+
+1. Delete the user you created.
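+
+For reference, a sketch of these cleanup steps, reusing the variables initialized earlier:
+
+```azurecli
+# Delete the storage account (confirmation prompt is skipped with --yes)
+az storage account delete --name $storageAccountName --resource-group $resourceGroup --yes
+
+# Delete the example user
+az ad user delete --id $userObjectId
+```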
+
+## Next steps
+
+- [Example Azure role assignment conditions](storage-auth-abac-examples.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
+
+ Title: Example Azure role assignment conditions (preview) - Azure RBAC
+
+description: Example Azure role assignment conditions for Azure attribute-based access control (Azure ABAC).
++++++++ Last updated : 09/01/2022+
+#Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
++
+# Example Azure role assignment conditions (preview)
+
+> [!IMPORTANT]
+> Azure ABAC and Azure role assignment conditions are currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article lists some examples of role assignment conditions.
+
+## Prerequisites
+
+For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
+
+## Blob index tags
+
+> [!IMPORTANT]
+> Although "Read content from a blob with tag conditions" is currently supported for compatibility with conditions implemented during the ABAC feature preview, that suboperation has been deprecated and Microsoft recommends using the "Read a blob" suboperation instead.
+>
+> When configuring ABAC conditions in the Azure portal, you might see "DEPRECATED: Read content from a blob with tag conditions". Remove the operation and replace it with the "Read a blob" suboperation instead.
+>
+> If you are authoring your own condition where you want to restrict read access by tag conditions, please refer to [Example: Read blobs with a blob index tag](#example-read-blobs-with-a-blob-index-tag).
+
+### Example: Read blobs with a blob index tag
+
+This condition allows users to read blobs with a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and a value of Cascade. Attempts to access blobs without this key-value tag will not be allowed.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing read access to blobs with a blob index tag.](./media/storage-auth-abac-examples/blob-index-tags-read.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Resource |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | {keyName} |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {keyValue} |
++
+#### Azure PowerShell
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade'))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+Here's how to test this condition.
+
+```azurepowershell
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+Get-AzStorageBlob -Container "<containerName>" -Blob "<blobName>" -Context $bearerCtx
+```
+
+### Example: New blobs must include a blob index tag
+
+This condition requires that any new blobs include a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and a value of Cascade.
+
+There are two actions that allow you to create new blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing new blobs must include a blob index tag.](./media/storage-auth-abac-examples/blob-index-tags-new-blobs.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags) |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | {keyName} |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {keyValue} |
++
+#### Azure PowerShell
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade'))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+Here's how to test this condition.
+
+```azurepowershell
+$localSrcFile = "<pathToLocalFile>" # path to an example file; can be an empty .txt file
+$ungrantedTag = @{'Project'='Baker'}
+$grantedTag = @{'Project'='Cascade'}
+# Get new context for request
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+# try ungranted tags
+$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blob "Example2.txt" -Tag $ungrantedTag -Context $bearerCtx
+# try granted tags
+$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blob "Example2.txt" -Tag $grantedTag -Context $bearerCtx
+```
+
+### Example: Existing blobs must have blob index tag keys
+
+This condition requires that any existing blobs be tagged with at least one of the allowed [blob index tag](../blobs/storage-blob-index-how-to.md) keys: Project or Program. This condition is useful for adding governance to existing blobs.
+
+There are two actions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing existing blobs must have blob index tag keys.](./media/storage-auth-abac-examples/blob-index-tags-keys.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags)<br/>[Write blob index tags](storage-auth-abac-attributes.md#write-blob-index-tags) |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Keys]](storage-auth-abac-attributes.md#blob-index-tags-keys) |
+> | Operator | [ForAllOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#forallofanyvalues) |
+> | Value | {keyName1}<br/>{keyName2} |
++
+#### Azure PowerShell
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&`$keys`$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+Here's how to test this condition.
+
+```azurepowershell
+$localSrcFile = "<pathToLocalFile>" # path to an example file; can be an empty .txt file
+$ungrantedTag = @{'Mode'='Baker'}
+$grantedTag = @{'Program'='Alpine';'Project'='Cascade'}
+# Get new context for request
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+# try ungranted tags
+$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blob "Example3.txt" -Tag $ungrantedTag -Context $bearerCtx
+# try granted tags
+$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blob "Example3.txt" -Tag $grantedTag -Context $bearerCtx
+```
+
+### Example: Existing blobs must have a blob index tag key and values
+
+This condition requires that any existing blobs have a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and values of Cascade, Baker, or Skagit. This condition is useful for adding governance to existing blobs.
+
+There are two actions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing existing blobs must have a blob index tag key and values.](./media/storage-auth-abac-examples/blob-index-tags-key-values.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAnyOfAnyValues:StringEquals {'Project'}
+ AND
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags)<br/>[Write blob index tags](storage-auth-abac-attributes.md#write-blob-index-tags) |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Keys]](storage-auth-abac-attributes.md#blob-index-tags-keys) |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Value | {keyName} |
+> | Operator | And |
+> | **Expression 2** | |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | {keyName} |
+> | Operator | [ForAllOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#forallofanyvalues) |
+> | Value | {keyValue1}<br/>{keyValue2}<br/>{keyValue3} |
++
+#### Azure PowerShell
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&`$keys`$&] ForAnyOfAnyValues:StringEquals {'Project'} AND @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+Here's how to test this condition.
+
+```azurepowershell
+$localSrcFile = "<pathToLocalFile>"
+$ungrantedTag = @{'Project'='Alpine'}
+$grantedTag1 = @{'Project'='Cascade'}
+$grantedTag2 = @{'Project'='Baker'}
+$grantedTag3 = @{'Project'='Skagit'}
+# Get new context for request
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+# try ungranted tags
+Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $ungrantedTag -Context $bearerCtx
+# try granted tags
+Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag1 -Context $bearerCtx
+Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag2 -Context $bearerCtx
+Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag3 -Context $bearerCtx
+```
+
+## Blob container names or paths
+
+### Example: Read, write, or delete blobs in named containers
+
+This condition allows users to read, write, or delete blobs in storage containers named blobs-example-container. This condition is useful for sharing specific storage containers with other users in a subscription.
+
+There are five actions for read, write, and delete of existing blobs. You must add this condition to any role assignments that include one of the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
+
+Suboperations aren't used in this condition because a suboperation is needed only when conditions are authored based on tags.
+
+![Diagram of condition showing read, write, or delete blobs in named containers.](./media/storage-auth-abac-examples/containers-read-write-delete.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ )
+)
+```
+
+Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ )
+)
+```
++
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Delete a blob](storage-auth-abac-attributes.md#delete-a-blob)<br/>[Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
++
+#### Azure PowerShell
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+Here's how to test this condition.
+
+```azurepowershell
+$localSrcFile = "<pathToLocalFile>"
+$grantedContainer = "blobs-example-container"
+$ungrantedContainer = "ungranted"
+# Get new context for request
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+# Ungranted Container actions
+$content = Set-AzStorageBlobContent -File $localSrcFile -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
+$content = Get-AzStorageBlobContent -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
+$content = Remove-AzStorageBlob -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
+# Granted Container actions
+$content = Set-AzStorageBlobContent -File $localSrcFile -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx
+$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx
+$content = Remove-AzStorageBlob -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx
+```
+
+### Example: Read blobs in named containers with a path
+
+This condition allows read access to storage containers named blobs-example-container with a blob path of readonly/*. This condition is useful for sharing specific parts of storage containers for read access with other users in the subscription.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
+
+![Diagram of condition showing read access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-read.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Resource |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
+> | Value | {pathString} |
++
+#### Azure PowerShell
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+Here's how to test this condition.
+
+```azurepowershell
+$grantedContainer = "blobs-example-container"
+# Get new context for request
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+# Try to get ungranted blob
+$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Ungranted.txt" -Context $bearerCtx
+# Try to get granted blob
+$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "readonly/Example6.txt" -Context $bearerCtx
+```
+
+### Example: Read or list blobs in named containers with a path
+
+This condition allows read and list access to storage containers named blobs-example-container with a blob path of readonly/*. Condition #1 applies to read actions, excluding list blobs. Condition #2 applies to list blobs. This condition is useful for sharing specific parts of storage containers for read or list access with other users in the subscription.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
+
+![Diagram of condition showing read and list access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-read.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringStartsWith 'readonly/'
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix] StringStartsWith 'readonly/'
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringStartsWith 'readonly/'
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:prefix] StringStartsWith 'readonly/'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!NOTE]
+> The Azure portal uses prefix='' to list blobs from the container's root directory. After the condition is added with the list blobs operation using prefix StringStartsWith 'readonly/', targeted users won't be able to list blobs from the container's root directory in the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Resource |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringStartsWith](../../role-based-access-control/conditions-format.md#stringstartswith) |
+> | Value | {pathString} |
+
+> [!div class="mx-tableFixed"]
+> | Condition #2 | Setting |
+> | | |
+> | Actions | [List blobs](storage-auth-abac-attributes.md#list-blobs)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Request |
+> | Attribute | [Blob prefix](storage-auth-abac-attributes.md#blob-prefix) |
+> | Operator | [StringStartsWith](../../role-based-access-control/conditions-format.md#stringstartswith) |
+> | Value | {pathString} |
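+
+#### Azure CLI
+
+Here's a sketch of how you might test this condition with Azure CLI after the combined condition above has been assigned; the blob name and variables are hypothetical:
+
+```azurecli
+# Listing with the allowed prefix should succeed
+az storage blob list --account-name $storageAccountName --container-name blobs-example-container --prefix "readonly/" --auth-mode login
+
+# Listing from the container root (no prefix) should be denied by the condition
+az storage blob list --account-name $storageAccountName --container-name blobs-example-container --auth-mode login
+
+# Reading a blob under readonly/ should succeed
+az storage blob show --account-name $storageAccountName --container-name blobs-example-container --name "readonly/Example.txt" --auth-mode login
+```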
+
+### Example: Write blobs in named containers with a path
+
+This condition allows a partner (an Azure AD guest user) to drop files into storage containers named contosocorp with a path of uploads/contoso/*. This condition is useful for allowing other users to put data in storage containers.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
+
+![Diagram of condition showing write access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-write.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/contoso/*'
+ )
+)
+```
+
+Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/contoso/*'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Resource |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
+> | Value | {pathString} |
+
+#### Azure PowerShell
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/contoso/*'))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+Here's how to test this condition.
+
+```azurepowershell
+$grantedContainer = "contosocorp"
+$localSrcFile = "<pathToLocalFile>"
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+# Try to set ungranted blob
+$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "Example7.txt" -Context $bearerCtx -File $localSrcFile
+# Try to set granted blob
+$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "uploads/contoso/Example7.txt" -Context $bearerCtx -File $localSrcFile
+```
+
+### Example: Read blobs with a blob index tag and a path
+
+This condition allows a user to read blobs that have a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Program, a value of Alpine, and a blob path of `logs*`. Because the blob path includes the blob name, the pattern `logs*` also matches blobs whose names begin with logs.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing read access to blobs with a blob index tag and a path.](./media/storage-auth-abac-examples/blob-index-tags-path-read.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<$key_case_sensitive$>] StringEquals 'Alpine'
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Resource |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | {keyName} |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {keyValue} |
+
+> [!div class="mx-tableFixed"]
+> | Condition #2 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Resource |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
+> | Value | {pathString} |
+
+#### Azure PowerShell
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<`$key_case_sensitive`$>] StringEquals 'Alpine')) AND ((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+Here's how to test this condition.
+
+```azurepowershell
+$grantedContainer = "contosocorp"
+# Get new context for request
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+# Try to get ungranted blobs
+# Wrong name but right tags
+$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "AlpineFile.txt" -Context $bearerCtx
+# Right name but wrong tags
+$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logsAlpine.txt" -Context $bearerCtx
+# Try to get granted blob
+$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logs/AlpineFile.txt" -Context $bearerCtx
+```
+
+## Blob versions or blob snapshots
+
+### Example: Read only current blob versions
+
+This condition allows a user to only read current blob versions. The user cannot read other blob versions.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing read access to current blob version only.](./media/storage-auth-abac-examples/current-version-read-only.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
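+
+#### Azure PowerShell
+
+Here's a sketch of how you might add this condition using Azure PowerShell, following the same `Get-AzRoleAssignment`/`Set-AzRoleAssignment` pattern as the earlier examples. It uses the Storage Blob Data Reader and Storage Blob Data Contributor variant of the condition, and it assumes `$scope`, `$roleDefinitionName`, and `$userObjectID` have already been initialized.
+
+```azurepowershell
+# Assumes $scope, $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```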
+
+### Example: Read current blob versions and a specific blob version
+
+This condition allows a user to read current blob versions as well as read blobs with a version ID of 2022-06-01T23:38:32.8883645Z. The user cannot read other blob versions. The [Version ID](storage-auth-abac-attributes.md#version-id) attribute is available only for storage accounts where hierarchical namespace is not enabled.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+
+![Diagram of condition showing read access to a specific blob version.](./media/storage-auth-abac-examples/version-id-specific-blob-read.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'
+ OR
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Request |
+> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
+> | Operator | [DateTimeEquals](../../role-based-access-control/conditions-format.md#datetime-comparison-operators) |
+> | Value | &lt;blobVersionId&gt; |
+> | **Expression 2** | |
+> | Operator | Or |
+> | Attribute source | Resource |
+> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
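+
+#### Azure PowerShell
+
+If you prefer Azure PowerShell, here's a minimal sketch that applies the condition shown above; it assumes `$scope`, `$roleDefinitionName`, and `$userObjectID` are already initialized as in the earlier examples.
+
+```azurepowershell
+# Assumes $scope, $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z' OR @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```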
+
+### Example: Delete old blob versions
+
+This condition allows a user to delete versions of a blob that are older than June 1, 2022 so they can perform cleanup. The [Version ID](storage-auth-abac-attributes.md#version-id) attribute is available only for storage accounts where hierarchical namespace is not enabled.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action` | |
+
+![Diagram of condition showing delete access to old blob versions.](./media/storage-auth-abac-examples/version-id-blob-delete.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeLessThan '2022-06-01T00:00:00.0Z'
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Delete a blob](storage-auth-abac-attributes.md#delete-a-blob)<br/>[Delete a version of a blob](storage-auth-abac-attributes.md#delete-a-version-of-a-blob) |
+> | Attribute source | Request |
+> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
+> | Operator | [DateTimeLessThan](../../role-based-access-control/conditions-format.md#datetime-comparison-operators) |
+> | Value | &lt;blobVersionId&gt; |
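+
+#### Azure PowerShell
+
+Here's a minimal Azure PowerShell sketch for this condition, following the pattern of the earlier examples and assuming `$scope`, `$roleDefinitionName`, and `$userObjectID` are already initialized.
+
+```azurepowershell
+# Assumes $scope, $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeLessThan '2022-06-01T00:00:00.0Z'))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+
+To test, you could try deleting an old and a new version of a blob; the blob name and version IDs below are hypothetical placeholders.
+
+```azurepowershell
+$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+# Deleting a version older than the cutoff should succeed
+Remove-AzStorageBlob -Container $grantedContainer -Blob "Example.txt" -VersionId "2022-05-01T00:00:00.0000000Z" -Context $bearerCtx
+# Deleting a newer version should be denied
+Remove-AzStorageBlob -Container $grantedContainer -Blob "Example.txt" -VersionId "2022-07-01T00:00:00.0000000Z" -Context $bearerCtx
+```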
+
+### Example: Read current blob versions and any blob snapshots
+
+This condition allows a user to read current blob versions and any blob snapshots. The [Version ID](storage-auth-abac-attributes.md#version-id) attribute is available only for storage accounts where hierarchical namespace is not enabled. The [Snapshot](storage-auth-abac-attributes.md#snapshot) attribute is available for storage accounts where hierarchical namespace is not enabled and currently in preview for storage accounts where hierarchical namespace is enabled.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing read access to current blob versions and any blob snapshots.](./media/storage-auth-abac-examples/version-id-snapshot-blob-read.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]
+ OR
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]
+ OR
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Request |
+> | Attribute | [Snapshot](storage-auth-abac-attributes.md#snapshot) |
+> | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) |
+> | **Expression 2** | |
+> | Operator | Or |
+> | Attribute source | Resource |
+> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
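+
+#### Azure PowerShell
+
+Here's a sketch of the equivalent Azure PowerShell steps, using the Storage Blob Data Reader and Storage Blob Data Contributor variant of the condition and assuming `$scope`, `$roleDefinitionName`, and `$userObjectID` are already initialized.
+
+```azurepowershell
+# Assumes $scope, $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot] OR @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```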
+
+## Hierarchical namespace
+
+### Example: Read only storage accounts with hierarchical namespace enabled
+
+This condition allows a user to only read blobs in storage accounts with [hierarchical namespace](../blobs/data-lake-storage-namespace.md) enabled. This condition is applicable only at resource group scope or above.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing read access to storage accounts with hierarchical namespace enabled.](./media/storage-auth-abac-examples/hierarchical-namespace-accounts-read.png)
+
+Storage Blob Data Owner
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true
+ )
+)
+```
+
+Storage Blob Data Reader, Storage Blob Data Contributor
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Resource |
+> | Attribute | [Is hierarchical namespace enabled](storage-auth-abac-attributes.md#is-hierarchical-namespace-enabled) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
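+
+#### Azure PowerShell
+
+Here's a minimal sketch for adding this condition with Azure PowerShell. Because this condition applies only at resource group scope or above, `$scope` is assumed to point at a resource group or higher, and the Storage Blob Data Reader and Storage Blob Data Contributor variant of the condition is used.
+
+```azurepowershell
+# Assumes $scope (resource group scope or above), $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```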
+
+## Encryption scope
+
+### Example: Read blobs with specific encryption scopes
+
+This condition allows a user to read blobs encrypted with encryption scope `validScope1` or `validScope2`.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+![Diagram of condition showing read access to blobs with encryption scope validScope1 or validScope2.](./media/storage-auth-abac-examples/encryption-scope-read-blobs.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | Resource |
+> | Attribute | [Encryption scope name](storage-auth-abac-attributes.md#encryption-scope-name) |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Value | &lt;scopeName&gt; |
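+
+#### Azure PowerShell
+
+Here's a sketch of the same condition applied with Azure PowerShell, assuming `$scope`, `$roleDefinitionName`, and `$userObjectID` are already initialized; the scope names `validScope1` and `validScope2` come from the example above.
+
+```azurepowershell
+# Assumes $scope, $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```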
+
+### Example: Read or write blobs in named storage account with specific encryption scope
+
+This condition allows a user to read or write blobs in a storage account named `sampleaccount` that are encrypted with encryption scope `ScopeCustomKey1`. If a blob isn't encrypted with `ScopeCustomKey1`, the request fails as forbidden.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+> [!NOTE]
+> Because encryption scopes can differ across storage accounts, we recommend using the `storageAccounts:name` attribute together with the `encryptionScopes:name` attribute to restrict which encryption scope is allowed for a specific account.
+
+![Diagram of condition showing read or write access to blobs in sampleaccount storage account with encryption scope ScopeCustomKey1.](./media/storage-auth-abac-examples/encryption-scope-account-name-read-wite-blobs.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts:name] StringEquals 'sampleaccount'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'ScopeCustomKey1'}
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data) |
+> | Attribute source | Resource |
+> | Attribute | [Account name](storage-auth-abac-attributes.md#account-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | &lt;accountName&gt; |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Resource |
+> | Attribute | [Encryption scope name](storage-auth-abac-attributes.md#encryption-scope-name) |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Value | &lt;scopeName&gt; |
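+
+#### Azure PowerShell
+
+Here's a minimal Azure PowerShell sketch for this condition, again assuming `$scope`, `$roleDefinitionName`, and `$userObjectID` are already initialized.
+
+```azurepowershell
+# Assumes $scope, $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})) OR (@Resource[Microsoft.Storage/storageAccounts:name] StringEquals 'sampleaccount' AND @Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'ScopeCustomKey1'}))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```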
+
+## Principal attributes
+
+### Example: Read or write blobs based on blob index tags and custom security attributes
+
+This condition allows read or write access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) that matches the [blob index tag](../blobs/storage-blob-index-how-to.md).
+
+For example, if Brenda has the attribute `Project=Baker`, she can only read or write blobs with the `Project=Baker` blob index tag. Similarly, Chandra can only read or write blobs with `Project=Cascade`.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+For more information, see [Allow read access to blobs based on tags and custom security attributes](../../role-based-access-control/conditions-custom-security-attributes.md).
+
+![Diagram of condition showing read or write access to blobs based on blob index tags and custom security attributes.](./media/storage-auth-abac-examples/principal-blob-index-tags-read-write.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ )
+ OR
+ (
+ @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read content from a blob with tag conditions](storage-auth-abac-attributes.md#read-content-from-a-blob-with-tag-conditions) |
+> | Attribute source | [Principal](../../role-based-access-control/conditions-format.md#principal-attributes) |
+> | Attribute | &lt;attributeset&gt;_&lt;key&gt; |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Option | Attribute |
+> | Attribute source | Resource |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | &lt;key&gt; |
+
+> [!div class="mx-tableFixed"]
+> | Condition #2 | Setting |
+> | | |
+> | Actions | [Write to a blob with blob index tags](storage-auth-abac-attributes.md#write-to-a-blob-with-blob-index-tags) |
+> | Attribute source | [Principal](../../role-based-access-control/conditions-format.md#principal-attributes) |
+> | Attribute | &lt;attributeset&gt;_&lt;key&gt; |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Option | Attribute |
+> | Attribute source | Request |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | &lt;key&gt; |
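+
+#### Azure PowerShell
+
+Here's a sketch of how this condition might be added with Azure PowerShell. Note that the dollar signs in `$key_case_sensitive$` must be escaped with backticks in PowerShell, and `$scope`, `$roleDefinitionName`, and `$userObjectID` are assumed to be initialized already.
+
+```azurepowershell
+# Assumes $scope, $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>])) AND ((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})) OR (@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>]))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```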
+
+### Example: Read blobs based on blob index tags and multi-value custom security attributes
+
+This condition allows read access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) with any value that matches the [blob index tag](../blobs/storage-blob-index-how-to.md).
+
+For example, if Chandra has the Project attribute with the values Baker and Cascade, she can only read blobs with the `Project=Baker` or `Project=Cascade` blob index tag.
+
+You must add this condition to any role assignments that include the following actions.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
+
+For more information, see [Allow read access to blobs based on tags and custom security attributes](../../role-based-access-control/conditions-custom-security-attributes.md).
+
+![Diagram of condition showing read access to blobs based on blob index tags and multi-value custom security attributes.](./media/storage-auth-abac-examples/principal-blob-index-tags-multi-value-read.png)
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] ForAnyOfAnyValues:StringEquals @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project]
+ )
+)
+```
+
+#### Azure portal
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read content from a blob with tag conditions](storage-auth-abac-attributes.md#read-content-from-a-blob-with-tag-conditions) |
+> | Attribute source | Resource |
+> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) |
+> | Key | &lt;key&gt; |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Option | Attribute |
+> | Attribute source | [Principal](../../role-based-access-control/conditions-format.md#principal-attributes) |
+> | Attribute | &lt;attributeset&gt;_&lt;key&gt; |
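+
+#### Azure PowerShell
+
+Here's the equivalent Azure PowerShell sketch, with the same backtick escaping for dollar signs and the usual assumption that `$scope`, `$roleDefinitionName`, and `$userObjectID` are already initialized.
+
+```azurepowershell
+# Assumes $scope, $roleDefinitionName, and $userObjectID are already initialized
+$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] ForAnyOfAnyValues:StringEquals @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project]))"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```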
+
+## Next steps
+
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](storage-auth-abac-portal.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)
+- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-portal.md
+
+ Title: "Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview) - Azure ABAC"
+
+description: Add a role assignment condition to restrict access to blobs using the Azure portal and Azure attribute-based access control (Azure ABAC).
+++++++ Last updated : 09/01/2022+
+#Customer intent:
+++
+# Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)
+
+> [!IMPORTANT]
+> Azure ABAC and Azure role assignment conditions are currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Add a condition to a role assignment
+> - Restrict access to blobs based on a blob index tag
+
+## Prerequisites
+
+For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
+
+## Condition
+
+In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role assignment so that Chandra can only read files with the tag `Project=Cascade`.
+
+![Diagram of role assignment with a condition.](./media/shared/condition-role-assignment-rg.png)
+
+If Chandra tries to read a blob without the tag `Project=Cascade`, access is not allowed.
+
+![Diagram showing read access to blobs with Project=Cascade tag.](./media/shared/condition-access.png)
+
+Here is what the condition looks like in code:
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
+ AND NOT
+ SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEqualsIgnoreCase 'Cascade'
+ )
+)
+```
+
+## Step 1: Create a user
+
+1. Sign in to the Azure portal as an Owner of a subscription.
+
+1. Click **Azure Active Directory**.
+
+1. Create a user or find an existing user. This tutorial uses Chandra as the example.
+
+## Step 2: Set up storage
+
+1. Create a storage account that is compatible with the blob index tags feature. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
+
+1. Create a new container within the storage account and set the Public access level to **Private (no anonymous access)**.
+
+1. In the container, click **Upload** to open the Upload blob pane.
+
+1. Find a text file to upload.
+
+1. Click **Advanced** to expand the pane.
+
+1. In the **Blob index tags** section, add the following blob index tag to the text file.
+
+ If you don't see the Blob index tags section and you just registered your subscription, you might need to wait a few minutes for changes to propagate. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
+
+ > [!NOTE]
+ > Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions.
+
+ | Key | Value |
+ | | |
+ | Project | Cascade |
+
+    ![Screenshot showing Upload blob pane with Blob index tags section.](./media/storage-auth-abac-portal/container-upload-blob.png)
+
+1. Click the **Upload** button to upload the file.
+
+1. Upload a second text file.
+
+1. Add the following blob index tag to the second text file.
+
+ | Key | Value |
+ | | |
+ | Project | Baker |
+
+## Step 3: Assign a storage blob data role
+
+1. Open the resource group.
+
+1. Click **Access control (IAM)**.
+
+1. Click the **Role assignments** tab to view the role assignments at this scope.
+
+1. Click **Add** > **Add role assignment**.
+
+ ![Screenshot of Add > Add role assignment menu.](./media/storage-auth-abac-portal/add-role-assignment-menu.png)
+
+ The Add role assignment page opens.
+
+1. On the **Roles** tab, select the [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role.
+
+ ![Screenshot of Add role assignment page with Roles tab.](./media/storage-auth-abac-portal/roles.png)
+
+1. On the **Members** tab, select the user you created earlier.
+
+ ![Screenshot of Add role assignment page with Members tab.](./media/storage-auth-abac-portal/members.png)
+
+1. (Optional) In the **Description** box, enter **Read access to blobs with the tag Project=Cascade**.
+
+1. Click **Next**.
+
+## Step 4: Add a condition
+
+1. On the **Conditions (optional)** tab, click **Add condition**.
+
+ ![Screenshot of Add role assignment condition page for a new condition.](./media/storage-auth-abac-portal/condition-add-new.png)
+
+ The Add role assignment condition page appears.
+
+1. In the Add action section, click **Add action**.
+
+ The Select an action pane appears. This pane is a filtered list of data actions based on the role assignment that will be the target of your condition.
+
+ ![Screenshot of Select an action pane with an action selected.](./media/storage-auth-abac-portal/condition-actions-select.png)
+
+1. Check the box next to **Read a blob**, then click **Select**.
+
+1. In the Build expression section, click **Add expression**.
+
+ The Expression section expands.
+
+1. Specify the following expression settings:
+
+ | Setting | Value |
+ | | |
+ | Attribute source | Resource |
+ | Attribute | Blob index tags [Values in key] |
+ | Key | Project |
+ | Operator | StringEqualsIgnoreCase |
+ | Value | Cascade |
+
+ ![Screenshot of Build expression section for blob index tags.](./media/storage-auth-abac-portal/condition-expressions.png)
+
+1. Scroll up to **Editor type** and click **Code**.
+
+ The condition is displayed as code. You can make changes to the condition in this code editor. To go back to the visual editor, click **Visual**.
+
+ ![Screenshot of condition displayed in code editor.](./media/storage-auth-abac-portal/condition-code.png)
+
+1. Click **Save** to add the condition and return to the Add role assignment page.
+
+1. Click **Next**.
+
+1. On the **Review + assign** tab, click **Review + assign** to assign the role with a condition.
+
+ After a few moments, the security principal is assigned the role at the selected scope.
+
+ ![Screenshot of role assignment list after assigning role.](./media/storage-auth-abac-portal/rg-role-assignments-condition.png)
+
+## Step 5: Assign Reader role
+
+- Repeat the previous steps to assign the [Reader](../../role-based-access-control/built-in-roles.md#reader) role to the user you created earlier at resource group scope.
+
+ > [!NOTE]
+ > You typically don't need to assign the Reader role. However, this is done so that you can test the condition using the Azure portal.
+
+## Step 6: Test the condition
+
+1. In a new window, open the [Azure portal](https://portal.azure.com).
+
+1. Sign in as the user you created earlier.
+
+1. Open the storage account and container you created.
+
+1. Ensure that the authentication method is set to **Azure AD User Account** and not **Access key**.
+
+ ![Screenshot of storage container with test files.](./media/storage-auth-abac-portal/test-storage-container.png)
+
+1. Click the Baker text file.
+
+    You should **NOT** be able to view or download the blob, and an authorization failed message should be displayed.
+
+1. Click the Cascade text file.
+
+ You should be able to view and download the blob.
+
+## Step 7: Clean up resources
+
+1. Remove the role assignment you added.
+
+1. Delete the test storage account you created.
+
+1. Delete the user you created.
+
+## Next steps
+
+- [Example Azure role assignment conditions](storage-auth-abac-examples.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
+
+ Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview) - Azure ABAC"
+
+description: Add a role assignment condition to restrict access to blobs using Azure PowerShell and Azure attribute-based access control (Azure ABAC).
+++++++ Last updated : 09/01/2022+
+#Customer intent:
+++
+# Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview)
+
+> [!IMPORTANT]
+> Azure ABAC and Azure role assignment conditions are currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Add a condition to a role assignment
+> - Restrict access to blobs based on a blob index tag
+
+## Prerequisites
+
+For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
+
+## Condition
+
+In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role assignment so that Chandra can only read files with the tag Project=Cascade.
+
+![Diagram of role assignment with a condition.](./media/shared/condition-role-assignment-rg.png)
+
+If Chandra tries to read a blob without the tag Project=Cascade, access is not allowed.
+
+![Diagram showing read access to blobs with Project=Cascade tag.](./media/shared/condition-access.png)
+
+Here is what the condition looks like in code:
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
+ AND NOT
+ SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ )
+)
+```
+
+## Step 1: Install prerequisites
+
+1. Open a PowerShell window.
+
+1. Use [Get-InstalledModule](/powershell/module/powershellget/get-installedmodule) to check versions of installed modules.
+
+ ```azurepowershell
+ Get-InstalledModule -Name Az
+ Get-InstalledModule -Name Az.Resources
+ Get-InstalledModule -Name Az.Storage
+ ```
+
+1. If necessary, use [Install-Module](/powershell/module/powershellget/install-module) to install the required versions for the [Az](https://www.powershellgallery.com/packages/Az/), [Az.Resources](https://www.powershellgallery.com/packages/Az.Resources/), and [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage/2.5.2-preview) modules.
+
+ ```azurepowershell
+ Install-Module -Name Az -RequiredVersion 5.5.0
+ Install-Module -Name Az.Resources -RequiredVersion 3.2.1
+ Install-Module -Name Az.Storage -RequiredVersion 2.5.2-preview -AllowPrerelease
+ ```
+
+1. Close and reopen PowerShell to refresh the session.
+
+## Step 2: Sign in to Azure
+
+1. Use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command and follow the instructions that appear to sign in to your directory as [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../role-based-access-control/built-in-roles.md#owner).
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+1. Use [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) to list all of your subscriptions.
+
+ ```azurepowershell
+ Get-AzSubscription
+ ```
+
+1. Determine the subscription ID and initialize the variable.
+
+ ```azurepowershell
+ $subscriptionId = "<subscriptionId>"
+ ```
+
+1. Set the subscription as the active subscription.
+
+ ```azurepowershell
+ $context = Get-AzSubscription -SubscriptionId $subscriptionId
+ Set-AzContext $context
+ ```
+
+## Step 3: Create a user
+
+1. Use [New-AzureADUser](/powershell/module/azuread/new-azureaduser) to create a user or find an existing user. This tutorial uses Chandra as the example.
+
+1. Initialize the variable for the object ID of the user.
+
+ ```azurepowershell
+ $userObjectId = "<userObjectId>"
+ ```
+
+## Step 4: Set up storage
+
+1. Use [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
+
+1. Use [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) to create a new blob container within the storage account and set the Public access level to **Private (no anonymous access)**.
+
+1. Use [Set-AzStorageBlobContent](/powershell/module/az.storage/set-azstorageblobcontent) to upload a text file to the container.
+
+1. Add the following blob index tag to the text file. For more information, see [Use blob index tags (preview) to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
+
+ > [!NOTE]
+ > Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions.
+
+ | Key | Value |
+ | | |
+ | Project | Cascade |
+
+1. Upload a second text file to the container.
+
+1. Add the following blob index tag to the second text file.
+
+ | Key | Value |
+ | | |
+ | Project | Baker |
+
+1. Initialize the following variables with the names you used. An end-to-end scripted sketch of this setup appears after this list.
+
+ ```azurepowershell
+ $resourceGroup = "<resourceGroup>"
+ $storageAccountName = "<storageAccountName>"
+ $containerName = "<containerName>"
+ $blobNameCascade = "<blobNameCascade>"
+ $blobNameBaker = "<blobNameBaker>"
+ ```
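+
+If you'd rather script this setup end to end, here's a minimal sketch. The location, file paths, and account settings are hypothetical placeholders, and it assumes the account configuration you choose supports blob index tags.
+
+```azurepowershell
+# Hypothetical setup sketch; substitute your own names and a supported account configuration
+$account = New-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccountName -Location "<location>" -SkuName Standard_LRS
+New-AzStorageContainer -Name $containerName -Context $account.Context -Permission Off
+# Upload each file and set its blob index tag in the same operation
+Set-AzStorageBlobContent -Container $containerName -Blob $blobNameCascade -File "<pathToLocalFile1>" -Tag @{"Project" = "Cascade"} -Context $account.Context
+Set-AzStorageBlobContent -Container $containerName -Blob $blobNameBaker -File "<pathToLocalFile2>" -Tag @{"Project" = "Baker"} -Context $account.Context
+```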
+
+## Step 5: Assign a role with a condition
+
+1. Initialize the [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role variables.
+
+ ```azurepowershell
+ $roleDefinitionName = "Storage Blob Data Reader"
+ $roleDefinitionId = "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
+ ```
+
+1. Initialize the scope for the resource group.
+
+ ```azurepowershell
+ $scope = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup"
+ ```
+
+1. Initialize the condition.
+
+ ```azurepowershell
+ $condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade'))"
+ ```
+
+ In PowerShell, if your condition includes a dollar sign ($), you must prefix it with a backtick (\`). For example, this condition uses dollar signs to delineate the tag key name.
+
+1. Initialize the condition version and description.
+
+ ```azurepowershell
+ $conditionVersion = "2.0"
+ $description = "Read access to blobs with the tag Project=Cascade"
+ ```
+
+1. Use [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) to assign the [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role with a condition to the user at a resource group scope.
+
+ ```azurepowershell
+ New-AzRoleAssignment -ObjectId $userObjectId -Scope $scope -RoleDefinitionId $roleDefinitionId -Description $description -Condition $condition -ConditionVersion $conditionVersion
+ ```
+
+ Here's an example of the output:
+
+ ```azurepowershell
+ RoleAssignmentId : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microso
+ ft.Authorization/roleAssignments/<roleAssignmentId>
+ Scope : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>
+ DisplayName : Chandra
+ SignInName : chandra@contoso.com
+ RoleDefinitionName : Storage Blob Data Reader
+ RoleDefinitionId : 2a2b9908-6ea1-4ae2-8e65-a410df84e7d1
+ ObjectId : <userObjectId>
+ ObjectType : User
+ CanDelegate : False
+ Description : Read access to blobs with the tag Project=Cascade
+ ConditionVersion : 2.0
+ Condition : ((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT
+ SubOperationMatches{'Blob.List'})) OR
+ (@Resource[Microsoft.Storage/storageAccounts/blobServices/co
+ ntainers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'))
+ ```
+
+## Step 6: (Optional) View the condition in the Azure portal
+
+1. In the Azure portal, open the resource group.
+
+1. Click **Access control (IAM)**.
+
+1. On the Role assignments tab, find the role assignment.
+
+1. In the **Condition** column, click **View/Edit** to view the condition.
+
+ ![Screenshot of Add role assignment condition in the Azure portal.](./media/shared/condition-view.png)
+
+## Step 7: Test the condition
+
+1. Open a new PowerShell window.
+
+1. Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in as Chandra.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+1. Initialize the following variables with the names you used.
+
+ ```azurepowershell
+ $storageAccountName = "<storageAccountName>"
+ $containerName = "<containerName>"
+ $blobNameBaker = "<blobNameBaker>"
+ $blobNameCascade = "<blobNameCascade>"
+ ```
+
+1. Use [New-AzStorageContext](/powershell/module/az.storage/new-azstoragecontext) to create a specific context to access your storage account more easily.
+
+ ```azurepowershell
+ $bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
+ ```
+
+1. Use [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) to try to read the file for the Baker project.
+
+ ```azurepowershell
+ Get-AzStorageBlob -Container $containerName -Blob $blobNameBaker -Context $bearerCtx
+ ```
+
+ Here's an example of the output. Notice that you **can't** read the file because of the condition you added.
+
+ ```azurepowershell
+ Get-AzStorageBlob : This request is not authorized to perform this operation using this permission. HTTP Status Code:
+ 403 - HTTP Error Message: This request is not authorized to perform this operation using this permission.
+ ErrorCode: AuthorizationPermissionMismatch
+ ErrorMessage: This request is not authorized to perform this operation using this permission.
+ RequestId: <requestId>
+ Time: Sat, 24 Apr 2021 13:26:25 GMT
+ At line:1 char:1
+ + Get-AzStorageBlob -Container $containerName -Blob $blobNameBaker -Con ...
+ + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ + CategoryInfo : CloseError: (:) [Get-AzStorageBlob], StorageException
+ + FullyQualifiedErrorId : StorageException,Microsoft.WindowsAzure.Commands.Storage.Blob.Cmdlet.GetAzureStorageBlob
+ Command
+ ```
+
+1. Read the file for the Cascade project.
+
+ ```azurepowershell
+ Get-AzStorageBlob -Container $containerName -Blob $blobNameCascade -Context $bearerCtx
+ ```
+
+ Here's an example of the output. Notice that you can read the file because it has the tag Project=Cascade.
+
+ ```azurepowershell
+ AccountName: <storageAccountName>, ContainerName: <containerName>
+
+    Name            BlobType  Length ContentType LastModified         AccessTier SnapshotTime
+    ----            --------  ------ ----------- ------------         ---------- ------------
+    CascadeFile.txt BlockBlob 7      text/plain  2021-04-24 05:35:24Z Hot
+ ```
+
+## Step 8: (Optional) Edit the condition
+
+1. In the other PowerShell window, use [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) to get the role assignment you added.
+
+ ```azurepowershell
+ $testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectId
+ ```
+
+1. Edit the condition.
+
+ ```azurepowershell
+ $condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade' OR @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Baker'))"
+ ```
+
+1. Initialize the condition and description.
+
+ ```azurepowershell
+ $testRa.Condition = $condition
+ $testRa.Description = "Read access to blobs with the tag Project=Cascade or Project=Baker"
+ ```
+
+1. Use [Set-AzRoleAssignment](/powershell/module/az.resources/set-azroleassignment) to update the condition for the role assignment.
+
+ ```azurepowershell
+ Set-AzRoleAssignment -InputObject $testRa -PassThru
+ ```
+
+ Here's an example of the output:
+
+ ```azurepowershell
+ RoleAssignmentId : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microso
+ ft.Authorization/roleAssignments/<roleAssignmentId>
+ Scope : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>
+ DisplayName : Chandra
+ SignInName : chandra@contoso.com
+ RoleDefinitionName : Storage Blob Data Reader
+ RoleDefinitionId : 2a2b9908-6ea1-4ae2-8e65-a410df84e7d1
+ ObjectId : <userObjectId>
+ ObjectType : User
+ CanDelegate : False
+ Description : Read access to blobs with the tag Project=Cascade or Project=Baker
+ ConditionVersion : 2.0
+ Condition : ((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT
+ SubOperationMatches{'Blob.List'})) OR
+ (@Resource[Microsoft.Storage/storageAccounts/blobServices/co
+ ntainers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade' OR @Resource[Microsoft.S
+ torage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
+ StringEquals 'Baker'))
+ ```
+
+## Step 9: Clean up resources
+
+1. Use [Remove-AzRoleAssignment](/powershell/module/az.resources/remove-azroleassignment) to remove the role assignment and condition you added.
+
+ ```azurepowershell
+ Remove-AzRoleAssignment -ObjectId $userObjectId -RoleDefinitionName $roleDefinitionName -ResourceGroupName $resourceGroup
+ ```
+
+1. Delete the storage account you created.
+
+1. Delete the user you created.
+
+## Next steps
+
+- [Example Azure role assignment conditions](storage-auth-abac-examples.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-security.md
+
+ Title: Security considerations for Azure role assignment conditions in Azure Storage (preview)
+
+description: Security considerations for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC).
+++++ Last updated : 09/01/2022+++++
+# Security considerations for Azure role assignment conditions in Azure Storage (preview)
+
+> [!IMPORTANT]
+> Azure ABAC and Azure role assignment conditions are currently in preview.
+> This preview version is provided without a service level agreement, and it is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+To fully secure resources using [Azure attribute-based access control (Azure ABAC)](storage-auth-abac.md), you must also protect the [attributes](storage-auth-abac-attributes.md) used in the [Azure role assignment conditions](../../role-based-access-control/conditions-format.md). For instance, if your condition is based on a file path, be aware that access can be compromised if the principal has unrestricted permission to rename a file path.
+
+This article describes security considerations that you should factor into your role assignment conditions.
+
+## Use of other authorization mechanisms
+
+Role assignment conditions are only evaluated when using Azure RBAC for authorization. These conditions can be bypassed if you allow access using alternate authorization methods:
+- [Shared Key](/rest/api/storageservices/authorize-with-shared-key) authorization
+- [Account shared access signature](/rest/api/storageservices/create-account-sas) (SAS)
+- [Service SAS](/rest/api/storageservices/create-service-sas)
+
+Similarly, conditions are not evaluated when access is granted using [access control lists (ACLs)](../blobs/data-lake-storage-access-control.md) in storage accounts with a [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS).
+
+You can prevent shared key, account-level SAS, and service-level SAS authorization by [disabling shared key authorization](../common/shared-key-authorization-prevent.md) for your storage account. Because user delegation SAS depends on Azure RBAC, role assignment conditions are evaluated when you use this method of authorization.
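+
+For example, here's a sketch of disabling shared key authorization with Azure PowerShell; the resource group and account names are placeholders.
+
+```azurepowershell
+# Placeholders; substitute your own resource group and storage account names
+Set-AzStorageAccount -ResourceGroupName "<resourceGroup>" -Name "<storageAccountName>" -AllowSharedKeyAccess $false
+```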
+
+> [!NOTE]
+> Role assignment conditions are not evaluated when access is granted using ACLs with Data Lake Storage Gen2. In this case, you must plan the scope of access so that it doesn't overlap with what's granted through ACLs.
+
+## Securing storage attributes used in conditions
+
+### Blob path
+
+When using blob path as a *@Resource* attribute for a condition, you should also prevent users from renaming a blob to get access to a file when using accounts that have a hierarchical namespace. For example, if you want to author a condition based on blob path, you should also restrict the user's access to the following actions:
+
+| Action | Description |
+| : | : |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action` | This action allows customers to rename a file using the Path Create API. |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | This action allows access to various file system and path operations. |
+
+### Blob index tags
+
+[Blob index tags](../blobs/storage-manage-find-blobs.md) are used as free-form attributes for conditions in storage. If you author any access conditions by using these tags, you must also protect the tags themselves. Specifically, the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` DataAction allows users to modify the tags on a storage object. You can restrict this action to prevent users from manipulating a tag key or value to gain access to unauthorized objects.
+
+In addition, if blob index tags are used in conditions, data may be vulnerable if the data and the associated index tags are updated in separate operations. You can use `@Request` conditions on blob write operations to require that index tags be set in the same update operation. This approach can help secure data from the instant it's written to storage.
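+
+As a hedged illustration, a `@Request` condition of the following shape (using a hypothetical `Project` tag, and modeled on the tag-based write examples in the companion examples article) denies writes that don't set the expected tag in the same request:
+
+```
+(
+ (
+  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ )
+ OR
+ (
+  @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ )
+)
+```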
+
+#### Tags on copied blobs
+
+By default, blob index tags are not copied from a source blob to the destination when you use the [Copy Blob](/rest/api/storageservices/Copy-Blob) API or any of its variants. To preserve the scope of access for a blob when you copy it, copy its tags as well.
+
+#### Tags on snapshots
+
+Tags on blob snapshots cannot be modified, so you must update the tags on a blob before taking a snapshot. If you modify the tags on a base blob, the tags on its snapshots will continue to have their previous values.
+
+If a tag on a base blob is modified after a snapshot is taken, the scope of access may be different for the base blob and the snapshot.
+
+#### Tags on blob versions
+
+Blob index tags aren't copied when a blob version is created through the [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list) or [Copy Blob](/rest/api/storageservices/Copy-Blob) APIs. You can specify tags through the header for these APIs.
+
+Tags can be set individually on a current base blob and on each blob version. When you modify tags on a base blob, the tags on previous versions are not updated. If you want to change the scope of access for a blob and all its versions using tags, you must update the tags on each version.
+
+#### Querying and filtering limitations for versions and snapshots
+
+When using tags to query and filter blobs in a container, only the base blobs are included in the response. Blob versions or snapshots with the requested keys and values aren't included.
+
+## Roles and permissions
+
+If you're using role assignment conditions for [Azure built-in roles](../../role-based-access-control/built-in-roles.md), you should carefully review all the permissions that the role grants to a principal.
+
+### Inherited role assignments
+
+Role assignments can be configured for a management group, subscription, resource group, storage account, or a container, and are inherited at each level in the stated order. Azure RBAC has an additive model, so the effective permissions are the sum of role assignments at each level. If a principal has the same permission assigned to them through multiple role assignments, then access for an operation using that permission is evaluated separately for each assignment at every level.
+
+Since conditions are implemented as conditions on role assignments, any unconditional role assignment can allow users to bypass the condition. Let's say you assign the *Storage Blob Data Contributor* role to a user at both the storage account scope and the subscription scope, but add a condition only to the assignment for the storage account. In this case, the user will have unrestricted access to the storage account through the role assignment at the subscription level.
+
+That's why you should apply conditions consistently for all role assignments across a resource hierarchy.
+
+## Other considerations
+
+### Condition operations that write blobs
+
+Many operations that write blobs require either the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` or the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` permission. Built-in roles, such as [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) and [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor), grant both permissions to a security principal.
+
+When you define a role assignment condition on these roles, you should use identical conditions on both these permissions to ensure consistent access restrictions for write operations.
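+
+For example, the following sketch applies one path-based restriction to both permissions; the `uploads/` prefix is a placeholder:
+
+```
+(
+ (
+  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+  AND
+  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
+ )
+ OR
+ (
+  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringStartsWith 'uploads/'
+ )
+)
+```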
+
+### Behavior for Copy Blob and Copy Blob from URL
+
+For the [Copy Blob](/rest/api/storageservices/Copy-Blob) and [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) operations, `@Request` conditions using blob path as an attribute on the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` action and its suboperations are evaluated only for the destination blob.
+
+For conditions on the source blob, `@Resource` conditions on the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read` action are evaluated.
+
+## See also
+
+- [Authorize access to blobs using Azure role assignment conditions (preview)](storage-auth-abac.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [What is Azure attribute-based access control (Azure ABAC)? (preview)](../../role-based-access-control/conditions-overview.md)
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
+
+ Title: Authorize access to blobs using Azure role assignment conditions (preview)
+
+description: Authorize access to Azure blobs using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Storage attributes.
+++++ Last updated : 09/01/2022+++++
+# Authorize access to blobs using Azure role assignment conditions (preview)
+
+> [!IMPORTANT]
+> Azure ABAC and Azure role assignment conditions are currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Attribute-based access control (ABAC) is an authorization strategy that defines access levels based on attributes associated with security principals, resources, requests, and the environment. With ABAC, you can grant a security principal access to a resource based on a condition expressed as a predicate using these attributes.
+
+Azure ABAC builds on Azure role-based access control (Azure RBAC) by adding [conditions to Azure role assignments](../../role-based-access-control/conditions-overview.md). This preview includes support for role assignment conditions on Azure Blob Storage and Azure Data Lake Storage Gen2. It enables you to author role-assignment conditions based on principal, resource, and request attributes.
+
+## Overview of conditions in Azure Storage
+
+Azure Storage enables the [use of Azure Active Directory](../common/authorize-data-access.md) (Azure AD) to authorize requests to blob, queue, and table resources using Azure RBAC. Azure RBAC helps you manage access to resources by defining who has access to resources and what they can do with those resources, using role definitions and role assignments. Azure Storage defines a set of Azure [built-in roles](../../role-based-access-control/built-in-roles.md#storage) that encompass common sets of permissions used to access blob, queue, and table data. You can also define custom roles with a select set of permissions. Azure Storage supports role assignments for storage accounts or blob containers.
+
+Azure ABAC builds on Azure RBAC by adding role assignment conditions in the context of specific actions. A *role assignment condition* is an additional check that is evaluated when the action on the storage resource is being authorized. This condition is expressed as a predicate using attributes associated with any of the following:
+- Security principal that is requesting authorization
+- Resource to which access is being requested
+- Parameters of the request
+- Environment from which the request originates
+
+The benefits of using role assignment conditions are:
+- **Enable finer-grained access to resources** - For example, if you want to grant a user read access to blobs in your storage accounts only if the blobs are tagged as Project=Sierra, you can use conditions on the read action using tags as an attribute.
+- **Reduce the number of role assignments you have to create and manage** - You can do this by using a generalized role assignment for a security group, and then restricting the access for individual members of the group using a condition that matches attributes of a principal with attributes of a specific resource being accessed (such as a blob or a container).
+- **Express access control rules in terms of attributes with business meaning** - For example, you can express your conditions using attributes that represent a project name, business application, organization function, or classification level.
+
+The tradeoff of using conditions is that you need a structured and consistent taxonomy when using attributes across your organization. Attributes must be protected to prevent access from being compromised. Also, conditions must be carefully designed and reviewed for their effect.
+
+Role-assignment conditions in Azure Storage are supported for blobs. You can use conditions with accounts that have the [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS) feature enabled on them. Conditions are currently not supported for queue, table, or file resources in Azure Storage.
++
+## Supported attributes and operations
+You can configure conditions on role assignments for [DataActions](../../role-based-access-control/role-definitions.md#dataactions) to achieve these goals. You can use conditions with a [custom role](../../role-based-access-control/custom-roles.md) or select built-in roles. Note that conditions aren't supported for management [Actions](../../role-based-access-control/role-definitions.md#actions) through the [Storage resource provider](/rest/api/storagerp).
+
+In this preview, you can add conditions to built-in roles or custom roles. The built-in roles on which you can use role-assignment conditions in this preview include:
+- [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader)
+- [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
+- [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
+- [Storage Queue Data Contributor](../../role-based-access-control/built-in-roles.md#storage-queue-data-contributor)
+- [Storage Queue Data Message Processor](../../role-based-access-control/built-in-roles.md#storage-queue-data-message-processor)
+- [Storage Queue Data Message Sender](../../role-based-access-control/built-in-roles.md#storage-queue-data-message-sender)
+- [Storage Queue Data Reader](../../role-based-access-control/built-in-roles.md#storage-queue-data-reader)
+
+You can use conditions with custom roles so long as the role includes [actions that support conditions](storage-auth-abac-attributes.md#azure-blob-storage-actions-and-suboperations).
+
+If you're working with conditions based on [blob index tags](../blobs/storage-manage-find-blobs.md), you should use the *Storage Blob Data Owner* role, because permissions for tag operations are included in that role.
+
+> [!NOTE]
+> Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which use a hierarchical namespace. You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.
+
+The [Azure role assignment condition format](../../role-based-access-control/conditions-format.md) allows use of `@Principal`, `@Resource` or `@Request` attributes in the conditions. A `@Principal` attribute is a custom security attribute on a principal, such as a user, enterprise application (service principal), or managed identity. A `@Resource` attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage account, a container, or a blob. A `@Request` attribute refers to an attribute or parameter included in a storage operation request.
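+
+As an illustration, the following sketch, with a hypothetical tag name and value, uses a `@Resource` attribute to limit read access to blobs tagged Project=Sierra:
+
+```
+(
+ (
+  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
+ )
+ OR
+ (
+  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Sierra'
+ )
+)
+```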
+
+Azure RBAC currently supports 2,000 role assignments in a subscription. If you need to create thousands of Azure role assignments, you may encounter this limit. Managing hundreds or thousands of role assignments can be difficult. In some cases, you can use conditions to reduce the number of role assignments on your storage account and make them easier to manage. You can [scale the management of role assignments](../../role-based-access-control/conditions-custom-security-attributes-example.md) using conditions and Azure AD custom security attributes for principals.
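+
+For instance, a single role assignment with a condition like the following sketch, where the attribute set `Engineering` and the attribute `Project` are hypothetical, can replace many narrower assignments by matching a principal's custom security attribute against a blob index tag:
+
+```
+(
+ (
+  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
+ )
+ OR
+ (
+  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project]
+ )
+)
+```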
++
+## Next steps
+
+- [Prerequisites for Azure role assignment conditions](../../role-based-access-control/conditions-prerequisites.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal](storage-auth-abac-portal.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Example Azure role assignment conditions (preview)](storage-auth-abac-examples.md)
+- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)
++
+## See also
+
+- [What is Azure attribute-based access control (Azure ABAC)? (preview)](../../role-based-access-control/conditions-overview.md)
+- [FAQ for Azure role assignment conditions (preview)](../../role-based-access-control/conditions-faq.md)
+- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)
+- [Scale the management of Azure role assignments by using conditions and custom security attributes (preview)](../../role-based-access-control/conditions-custom-security-attributes-example.md)
+- [Security considerations for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-security.md)
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
This section describes known issues and conditions in the current release of the
- The `url` property of the log file is currently always empty. - The `LastConsumable` property of the segments.json file does not list the very first segment that the change feed finalizes. This issue occurs only after the first segment is finalized. All subsequent segments after the first hour are accurately captured in the `LastConsumable` property.-- You currently cannot see the **$blobchangefeed** container when you call ListContainers API and the container does not show up on Azure portal or Storage Explorer. You can view the contents by calling the ListBlobs API on the $blobchangefeed container directly.
+- You currently cannot see the **$blobchangefeed** container when you call the ListContainers API. You can view the contents by calling the ListBlobs API on the $blobchangefeed container directly.
- Storage accounts that have previously initiated an [account failover](../common/storage-disaster-recovery-guidance.md) may have issues with the log file not appearing. Any future account failovers may also impact the log file. ## Feature support
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 08/18/2022 Last updated : 09/06/2022
The following table describes whether a feature is supported in a standard gener
| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705;<sup>2</sup> | &#x2705; | | [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
The following table describes whether a feature is supported in a premium block
| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705;<sup>2</sup> | &#x2705; | | [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
# Use the Azurite emulator for local Azure Storage development
-The Azurite open-source emulator provides a free local environment for testing your Azure blob, queue storage, and table storage applications. When you're satisfied with how your application is working locally, switch to using an Azure Storage account in the cloud. The emulator provides cross-platform support on Windows, Linux, and macOS.
+The Azurite open-source emulator provides a free local environment for testing your Azure Blob, Queue Storage, and Table Storage applications. When you're satisfied with how your application is working locally, switch to using an Azure Storage account in the cloud. The emulator provides cross-platform support on Windows, Linux, and macOS.
Azurite is the future storage emulator platform. Azurite supersedes the [Azure Storage Emulator](storage-use-emulator.md). Azurite will continue to be updated to support the latest versions of Azure Storage APIs.
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/automation-powershell.md
To check that the alert was set up properly, we can add `throw "Testing the aler
First we'll need a new **Automation Account**. An Automation Account is a container that can host multiple runbooks.
-The procedure is [here](../automation/quickstarts/create-account-portal.md). Here we can select to use a system-assigned managed identity directly in the `advanced` tab.
+The procedure is described [here](/azure/automation/quickstarts/create-azure-automation-account-portal). There, we can choose to use a system-assigned managed identity directly on the **Advanced** tab.
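+
+Inside a runbook, a sketch like the following, assuming the Az modules are available in the Automation account, signs in with that system-assigned managed identity:
+
+```powershell
+# Minimal sketch: authenticate from a runbook by using the Automation
+# account's system-assigned managed identity.
+Connect-AzAccount -Identity | Out-Null
+
+# Confirm which identity and subscription the runbook is using.
+Get-AzContext
+```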
For reference, the Automation team has a [good tutorial](../automation/learn/powershell-runbook-managed-identity.md) to get started on PowerShell runbooks.
stream-analytics Stream Analytics Stream Analytics Query Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-stream-analytics-query-patterns.md
For more information, see [**WITH** clause](/stream-analytics-query/with-azure-s
## Simple pass-through query
-A simple pass-through query can be used to copy the input stream data into the output. For example, if a stream of data containing real-time vehicle information needs to be saved in a SQL database for letter analysis, a simple pass-through query will do the job.
+A simple pass-through query can be used to copy the input stream data into the output. For example, if a stream of data containing real-time vehicle information needs to be saved in a SQL database for later analysis, a simple pass-through query will do the job.
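+
+A minimal sketch, assuming an input alias `Input` and an output alias `Output` defined on the job:
+
+```SQL
+SELECT
+    *
+INTO
+    Output
+FROM
+    Input
+```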
**Input**:
synapse-analytics Tutorial Sql Pool Model Scoring Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-sql-pool-model-scoring-wizard.md
-# Tutorial: Machine learning model scoring wizard (preview) for dedicated SQL pools
+# Tutorial: Machine learning model scoring wizard for dedicated SQL pools
Learn how to easily enrich your data in dedicated SQL pools with predictive machine learning models. The models that your data scientists create are now easily accessible to data professionals for predictive analytics. A data professional in Azure Synapse Analytics can simply select a model from the Azure Machine Learning model registry for deployment in Azure Synapse SQL pools and launch predictions to enrich the data.
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
To create your data warehouse solution, you can choose from different kinds of i
| ![Jinfonet](./media/business-intelligence/jinfonet_logo.png) |**Jinfonet JReport**<br>JReport is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Product page](https://www.logianalytics.com/jreport/)<br> | | ![LogiAnalytics](./media/business-intelligence/logianalytics_logo.png) |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Product page](https://www.logianalytics.com/)<br>| | ![Looker](./media/business-intelligence/looker_logo.png) |**Looker BI**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Product page](https://looker.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> |
-| ![Microstrategy](./media/business-intelligence/microstrategy_logo.png) |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensures you have everything you need to extend access to analytics across every team.|[Product page](https://www.microstrategy.com/us/product/analytics)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_enterprise_platform_vm)<br> |
+| ![Microstrategy](./media/business-intelligence/microstrategy_logo.png) |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensures you have everything you need to extend access to analytics across every team.|[Product page](https://www.microstrategy.com/us/product/analytics)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_cloud_environment_mce)<br> |
| ![Mode Analytics](./media/business-intelligence/mode-logo.png) |**Mode**<br>Mode is a modern analytics and BI solution that helps teams make decisions through unreasonably fast and unexpectedly delightful data analysis. Data teams move faster through a preferred workflow that combines SQL, Python, R, and visual analysis, while stakeholders work alongside them exploring and sharing data on their own. With data more accessible to everyone, we shorten the distance from questions to answers and help businesses make better decisions, faster.|[Product page](https://mode.com/)<br> |
-| ![Pyramid Analytics](./media/business-intelligence/pyramid-logo.png) |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help ΓÇö on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Non-technical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Product page](https://www.pyramidanalytics.com/resources/analyst-reports/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020v4) |
+| ![Pyramid Analytics](./media/business-intelligence/pyramid-logo.png) |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help ΓÇö on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Non-technical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Product page](https://www.pyramidanalytics.com/resources/analyst-reports/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020-25-102) |
| ![Qlik](./media/business-intelligence/qlik_logo.png) |**Qlik Sense Enterprise**<br>Drive insight discovery with the data visualization app that anyone can use. With Qlik Sense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions. |[Product page](https://www.qlik.com/us/products/qlik-sense/enterprise)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik-sense) | | ![SAS](./media/business-intelligence/sas-logo.jpg) |**SAS® Viya®**<br>SAS® Viya® is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone – from data scientists to business users – to collaborate and realize innovative results faster. Using open source or SAS models, SAS® Viya® can be accessed through APIs or interactive interfaces to transform raw data into actions. |[Product page](https://www.sas.com/microsoft)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br>| | ![SiSense](./media/business-intelligence/sisense_logo.png) |**SiSense**<br>SiSense is a full-stack Business Intelligence software that comes with tools that a business needs to analyze and visualize data: a high-performance analytical database, the ability to join multiple sources, simple data extraction (ETL), and web-based data visualization. Start to analyze and visualize large data sets with SiSense BI and Analytics today. |[Product page](https://www.sisense.com/product/)<br> |
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
You can review the status of running notebooks and jobs in Apache Spark pools if
You can review logs and cancel running jobs and pipelines if you're a Synapse Compute Operator at the workspace or for a specific Spark pool or pipeline.
+### Debug pipelines
+
+You can review and make changes in pipelines as a Synapse User, but to debug them you also need the Synapse Credential User role.
+ ### Publish and save your code You can publish new or updated code artifacts to the service if you're a Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator.
synapse-analytics Apache Spark What Is Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md
# What is Delta Lake
-Azure Synapse Analytics is compatible with Linux Foundation Delta Lake. Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.
+Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.
-The current version of Delta Lake included with Azure Synapse has language support for Scala, PySpark, and .NET. There are links at the bottom of the page to more detailed examples and documentation.
+The current version of Delta Lake included with Azure Synapse has language support for Scala, PySpark, and .NET and is compatible with Linux Foundation Delta Lake. There are links at the bottom of the page to more detailed examples and documentation.
## Key features
synapse-analytics Sql Data Warehouse Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-source-control-integration.md
This tutorial outlines how to integrate your SQL Server Data Tools (SSDT) databa
## Set up and connect to Azure DevOps
-1. In your Azure DevOps Organization, create a project that will host your SSDT database project via an Azure Repo repository.
+1. In your Azure DevOps Organization, create a project that will host your SSDT database project via an Azure Repos repository.
![Create Project](./media/sql-data-warehouse-source-control-integration/1-create-project-azure-devops.png "Create Project")
For more information about connecting projects using Visual Studio, see the [Con
![Commit](./media/sql-data-warehouse-source-control-integration/6.5-commit-push-changes.png "Commit")
-4. Now that you have the changes committed locally in the cloned repository, sync and push your changes to your Azure Repo repository in your Azure DevOps project.
+4. Now that you have the changes committed locally in the cloned repository, sync and push your changes to your Azure Repos repository in your Azure DevOps project.
![Sync and Push - staging](./media/sql-data-warehouse-source-control-integration/7-commit-push-changes.png "Sync and push - staging")
For more information about connecting projects using Visual Studio, see the [Con
## Validation
-1. Verify changes have been pushed to your Azure Repo by updating a table column in your database project from Visual Studio SQL Server Data Tools (SSDT).
+1. Verify changes have been pushed to your Azure Repos repository by updating a table column in your database project from Visual Studio SQL Server Data Tools (SSDT).
![Validate update column](./media/sql-data-warehouse-source-control-integration/8-validation-update-column.png "Validate update column")
For more information about connecting projects using Visual Studio, see the [Con
![Push changes](./media/sql-data-warehouse-source-control-integration/9-push-column-change.png "Push changes")
-3. Verify the change has been pushed in your Azure Repo repository.
+3. Verify the change has been pushed in your Azure Repos repository.
![Verify](./media/sql-data-warehouse-source-control-integration/10-verify-column-change-pushed.png "Verify changes")
-4. (**Optional**) Use Schema Compare and update the changes to your target dedicated SQL pool using SSDT to ensure the object definitions in your Azure Repo repository and local repository reflect your dedicated SQL pool.
+4. (**Optional**) Use Schema Compare and update the changes to your target dedicated SQL pool using SSDT to ensure the object definitions in your Azure Repos repository and local repository reflect your dedicated SQL pool.
## Next steps -- [Developing for dedicated SQL pool](sql-data-warehouse-overview-develop.md)
+- [Developing for dedicated SQL pool](sql-data-warehouse-overview-develop.md)
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-delta-lake-format.md
Title: Query Delta Lake format using serverless SQL pool
-description: In this article, you'll learn how to query files stored in Apache Delta Lake format using serverless SQL pool.
+description: In this article, you'll learn how to query files stored in Delta Lake format using serverless SQL pool.
# Query Delta Lake files using serverless SQL pool in Azure Synapse Analytics
-In this article, you'll learn how to write a query using serverless Synapse SQL pool to read Apache Delta Lake files.
+In this article, you'll learn how to write a query using serverless Synapse SQL pool to read Delta Lake files.
Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads. The serverless SQL pool in Synapse workspace enables you to read the data stored in Delta Lake format, and serve it to reporting tools.
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-notebook-activity.md
You can create a Synapse notebook activity directly from the Synapse pipeline ca
Drag and drop **Synapse notebook** under **Activities** onto the Synapse pipeline canvas. Select on the Synapse notebook activity box and config the notebook content for current activity in the **settings**. You can select an existing notebook from the current workspace or add a new one.
-You can also select an Apache Spark pool in the settings. It should be noted that the Apache spark pool set here will replace the Apache spark pool used in the notebook. If Apache spark pool is not selected in the settings of notebook content for current activity, the Apache spark pool selected in that notebook will be used to run.
+(Optional) You can also reconfigure the Spark pool, executor size, dynamic executor allocation, minimum executors, maximum executors, and driver size in the settings. Note that the settings reconfigured here replace the settings of the configure session in the notebook. If nothing is set in the settings of the current notebook activity, it runs with the settings of the configure session in that notebook.
-![screenshot-showing-create-notebook-activity](./media/synapse-notebook-activity/create-synapse-notebook-activity.png)
+> [!div class="mx-imgBorder"]
+> ![screenshot-showing-create-notebook-activity](./media/synapse-notebook-activity/create-synapse-notebook-activity.png)
++
+| Property | Description | Required |
+| -- | -- | -- |
+|Spark pool| Reference to the Spark pool. You can select an Apache Spark pool from the list. If this setting is empty, the activity runs in the Spark pool of the notebook itself.| No |
+|Executor size| Number of cores and memory to be used for executors allocated in the specified Apache Spark pool for the session.| No |
+|Dynamically allocate executors| This setting maps to the dynamic allocation property in Spark configuration for Spark Application executors allocation.| No |
+|Min executors| Min number of executors to be allocated in the specified Spark pool for the job.| No |
+|Max executors| Max number of executors to be allocated in the specified Spark pool for the job.| No |
+|Driver size| Number of cores and memory to be used for the driver in the specified Apache Spark pool for the job.| No |
> [!NOTE] > The execution of parallel Spark notebooks in Azure Synapse pipelines is queued and executed in a FIFO manner. Jobs are ordered in the queue by submission time, and a job in the queue expires after 3 days. Note that the queue for notebooks only works in Synapse pipelines.
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
Title: Automatic VM Guest Patching for Azure VMs description: Learn how to automatically patch virtual machines in Azure.-+ Last updated 10/20/2021-++ # Automatic VM guest patching for Azure VMs
As a new rollout is triggered every month, a VM will receive at least one patch
| Publisher | OS Offer | Sku | |-||--|
+| Canonical | UbuntuServer | 16.04-LTS |
| Canonical | UbuntuServer | 18.04-LTS |
+| Canonical | UbuntuServer | 18.04-LTS-Gen2 |
| Canonical | 0001-com-ubuntu-pro-bionic | pro-18_04-lts | | Canonical | 0001-com-ubuntu-server-focal | 20_04-lts | | Canonical | 0001-com-ubuntu-server-focal | 20_04-lts-gen2 | | Canonical | 0001-com-ubuntu-pro-focal | pro-20_04-lts |
-| MicrosoftCBLMariner | CBL-Mariner | 1-gen2 |
-| MicrosoftCBLMariner | CBL-Mariner | CBL-Mariner-2-gen2 |
+| microsoftcblmariner | cbl-mariner | cbl-mariner-1 |
+| microsoftcblmariner | cbl-mariner | 1-gen2 |
+| microsoftcblmariner | cbl-mariner | cbl-mariner-2 |
+| microsoftcblmariner | cbl-mariner | cbl-mariner-2-gen2 |
+| microsoft-aks | aks | aks-engine-ubuntu-1804-202112 |
| Redhat | RHEL | 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7_9, 7-RAW, 7-LVM | | Redhat | RHEL | 8, 8.1, 8.2, 8_3, 8_4, 8_5, 8-LVM | | Redhat | RHEL-RAW | 8-raw |
-| OpenLogic | CentOS | 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7_8, 7_9, 7-LVM |
-| OpenLogic | CentOS | 8.0, 8_1, 8_2, 8_3, 8-lvm |
+| OpenLogic | CentOS | 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7_8, 7_9, 7_9-gen2 |
+| OpenLogic | centos-lvm | 7-lvm |
+| OpenLogic | CentOS | 8.0, 8_1, 8_2, 8_3, 8_4, 8_5 |
+| OpenLogic | centos-lvm | 8-lvm |
| SUSE | sles-12-sp5 | gen1, gen2 | | SUSE | sles-15-sp2 | gen1, gen2 | | MicrosoftWindowsServer | WindowsServer | 2008-R2-SP1 | | MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter | | MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2016-datacenter-gensecond |
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-Server-Core |
+| MicrosoftWindowsServer | WindowsServer | 2016-datacenter-smalldisk |
+| MicrosoftWindowsServer | WindowsServer | 2016-datacenter-with-containers |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gensecond |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk-g2 |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
+| MicrosoftWindowsServer | WindowsServer | 2019-datacenter-gensecond |
+| MicrosoftWindowsServer | WindowsServer | 2019-datacenter-smalldisk |
+| MicrosoftWindowsServer | WindowsServer | 2019-datacenter-smalldisk-g2 |
+| MicrosoftWindowsServer | WindowsServer | 2019-datacenter-with-containers |
| MicrosoftWindowsServer | WindowsServer | 2022-datacenter | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter-g2 | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter-core |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-core-g2 |
| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition-core | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition-core-smalldisk | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition-smalldisk |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-smalldisk-g2 |
## Patch orchestration modes VMs on Azure now support the following patch orchestration modes:
virtual-machines Copy Files To Vm Using Scp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/copy-files-to-vm-using-scp.md
As examples, we move an Azure configuration file up to a VM and pull down a log
SCP uses SSH for the transport layer. SSH handles the authentication on the destination host, and it moves the file in an encrypted tunnel provided by default with SSH. For SSH authentication, usernames and passwords can be used. However, SSH public and private key authentication is recommended as a security best practice. Once SSH has authenticated the connection, SCP then begins copying the file. Using a properly configured `~/.ssh/config` and SSH public and private keys, the SCP connection can be established by just using a server name (or IP address). If you only have one SSH key, SCP looks for it in the `~/.ssh/` directory, and uses it by default to log in to the VM.
-For more information on configuring your `~/.ssh/config` and SSH public and private keys, see [Create SSH keys](/linux/mac-create-ssh-keys.md).
+For more information on configuring your `~/.ssh/config` and SSH public and private keys, see [Create SSH keys](/azure/virtual-machines/linux/mac-create-ssh-keys).
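+
+As an illustration, a `~/.ssh/config` entry along these lines, where the host alias, FQDN, and key path are hypothetical, lets `scp` connect using just the short name `myvm`:
+
+```
+# Hypothetical example entry in ~/.ssh/config
+Host myvm
+    HostName myserver.eastus.cloudapp.azure.com
+    User azureuser
+    IdentityFile ~/.ssh/id_rsa
+```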
## SCP a file to a VM For the first example, we copy an Azure configuration file up to a VM that is used to deploy automation. Because this file contains Azure API credentials, which include secrets, security is important. The encrypted tunnel provided by SSH protects the contents of the file.
-The following command copies the local *.azure/config* file to an Azure VM with FQDN *myserver.eastus.cloudapp.azure.com*. If you don't have an [FQDN set](/create-fqdn.md), you can also use the IP address of the VM. The admin user name on the Azure VM is *azureuser*. The file is targeted to the */home/azureuser/* directory. Substitute your own values in this command.
+The following command copies the local *.azure/config* file to an Azure VM with FQDN *myserver.eastus.cloudapp.azure.com*. If you don't have an [FQDN set](/azure/virtual-machines/create-fqdn), you can also use the IP address of the VM. The admin user name on the Azure VM is *azureuser*. The file is targeted to the */home/azureuser/* directory. Substitute your own values in this command.
```bash scp ~/.azure/config azureuser@myserver.eastus.cloudapp.com:/home/azureuser/config
The `-r` flag instructs SCP to recursively copy the files and directories from t
## Next steps
-* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the VMAccess Extension](/extensions/vmaccess.md?toc=/azure/virtual-machines/linux/toc.json)
+* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the VMAccess Extension](/azure/virtual-machines/extensions/vmaccess)
virtual-machines Dasv5 Dadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dasv5-dadsv5-series.md
Dasv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
Dadsv5-series virtual machines support Standard SSD, Standard HDD, and Premium S
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption-overview.md
Title: Overview of managed disk encryption options description: Overview of managed disk encryption options Previously updated : 08/12/2022 Last updated : 09/06/2022
There are several types of encryption available for your managed disks, includin
- **Azure Disk Encryption** helps protect and safeguard your data to meet your organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs by using the [DM-Crypt](https://wikipedia.org/wiki/Dm-crypt) feature of Linux or the [BitLocker](https://wikipedia.org/wiki/BitLocker) feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets. For full details, see [Azure Disk Encryption for Linux VMs](./linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md). -- **Server-Side Encryption** (also referred to as encryption-at-rest or Azure Storage encryption) automatically encrypts data stored on Azure managed disks (OS and data disks) when persisting on the Storage Clusters. For full details, see [Server-side encryption of Azure Disk Storage](./disk-encryption.md).
+- **Azure Disk Storage Server-Side Encryption** (also referred to as encryption-at-rest or Azure Storage encryption) automatically encrypts data stored on Azure managed disks (OS and data disks) when persisting on the Storage Clusters. When configured with a Disk Encryption Set (DES), it supports customer-managed keys as well. For full details, see [Server-side encryption of Azure Disk Storage](./disk-encryption.md).
- **Encryption at host** ensures that data stored on the VM host hosting your VM is encrypted at rest and flows encrypted to the Storage clusters. For full details, see [Encryption at host - End-to-end encryption for your VM data](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
Encryption is part of a layered approach to security and should be used with oth
## Comparison
-Here's a comparison of SSE, ADE, encryption at host, and Confidential disk encryption.
+Here's a comparison of Disk Storage SSE, ADE, encryption at host, and Confidential disk encryption.
| | Encryption at rest (OS and data disks) | Temp disk encryption | Encryption of caches | Data flows encrypted between Compute and Storage | Customer control of keys | Does not use your VM's CPU | Works for custom images | Enhanced Key Protection | Microsoft Defender for Cloud disk encryption status | |--|--|--|--|--|--|--|--|--|--|
-| **Encryption at rest with platform-managed key (SSE+PMK)** | &#x2705; | &#10060; | &#10060; | &#10060; | &#10060; | &#x2705; | &#x2705; | &#10060; | Unhealthy, not applicable if exempt |
-| **Encryption at rest with customer-managed key (SSE+CMK)** | &#x2705; | &#10060; | &#10060; | &#10060; | &#x2705; | &#x2705; | &#x2705; | &#10060; | Unhealthy, not applicable if exempt |
+| **Azure Disk Storage Server-Side Encryption at rest** | &#x2705; | &#10060; | &#10060; | &#10060; | &#x2705; When configured with DES | &#x2705; | &#x2705; | &#10060; | Unhealthy, not applicable if exempt |
| **Azure Disk Encryption** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |&#10060; | &#10060; Does not work for custom Linux images | &#10060; | Healthy | | **Encryption at Host** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#10060; | Unhealthy, not applicable if exempt | | **Confidential disk encryption** | &#x2705; For the OS disk only | &#10060; | &#x2705; For the OS disk only | &#x2705; For the OS disk only| &#x2705; For the OS disk only |&#10060; | &#x2705; | &#x2705; | Unhealthy, not applicable if exempt |
virtual-machines Dplsv5 Dpldsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dplsv5-dpldsv5-series.md
Dplsv5-series virtual machines feature the Ampere® Altra® Arm-based processor
- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
-||||||||
+||||||||||
| Standard_D2pls_v5 | 2 | 4 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 | | Standard_D4pls_v5 | 4 | 8 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_D8pls_v5 | 8 | 16 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 |
Dpldsv5-series virtual machines feature the Ampere® Altra® Arm-based processor
- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
-||||||||
+|||||||||||
| Standard_D2plds_v5 | 2 | 4 | 75 | 4 | 9375/125 | 3750/85 | 10000/1200 | 2 | 12500 | | Standard_D4plds_v5 | 4 | 8 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_D8plds_v5 | 8 | 16 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 |
virtual-machines Dpsv5 Dpdsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dpsv5-dpdsv5-series.md
Dpsv5-series virtual machines feature the Ampere® Altra® Arm-based processor o
- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
-||||||||
+||||||||||
| Standard_D2ps_v5 | 2 | 8 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 | | Standard_D4ps_v5 | 4 | 16 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_D8ps_v5 | 8 | 32 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 |
Dpdsv5-series virtual machines feature the Ampere® Altra® Arm-based processor
- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
-||||||||
+|||||||||||
| Standard_D2pds_v5 | 2 | 8 | 75 | 4 | 9375/125 | 3750/85 | 10000/1200 | 2 | 12500 | | Standard_D4pds_v5 | 4 | 16 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_D8pds_v5 | 8 | 32 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 |
virtual-machines Easv5 Eadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/easv5-eadsv5-series.md
Easv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
Eadsv5-series virtual machines support Standard SSD, Standard HDD, and Premium S
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
To use the user-assigned identity on the target VM or virtual machine scale set,
> The `managedIdentity` property *must not* be used in conjunction with the `storageAccountName` or `storageAccountKey` property. ## Template deployment
-You can deploy Azure VM extensions by using Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Custom Script Extension during the template's deployment. You can find a sample template that includes the Custom Script Extension on [GitHub](https://github.com/Microsoft/dotnet-core-sample-templates/tree/master/dotnet-core-music-linux).
+You can deploy Azure VM extensions by using Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Custom Script Extension during the template's deployment. You can find a sample template that includes the Custom Script Extension on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/b1908e74259da56a92800cace97350af1f1fc32b/mongodb-on-ubuntu/azuredeploy.json/).
```json
virtual-machines Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/troubleshoot.md
Your VM is probably missing the Baltimore CyberTrust Root certificate in "Truste
**Solution** Open the certificates console with certmgr.msc, and check if the certificate is there.
-If it's not, please install it from https://cacert.omniroot.com/bc2025.crt
Another possible issue is that the certificate chain is broken by a third party SSL Inspection tool, like ZScaler. That kind of tool should be configured to bypass SSL inspection.----
virtual-machines Tutorial Devops Azure Pipelines Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-devops-azure-pipelines-classic.md
Using **Continuous-delivery**, you can configure rolling updates to your virtual
## Resources - [Deploy to Azure virtual machines with Azure DevOps](../../devops-project/azure-devops-project-vms.md)-- [Deploy to Azure virtual machine scale set](/azure/devops/pipelines/apps/cd/azure/deploy-azure-scaleset.md)
+- [Deploy to Azure virtual machine scale set](/azure/devops/pipelines/apps/cd/azure/deploy-azure-scaleset)
## Related articles
virtual-machines Mv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/mv2-series.md
Mv2-series VM’s feature Intel® Hyper-Threading Technology
[Write Accelerator](./how-to-enable-write-accelerator.md): Supported<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): No Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> |Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs | Expected network bandwidth (Mbps) |
virtual-machines Sizes B Series Burstable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-b-series-burstable.md
The B-series comes in the following VM sizes:
[Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported**<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> *B-series VMs are burstable and thus ACU numbers will vary depending on workloads and core usage.<br>
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- September 6, 2022: Add managed identity support for the Pacemaker fence agent in [Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure](high-availability-guide-suse-pacemaker.md) and [Setting up Pacemaker on RHEL in Azure](high-availability-guide-rhel-pacemaker.md).
- August 22, 2022: Release of cost optimization scenario [Deploy PAS and AAS with SAP NetWeaver HA cluster](high-availability-guide-rhel-with-dialog-instance.md) on RHEL.
- August 09, 2022: Release of scenario [HA for SAP ASCS/ERS with NFS simple mount](./high-availability-guide-suse-nfs-simple-mount.md) on SLES 15 for SAP Applications.
- July 18, 2022: Clarify statement around Pacemaker support on Oracle Linux in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md)
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
Make sure to assign the custom role to the service principal at all VM (cluster node) scopes.
>[!IMPORTANT]
> The installed version of the *fence-agents* package must be 4.4.0 or later to benefit from the faster failover times with the Azure fence agent, when a cluster node is fenced. If you're running an earlier version, we recommend that you update the package.
+ >[!IMPORTANT]
+ > If using managed identity, the installed version of the *fence-agents* package must be fence-agents 4.5.2+git.1592573838.1eee0863 or later. Earlier versions will not work correctly with a managed identity configuration.
+ > Currently only SLES 15 SP1 and older are supported for managed identity configuration.
+ 1. **[A]** Install the Azure Python SDK and Azure Identity Python module. Install the Azure Python SDK on SLES 12 SP4 or SLES 12 SP5:
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service tags also allow more granular control by restricting the corresponding IP ranges to a specified region.
| **AzureAttestation** | Azure Attestation. | Outbound | No | Yes |
| **AzureBackup** | Azure Backup.<br/><br/>**Note**: This tag has a dependency on the **Storage** and **AzureActiveDirectory** tags. | Outbound | No | Yes |
| **AzureBotService** | Azure Bot Service. | Outbound | No | No |
-| **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). | Outbound | Yes | Yes |
+| **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). | Both | Yes | Yes |
| **AzureCognitiveSearch** | Azure Cognitive Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | No |
| **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Both | Yes | Yes |
| **AzureContainerRegistry** | Azure Container Registry. | Outbound | Yes | Yes |
By default, service tags reflect the ranges for the entire cloud. Some service tags also allow more granular control by restricting the corresponding IP ranges to a specified region.
| **AzureDataLake** | Azure Data Lake Storage Gen1. | Outbound | No | Yes |
| **AzureDeviceUpdate** | Device Update for IoT Hub. | Both | No | Yes |
| **AzureDevSpaces** | Azure Dev Spaces. | Outbound | No | No |
-| **AzureDevOps** | Azure DevOps. | Inbound | No | Yes |
+| **AzureDevOps** | Azure DevOps. | Inbound | Yes | Yes |
| **AzureDigitalTwins** | Azure Digital Twins.<br/><br/>**Note**: This tag or the IP addresses covered by this tag can be used to restrict access to endpoints configured for event routes. | Inbound | No | Yes |
| **AzureEventGrid** | Azure Event Grid. | Both | No | No |
| **AzureFrontDoor.Frontend** <br/> **AzureFrontDoor.Backend** <br/> **AzureFrontDoor.FirstParty** | Azure Front Door. | Both | No | No |
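As context for how these tags are consumed, a service tag is referenced by name in a network security group rule's address prefix field. A minimal ARM sketch of an outbound rule using the **AzureCloud** tag follows; the NSG name, rule name, priority, and port are hypothetical placeholders:

```json
{
  "type": "Microsoft.Network/networkSecurityGroups/securityRules",
  "apiVersion": "2022-01-01",
  "name": "myNsg/Allow-AzureCloud-Outbound",
  "properties": {
    "priority": 200,
    "direction": "Outbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "VirtualNetwork",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "AzureCloud",
    "destinationPortRange": "443"
  }
}
```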
virtual-wan Scenario Bgp Peering Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-bgp-peering-hub.md
description: Learn about BGP peering with an Azure Virtual WAN virtual hub.
Previously updated : 07/27/2022 Last updated : 09/06/2022
The virtual hub router now also exposes the ability to peer with it, thereby exchanging routing information directly through BGP.
* Public ASNs: 8074, 8075, 12076
* Private ASNs: 65515, 65517, 65518, 65519, 65520
* ASNs reserved by IANA: 23456, 64496-64511, 65535-65551
-* While the virtual hub router exchanges BGP routes with your NVA and propagates them to your virtual network, it directly facilitates propagating routes from on-premises via the virtual hub hosted gateways (VPN Gateway/ExpressRoute Gateway/Managed NVA gateways).
+* While the virtual hub router exchanges BGP routes with your NVA and propagates them to your virtual network, it directly facilitates propagating routes from on-premises via the virtual hub hosted gateways (VPN gateway/ExpressRoute gateway/Managed NVA gateways).
The virtual hub router has the following limits:
The virtual hub router now also exposes the ability to peer with it, thereby exchanging routing information directly through BGP.
* Routes from an NVA in a virtual network that are more specific than the virtual network address space, when advertised to the virtual hub through BGP, are not propagated further to on-premises.
* Currently, we only support 1,000 routes from the NVA to the virtual hub.
* Traffic destined for addresses in the virtual network directly connected to the virtual hub cannot be configured to route through the NVA using BGP peering between the hub and NVA. This is because the virtual hub automatically learns about system routes associated with addresses in the spoke virtual network when the spoke virtual network connection is created. These automatically learned system routes are preferred over routes learned by the hub through BGP.
-* This feature is not supported for setting up BGP peering between NVA in spoke VNET and Virtual hub with Azure Firewall.
+* This feature is not supported for setting up BGP peering between an NVA in a spoke VNet and a virtual hub with Azure Firewall.
+* In order for the NVA to exchange routes with VPN- and ExpressRoute-connected sites, branch-to-branch routing must be turned on.
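For illustration only (the hub name, connection name, ASN, and IP below are hypothetical), hub-to-NVA peering is represented as a `bgpConnections` child resource on the virtual hub; note that the `peerAsn` must avoid the reserved values listed earlier:

```json
{
  "type": "Microsoft.Network/virtualHubs/bgpConnections",
  "apiVersion": "2022-01-01",
  "name": "myHub/nva-peer",
  "properties": {
    "peerAsn": 65010,
    "peerIp": "10.1.0.4",
    "hubVirtualNetworkConnection": {
      "id": "[resourceId('Microsoft.Network/virtualHubs/hubVirtualNetworkConnections', 'myHub', 'spoke1-connection')]"
    }
  }
}
```

Here `peerIp` is the NVA's private IP address in the spoke virtual network, and the connection reference ties the peer to the spoke's existing hub connection.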
## BGP peering scenarios