Updates from: 02/07/2022 02:05:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/5-secure-access-b2b.md
We recommend the following restrictions for guest users.
* **Block access to the Azure portal. You can make rare necessary exceptions**.
- * Create a Conditional Access policy that includes either All guest and external users and then [implement a policy to block access](../../role-based-access-control/conditional-access-azure-management.md).
+ * Create a Conditional Access policy that includes All guest and external users, and then [implement a policy to block access](../conditional-access/concept-conditional-access-cloud-apps.md).
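For illustration only, the following is a sketch of what such a block policy could look like when created through the Microsoft Graph conditional access API. It assumes the well-known Microsoft Azure Management app ID `797f4846-ba00-4fd7-ba43-dac1f8f63013` and creates the policy in report-only mode; verify both assumptions for your tenant before enforcing.

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-type: application/json

{
    "displayName": "Block guests from Azure management (sketch)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "clientAppTypes": [ "all" ],
        "users": {
            "includeUsers": [ "GuestsOrExternalUsers" ]
        },
        "applications": {
            "includeApplications": [ "797f4846-ba00-4fd7-ba43-dac1f8f63013" ]
        }
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": [ "block" ]
    }
}
```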
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
If you're not going to continue to use this application, you can delete the tena
- Add groups and members, see [Create a basic group and add members](active-directory-groups-create-azure-portal.md) -- Learn about [role-based access using Privileged Identity Management](../../role-based-access-control/best-practices.md) and [Conditional Access](../../role-based-access-control/conditional-access-azure-management.md) to help manage your organization's application and resource access.
+- Learn about [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and [Conditional Access](../conditional-access/overview.md) to help manage your organization's application and resource access.
- Learn about Azure AD, including [basic licensing information, terminology, and associated features](active-directory-whatis.md).
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
We recommend the following provisioning methods:
* Limit guest access to browsing groups and other properties in the directory. Use the external collaboration settings to restrict guests' ability to read groups they're not members of.
- * Block access to the Azure portal. You can make rare necessary exceptions. Create a Conditional Access policy that includes all guests and external users. Then [implement a policy to block access](../../role-based-access-control/conditional-access-azure-management.md).
+ * Block access to the Azure portal. You can make rare necessary exceptions. Create a Conditional Access policy that includes all guests and external users. Then [implement a policy to block access](../conditional-access/concept-conditional-access-cloud-apps.md).
* **Disconnected forests**: Use [Azure AD cloud provisioning](../cloud-sync/what-is-cloud-sync.md). This method enables you to connect to disconnected forests, eliminating the need to establish cross-forest connectivity or trusts, which can broaden the effect of an on-premises breach.
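The guest restrictions described above can also be applied programmatically. As a hedged sketch, the tenant-wide guest access level maps to the `guestUserRoleId` property of the Azure AD authorization policy; the GUID below is the commonly documented "Guest user access is restricted" level and should be verified against current Microsoft documentation before use.

```http
PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
Content-type: application/json

{
    "guestUserRoleId": "2af84b1e-32c8-42b7-82bc-daa82404023b"
}
```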
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Having Azure AD pre-authenticate access to BIG-IP published services provides ma
- Preemptive [Conditional Access](../conditional-access/overview.md) and [Azure AD Multi-Factor Authentication (MFA)](../authentication/concept-mfa-howitworks.md) -- [Identity Protection](../identity-protection/overview-identity-protection.md) - Adaptive control through user and session risk profiling--- [Leaked credential detection](../identity-protection/concept-identity-protection-risks.md)
+- [Identity Protection](../identity-protection/overview-identity-protection.md) - Adaptive protection through user and session risk profiling, plus [Leaked credential detection](../identity-protection/concept-identity-protection-risks.md)
- [Self-service password reset (SSPR)](../authentication/tutorial-enable-sspr.md)
Whether a direct employee, affiliate, or consumer, most users are already acquai
Users now find their BIG-IP published services consolidated in the [MyApps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) or [O365 launchpads](https://airhead.io/airbase/launchpads/R3kW-RkDFEedipcU1AFlnA) along with self-service capabilities to a broader set of services, no matter the type of device or location. Users can even continue accessing published services directly via the BIG-IP's proprietary Webtop portal, if preferred. When logging off, SHA ensures a user's session is terminated at both ends, the BIG-IP and Azure AD, ensuring services remain fully protected from unauthorized access.
-The screenshots provided are from the Azure AD app portal that users access securely to find their BIG-IP published services and for managing their account properties.
+Users access the Microsoft MyApps portal to easily find their BIG-IP published services and for managing their account properties.
![The screenshot shows woodgrove myapps gallery](media/f5-aad-integration/woodgrove-app-gallery.png)
The screenshots provided are from the Azure AD app portal that users access secu
## Insights and analytics
-A BIG-IP's role is critical to any business, so deployed BIG-IP instances should be monitored to ensure published services are highly available, both at an SHA level and operationally too.
+A BIG-IP's role is critical to any business, so deployed BIG-IP instances can be monitored to ensure published services are highly available, both at an SHA level and operationally too.
Several options exist for logging events either locally, or remotely through a Security Information and Event Management (SIEM) solution, enabling off-box storage and processing of telemetry. A highly effective solution for monitoring Azure AD and SHA-specific activity, is to use [Azure Monitor](../../azure-monitor/overview.md) and [Microsoft Sentinel](../../sentinel/overview.md), together offering:
Several options exist for logging events either locally, or remotely through a S
## Prerequisites
-Integrating F5 BIG-IP with Azure AD for SHA have the following pre-requisites:
+Integrating an F5 BIG-IP with Azure AD for SHA has the following pre-requisites:
- An F5 BIG-IP instance running on either of the following platforms:
Integrating F5 BIG-IP with Azure AD for SHA have the following pre-requisites:
- An active F5 BIG-IP APM license, through one of the following options:
- - F5 BIG-IP® Best bundle (or)
+ - F5 BIG-IP® Best bundle
- - F5 BIG-IP Access Policy Manager™ standalone license
+ - F5 BIG-IP Access Policy Manager™ standalone license
- - F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+ - F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
- - A 90-day BIG-IP Access Policy Manager™ (APM) [trial license](https://www.f5.com/trial/big-ip-trial.php)
+ - A 90-day BIG-IP Access Policy Manager™ (APM) [trial license](https://www.f5.com/trial/big-ip-trial.php)
- Azure AD licensing through either of the following options:
No previous experience or F5 BIG-IP knowledge is necessary to implement SHA, but
## Configuration scenarios

Configuring a BIG-IP for SHA is achieved using any of the many available methods, including several template-based options or a manual configuration.
-The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD SHA, using these methods.
+The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD secure hybrid access.
**Advanced configuration**
Refer to the following advanced configuration tutorials for your integration req
The Guided Configuration wizard, available from BIG-IP version 13.1, aims to minimize the time and effort of implementing common BIG-IP publishing scenarios. Its workflow-based framework provides an intuitive deployment experience tailored to specific access topologies.
-The latest version of the Guided Configuration 16.1 now offers an Easy Button feature. With **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly, easily support identity federation, SSO, and Azure AD Conditional Access, without management overhead of having to do so on a per app basis.
+Version 16.x of the Guided Configuration now offers an Easy Button feature. With **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, without the management overhead of doing so on a per-app basis.
Refer to the following guided configuration tutorials using Easy Button templates for your integration requirements:
Refer to the following guided configuration tutorials using Easy Button template
- [F5 BIG-IP Easy Button for SSO to header-based and LDAP applications](f5-big-ip-ldap-header-easybutton.md)
+- [BIG-IP Easy Button for SSO to Oracle EBS (Enterprise Business Suite)](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
+
+- [BIG-IP Easy Button for SSO to Oracle JD Edwards](f5-big-ip-oracle-jde-easy-button.md)
+ ## Additional resources
+
+ - [The end of passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)
Refer to the following guided configuration tutorials using Easy Button template
## Next steps
-Consider running an SHA Proof of concept (POC) using your existing BIG-IP infrastructure, or by deploying a trial instance. [Deploying a BIG-IP Virtual Edition (VE) VM into Azure](f5-bigip-deployment-guide.md) takes approximately 30 minutes, at which point you'll have:
+Consider running an SHA Proof of concept (POC) using your existing BIG-IP infrastructure, or by deploying a trial instance. [Deploying a BIG-IP Virtual Edition (VE) VM into Azure](f5-bigip-deployment-guide.md) takes approximately 30 minutes, at which point you'll have:
-- A fully secured platform to model an SHA proof of concept
+- A fully secured platform to model a SHA proof of concept
-- A pre-production instance, fully secured platform to use for testing new BIG-IP system updates and hotfixes
+- A pre-production instance for testing new BIG-IP system updates and hotfixes
-At the same time, you should identify one or two applications that can be targeted for publishing via the BIG-IP and protecting with SHA.
+At the same time, you should identify one or two applications that can be published via the BIG-IP and protected with SHA.
Our recommendation is to start with an application that isn't yet published via a BIG-IP, so as to avoid potential disruption to production services. The guidelines mentioned in this article will help you get acquainted with the general procedure for creating the various BIG-IP configuration objects and setting up SHA. Once complete, you should be able to do the same with any other new services, plus have enough knowledge to convert existing BIG-IP published services over to SHA with minimal effort.
-The below interactive guide walks through the high-level procedure for implementing SHA and seeing the end-user experience.
+The following interactive guide walks through the high-level procedure for implementing SHA using a non-Easy Button template and shows the end-user experience.
[![The image shows interactive guide cover](media/f5-aad-integration/interactive-guide.png)](https://aka.ms/Secure-Hybrid-Access-F5-Interactive-Guide)
active-directory F5 Big Ip Oracle Jde Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to Oracle JDE
+description: Learn to implement SHA with header-based SSO to Oracle JD Edwards using F5's BIG-IP Easy Button guided configuration
+ Last updated: 02/03/2022
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle JDE
+
+In this article, you'll learn to implement Secure Hybrid Access (SHA) with header-based single sign-on (SSO) to Oracle JD Edwards (JDE) using F5's BIG-IP Easy Button guided configuration.
+
+Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage Identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
+
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+For this scenario, use an **Oracle JDE application using HTTP authorization headers** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but doing so is costly, requires careful planning, and introduces the risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The secure hybrid access solution for this scenario is made up of several components:
+
+**Oracle JDE Application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP.
+
+**BIG-IP APM:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle service.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-oracle-jde/sp-initiated-flow.png)
+
+| Steps| Description |
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected back to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP injects Azure AD attributes as headers in request to the application |
+| 6| Application authorizes request and returns payload |
+
+## Prerequisites
+
+Prior BIG-IP experience isn't necessary, but you need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+
+* Any of the following F5 BIG-IP license SKUs
+
+ * F5 BIG-IP® Best bundle
+
+   * F5 BIG-IP Access Policy Manager™ (APM) standalone license
+
+ * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS
+
+* An existing Oracle JDE environment
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+
+A BIG-IP must also be registered as a client in Azure AD before it's allowed to establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) with Application Administrative rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, F5 BIG-IP Easy Button
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. Go to **Certificates & Secrets**, generate a new **Client secret** and note it down
+
+10. Go to **Overview**, note the **Client ID** and **Tenant ID**
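The portal steps above can also be scripted. The following is a minimal sketch using raw Microsoft Graph calls, assuming an account with rights to create app registrations; it registers the client and adds a secret, but the API permissions and admin consent from steps 7 and 8 still need to be granted separately.

```http
POST https://graph.microsoft.com/v1.0/applications
Content-type: application/json

{
    "displayName": "F5 BIG-IP Easy Button",
    "signInAudience": "AzureADMyOrg"
}
```

The response contains the application object `id` and the `appId` (Client ID). A client secret can then be added to that object:

```http
POST https://graph.microsoft.com/v1.0/applications/{application-object-id}/addPassword
Content-type: application/json

{
    "passwordCredential": {
        "displayName": "Easy Button secret"
    }
}
```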
+
+## Configure Easy Button
+
+Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
+
+2. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
+
+3. Follow the sequence of steps required to publish your application.
+
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+The **Configuration Properties** tab creates a new application config and SSO object. Consider the **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+
+Some of these are global settings, so they can be reused for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** that enables an admin to easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id, Client ID**, and **Client Secret** you noted down from your registered application
+
+4. Before you select **Next**, confirm that BIG-IP can successfully connect to your tenant.
+
+ ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-oracle-jde/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The **Service Provider** settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured. You need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+
+2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+    ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-oracle-jde/service-provider-settings.png)
+
+    Next, under the optional **Security Settings**, specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides assurance that the content of tokens can't be intercepted and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned close the browser tab to return to the main tab.
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM uses to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP uploads to Azure AD for encrypting the issued SAML assertions.
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. In this example, select **JD Edwards Protected by F5 BIG-IP > Add** to add the Oracle JD Edwards template.
+
+![ Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-oracle-jde/azure-configuration-add-big-ip-application.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users see on the MyApps portal
+
+2. In the **Sign On URL (optional)** enter the public FQDN of the JDE application being secured.
+
+    ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-oracle-jde/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
+
+6. **User and User Groups** are used to authorize access to the application. They are dynamically added from the tenant. **Add** a user or group that you can use later for testing, otherwise all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
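Behind the scenes, these assignments become app role assignments on the service principal that the Easy Button creates. As a hedged example (with a placeholder object ID), you could verify who has been granted access after deployment with:

```http
GET https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-object-id}/appRoleAssignedTo
```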
+
+#### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
+
+![Screenshot for Azure configuration – User attributes & claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
+
+You can include additional Azure AD attributes if necessary, but the Oracle JDE scenario only requires the default attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+#### Conditional Access Policy
+
+Conditional Access policies are enforced after Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, will list all Conditional Access policies that do not include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+
+2. Select the right arrow and move it to the **Selected Policies** list
+
+    The selected policies should have either an **Include** or **Exclude** option checked. If both options are checked, the policy is not enforced.
+
+ ![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+
+> [!NOTE]
+> The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
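To cross-check the policies the wizard enumerates, you can query your tenant directly with Microsoft Graph. A sketch, assuming the Policy.Read.All permission granted during the Easy Button registration:

```http
GET https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
```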
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+
+ ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
+
+### Pool Properties
+
+The **Application Pool** tab details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Choose from **Select a Pool**. Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. Update the **Pool Servers**. Select an existing node or specify an IP and port for the servers hosting the Oracle JDE application.
+
+ ![Screenshot for Application pool](./media/f5-big-ip-easy-button-ldap/application-pool.png)
+
+#### Single Sign-On & HTTP Headers
+
+The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO to published applications. As the Oracle JDE application expects headers, enable **HTTP Headers** and enter the following properties.
+
+* **Header Operation:** replace
+* **Header Name:** JDE_SSO_UID
+* **Header Value:** %{session.sso.token.last.username}
+
+ ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-oracle-jde/sso-and-http-headers.png)
+
+>[!NOTE]
+>APM session variables defined within curly brackets are case-sensitive. For example, if you enter OrclGUID when the Azure AD attribute name is defined as orclguid, an attribute mapping failure will occur.
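For context, a request that the BIG-IP forwards to the Oracle JDE servers would then carry the configured header. The host, path, and username below are purely illustrative placeholders:

```http
GET /jde/E1Menu.maf HTTP/1.1
Host: jde.internal.contoso.com
JDE_SSO_UID: alice@contoso.com
```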
+
+### Session Management
+
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here, however, is single log-out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+
+During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This helps SP-initiated sign-outs terminate the session between a client and Azure AD.
+
+## Summary
+
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of Enterprise applications.
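Besides checking the Enterprise applications blade, you can confirm the deployment with a Microsoft Graph query against the display name entered earlier; the filter value below is a placeholder:

```http
GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq '<display name entered in Azure Configuration>'
```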
+
+## Next steps
+
+From a browser, connect to the **Oracle JDE application's external URL** or select the application's icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](./f5-big-ip-header-advanced.md). Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configs.
+
+![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+There can be many factors leading to failure to access a published application. BIG-IP logging can help quickly isolate all sorts of issues with connectivity, policy violations, or misconfigured variable mappings.
+
+Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list then **OK**
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data. If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see if the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. In this case, go to **Access Policy > Overview > Active Sessions** and select the link for your active session
+
+2. The **View Variables** link in this location may also help determine the root cause of SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
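The Azure AD sign-in logs are a useful complement to the BIG-IP logs when isolating whether pre-authentication succeeded and which claims were issued. A sketch, assuming the AuditLog.Read.All permission and your application's display name as a placeholder:

```http
GET https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter=appDisplayName eq '<your application display name>'&$top=10
```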
active-directory Assign Roles Different Scopes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/assign-roles-different-scopes.md
Previously updated : 09/13/2021 Last updated : 02/04/2022
Follow these instructions to assign a role using the Microsoft Graph API in [Gra
1. Sign in to the [Graph Explorer](https://aka.ms/ge).
-1. Use [List user](/graph/api/user-list) API to get the user.
+1. Use [List users](/graph/api/user-list) API to get the user.
- ```HTTP
- GET https://graph.microsoft.com/beta/users?$filter=userPrincipalName eq 'alice@contoso.com'
+ ```http
+ GET https://graph.microsoft.com/v1.0/users?$filter=userPrincipalName eq 'alice@contoso.com'
```
-1. Use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
+1. Use the [List unifiedRoleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
- ```HTTP
- GET https://graph.microsoft.com/beta/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'Billing Administrator'
+ ```http
+ GET https://graph.microsoft.com/v1.0/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'Billing Administrator'
```
-1. Use the [Create roleAssignments](/graph/api/rbacapplication-post-roleassignments) API to assign the role.\
+1. Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
- ```HTTP
- POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+ ```http
+ POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
{
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
"principalId": "<provide objectId of the user obtained above>", "roleDefinitionId": "<provide templateId of the role obtained above>", "directoryScopeId": "/"
Follow these instructions to assign a role at administrative unit scope using th
1. Sign in to the [Graph Explorer](https://aka.ms/ge).
-1. Use [List user](/graph/api/user-list) API to get the user.
+1. Use [List users](/graph/api/user-list) API to get the user.
- ```HTTP
- GET https://graph.microsoft.com/beta/users?$filter=userPrincipalName eq 'alice@contoso.com'
+ ```http
+ GET https://graph.microsoft.com/v1.0/users?$filter=userPrincipalName eq 'alice@contoso.com'
```
-1. Use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
+1. Use the [List unifiedRoleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
- ```HTTP
- GET https://graph.microsoft.com/beta/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'User Administrator'
+ ```http
+ GET https://graph.microsoft.com/v1.0/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'User Administrator'
```
1. Use the [List administrativeUnits](/graph/api/administrativeunit-list) API to get the administrative unit you want the role assignment to be scoped to.
- ```HTTP
- GET https://graph.microsoft.com/beta/administrativeUnits?$filter=displayName eq 'Seattle Admin Unit'
+ ```http
+ GET https://graph.microsoft.com/v1.0/directory/administrativeUnits?$filter=displayName eq 'Seattle Admin Unit'
```
-1. Use the [Create roleAssignments](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
+1. Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
- ```HTTP
- POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+ ```http
+ POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
{
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
"principalId": "<provide objectId of the user obtained above>", "roleDefinitionId": "<provide templateId of the role obtained above>", "directoryScopeId": "/administrativeUnits/<provide objectId of the admin unit obtained above>"
Follow these instructions to assign a role at application scope using the Micros
1. Sign in to the [Graph Explorer](https://aka.ms/ge).
-1. Use [List user](/graph/api/user-list) API to get the user.
+1. Use [List users](/graph/api/user-list) API to get the user.
- ```HTTP
- GET https://graph.microsoft.com/beta/users?$filter=userPrincipalName eq 'alice@contoso.com'
+ ```http
+ GET https://graph.microsoft.com/v1.0/users?$filter=userPrincipalName eq 'alice@contoso.com'
```
-1. Use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
+1. Use the [List unifiedRoleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get the role you want to assign.
- ```HTTP
- GET https://graph.microsoft.com/beta/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'Application Administrator'
+ ```http
+ GET https://graph.microsoft.com/v1.0/rolemanagement/directory/roleDefinitions?$filter=displayName eq 'Application Administrator'
```
1. Use the [List applications](/graph/api/application-list) API to get the application you want the role assignment to be scoped to.
- ```HTTP
- GET https://graph.microsoft.com/beta/applications?$filter=displayName eq 'f/128 Filter Photos'
+ ```http
+ GET https://graph.microsoft.com/v1.0/applications?$filter=displayName eq 'f/128 Filter Photos'
```
-1. Use the [Create roleAssignments](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
+1. Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
- ```HTTP
- POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
{
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
"principalId": "<provide objectId of the user obtained above>", "roleDefinitionId": "<provide templateId of the role obtained above>", "directoryScopeId": "/<provide objectId of the app registration obtained above>"
active-directory Custom Assign Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-assign-graph.md
Previously updated : 05/14/2021 Last updated : 02/04/2022
For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr
## POST Operations on RoleAssignment
-### Example 1: Create a role assignment between a user and a role definition
+Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
-POST
+### Example 1: Create a role assignment between a user and a role definition
-``` HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
Content-type: application/json
```
Body
-``` HTTP
+```http
{
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"194ae4cb-b126-40b2-bd5b-6091b380977d",
- "directoryScopeId":"/" // Don't use "resourceScope" attribute in Azure AD role assignments. It will be deprecated soon.
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "194ae4cb-b126-40b2-bd5b-6091b380977d",
+ "directoryScopeId": "/" // Don't use "resourceScope" attribute in Azure AD role assignments. It will be deprecated soon.
} ``` Response
-``` HTTP
+```http
HTTP/1.1 201 Created
```
### Example 2: Create a role assignment where the principal or role definition does not exist
-POST
-
-``` HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
``` Body
-``` HTTP
+```http
{
- "principalId":" 2142743c-a5b3-4983-8486-4532ccba12869",
- "roleDefinitionId":"194ae4cb-b126-40b2-bd5b-6091b380977d",
- "directoryScopeId":"/" //Don't use "resourceScope" attribute in Azure AD role assignments. It will be deprecated soon.
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "principalId": "2142743c-a5b3-4983-8486-4532ccba12869",
+ "roleDefinitionId": "194ae4cb-b126-40b2-bd5b-6091b380977d",
+ "directoryScopeId": "/" //Don't use "resourceScope" attribute in Azure AD role assignments. It will be deprecated soon.
} ``` Response
-``` HTTP
+```http
HTTP/1.1 404 Not Found ```
-### Example 3: Create a role assignment on a single resource scope
-POST
+### Example 3: Create a role assignment on a single resource scope
-``` HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
``` Body
-``` HTTP
+```http
{
- "principalId":" 2142743c-a5b3-4983-8486-4532ccba12869",
- "roleDefinitionId":"e9b2b976-1dea-4229-a078-b08abd6c4f84", //role template ID of a custom role
- "directoryScopeId":"/13ff0c50-18e7-4071-8b52-a6f08e17c8cc" //object ID of an application
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "principalId": "2142743c-a5b3-4983-8486-4532ccba12869",
+ "roleDefinitionId": "e9b2b976-1dea-4229-a078-b08abd6c4f84", //role template ID of a custom role
+ "directoryScopeId": "/13ff0c50-18e7-4071-8b52-a6f08e17c8cc" //object ID of an application
} ``` Response
-``` HTTP
+```http
HTTP/1.1 201 Created
```
### Example 4: Create an administrative unit scoped role assignment on a built-in role definition which is not supported
-POST
-
-``` HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
``` Body
-``` HTTP
+```http
{
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"29232cdf-9323-42fd-ade2-1d097af3e4de", //role template ID of Exchange Administrator
- "directoryScopeId":"/administrativeUnits/13ff0c50-18e7-4071-8b52-a6f08e17c8cc" //object ID of an administrative unit
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "29232cdf-9323-42fd-ade2-1d097af3e4de", //role template ID of Exchange Administrator
+ "directoryScopeId": "/administrativeUnits/13ff0c50-18e7-4071-8b52-a6f08e17c8cc" //object ID of an administrative unit
} ``` Response
-``` HTTP
+```http
HTTP/1.1 400 Bad Request
{
    "odata.error":
Only a subset of built-in roles are enabled for Administrative Unit scoping. Ref
## GET Operations on RoleAssignment
-### Example 5: Get role assignments for a given principal
+Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get the role assignment.
-GET
+### Example 5: Get role assignments for a given principal
-``` HTTP
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=principalId+eq+'<object-id-of-principal>'
+```http
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=principalId+eq+'<object-id-of-principal>'
``` Response
-``` HTTP
+```http
HTTP/1.1 200 OK
{
    "value":[
        {
- "id":"mhxJMipY4UanIzy2yE-r7JIiSDKQoTVJrLE9etXyrY0-1"
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"10dae51f-b6af-4016-8d66-8c2a99b929b3",
- "directoryScopeId":"/"
+ "id": "mhxJMipY4UanIzy2yE-r7JIiSDKQoTVJrLE9etXyrY0-1"
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "10dae51f-b6af-4016-8d66-8c2a99b929b3",
+ "directoryScopeId": "/"
} , {
- "id":"CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1"
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"fe930be7-5e62-47db-91af-98c3a49a38b1",
- "directoryScopeId":"/"
+ "id": "CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1"
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "fe930be7-5e62-47db-91af-98c3a49a38b1",
+ "directoryScopeId": "/"
} ] }
HTTP/1.1 200 OK
### Example 6: Get role assignments for a given role definition.
-GET
-
-``` HTTP
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=roleDefinitionId+eq+'<object-id-or-template-id-of-role-definition>'
+```http
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=roleDefinitionId+eq+'<object-id-or-template-id-of-role-definition>'
``` Response
-``` HTTP
+```http
HTTP/1.1 200 OK
{
    "value":[
        {
- "id":"CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1"
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"fe930be7-5e62-47db-91af-98c3a49a38b1",
- "directoryScopeId":"/"
+ "id": "CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1"
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "fe930be7-5e62-47db-91af-98c3a49a38b1",
+ "directoryScopeId": "/"
} ] }
HTTP/1.1 200 OK
### Example 7: Get a role assignment by ID.
-GET
-
-``` HTTP
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments/lAPpYvVpN0KRkAEhdxReEJC2sEqbR_9Hr48lds9SGHI-1
+```http
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments/lAPpYvVpN0KRkAEhdxReEJC2sEqbR_9Hr48lds9SGHI-1
``` Response
-``` HTTP
+```http
HTTP/1.1 200 OK
{
- "id":"mhxJMipY4UanIzy2yE-r7JIiSDKQoTVJrLE9etXyrY0-1",
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"10dae51f-b6af-4016-8d66-8c2a99b929b3",
- "directoryScopeId":"/"
+ "id": "mhxJMipY4UanIzy2yE-r7JIiSDKQoTVJrLE9etXyrY0-1",
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "10dae51f-b6af-4016-8d66-8c2a99b929b3",
+ "directoryScopeId": "/"
}
```
### Example 8: Get role assignments for a given scope
-GET
-
-``` HTTP
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=directoryScopeId+eq+'/d23998b1-8853-4c87-b95f-be97d6c6b610'
+```http
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=directoryScopeId+eq+'/d23998b1-8853-4c87-b95f-be97d6c6b610'
``` Response
-``` HTTP
+```http
HTTP/1.1 200 OK
{
    "value":[
        {
- "id":"mhxJMipY4UanIzy2yE-r7JIiSDKQoTVJrLE9etXyrY0-1"
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"10dae51f-b6af-4016-8d66-8c2a99b929b3",
- "directoryScopeId":"/d23998b1-8853-4c87-b95f-be97d6c6b610"
+ "id": "mhxJMipY4UanIzy2yE-r7JIiSDKQoTVJrLE9etXyrY0-1"
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "10dae51f-b6af-4016-8d66-8c2a99b929b3",
+ "directoryScopeId": "/d23998b1-8853-4c87-b95f-be97d6c6b610"
} , {
- "id":"CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1"
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"3671d40a-1aac-426c-a0c1-a3821ebd8218",
- "directoryScopeId":"/d23998b1-8853-4c87-b95f-be97d6c6b610"
+ "id": "CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1"
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "3671d40a-1aac-426c-a0c1-a3821ebd8218",
+ "directoryScopeId": "/d23998b1-8853-4c87-b95f-be97d6c6b610"
} ] }
HTTP/1.1 200 OK
## DELETE Operations on RoleAssignment
-### Example 9: Delete a role assignment between a user and a role definition.
+Use the [Delete unifiedRoleAssignment](/graph/api/unifiedroleassignment-delete) API to delete the role assignment.
-DELETE
+### Example 9: Delete a role assignment between a user and a role definition.
-``` HTTP
-DELETE https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments/lAPpYvVpN0KRkAEhdxReEJC2sEqbR_9Hr48lds9SGHI-1
+```http
+DELETE https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments/lAPpYvVpN0KRkAEhdxReEJC2sEqbR_9Hr48lds9SGHI-1
``` Response
-``` HTTP
+```http
HTTP/1.1 204 No Content
```
### Example 10: Delete a role assignment that no longer exists
-DELETE
-
-``` HTTP
-DELETE https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments/lAPpYvVpN0KRkAEhdxReEJC2sEqbR_9Hr48lds9SGHI-1
+```http
+DELETE https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments/lAPpYvVpN0KRkAEhdxReEJC2sEqbR_9Hr48lds9SGHI-1
``` Response
-``` HTTP
+```http
HTTP/1.1 404 Not Found
```
### Example 11: Delete a role assignment between self and Global Administrator role definition
-DELETE
-
-``` HTTP
-DELETE https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments/lAPpYvVpN0KRkAEhdxReEJC2sEqbR_9Hr48lds9SGHI-1
+```http
+DELETE https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments/lAPpYvVpN0KRkAEhdxReEJC2sEqbR_9Hr48lds9SGHI-1
``` Response
-``` HTTP
+```http
HTTP/1.1 400 Bad Request
{
    "odata.error":
active-directory Custom Enterprise Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-enterprise-apps.md
Previously updated : 05/14/2021 Last updated : 02/04/2022
$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope $resourceScope -Rol
## Microsoft Graph API
-Create a custom role using the provided example in the Microsoft Graph API. For more detail, see [Create and assign a custom role](custom-create.md) and [Assign custom admin roles using the Microsoft Graph API](custom-assign-graph.md).
+Use the [Create unifiedRoleDefinition](/graph/api/rbacapplication-post-roledefinitions) API to create a custom role. For more information, see [Create and assign a custom role](custom-create.md) and [Assign custom admin roles using the Microsoft Graph API](custom-assign-graph.md).
-HTTP request to create the custom role.
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions
-```HTTP
-POST
-https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitionsIsEnabled $true
{
- "description":"Can manage user and group assignments for Applications.",
- "displayName":" Manage user and group assignments",
- "isEnabled":true,
+ "description": "Can manage user and group assignments for Applications.",
+ "displayName": "Manage user and group assignments",
+ "isEnabled": true,
"rolePermissions": [ {
- "resourceActions":
- {
- "allowedResourceActions":
- [
- "microsoft.directory/servicePrincipals/appRoleAssignedTo/update"
- ]
- },
- "condition":null
+ "allowedResourceActions":
+ [
+ "microsoft.directory/servicePrincipals/appRoleAssignedTo/update"
+ ]
} ],
- "templateId":"<PROVIDE NEW GUID HERE>",
- "version":"1"
+ "templateId": "<PROVIDE NEW GUID HERE>",
+ "version": "1"
}
```
### Assign the custom role using the Microsoft Graph API
-The role assignment combines a security principal ID (which can be a user or service principal), a role definition ID, and an Azure AD resource scope. For more information on the elements of a role assignment, see the [custom roles overview](custom-overview.md)
+Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign the custom role. The role assignment combines a security principal ID (which can be a user or service principal), a role definition ID, and an Azure AD resource scope. For more information on the elements of a role assignment, see the [custom roles overview](custom-overview.md)
-HTTP request to assign a custom role.
-
-```HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
{
- "principalId":"<PROVIDE OBJECTID OF USER TO ASSIGN HERE>",
- "roleDefinitionId":"<PROVIDE OBJECTID OF ROLE DEFINITION HERE>",
- "resourceScopes":["/"]
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "principalId": "<PROVIDE OBJECTID OF USER TO ASSIGN HERE>",
+ "roleDefinitionId": "<PROVIDE OBJECTID OF ROLE DEFINITION HERE>",
+ "directoryScopeId": "/"
} ```
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-assign-role.md
Previously updated : 07/30/2021 Last updated : 02/04/2022
$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId '/' -RoleDefinit
### Create a group that can be assigned an Azure AD role
-```
-POST https://graph.microsoft.com/beta/groups
+Use the [Create group](/graph/api/group-post-groups) API to create a group.
+
+```http
+POST https://graph.microsoft.com/v1.0/groups
+ {
-"description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD.",
-"displayName": "Contoso_Helpdesk_Administrators",
-"groupTypes": [],
-"mailEnabled": false,
-"securityEnabled": true,
-"mailNickname": "contosohelpdeskadministrators",
-"isAssignableToRole": true
+ "description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD.",
+ "displayName": "Contoso_Helpdesk_Administrators",
+ "groupTypes": [
+ "Unified"
+ ],
+ "isAssignableToRole": true,
+ "mailEnabled": true,
+ "mailNickname": "contosohelpdeskadministrators",
+ "securityEnabled": true
}
```
### Get the role definition
-```
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter = displayName eq 'Helpdesk Administrator'
+Use the [List unifiedRoleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get a role definition.
+
+```http
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions?$filter=displayName eq 'Helpdesk Administrator'
```
### Create the role assignment
-```
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
+
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
+ {
-"principalId":"<Object Id of Group>",
-"roleDefinitionId":"<ID of role definition>",
-"directoryScopeId":"/"
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "principalId": "<Object Id of Group>",
+ "roleDefinitionId": "<ID of role definition>",
+ "directoryScopeId": "/"
}
```
## Next steps
active-directory Groups Remove Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-remove-assignment.md
Previously updated : 07/30/2021 Last updated : 02/04/2022
Remove-AzureAdMSRoleAssignment -Id $roleAssignment.Id
### Create a group that can be assigned an Azure AD role
+Use the [Create group](/graph/api/group-post-groups) API to create a group.
+ ```http
-POST https://graph.microsoft.com/beta/groups
+POST https://graph.microsoft.com/v1.0/groups
+ {
-"description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD",
-"displayName": "Contoso_Helpdesk_Administrators",
-"groupTypes": [
-"Unified"
-],
-"mailEnabled": true,
-"securityEnabled": true
-"mailNickname": "contosohelpdeskadministrators",
-"isAssignableToRole": true,
+ "description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD",
+ "displayName": "Contoso_Helpdesk_Administrators",
+ "groupTypes": [
+ "Unified"
+ ],
+ "isAssignableToRole": true,
+ "mailEnabled": true,
+ "mailNickname": "contosohelpdeskadministrators",
+ "securityEnabled": true
} ``` ### Get the role definition
+Use the [List unifiedRoleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API to get a role definition.
+ ```http
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter=displayName+eq+'Helpdesk Administrator'
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions?$filter=displayName+eq+'Helpdesk Administrator'
``` ### Create the role assignment
+Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign the role.
+ ```http
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
{
-"principalId":"{object-id-of-group}",
-"roleDefinitionId":"{role-definition-id}",
-"directoryScopeId":"/"
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "principalId": "{object-id-of-group}",
+ "roleDefinitionId": "{role-definition-id}",
+ "directoryScopeId": "/"
} ``` ### Delete role assignment
+Use the [Delete unifiedRoleAssignment](/graph/api/unifiedroleassignment-delete) API to delete the role assignment.
+ ```http
-DELETE https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments/{role-assignment-id}
+DELETE https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments/{role-assignment-id}
``` ## Next steps
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-view-assignments.md
Previously updated : 05/14/2021 Last updated : 02/04/2022
Get-AzureADMSRoleAssignment -Filter "principalId eq '<object id of group>'"
### Get object ID of the group
+Use the [Get group](/graph/api/group-get) API to get a group.
+ ```http
-GET https://graph.microsoft.com/beta/groups?$filter=displayName+eq+'Contoso_Helpdesk_Administrator'
+GET https://graph.microsoft.com/v1.0/groups?$filter=displayName+eq+'Contoso_Helpdesk_Administrator'
``` ### Get role assignments to a group
+Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get the role assignment.
+ ```http
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=principalId eq
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=principalId eq
``` ## Next steps
active-directory List Role Assignments Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/list-role-assignments-users.md
Previously updated : 08/12/2021 Last updated : 02/04/2022
Follow these steps to list Azure AD roles assigned to a user using PowerShell.
c. Use the [checkMemberObjects](/graph/api/user-checkmemberobjects) API to figure out which of the role-assignable groups the user is a member of. ```powershell
- $uri = "https://graph.microsoft.com/beta/directoryObjects/$userId/microsoft.graph.checkMemberObjects"
+ $uri = "https://graph.microsoft.com/v1.0/directoryObjects/$userId/microsoft.graph.checkMemberObjects"
$userRoleAssignableGroups = (Invoke-MgGraphRequest -Method POST -Uri $uri -Body @{"ids"= $roleAssignableGroups}).value ```
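As an illustrative sketch only (not code from the article), the role-assignable groups found above can then be checked for the roles assigned to them with the same `Invoke-MgGraphRequest` pattern. The sketch assumes an existing `Connect-MgGraph` session with sufficient directory read permissions and reuses the `$userRoleAssignableGroups` variable from the preceding snippet.

```powershell
# Sketch: list the role assignments for each role-assignable group the user belongs to.
# $userRoleAssignableGroups comes from the checkMemberObjects call above.
foreach ($groupId in $userRoleAssignableGroups) {
    $uri = "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?`$filter=principalId eq '$groupId'"
    (Invoke-MgGraphRequest -Method GET -Uri $uri).value
}
```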
Follow these steps to list Azure AD roles assigned to a user using the Microsoft
1. Sign in to the [Graph Explorer](https://aka.ms/ge).
-1. Use [List roleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get roles assigned directly to a user. Add following query to the URL and select **Run query**.
+1. Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get roles assigned directly to a user. Add the following query to the URL and select **Run query**.
- ```HTTP
- GET https://graph.microsoft.com/beta/rolemanagement/directory/roleAssignments?$filter=principalId eq '55c07278-7109-4a46-ae60-4b644bc83a31'
+ ```http
+ GET https://graph.microsoft.com/v1.0/rolemanagement/directory/roleAssignments?$filter=principalId eq '55c07278-7109-4a46-ae60-4b644bc83a31'
``` 3. To get transitive roles assigned to the user, follow these steps.
- a. Use [List groups](/graph/api/group-list) to get the list of all role assignable groups.
+ a. Use the [List groups](/graph/api/group-list) API to get the list of all role-assignable groups.
- ```HTTP
- GET https://graph.microsoft.com/beta/groups?$filter=isAssignableToRole eq true
+ ```http
+ GET https://graph.microsoft.com/v1.0/groups?$filter=isAssignableToRole eq true
```
- b. Pass this list to [checkMemberObjects](/graph/api/user-checkmemberobjects) API to figure out which of the role assignable groups the user is member of.
+ b. Pass this list to the [checkMemberObjects](/graph/api/user-checkmemberobjects) API to figure out which of the role-assignable groups the user is a member of.
- ```HTTP
- POST https://graph.microsoft.com/beta/users/55c07278-7109-4a46-ae60-4b644bc83a31/checkMemberObjects
+ ```http
+ POST https://graph.microsoft.com/v1.0/users/55c07278-7109-4a46-ae60-4b644bc83a31/checkMemberObjects
{ "ids": [ "936aec09-47d5-4a77-a708-db2ff1dae6f2",
Follow these steps to list Azure AD roles assigned to a user using the Microsoft
} ```
- c. Use [List roleAssignments](/graph/api/rbacapplication-list-roleassignments) API to loop through the groups and get the roles assigned to them.
+ c. Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to loop through the groups and get the roles assigned to them.
- ```HTTP
- GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=principalId eq '5425a4a0-8998-45ca-b42c-4e00920a6382'
+ ```http
+ GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=principalId eq '5425a4a0-8998-45ca-b42c-4e00920a6382'
``` ## Next steps
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/manage-roles-portal.md
Previously updated : 07/15/2021 Last updated : 02/04/2022
In this example, a security principal with objectID `f8ca5a85-489a-49a0-b555-0a6
1. Sign in to the [Graph Explorer](https://aka.ms/ge). 2. Select **POST** as the HTTP method from the dropdown.
-3. Select the API version to **beta**.
-4. Use the [roleAssignments](/graph/api/rbacapplication-post-roleassignments) API to assign roles. Add following details to the URL and Request Body and select **Run query**.
+3. Select **v1.0** as the API version.
+4. Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign roles. Add the following details to the URL and Request Body, and then select **Run query**.
-```HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
Content-type: application/json {
In this example, a security principal with objectID `f8ca5a85-489a-49a0-b555-0a6
1. Sign in to the [Graph Explorer](https://aka.ms/ge). 2. Select **POST** as the HTTP method from the dropdown. 3. Select the API version to **beta**.
-4. Add following details to the URL and Request Body and select **Run query**.
+4. Use the [Create unifiedRoleEligibilityScheduleRequest](/graph/api/unifiedroleeligibilityschedulerequest-post-unifiedroleeligibilityschedulerequests) API to assign roles using PIM. Add the following details to the URL and Request Body, and then select **Run query**.
-```HTTP
+```http
POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilityScheduleRequests Content-type: application/json {
Content-type: application/json
} } } ``` In the following example, a security principal is assigned a permanent eligible role assignment to Billing Administrator.
-```HTTP
+```http
POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilityScheduleRequests Content-type: application/json {
Content-type: application/json
} } } ```
-To activate the role assignment, use the following API.
+To activate the role assignment, use the [Create unifiedRoleAssignmentScheduleRequest](/graph/api/unifiedroleassignmentschedulerequest-post-unifiedroleassignmentschedulerequests) API.
-```HTTP
+```http
POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests Content-type: application/json {
Content-type: application/json
"directoryScopeId": "/", "principalId": "f8ca5a85-489a-49a0-b555-0a6d81e56f0d" }- ``` ## Next steps
active-directory Quickstart App Registration Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/quickstart-app-registration-limits.md
Previously updated : 05/14/2021 Last updated : 02/04/2022
$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope $resourceScope -Rol
### Create a custom role
-HTTP request to create the custom role.
+Use the [Create unifiedRoleDefinition](/graph/api/rbacapplication-post-roledefinitions) API to create a custom role.
-POST
-
-``` HTTP
-https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions
``` Body
-```HTTP
+```http
{
- "description":"Can create an unlimited number of application registrations.",
- "displayName":"Application Registration Creator",
- "isEnabled":true,
+ "description": "Can create an unlimited number of application registrations.",
+ "displayName": "Application Registration Creator",
+ "isEnabled": true,
"rolePermissions": [ {
- "resourceActions":
- {
- "allowedResourceActions":
- [
- "microsoft.directory/applications/create"
- "microsoft.directory/applications/createAsOwner"
- ]
- },
- "condition":null
+ "allowedResourceActions":
+ [
+ "microsoft.directory/applications/create"
+ "microsoft.directory/applications/createAsOwner"
+ ]
} ],
- "templateId":"<PROVIDE NEW GUID HERE>",
- "version":"1"
+ "templateId": "<PROVIDE NEW GUID HERE>",
+ "version": "1"
} ``` ### Assign the role
-The role assignment combines a security principal ID (which can be a user or service principal), a role definition (role) ID, and an Azure AD resource scope.
-
-HTTP request to assign a custom role.
-
-POST
+Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign the custom role. The role assignment combines a security principal ID (which can be a user or service principal), a role definition (role) ID, and an Azure AD resource scope.
-``` HTTP
-https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+```http
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
``` Body
-``` HTTP
+```http
{
- "principalId":"<PROVIDE OBJECTID OF USER TO ASSIGN HERE>",
- "roleDefinitionId":"<PROVIDE OBJECTID OF ROLE DEFINITION HERE>",
- "resourceScopes":["/"]
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "principalId": "<PROVIDE OBJECTID OF USER TO ASSIGN HERE>",
+ "roleDefinitionId": "<PROVIDE OBJECTID OF ROLE DEFINITION HERE>",
+ "directoryScopeId": "/"
} ```
active-directory Role Definitions List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/role-definitions-list.md
Previously updated : 07/23/2021 Last updated : 02/04/2022
Follow these instructions to list Azure AD roles using the Microsoft Graph API i
1. Sign in to the [Graph Explorer](https://aka.ms/ge). 2. Select **GET** as the HTTP method from the dropdown.
-3. Select the API version to **beta**.
-4. Add the following query to use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API.
+3. Select **v1.0** as the API version.
+4. Add the following query to use the [List unifiedRoleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API.
- ```HTTP
- GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions
+ ```http
+ GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions
``` 5. Select **Run query** to list the roles. 6. To view permissions of a role, use the following API.
- ```HTTP
- GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter=DisplayName eq 'Conditional Access Administrator'&$select=rolePermissions
+ ```http
+ GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions?$filter=DisplayName eq 'Conditional Access Administrator'&$select=rolePermissions
``` ## Next steps
active-directory View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/view-assignments.md
Previously updated : 09/07/2021 Last updated : 02/04/2022
Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'"
This section describes how to list role assignments with organization-wide scope. To list single-application scope role assignments using Graph API, you can use the operations in [Assign custom roles with Graph API](custom-assign-graph.md).
-HTTP request to get a role assignment for a given role definition.
+Use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments) API to get the role assignment for a specified role definition.
-GET
-
-``` HTTP
-https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments&$filter=roleDefinitionId eq '<template-id-of-role-definition>'
+```http
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=roleDefinitionId eq '<template-id-of-role-definition>'
``` Response
-``` HTTP
+```http
HTTP/1.1 200 OK {
- "id":"CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1",
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"3671d40a-1aac-426c-a0c1-a3821ebd8218",
- "directoryScopeId":"/"
+ "id": "CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1",
+ "principalId": "ab2e1023-bddc-4038-9ac1-ad4843e7e539",
+ "roleDefinitionId": "3671d40a-1aac-426c-a0c1-a3821ebd8218",
+ "directoryScopeId": "/"
} ```
api-management Api Management Howto Developer Portal Customize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-developer-portal-customize.md
To make your portal and its latest changes available to visitors, you need to *p
### Publish from the administrative interface 1. Make sure you saved your changes by selecting the **Save** icon.
-1. In the **Operations** section of the menu, select **Publish website** . This operation may take a few minutes.
+1. In the **Operations** section of the menu, select **Publish website**. This operation may take a few minutes.
:::image type="content" source="media/api-management-howto-developer-portal-customize/publish-portal.png" alt-text="Publish portal" border="false":::
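Publishing can also be scripted rather than done through the menu. The following is only a rough sketch, assuming the Az PowerShell module and the `Microsoft.ApiManagement/service/portalRevisions` Resource Manager API (api-version `2021-08-01`) are available; the subscription, resource group, and service names are placeholders.

```powershell
# Sketch: publish the developer portal by creating a portal revision marked as current.
# Replace the placeholder IDs and names with your own values.
$revisionId = Get-Date -Format "yyyyMMddHHmmss"
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/portalRevisions/${revisionId}?api-version=2021-08-01"
Invoke-AzRestMethod -Method PUT -Path $path -Payload (@{
    properties = @{
        description = "Scripted publish"
        isCurrent   = $true
    }
} | ConvertTo-Json)
```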
api-management Api Management Howto Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-developer-portal.md
Previously updated : 04/15/2021 Last updated : 02/02/2022
As introduced in this article, you can customize and extend the developer portal
Migration to the new developer portal is described in the [dedicated documentation article](developer-portal-deprecated-migration.md).
-## Customization and styling
+## Customization and styling of the managed portal
-The developer portal can be customized and styled through the built-in, drag-and-drop visual editor. See [this tutorial](api-management-howto-developer-portal-customize.md) for more details.
+Your API Management service includes a built-in, always up-to-date, **managed** developer portal. You can access it from the Azure portal interface.
-## <a name="managed-vs-self-hosted"></a> Extensibility
+Customize and style the managed portal through the built-in, drag-and-drop visual editor:
-Your API Management service includes a built-in, always up-to-date, **managed** developer portal. You can access it from the Azure portal interface.
+* Use the visual editor to modify pages, media, layouts, menus, styles, or website settings.
+
+* Take advantage of built-in widgets to add text, images, buttons, and other objects that the portal supports out-of-the-box.
+
+* [Add custom HTML](developer-portal-faq.md#how-do-i-add-custom-html-to-my-developer-portal) - for example, add HTML for a form or to embed a video player. The custom code is rendered in an inline frame (iframe).
+
+See [this tutorial](api-management-howto-developer-portal-customize.md) for example customizations.
+
+## <a name="managed-vs-self-hosted"></a> Extensibility
-If you need to extend it with custom logic, which isn't supported out-of-the-box, you can modify its codebase. The portal's codebase is [available in a GitHub repository](https://github.com/Azure/api-management-developer-portal). For example, you could implement a new widget, which integrates with a third-party support system. When you implement new functionality, you can choose one of the following options:
+In some cases, you might need functionality beyond the customization and styling options supported in the managed developer portal. If you need custom logic that isn't supported out-of-the-box, you can modify the portal's codebase, which is available on [GitHub](https://github.com/Azure/api-management-developer-portal). For example, you could create a new widget to integrate with a third-party support system. When you implement new functionality, you can choose one of the following options:
- **Self-host** the resulting portal outside of your API Management service. When you self-host the portal, you become its maintainer and you are responsible for its upgrades. Azure Support's assistance is limited only to the [basic setup of self-hosted portals](developer-portal-self-host.md). - Open a pull request for the API Management team to merge new functionality to the **managed** portal's codebase.
For extensibility details and instructions, refer to the [GitHub repository](htt
## Next steps
-Learn more about the new developer portal:
+Learn more about the developer portal:
- [Access and customize the managed developer portal](api-management-howto-developer-portal-customize.md) - [Set up self-hosted version of the portal](developer-portal-self-host.md)
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/developer-portal-faq.md
Previously updated : 07/30/2021 Last updated : 02/04/2022
## What if I need functionality that isn't supported in the portal?
-You can open a feature request in the [GitHub repository](https://github.com/Azure/api-management-developer-portal) or [implement the missing functionality yourself](developer-portal-implement-widgets.md). Learn more about developer portal [extensibility](api-management-howto-developer-portal.md#managed-vs-self-hosted).
+You have the following options:
+* For certain situations, you can [add custom HTML](#how-do-i-add-custom-html-to-my-developer-portal) to add functionality to the portal.
+
+* Open a feature request in the [GitHub repository](https://github.com/Azure/api-management-developer-portal).
+
+* [Implement the missing functionality yourself](developer-portal-implement-widgets.md).
+
+Learn more about developer portal [extensibility](api-management-howto-developer-portal.md#managed-vs-self-hosted).
## Can I have multiple developer portals in one API Management service?
You can check the status of the CORS policy in the **Portal overview** section o
![Screenshot that shows where you can check the status of your CORS policy.](media/developer-portal-faq/cors-azure-portal.png)
-Automatically apply the CORS policy by clicking on the **Enable CORS** button.
+Automatically apply the CORS policy by clicking the **Enable CORS** button.
You can also enable CORS manually.
This error is shown when a `GET` call to `https://<management-endpoint-hostname>
If your API Management service is in a VNet, refer to the [VNet connectivity question](#do-i-need-to-enable-additional-vnet-connectivity-for-the-managed-portal-dependencies).
-The call failure may also be caused by an TLS/SSL certificate, which is assigned to a custom domain and is not trusted by the browser. As a mitigation, you can remove the management endpoint custom domain API Management will fall back to the default endpoint with a trusted certificate.
+The call failure may also be caused by a TLS/SSL certificate that's assigned to a custom domain and isn't trusted by the browser. As a mitigation, you can remove the management endpoint custom domain. API Management will then fall back to the default endpoint with a trusted certificate.
## What's the browser support for the portal?
You can generate *user-specific tokens* (including admin tokens) using the [Get
> [!NOTE] > The token must be URL-encoded.
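As a small illustrative sketch (the variable names and token value are placeholders, not part of the article), URL-encoding can be done in PowerShell with the .NET `[uri]::EscapeDataString` method before the token is appended to a query string:

```powershell
# Sketch: URL-encode a shared access token before using it in a URL query string.
$token = "<shared-access-token>"   # placeholder for the generated token
$encodedToken = [uri]::EscapeDataString($token)
$encodedToken
```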
+## How do I add custom HTML to my developer portal?
+
+The managed developer portal includes a **Custom HTML code** widget that enables you to insert HTML code for small portal customizations. For example, use custom HTML to embed a video or to add a form. The portal renders the custom widget in an inline frame (iframe).
+
+1. In the administrative interface for the developer portal, go to the page or section where you want to insert the widget.
+1. Select the grey "plus" (**+**) icon that appears when you hover the pointer over the page.
+1. In the **Add widget** window, select **Custom HTML code**.
+
+ :::image type="content" source="media/developer-portal-faq/add-custom-html-code-widget.png" alt-text="Add widget for custom HTML code":::
+1. Select the "pencil" icon to customize the widget.
+1. Enter a **Width** and **Height** (in pixels) for the widget.
+1. To inherit styles from the developer portal (recommended), select **Apply developer portal styling**.
+ > [!NOTE]
+ > If this setting isn't selected, the embedded elements will be plain HTML controls, without the styles of the developer portal.
+
+ :::image type="content" source="media/developer-portal-faq/configure-html-custom-code.png" alt-text="Configure HTML custom code":::
+1. Replace the sample **HTML code** with your custom content.
+1. When configuration is complete, close the window.
+1. Save your changes, and [republish the portal](api-management-howto-developer-portal-customize.md#publish).
+
+> [!NOTE]
+> Microsoft does not support the HTML code you add in the Custom HTML Code widget.
## Next steps
-Learn more about the new developer portal:
+Learn more about the developer portal:
- [Access and customize the managed developer portal](api-management-howto-developer-portal-customize.md) - [Set up self-hosted version of the portal](developer-portal-self-host.md)
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
Previously updated : 05/10/2021 Last updated : 02/04/2022
The following customers and partners have adopted Form Recognizer across a wide
| Customer/Partner | Description | Link | ||-|-|
-| <font size=5>Acumatica</font>| [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud- and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer prebuilt receipt API and machine-learning capabilities enable Acumatica's customers to file multiple, error-free claims in a matter of seconds. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) |
-|<font size=5> Arkas Logistics</font> | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) provides"complete logistics" services, maintains the continuity of the supply chain, and continues to provide uninterrupted service thanks to its focus on contactless operation and digitalization steps taken during the COVID-19 crisis powered by Microsoft's Form Recognizer solutions. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
-|<font size=5>Automation Anywhere</font>| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation and create a better future for everyone, liberating people from mundane, repetitive tasks, and allowing them more time to use their intellect and creativity with cloud-native robotic process automation (RPA)software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily and complete a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help healthcare providers automatically process and submit the COVID-19 test forms at scale. | [Customer story](https://customers.microsoft.com/story/811346-automation-anywhere-partner-professional-services-azure-cognitive-services) |
-|<font size=5>AvidXchange</font>| [**AvidXchange**](https://www.avidxchange.com/) has developed an account payable automation solution using Form Recognizer. AvidXchange is able to deliver an accounts payable automation solution for the middle market powered by Form Recognizer and machine learning. Customers benefit from faster invoice processing times and increased accuracy to help ensure their suppliers are paid the right amount, at the right time. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
-|<font size=5>Blue Prism</font>| [**Blue Prism**](https://www.blueprism.com/)'s Decipher is a new AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. | [Customer story](https://customers.microsoft.com/story/737482-blue-prism-partner-professional-services-azure) |
-|<font size=5>Chevron</font>| [**Chevron**](https://www.chevron.com//)'s Canada Business Unit is using Form Recognizer together with UiPath's robotic process automation platform to automate the extraction of data and move it into back-end systems for analysis. Subject-matter experts have more time to focus on higher-value activities and information flows. This automation accelerates operational control and enabled the company to analyze its business with greater speed, accuracy, and depth. | [Customer story](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services)|
-|<font size=5>Cross Masters</font>| [**Cross Masters**](https://crossmasters.com/), use of cutting-edge AI technologies isn't only a passion, it's an essential part of their work culture and requires continuous innovation. One of the latest success stories is automation of manual paperwork required to process thousands of invoices. Form Recognizer enabled Cross Masters to develop a unique customized solution to provide clients with market insights from large sets of collected invoices. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
-|<font size=5>Element</font>| [**Element**](https://www.element.com/) is a global business that provides specialist testing, inspection, and certification services to a diverse range of businesses. With over 6500 engaged experts working in more than 200 facilities worldwide, Element is one of the fastest growing companies in the global testing, inspection, and certification sector. When the finance team was forced to work from home during the COVID-19 pandemic, they needed to digitalize its paper processes fast. The creativity of the team and its use of Azure Form Recognizer delivered more than business as usualΓÇöit delivered significant efficiencies. Rather than coding from scratch, the team saw the opportunity to use the Azure Form Recognizer. This solution quickly gave them the functionality they needed and the agility and security of Microsoft Azure Services. Microsoft Azure Logic Apps is used to automate the process of gathering documents from email, storing and scanning them, and updating the system with extracted data and a copy of the invoice. Microsoft Computer Vision uses Optical Character Recognition (OCR) to extract the right data points from the invoice documents. | [Customer story](https://customers.microsoft.com/en-us/story/1414941527887021413-element)|
-|<font size=5>EY</font>| [**EY**](https://ey.com/) organization exists to build a better working world, help to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. The EY Technology team collaborated with Microsoft to build a platform that hastens invoice extraction and contract comparison processes. EY teams use Form Recognizer and the Custom Vision API to automate and improve Optical Character Recognition (OCR) and document-handling processes for its consulting, tax, audit, and transactions services clients. | [Customer story](https://customers.microsoft.com/en-us/story/1404985164224935715-ey-professional-services-azure-form-recognizer)|
-|<font size=5>Financial Fabric</font>| [**Financial Fabric**](https://www.financialfabric.com//), a Microsoft Cloud Solution Provider, delivers data architecture, science, and analytics services to investment managers at hedge funds, family offices, and corporate treasuries. Daily processes involve extracting and normalizing data from thousands of complex financial documents. The company provides custom analytics to help its clients make better investment decisions. Extracting this data previously took days or weeks. With Form Recognizer Financial Fabric has reduced the time it takes to go from extraction to analysis to minutes. | [Customer story](https://customers.microsoft.com/story/financial-fabric-banking-capital-markets-azure)|
-|<font size=5>GEP</font>| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. GEP partnered with Microsoft Form Recognizer to automate the processing of 4,000 invoices a day for a client. The process saved the client tens of thousands of hours in manual effort and improving accuracy, controls, and compliance on a global scale. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
-|<font size=5>HCA Healthcare</font>| [**HCA Healthcare**](https://hcahealthcare.com/) is one of the nation's leading providers of healthcare with over 180 hospitals and 2,000 sites of care located throughout the United States and serving approximately 35 million patients each year. HCA Healthcare is partnering with Microsoft and using Azure Form Recognizer to simplify and improve patient onboarding experience, as well as reduce administrative time spent entering repetitive data into the care center's system. | [Customer story](https://customers.microsoft.com/en-us/story/1404891793134114534-hca-healthcare-healthcare-provider-azure)|
-|<font size=5>Instabase</font>| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine-learning processes to retrieve, organize, identify, and understand complex masses of unorganized data and bring it into business workflows as organized information. The platform provides a repository of prebuilt applications to orchestrate and harness data that can be rapidly extended and enhanced as required. Instabase applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
-|<font size=5>Northern Trust</font>| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth and asset management servicing banks, institutions, families, and individuals. As part of its initiative to digitize alternative-asset servicing, Northern Trust has launched an artificial intelligence-powered solution. The solution extracts unstructured investment data from alternative asset documents and making it accessible and actionable for asset owner clients. Developed in partnership with Microsoft Azure Applied AI Services and business and consulting firm Neudesic, the proprietary solution transforms crucial information such as capital call notices, cash and stock distribution notices, and capital account statements from various unstructured formats into digital, actionable insights for investment teams. The solution also accelerates time-to-value for enterprises building AI solutions.| [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
-|<font size=5>Standard Bank</font>| [**Standard Bank of South Africa**](https://www.standardbank.co.za/southafrica/personal/home) is Africa's largest bank by assets. Headquartered in Johannesburg, South Africa with more than 150 years of history, Standard Bank is deeply involved in trade both on the African continent and beyond. When manual due diligence in cross-border transactions began absorbing too much of staff's time, the bank decided it needed a new way forward. With Form Recognizer, Standard Bank is now poised to reduce its cross-border payments registration and processing time significantly. | [Customer story](https://customers.microsoft.com/en-hk/story/1395059149522299983-standard-bank-of-south-africa-banking-capital-markets-azure-en-south-africa)|
-|<font size=5>WEX</font>| [**WEX**](https://www.wexinc.com/) has developed a tool to process _Explanation of Benefits_ documents using Form Recognizer. Matt Dallahan, Senior Vice President of Product Management and Strategy, said "The technology is truly amazing. I was initially worried that this type of solution would not be feasible, but I soon realized that the Form Recognizer can read virtually any document with accuracy." | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
-|<font size=5>Wilson Allen</font> | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Microsoft Azure Cognitive Services and created a powerful AI solution to help firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Now, its clients can use this data to support business development and foster client relationships. | [Customer story](https://customers.microsoft.com/story/814361-wilson-allen-partner-professional-services-azure)|
-|<font size=5>Zelros</font>| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the Zelros platform to take in forms and seamlessly manage customer enrollment and claims filing. The company partnered its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. Insurers use the Zelros platform to process paperwork far more quickly, ensuring high accuracy and redirecting thousands of hours previously spent on manual data extraction toward better serving customers. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
+| <font size=5>Acumatica</font>| [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud- and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) |
+|<font size=5> Arkas Logistics</font> | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Turkey's leading holding institution, which operates in 23 countries. During the COVID-19 crisis, Arkas Logistics has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
+|<font size=5>Automation Anywhere</font>| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA) software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | [Customer story](https://customers.microsoft.com/story/811346-automation-anywhere-partner-professional-services-azure-cognitive-services) |
+|<font size=5>AvidXchange</font>| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
+|<font size=5>Blue Prism</font>| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. | [Customer story](https://customers.microsoft.com/story/737482-blue-prism-partner-professional-services-azure) |
+|<font size=5>Chevron</font>| [**Chevron**](https://www.chevron.com//) Canada Business Unit is now using Form Recognizer with UiPath's robotic process automation platform to automate the extraction of data and move it into back-end systems for analysis. Subject matter experts have more time to focus on higher-value activities and information flows more rapidly. Accelerated operational control enables the company to analyze its business with greater speed, accuracy, and depth. | [Customer story](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services)|
+|<font size=5>Cross Masters</font>|[**Cross Masters**](https://crossmasters.com/) uses cutting-edge AI technologies not only as a passion, but as an essential part of a work culture requiring continuous innovation. One of the latest success stories is automation of manual paperwork required to process thousands of invoices. Cross Masters used Form Recognizer to develop a unique, customized solution to provide clients with market insights from a large set of collected invoices. Most impressive is the extraction quality and continuous introduction of new features, such as model composing and table labeling. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
+|<font size=5>Element</font>| [**Element**](https://www.element.com/) is a global business that provides specialist testing, inspection, and certification services to a diverse range of businesses. Element is one of the fastest growing companies in the global testing, inspection and certification sector, having over 6,500 engaged experts working in more than 200 facilities across the globe. When the finance team for the Americas was forced to work from home during the COVID-19 pandemic, it needed to digitalize its paper processes fast. The creativity of the team and its use of Azure Form Recognizer delivered more than business as usual: it delivered significant efficiencies. The Element team used the tools in Microsoft Azure so the next phase could be expedited. Rather than coding from scratch, they saw the opportunity to use the Azure Form Recognizer. This integration quickly gave them the functionality they needed, together with the agility and security of Microsoft Azure. Microsoft Azure Logic Apps is used to automate the process of extracting the documents from email, storing them, and updating the system with the extracted data. Computer Vision, part of Azure Cognitive Services, partners with Azure Form Recognizer to extract the right data points from the invoice documents, whether they're a PDF or scanned images. | [Customer story](https://customers.microsoft.com/en-us/story/1414941527887021413-element)|
+|<font size=5>Emaar Properties</font>| [**Emaar Properties**](https://www.emaar.com/en/) operates Dubai Mall, the world's most-visited retail and entertainment destination. Each year, the Dubai Mall draws more than 80 million visitors. To enrich the shopping experience, Emaar Properties offers a unique rewards program through a dedicated mobile app. Loyalty program points are earned via submitted receipts. Emaar Properties uses Microsoft Azure Form Recognizer to process submitted receipts and has achieved 92 percent reading accuracy. | [Customer story](https://customers.microsoft.com/en-us/story/1459754150957690925-emaar-retailers-azure-en-united-arab-emirates)|
+|<font size=5>EY</font>| [**EY**](https://ey.com/) (Ernst & Young Global Limited) is a multinational professional services network that helps to create long-term value for clients and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries help clients grow, transform, and operate. EY teams work across assurance, consulting, law, strategy, tax, and transactions to find solutions for complex issues facing our world today. The EY Technology team collaborated with Microsoft to build a platform that hastens invoice extraction and contract comparison processes. Azure Form Recognizer and Custom Vision partnered to enable EY teams to automate and improve the OCR and document handling processes for its consulting, tax, audit, and transactions services clients. | [Customer story](https://customers.microsoft.com/en-us/story/1404985164224935715-ey-professional-services-azure-form-recognizer)|
+|<font size=5>Financial Fabric</font>| [**Financial Fabric**](https://www.financialfabric.com//), a Microsoft Cloud Solution Provider, delivers data architecture, science, and analytics services to investment managers at hedge funds, family offices, and corporate treasuries. Its daily processes involve extracting and normalizing data from thousands of complex financial documents, such as bank statements and legal agreements. The company then provides custom analytics to help its clients make better investment decisions. Extracting this data previously took days or weeks. By using Form Recognizer, Financial Fabric has reduced the time it takes to go from extraction to analysis to just minutes. | [Customer story](https://customers.microsoft.com/story/financial-fabric-banking-capital-markets-azure)|
+|<font size=5>GEP</font>| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. "At GEP, we're seeing AI and automation make a profound impact on procurement and the supply chain. By combining our AI solution with Microsoft Form Recognizer, we automated the processing of 4,000 invoices a day for a client... It saved them tens of thousands of hours of manual effort, while improving accuracy, controls and compliance on a global scale." Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
+|<font size=5>HCA Healthcare</font>| [**HCA Healthcare**](https://hcahealthcare.com/) is one of the nation's leading providers of healthcare with over 180 hospitals and 2,000 sites-of-care located throughout the United States and serving approximately 35 million patients each year. Currently, they're using Azure Form Recognizer to simplify and improve the patient onboarding experience and reducing administrative time spent entering repetitive data into the care center's system. | [Customer story](https://customers.microsoft.com/en-us/story/1404891793134114534-hca-healthcare-healthcare-provider-azure)|
+|<font size=5>Icertis</font>| [**Icertis**](https://www.icertis.com/) is a Software as a Service (SaaS) provider headquartered in Bellevue, Washington. Icertis digitally transforms the contract management process with a cloud-based, AI-powered, contract lifecycle management solution. Azure Form Recognizer enables Icertis Contract Intelligence to take key-value pairs embedded in contracts and create structured data understood and operated upon by machine algorithms. Through these and other powerful Azure Cognitive and AI services, Icertis empowers customers in every industry to improve business in multiple ways: optimized manufacturing operations, added agility to retail strategies, reduced risk in IT services, and faster delivery of life-saving pharmaceutical products. | [Blog](https://cloudblogs.microsoft.com/industry-blog/en-in/unicorn/2022/01/12/how-icertis-built-a-contract-management-solution-using-azure-form-recognizer/)|
+|<font size=5>Instabase</font>| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. Instabase then brings this data into business workflows as organized information. The platform provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. Instabase applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
+|<font size=5>Northern Trust</font>| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth management, asset servicing, asset management, and banking to corporations, institutions, families, and individuals. As part of its initiative to digitize alternative asset servicing, Northern Trust has launched an AI-powered solution to extract unstructured investment data from alternative asset documents and make it accessible and actionable for asset-owner clients. Microsoft Azure Applied AI services accelerate time-to-value for enterprises building AI solutions. This proprietary solution transforms crucial information such as capital call notices, cash and stock distribution notices, and capital account statements from various unstructured formats into digital, actionable insights for investment teams. | [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
+ |<font size=5>Standard Bank</font>| [**Standard Bank of South Africa**](https://www.standardbank.co.za/southafrica/personal/home) is Africa's largest bank by assets. Standard Bank is headquartered in Johannesburg, South Africa, and has more than 150 years of trade experience in Africa and beyond. When manual due diligence in cross-border transactions began absorbing too much staff time, the bank decided it needed a new way forward. Standard Bank uses Form Recognizer to significantly reduce its cross-border payments registration and processing time. | [Customer story](https://customers.microsoft.com/en-hk/story/1395059149522299983-standard-bank-of-south-africa-banking-capital-markets-azure-en-south-africa)|
+|<font size=5>WEX</font>| [**WEX**](https://www.wexinc.com/) has developed a tool to process Explanation of Benefits documents using Form Recognizer. "The technology is truly amazing. I was initially worried that this type of solution wouldn't be feasible, but I soon realized that Form Recognizer can read virtually any document with accuracy." Matt Dallahan, Senior Vice President of Product Management and Strategy | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
+|<font size=5>Wilson Allen</font> | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Microsoft Azure Cognitive Services and created a powerful AI solution that helps firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. | [Customer story](https://customers.microsoft.com/story/814361-wilson-allen-partner-professional-services-azure)|
+|<font size=5>Zelros</font>| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the Zelros platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. When insurers use the Zelros platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: 'Quickstart: Connect an existing Kubernetes cluster to Azure Arc' description: "In this quickstart, learn how to connect an Azure Arc-enabled Kubernetes cluster."-- Last updated 09/09/2021
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/resource-graph-samples.md
description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernete
Last updated 01/20/2022 -- # Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
If you use a `bucket` source instead of a `git` source, here are the bucket-spec
| `--bucket-insecure` | Boolean | Communicate with a `bucket` without TLS. If not provided, assumed false; if provided, assumed true. | ### Local secret for authentication with source
-You can use a local Kubernetes secret for authentication with the `git` or `bucket` source.
+You can use a local Kubernetes secret for authentication with a `git` or `bucket` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration.
| Parameter | Format | Notes | | - | - | - | | `--local-auth-ref` `--local-ref` | String | Local reference to a Kubernetes secret in the Flux configuration namespace to use for authentication with the source. |
-For HTTPS authentication, you create a secret (in the same namespace where the Flux configuration will be) with the username and password/key:
+For HTTPS authentication, you create a secret with the `username` and `password`:
```console kubectl create ns flux-config kubectl create secret generic -n flux-config my-custom-secret --from-literal=username=<my-username> --from-literal=password=<my-password-or-key> ```
-For SSH authentication, you create a secret (in the same namespace where the Flux configuration will be) with both the `identity` and `known_hosts` fields:
+For SSH authentication, you create a secret with the `identity` and `known_hosts` fields:
```console kubectl create ns flux-config
For both cases, when you create the Flux configuration, use `--local-auth-ref my
```console az k8s-configuration flux create -g <cluster_resource_group> -c <cluster_name> -n <config_name> -t connectedClusters --scope cluster --namespace flux-config -u <git-repo-url> --kustomization name=kustomization1 --local-auth-ref my-custom-secret ```
+Learn more about using a local Kubernetes secret with these authentication methods:
+* [Git repository HTTPS authentication](https://fluxcd.io/docs/components/source/gitrepositories/#https-authentication)
+* [Git repository HTTPS self-signed certificates](https://fluxcd.io/docs/components/source/gitrepositories/#https-self-signed-certificates)
+* [Git repository SSH authentication](https://fluxcd.io/docs/components/source/gitrepositories/#ssh-authentication)
+* [Bucket static authentication](https://fluxcd.io/docs/components/source/buckets/#static-authentication)
>[!NOTE] >If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli#4a-connect-using-an-outbound-proxy-server).
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/platform/conceptual-custom-locations.md
Last updated 10/13/2021 -- description: "This article provides a conceptual overview of Custom Locations capability of Azure Arc"
azure-functions Functions Proxies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-proxies.md
Standard Functions billing applies to proxy executions. For more information, se
This section shows you how to create a proxy in the Functions portal.
+> [!NOTE]
+> Not all languages and operating system combinations support in-portal editing. If you're unable to create a proxy in the portal, you can instead manually create a _proxies.json_ file in the root of your function app project folder. To learn more about portal editing support, see [Language support details](functions-create-function-app-portal.md#language-support-details).
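If you do create the file by hand, a rough sketch of its shape follows; the proxy name, route template, and backend address are hypothetical placeholders, and the PowerShell here-string simply writes the JSON to the project root.

```powershell
# Sketch: write a minimal proxies.json to the root of the function app project.
# The proxy name, route template, and backend URI below are placeholders.
@'
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "contoso-proxy": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/{rest}"
      },
      "backendUri": "https://contoso.example.com/api/{rest}"
    }
  }
}
'@ | Set-Content -Path .\proxies.json
```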
+ 1. Open the [Azure portal], and then go to your function app. 2. In the left pane, select **New proxy**. 3. Provide a name for your proxy.
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/geocoding-coverage.md
The ability to geocode in a country/region is dependent upon the road data cover
| Greenland | | | | ✓ | ✓ | | Grenada | | | ✓ | ✓ | ✓ | | Guadeloupe | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Guam | ✓ | ✓ | ✓ | ✓ | ✓ |
| Guatemala | | | ✓ | ✓ | ✓ | | Guyana | | | ✓ | ✓ | ✓ | | Haiti | | | ✓ | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| Cook Islands | | | | ✓ | ✓ | | Fiji | | | ✓ | ✓ | ✓ | | French Polynesia | | | ✓ | ✓ | ✓ |
+| Guam | ✓ | ✓ | ✓ | ✓ | ✓ |
| Heard Island & McDonald Islands | | | | ✓ | ✓ | | Hong Kong SAR | ✓ | ✓ | ✓ | ✓ | ✓ | | India | ✓ | | ✓ | ✓ | ✓ |
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-support.md
Jump to a resource provider namespace:
> | alertrules | Yes | Yes | > | autoscalesettings | Yes | Yes | > | components | Yes | Yes |
+> | components / analyticsItems | No | No |
+> | components / favorites | No | No |
> | components / linkedStorageAccounts | No | No |
+> | components / myAnalyticsItems | No | No |
+> | components / pricingPlans | No | No |
> | components / ProactiveDetectionConfigs | No | No |
+> | dataCollectionEndpoints | No | No |
+> | dataCollectionRuleAssociations | No | No |
+> | dataCollectionRules | Yes | Yes |
> | diagnosticSettings | No | No |
> | guestDiagnosticSettings | Yes | Yes |
> | guestDiagnosticSettingsAssociation | Yes | Yes |
> | logprofiles | Yes | Yes |
> | metricAlerts | Yes | Yes |
+> | myWorkbooks | No | No |
> | privateLinkScopes | Yes | Yes |
> | privateLinkScopes / privateEndpointConnections | No | No |
> | privateLinkScopes / scopedResources | No | No |
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Jump to a resource provider namespace:
> | alertrules | Yes |
> | autoscalesettings | Yes |
> | components | Yes |
+> | components / analyticsItems | No |
+> | components / favorites | No |
> | components / linkedStorageAccounts | No |
+> | components / myAnalyticsItems | No |
+> | components / pricingPlans | No |
> | components / ProactiveDetectionConfigs | No |
+> | dataCollectionEndpoints | No |
+> | dataCollectionRuleAssociations | No |
+> | dataCollectionRules | Yes |
> | diagnosticSettings | No |
> | guestDiagnosticSettings | Yes |
> | guestDiagnosticSettingsAssociation | Yes |
> | logprofiles | Yes |
> | metricAlerts | Yes |
+> | myWorkbooks | No |
> | privateLinkScopes | Yes |
> | privateLinkScopes / privateEndpointConnections | No |
> | privateLinkScopes / scopedResources | No |
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
# SDKs and REST APIs
-Azure Communication Services APIs are organized into eight areas. Most areas have fully open-sourced SDKs programmed against published REST APIs that you can use directly over the Internet. The Calling SDK uses proprietary network interfaces and is closed-source.
+Azure Communication Services capabilities are conceptually organized into discrete areas based on their functionality. Most areas have fully open-sourced SDKs programmed against published REST APIs that you can use directly over the Internet. The Calling SDK uses proprietary network interfaces and is closed-source.
In the tables below we summarize these areas and availability of REST APIs and SDK libraries. We note if APIs and SDKs are intended for end-user clients or trusted service environments. APIs such as SMS should not be directly accessed by end-user devices in low trust environments. Development of Calling and Chat applications can be accelerated by the [Azure Communication Services UI library](./ui-library/ui-library-overview.md). The customizable UI library provides open-source UI components for Web and mobile apps, and a Microsoft Teams theme.
-## REST APIs
-Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
-
## SDKs

| Assembly | Protocols| Environment | Capabilities|
|--|--|--|--|
Publishing locations for individual SDK packages are detailed below.
| UI Library| [npm](https://www.npmjs.com/package/@azure/communication-react) | - | - | - | - | - | [GitHub](https://github.com/Azure/communication-ui-library), [Storybook](https://azure.github.io/communication-ui-library/?path=/story/overview--page) |
| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/) | [docs](/java/api/com.azure.android.communication.calling) | - |
-The mapping between friendly assembly names and namespaces is:
-
-| Assembly | Namespaces |
-||--|
-| Azure Resource Manager | Azure.ResourceManager.Communication|
-| Common | Azure.Communication.Common |
-| Identity | Azure.Communication.Identity |
-| Phone numbers| Azure.Communication.PhoneNumbers |
-| SMS| Azure.Communication.SMS|
-| Chat | Azure.Communication.Chat |
-| Calling| Azure.Communication.Calling|
-| Calling Server | Azure.Communication.CallingServer|
-| Network Traversal| Azure.Communication.NetworkTraversal |
-| UI Library | Azure.Communication.Calling|
-
-## SDK platform support details
+### SDK platform support details
-### iOS and Android
+#### iOS and Android
- Communication Services iOS SDKs target iOS version 13+, and Xcode 11+.
- Android Java SDKs target Android API level 21+ and Android Studio 4.0+
-### .NET
+#### .NET
Except for Calling, Communication Services packages target .NET Standard 2.0, which supports the platforms listed below.
The Calling package supports UWP apps built with .NET Native or C++/WinRT on:
- Windows 10 10.0.17763
- Windows Server 2019 10.0.17763
+## REST APIs
+Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
+
+### REST API Throttles
+Certain REST APIs and corresponding SDK methods have throttle limits you should be mindful of. Exceeding these throttle limits will trigger a `429 - Too Many Requests` error response. These limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+| API| Throttle|
+|||
+| [All Search Telephone Number Plan APIs](/rest/api/communication/phonenumbers) | 4 requests/day|
+| [Purchase Telephone Number Plan](/rest/api/communication/phonenumbers/purchasephonenumbers) | 1 purchase a month|
+| [Send SMS](/rest/api/communication/sms/send) | 200 requests/minute |
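When a request is throttled, back off and retry later. The following is a minimal, generic sketch (placeholder URL and token, not a specific Communication Services call) of detecting a throttled response from a shell script:

```console
# Check the HTTP status code of a request; 429 means the call was throttled.
status=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer <access-token>" "<request-url>")
if [ "$status" = "429" ]; then
  echo "Request throttled; wait before retrying (honor a Retry-After header if one is returned)."
fi
```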
+
## API stability expectations

> [!IMPORTANT]
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 01/24/2022 Last updated : 02/06/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **PREVIEW - Storage account with potentially sensitive data has been detected with a publicly exposed container**<br>(Storage.Blob_OpenACL) | The access policy of a container in your storage account was modified to allow anonymous access. This might lead to a data breach if the container holds any sensitive data. This alert is based on analysis of Azure activity log.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | Privilege Escalation | Medium | | **Authenticated access from a Tor exit node**<br>(Storage.Blob_TorAnomaly<br>Storage.Files_TorAnomaly) | One or more storage container(s) / file share(s) in your storage account were successfully accessed from an IP address known to be an active exit node of Tor (an anonymizing proxy). Threat actors use Tor to make it difficult to trace the activity back to them. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial access | High/Medium | | **Access from an unusual location to a storage account**<br>(Storage.Blob_GeoAnomaly<br>Storage.Files_GeoAnomaly) | Indicates that there was a change in the access pattern to an Azure Storage account. Someone has accessed this account from an IP address considered unfamiliar when compared with recent activity. Either an attacker has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. An example of the latter is remote maintenance from a new application or developer.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exploitation | Low |
-| **Unusual unauthenticated access to a storage container**<br>(Storage.Blob_AnonymousAccessAnomaly) | This storage account was accessed without authentication, which is a change in the common access pattern. Read access to this container is usually authenticated. This might indicate that a threat actor was able to exploit public read access to storage container(s) in this storage account(s).<br>Applies to: Azure Blob Storage | Collection | Medium |
+| **Unusual unauthenticated access to a storage container**<br>(Storage.Blob_AnonymousAccessAnomaly) | This storage account was accessed without authentication, which is a change in the common access pattern. Read access to this container is usually authenticated. This might indicate that a threat actor was able to exploit public read access to storage container(s) in this storage account(s).<br>Applies to: Azure Blob Storage | Collection | Low |
| **Potential malware uploaded to a storage account**<br>(Storage.Blob_MalwareHashReputation<br>Storage.Files_MalwareHashReputation) | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Azure's hash reputation analysis for malware](defender-for-storage-introduction.md#what-kind-of-alerts-does-microsoft-defender-for-storage-provide).<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | Lateral Movement | High | | **Publicly accessible storage containers successfully discovered**<br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery) | A successful discovery of publicly open storage container(s) in your storage account was performed in the last hour by a scanning script or tool.<br><br> This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> Γ£ö Azure Blob Storage<br> Γ£û Azure Files<br> Γ£û Azure Data Lake Storage Gen2 | Collection | Medium | | **Publicly accessible storage containers unsuccessfully scanned**<br>(Storage.Blob_OpenContainersScanning.FailedAttempt) | A series of failed attempts to scan for publicly open storage containers were performed in the last hour. <br><br>This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> Γ£ö Azure Blob Storage<br> Γ£û Azure Files<br> Γ£û Azure Data Lake Storage Gen2 | Collection | Low |
defender-for-cloud Defender For Storage Exclude https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-storage-exclude.md
Title: Microsoft Defender for Storage - excluding a storage account description: Excluding a specific storage account from a subscription with Microsoft Defender for Storage enabled. Previously updated : 01/16/2022 Last updated : 02/06/2022 # Exclude a storage account from Microsoft Defender for Storage protections
To exclude specific storage accounts from Microsoft Defender for Storage when th
- ## Exclude an Azure Databricks Storage account
-When Defender for Storage is enabled on a subscription, it's not currently possible to exclude a Storage account if it belongs to an Azure Databricks workspace.
+### Exclude an active Databricks workspace
+
+Microsoft Defender for Storage can exclude specific active Databricks workspace storage accounts when the plan is already enabled on a subscription.
+
+**To exclude an active Databricks workspace**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Azure Databricks** > **`Your Databricks workspace`** > **Tags**.
+
+1. In the Name field, enter `AzDefenderPlanAutoEnable`.
+
+1. In the Value field, enter `off`.
+
+1. Select **Apply**.
+
+ :::image type="content" source="media/defender-for-storage-exclude/workspace-exclude.png" alt-text="Screenshot showing the location, and how to apply the tag to your Azure Databricks account.":::
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings** > **`Your subscription`**.
+
+1. Toggle the Defender for Storage plan to **Off**.
+
+ :::image type="content" source="media/defender-for-storage-exclude/storage-off.png" alt-text="Screenshot showing how to switch the Defender for Storage plan to off.":::
+
+1. Select **Save**.
+
+1. Toggle the Defender for Storage plan to **On**.
+
+1. Select **Save**.
+
+The tag will be inherited by the Storage account of the Databricks workspace and will prevent Defender for Storage from turning on.
+
+> [!Note]
+> Tags can't be added directly to the Databricks Storage account, or its Managed Resource Group.
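If you prefer to script this step, the same tag can be applied to the workspace from the command line. A minimal sketch, assuming a recent Azure CLI version that includes the `az tag update` command and a placeholder workspace resource ID:

```console
# Merge the exclusion tag onto the Databricks workspace; it is inherited by the workspace's storage account.
az tag update \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Databricks/workspaces/<workspace-name>" \
  --operation Merge \
  --tags AzDefenderPlanAutoEnable=off
```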
+
+### Prevent auto-enabling on a new Databricks workspace storage account
+
+When you create a new Databricks workspace, you can add a tag that prevents Microsoft Defender for Storage from automatically enabling on the workspace's storage account.
-Instead, you can disable Defender for Storage on the subscription and enable Defender for Storage for each Azure Storage account from the **Security** page:
+**To prevent auto-enabling on a new Databricks workspace storage account**:
+ 1. Follow [these steps](/azure/databricks/scenarios/quickstart-create-Databricks-workspace-portal?tabs=azure-portal) to create a new Azure Databricks workspace.
+
+ 1. In the Tags tab, enter a tag named `AzDefenderPlanAutoEnable`.
+
+ 1. Enter the value `off`.
+
+ :::image type="content" source="media/defender-for-storage-exclude/tag-off.png" alt-text="Screenshot that shows how to create a tag in the Databricks workspace.":::
+1. Continue following the instructions to create your new Azure Databricks workspace.
+
+The Databricks workspace's storage account will inherit the tag, which will prevent Defender for Storage from turning on automatically.
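If you create workspaces programmatically, the tag can be supplied at creation time instead. A minimal sketch, assuming the Azure CLI `databricks` extension is installed and that its `workspace create` command accepts the common `--tags` argument (placeholder names throughout):

```console
# Create the workspace with the exclusion tag already applied.
az databricks workspace create \
  --resource-group <resource-group> \
  --name <workspace-name> \
  --location <region> \
  --sku standard \
  --tags AzDefenderPlanAutoEnable=off
```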
## Next steps
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/kubernetes-workload-protections.md
Defender for Cloud offers more container security features if you enable Microso
| Release state: | General availability (GA) |
| Pricing: | Free for AKS workloads<br>For Azure Arc-enabled Kubernetes or EKS, it's billed according to the Microsoft Defender for Containers plan |
| Required roles and permissions: | **Owner** or **Security admin** to edit an assignment<br>**Reader** to view the recommendations |
-| Environment requirements: | Kubernetes v1.14 (or higher) is required<br>No PodSecurityPolicy resource (old PSP model) on the clusters<br>Windows nodes are not supported |
+| Environment requirements: | Kubernetes v1.14 (or newer) is required<br>No PodSecurityPolicy resource (old PSP model) on the clusters<br>Windows nodes are not supported |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) |
| | |
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/permissions.md
In addition to the built-in roles, there are two roles specific to Defender for
The following table displays roles and allowed actions in Defender for Cloud.
-| **Action** | **Action** | [Security Reader](../role-based-access-control/built-in-roles.md#security-reader) / <br> [Reader](../role-based-access-control/built-in-roles.md#reader) | [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) / [Owner](../role-based-access-control/built-in-roles.md#owner) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| **Action** | [Security Reader](../role-based-access-control/built-in-roles.md#security-reader) / <br> [Reader](../role-based-access-control/built-in-roles.md#reader) | [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) / [Owner](../role-based-access-control/built-in-roles.md#owner) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
|:-|:-:|:-:|:-:|:-:|:-:|
| | | | **(Resource group level)** | **(Subscription level)** | **(Subscription level)** |
| Add/assign initiatives (including regulatory compliance standards) | - | - | - | ✔ | ✔ |
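For example, to give someone read-only visibility into Defender for Cloud findings, assign the built-in Security Reader role at subscription scope. A minimal Azure CLI sketch with a hypothetical user and placeholder subscription ID:

```console
# Assign the Security Reader built-in role at subscription scope.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Security Reader" \
  --scope "/subscriptions/<subscription-id>"
```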
defender-for-iot Concept Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/concept-key-concepts.md
Using custom, condition-based alert triggering and messaging helps pinpoint spec
For a complete list of supported protocols, see [Supported Protocols](concept-supported-protocols.md#supported-protocols).
+
+### Secure development environment
+
+The Horizon ODE enables development of custom or proprietary protocols that cannot be shared outside an organization, for example, because of legal regulations or corporate policies.
+
+Develop dissector plugins without:
+
+- revealing any proprietary information about how your protocols are defined.
+
+- sharing any of your sensitive PCAPs.
+
+- violating compliance regulations.
+
+Contact <ms-horizon-support@microsoft.com> for information about developing protocol plugins.
+
+### Customization and localization
+
+The SDK supports various customization options, including:
+
+ - Text for function codes.
+
+ - Full localization text for alerts, events, and protocol parameters.
+
+ :::image type="content" source="media/references-horizon-sdk/localization.png" alt-text="View fully localized alerts.":::
+
+## Horizon architecture
+
+The architectural model includes three product layers.
++
+### Defender for IoT platform layer
+
+Enables immediate integration and real-time monitoring of custom dissector plugins in the Defender for IoT platform, without the need to upgrade the Defender for IoT platform version.
+
+### Defender for IoT services layer
+
+Each service is designed as a pipeline, decoupled from a specific protocol, enabling more efficient, independent development.
+
+Services listen for traffic on the pipeline. They interact with the plugin data and the traffic captured by the sensors to index deployed protocols and analyze the traffic payload.
+
+### Custom dissector layer
+
+Enables creation of plugins using the Defender for IoT proprietary SDK (including C++ implementation and JSON configuration) to:
+
+- Define how to identify the protocol
+
+- Define how to map the fields you want to extract from the traffic, and extract them
+
+- Define how to integrate with the Defender for IoT services
+
+ :::image type="content" source="media/references-horizon-sdk/layers.png" alt-text="The built-in layers.":::
+
+Defender for IoT provides basic dissectors for common protocols. You can build your dissectors on top of these protocols.
++
## What is an Inventory Device
The Defender for IoT Device inventory displays an extensive range of asset attributes that are detected by sensors monitoring the organization's networks and managed endpoints.
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/concept-supported-protocols.md
This section lists protocols that are detected using passive monitoring.
**Medical:** ASTM, HL7
-**Microsoft:** Horizon community dissectors, Horizon proprietary dissectors (developed by customers). See [Horizon proprietary protocol dissector](references-horizon-sdk.md) for details.
+**Microsoft:** Horizon community dissectors, Horizon proprietary dissectors (developed by customers).
**Mitsubishi:** Melsoft / Melsec (Mitsubishi Electric)
We invite you to join our community here: <horizon-community@microsoft.com>
## Next steps
-Learn more about the [Horizon proprietary protocol dissector](references-horizon-sdk.md).
-
-Check out our [Horizon API](references-horizon-api.md).
+[Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
defender-for-iot How To Create Attack Vector Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-create-attack-vector-reports.md
Title: Create attack vector reports description: Attack vector reports provide a graphical representation of a vulnerability chain of exploitable devices. Previously updated : 11/09/2021 Last updated : 02/03/2022
Attack vector reports provide a graphical representation of a vulnerability chai
Working with the attack vector lets you evaluate the effect of mitigation activities in the attack sequence. You can then determine, for example, if a system upgrade disrupts the attacker's path by breaking the attack chain, or if an alternate attack path remains. This information helps you prioritize remediation and mitigation activities.
-
> [!NOTE]
> Administrators and security analysts can perform the procedures described in this section.

## Create an attack vector report
-To create an attack vector simulation:
+This section describes how to create Attack Vector reports.
-1. Select :::image type="content" source="media/how-to-generate-reports/plus.png" alt-text="Plus sign":::on the side menu to add a Simulation.
+**To create an attack vector simulation:**
- :::image type="content" source="media/how-to-generate-reports/vector.png" alt-text="The attack vector simulation.":::
+1. Select **Attack vector** from the sensor side menu.
+1. Select **Add simulation**.
2. Enter simulation properties:
To create an attack vector simulation:
- **Maximum vectors**: The maximum number of vectors in a single simulation.
- - **Show in Device map**: Show the attack vector as a filter on the device map.
+ - **Show in Device map**: Show the attack vector as a group in the Device map.
- **All Source devices**: The attack vector will consider all devices as an attack source.
To create an attack vector simulation:
- **Exclude Subnets**: Specified subnets will be excluded from the attack vector simulation.
-3. Select **Add Simulation**. The simulation will be added to the simulations list.
-
- :::image type="content" source="media/how-to-generate-reports/new-simulation.png" alt-text="Add a new simulation.":::
-
-4. Select :::image type="icon" source="media/how-to-generate-reports/edit-a-simulation-icon.png" border="false"::: if you want to edit the simulation.
-
- Select :::image type="icon" source="media/how-to-generate-reports/delete-simulation-icon.png" border="false"::: if you want to delete the simulation.
-
- Select :::image type="icon" source="media/how-to-generate-reports/make-a-favorite-icon.png" border="false"::: if you want to mark the simulation as a favorite.
+3. Select **Save**.
+1. Select the saved report from the Attack vector page and review:
+ - network attack paths and insights
+ - a risk score
+ - source and target devices
+ - a graphical representation of attack vectors
-5. A list of attack vectors appears and includes vector score (out of 100), attack source device, and attack target device. Select a specific attack for graphical depiction of attack vectors.
+ :::image type="content" source="media/how-to-generate-reports/sample-attack-vectors.png" alt-text="Screenshot of the Attack vectors report.":::
- :::image type="content" source="media/how-to-generate-reports/sample-attack-vectors.png" alt-text="Attack vectors.":::
## See also
defender-for-iot How To Create Risk Assessment Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-create-risk-assessment-reports.md
Title: Create risk assessment reports description: Gain insight into network risks detected by individual sensors or an aggregate view of risks detected by all sensors. Previously updated : 11/09/2021 Last updated : 02/03/2022
Overall network security score is generated in each report. The score represents
Risk Assessment scores are based on information learned from packet inspection, behavioral modeling engines, and a SCADA-specific state machine design.
-**Secure Devices** are devices with a security score above 90 %.
+**Secure Devices** are devices with a security score above 90%.
-**Devices Needing Improvement**: Devices with a security score between 70 percent and 89 %.
+**Devices Needing Improvement**: Devices with a security score between 70% and 89%.
-**Vulnerable Devices** are devices with a security score below 70 %.
+**Vulnerable Devices** are devices with a security score below 70%.
### About backup and anti-virus servers
-The risk assessment score may be negatively impacted if you do not define backup and anti-virus server addresses in your sensor. Adding these addresses improves your score. By default these addresses are not defined.
+The risk assessment score may be negatively impacted if you don't define backup and anti-virus server addresses in your sensor. Adding these addresses improves your score. By default these addresses aren't defined.
The Risk Assessment report cover page will indicate if backup servers and anti-virus servers are not defined.

**To add servers:**
The Risk Assessment report cover page will indicate if backup servers and anti-v
1. Select **System Settings** and then select **System Properties**.
1. Select **Vulnerability Assessment** and add the addresses to the **backup_servers** and **AV_addresses** fields. Use commas to separate multiple addresses.
1. Select **Save**.
-## Create risk assessment reports
-
-Create a PDF risk assessment report. The report name is automatically generated as risk-assessment-report-1.pdf. The number is updated for each new report you create. The time and day of creation are displayed.
-
-### Create a sensor risk assessment report
-Create a risk assessment report based on detections made by the sensor you are logged into.
+## Create risk assessment reports
-To create a report:
+Create a risk assessment report based on detections made by the sensor you are logged into. The report name is automatically generated as risk-assessment-report-1.pdf. The number is updated for each new report you create. The time and day of creation are displayed.
-1. Login to the sensor console.
-1. Select **Risk Assessment** on the side menu.
-1. Select **Generate Report**. The report appears in the Archived Reports section.
-1. Select the report from the Archived Reports section to download it.
+**To create a report:**
+1. Sign in to the sensor console.
+1. Select **Risk assessment** on the side menu.
+1. Select **Generate report**. The report appears in the Saved Reports section.
+1. Select the report from the Saved Reports section to download it.
-To import a company logo:
+**To import a company logo:**
-- Select **Import Logo**.
+1. Select **Import logo**.
+1. Choose a logo to add to the header of your Risk assessment reports.
### Create an on-premises management console risk assessment report
-Create a risk assessment report based on detections made by the any of the sensors managed by your on-premises management console.
+Create a risk assessment report based on detections made by sensors that are managed by your on-premises management console.
-To create a report:
+**To create a report:**
1. Select **Risk Assessment** on the side menu.
-
2. Select a sensor from the **Select sensor** drop-down list.
-
3. Select **Generate Report**.
-
4. Select **Download** from the **Archived Reports** section.
-To import a company logo:
-
-- Select **Import Logo**.
+**To import a company logo:**
+1. Select **Import logo**.
+1. Choose a logo to add to the header of your Risk assessment reports.
## See also
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Alerts provide information about an extensive range of security and operational
- Suspicious traffic detected
- Relevant information is sent to partner systems when forwarding rules are created.

## About Forwarding rules and certificates
In these cases, the sensor or on-premises management console is the client and i
Your Defender for IoT system was set up to either validate certificates or ignore certificate validation. See [About certificate validation](how-to-deploy-certificates.md#about-certificate-validation) for information about enabling and disabling validation.
-If validation is enabled and the certificate can not be verified, communication between Defender for IoT and the server will be halted. The sensor will display an error message indicating the validation failure. If the validation is disabled and the certificate is not valid, communication will still be carried out.
+If validation is enabled and the certificate cannot be verified, communication between Defender for IoT and the server will be halted. The sensor will display an error message indicating the validation failure. If the validation is disabled and the certificate isn't valid, communication will still be carried out.
The following Forwarding rules allow encryption and certificate validation:
- Syslog CEF
The following Forwarding rules allow encryption and certificate validation:
## Create forwarding rules
-**To create a new forwarding rule on a sensor**:
+**To create a new forwarding rule**:
1. Sign in to the sensor.
1. Select **Forwarding** on the side menu.
-1. Select **Create Forwarding Rule**.
-
- :::image type="content" source="media/how-to-work-with-alerts-sensor/create-forwarding-rule-screen.png" alt-text="Create a Forwarding Rule icon.":::
-
-1. Enter a name for the forwarding rule.
-
-1. Select the severity level.
-
- This is the minimum incident to forward, in terms of severity level. For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. Levels are predefined.
-
-1. Select any protocols to apply.
-
- Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all.
-
-1. Select which engines the rule should apply to.
+1. Select **Create new rule**.
+1. Add a rule name.
+1. Define rule conditions:
+ - Select the severity level. This is the minimum incident to forward, in terms of severity level. For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. Levels are predefined.
+
+ - Select the protocol(s) that should be detected.
+   Information will be forwarded if the detected traffic was running over the selected protocols.
+
+ - Select which engines the rule should apply to.
+   Alert information detected from the selected engines will be forwarded.
+
+1. Define rule actions by selecting a server.
+
+ Forwarding rule actions instruct the sensor to forward alert information to selected partner vendors or servers. You can create multiple actions for each forwarding rule.
- Select the required engines, or choose them all. Alerts from selected engines will be sent.
-
-1. Select an action to apply, and fill in any parameters needed for the selected action.
-
- Forwarding rule actions instruct the sensor to forward alert information to partner vendors or servers. You can create multiple actions for each forwarding rule.
+1. Select **Save**.
-1. Add another action if desired.
+## Forwarding rule actions
-1. Select **Submit**.
+You can send alert information to the servers described in this section.
### Email address action

Send mail that includes the alert information. You can enter one email address per rule.
-To define email for the forwarding rule:
-
-1. Enter a single email address. If you need to add more than one email, you will need to create another action for each email address.
+**To define email for the forwarding rule:**
- :::image type="content" source="media/how-to-forward-alert-information-to-partners/forward-email.png" alt-text="Scrrenshot of the forwarding alert screen to forward the alerts to an email address.":::
+1. Enter a single email address. If you need to add more than one email, you'll need to create another action for each email address.
1. Enter the time zone for the time stamp for the alert detection at the SIEM.
-1. Select **Submit**.
+1. Select **Save**.
### Syslog server actions
The following formats are supported:
- Object messages

Enter the following parameters:
- Syslog host name and port.
Enter the following parameters:
- TLS encryption certificate file and key file for CEF servers (optional).

| Syslog text message output fields | Description |
|--|--|
Enter the following parameters:
| Hostname | Sensor IP | | Message | Sensor name: The name of the Microsoft Defender for IoT appliance. <br />LEEF:1.0 <br />Microsoft Defender for IoT <br />Sensor <br />Sensor version <br />Microsoft Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine. (This depends on the time-zone configuration.) <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
-After you enter all the information, select **Submit**.
+
### Webhook server action

Send alert information to a webhook server. Working with webhook servers lets you set up integrations that subscribe to alert events with Defender for IoT. When an alert event is triggered, the management console sends an HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external SIEM system, SOAR systems, Incident management systems, etc.
+This action is available from the on-premises management console.
+
**To define a webhook action:**

1. Select the Webhook action.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/webhook.png" alt-text="Define a webhook forwarding rule.":::
-
1. Enter the server address in the **URL** field.
1. In the **Key** and **Value** fields, customize the HTTP header with a key and value definition. Keys can only contain letters, numbers, dashes, and underscores. Values can only contain one leading and/or one trailing space.
Webhook extended can be used to send extra data to the endpoint. The extended fe
**To define a webhook extended action**:
-1. In the management console, select **Forwarding** from the left-hand pane.
-
-1. Add a forwarding rule by selecting the :::image type="icon" source="media/how-to-forward-alert-information-to-partners/add-icon.png" border="false"::: button.
-
-1. Add a meaningful name for the forwarding alert.
-
-1. Select a severity level.
-
-1. Select **Add**.
-
-1. In the Select Type drop down window, select **Webhook Extended**.
-
- :::image type="content" source="media/how-to-forward-alert-information-to-partners/webhook-extended.png" alt-text="Select the webhook extended option from the select type drop down options menu.":::
-
1. Add the endpoint data URL in the URL field.
1. (Optional) Customize the HTTP header with a key and value definition. Add extra headers by selecting the :::image type="icon" source="media/how-to-forward-alert-information-to-partners/add-header.png" border="false"::: button.
Once the Webhook Extended forwarding rule has been configured, you can test the
:::image type="content" source="media/how-to-forward-alert-information-to-partners/run-button.png" alt-text="Select the run button to test your forwarding rule.":::
-You will know the forwarding rule is working if you see the Success notification appear.
+You will know the forwarding rule is working if you see the Success notification.
### NetWitness action Send alert information to a NetWitness server.
-To define NetWitness forwarding parameters:
+**To define NetWitness forwarding parameters:**
1. Enter NetWitness **Hostname** and **Port** information.
1. Enter the time zone for the time stamp for the alert detection at the SIEM.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/add-timezone.png" alt-text="Add a time zone to your forwarding rule.":::
-
-1. Select **Submit**.
+1. Select **Save**.
### Integrated vendor actions
For details about setting up forwarding rules for the integrations, refer to the
Test the connection between the sensor and the partner server that's defined in your forwarding rules:
-1. Select the rule from the **Forwarding rule** dialog box.
-
-1. Select the **More** box.
+1. In the Forwarding page, find the rule you need and select the three dots (...) at the end of the row.
1. Select **Send Test Message**.
Test the connection between the sensor and the partner server that's defined in
**To edit a forwarding rule**:
-
-- On the **Forwarding Rule** screen, select **Edit** under the **More** drop-down menu. Make the desired changes and select **Submit**.
+1. In the Forwarding page, find the rule you need and select the three dots (...) at the end of the row.
+1. Select **Edit** and update the rule.
+1. Select **Save**.
**To remove a forwarding rule**:
-
-- On the **Forwarding Rule** screen, select **Remove** under the **More** drop-down menu. In the **Warning** dialog box, select **OK**.
+1. In the Forwarding page, find the rule you need and select the three dots (...) at the end of the row.
+1. Select **Delete** and confirm.
+1. Select **Save**.
## Forwarding rules and alert exclusion rules
defender-for-iot References Horizon Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/references-horizon-api.md
- Title: Horizon API
-description: This guide describes commonly used Horizon methods.
Previously updated : 11/09/2021---
-# Horizon API
-
-This guide describes commonly used Horizon methods.
-
-## Getting more information
-
-Defender for IoT APIs are governed by [Microsoft API License and Terms of use](/legal/microsoft-apis/terms-of-use).
-
-For more information about working with Horizon and the Defender for IoT platform, see the following information:
--- For the Horizon Open Development Environment (ODE) SDK, contact your Defender for IoT representative.--- For support and troubleshooting information, contact <support@cyberx-labs.com>.--- To access the Defender for IoT user guide from the Defender for IoT console, select :::image type="icon" source="media/references-horizon-api/profile.png"::: and then select **Download User Guide**.-
-## `horizon::protocol::BaseParser`
-
-Abstract for all plugins. This consists of two methods:
--- For processing plugin filters defined above you. This way Horizon knows how to communicate with the parser.-- For processing the actual data.-
-## `std::shared_ptr<horizon::protocol::BaseParser> create_parser()`
-
-The first function that is called for your plugin creates an instance of the parser for Horizon to recognize it and register it.
-
-### Parameters
-
-None.
-
-### Return value
-
-shared_ptr to your parser instance.
-
-## `std::vector<uint64_t> horizon::protocol::BaseParser::processDissectAs(const std::map<std::string, std::vector<std::string>> &) const`
-
-This function will get called for each plugin registered above.
-
-In most cases, this will be empty. Throw an exception for Horizon to know something bad happened.
-
-### Parameters
--- A map containing the structure of dissect_as, as defined in the config.json of another plugin that wants to register over you.-
-### Return value
-
-An array of uint64_t, which is the registration processed into a kind of uint64_t. This means in the map, you'll have a list of ports, whose values will be the uin64_t.
-
-## `horizon::protocol::ParserResult horizon::protocol::BaseParser::processLayer(horizon::protocol::management::IProcessingUtils &,horizon::general::IDataBuffer &)`
-
-The main function. Specifically, the logic of the plugin, each time a new packet reaches your parser. This function will be called, everything related for packet processing should be done here.
-
-### Considerations
-
-Your plugin should be thread safe, as this function may be called from different threads. A good approach would be to define everything on the stack.
-
-### Parameters
--- The SDK control unit responsible for storing the data and creating SDK-related objects, such as ILayer, and fields.-- A helper for reading the data of the raw packet. It is already set with the byte order you defined in the config.json.-
-### Return value
-
-The result of the processing. This can be either *Success*, *Malformed*, or *Sanity*.
-
-## `horizon::protocol::SanityFailureResult: public horizon::protocol::ParserResult`
-
-Marks the processing as sanitation failure, meaning the packet isn't recognized by the current protocol, and Horizon should pass it to other parser, if any registered on same filters.
-
-## `horizon::protocol::SanityFailureResult::SanityFailureResult(uint64_t)`
-
-Constructor
-
-### Parameters
--- Defines the error code used by the Horizon for logging, as defined in the config.json.-
-## `horizon::protocol::MalformedResult: public horizon::protocol::ParserResult`
-
-Malformed result, indicated we already recognized the packet as our protocol, but some validation went wrong (reserved bits are on, or some field is missing).
-
-## `horizon::protocol::MalformedResult::MalformedResult(uint64_t)`
-
-Constructor
-
-### Parameters
--- Error code, as defined in config.json.-
-## `horizon::protocol::SuccessResult: public horizon::protocol::ParserResult`
-
-Notifies Horizon of successful processing. When successful, the packet was accepted, the data belongs to us, and all data was extracted.
-
-## `horizon::protocol::SuccessResult()`
-
-Constructor. Created a basic successful result. This means we don't know the direction or any other metadata regarding the packet.
-
-## `horizon::protocol::SuccessResult(horizon::protocol::ParserResultDirection)`
-
-Constructor.
-
-### Parameters
--- The direction of packet, if identified. Values can be *REQUEST*, or *RESPONSE*.-
-## `horizon::protocol::SuccessResult(horizon::protocol::ParserResultDirection, const std::vector<uint64_t> &)`
-
-Constructor.
-
-### Parameters
-- The direction of packet, if we've identified it, can be *REQUEST*, *RESPONSE*.-- Warnings. These events won't be failed, but Horizon will be notified.-
-## `horizon::protocol::SuccessResult(const std::vector<uint64_t> &)`
-
-Constructor.
-
-### Parameters
-- Warnings. These events won't be failed, but Horizon will be notified.-
-## `HorizonID HORIZON_FIELD(const std::string_view &)`
-
-Converts a string-based reference to a field name (for example, function_code) to HorizonID.
-
-### Parameters
--- String to convert.-
-### Return value
--- HorizonID created from the string.-
-## `horizon::protocol::ILayer &horizon::protocol::management::IProcessingUtils::createNewLayer()`
-
-Creates a new layer so Horizon will know the plugin wants to store some data. This is the base storage unit you should use.
-
-### Return value
-
-A reference to a created layer, so you could add data to it.
-
-## `horizon::protocol::management::IFieldManagement &horizon::protocol::management::IProcessingUtils::getFieldsManager()`
-
-Gets the field management object, which is responsible for creating fields on different objects, for example, on ILayer.
-
-### Return value
-
-A reference to the manager.
-
-## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, uint64_t)`
-
-Creates a new numeric field of 64 bits on the layer with the requested ID.
-
-### Parameters
--- The layer you created earlier.-- HorizonID created by the **HORIZON_FIELD** macro.-- The raw value you want to store.-
-## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, std::string)`
-
-Creates a new string field of on the layer with the requested ID. The memory will be moved, so be careful. You won't be able to use this value again.
-
-### Parameters
--- The layer you created earlier.-- HorizonID created by the **HORIZON_FIELD** macro.-- The raw value you want to store.-
-## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, std::vector<char> &)`
-
-Creates a new raw value (array of bytes) field of on the layer, with the requested ID. The memory will be move, so be caution, you won't be able to use this value again.
-
-### Parameters
--- The layer you created earlier.-- HorizonID created by the **HORIZON_FIELD** macro.-- The raw value you want to store.-
-## `horizon::protocol::IFieldValueArray &horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, horizon::protocol::FieldValueType)`
-
-Creates an array value (array) field on the layer of the specified type with the requested ID.
-
-### Parameters
--- The layer you created earlier.-- HorizonID created by the **HORIZON_FIELD** macro.-- The type of values that will be stored inside the array.-
-### Return value
-
-Reference to an array that you should append values to.
-
-## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, uint64_t)`
-
-Appends a new integer value to the array created earlier.
-
-### Parameters
--- The array created earlier.-- The raw value to be stored in the array.-
-## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, std::string)`
-
-Appends a new string value to the array created earlier. The memory will be move, so be caution, you won't be able to use this value again.
-
-### Parameters
--- The array created earlier.-- Raw value to be stored in the array.-
-## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, std::vector<char> &)`
-
-Appends a new raw value to the array created earlier. The memory will be move, so be caution, you won't be able to use this value again.
-
-### Parameters
--- The array created earlier.-- Raw value to be stored in the array.-
-## `bool horizon::general::IDataBuffer::validateRemainingSize(size_t)`
-
-Checks that the buffer contains at least X bytes.
-
-### Parameters
-
-The number of bytes that should exist.
-
-### Return value
-
-True if the buffer contains at least X bytes. Otherwise, it is `False`.
-
-## `uint8_t horizon::general::IDataBuffer::readUInt8()`
-
-Reads uint8 value (1 byte), from the buffer, according to the byte order.
-
-### Return value
-
-The value read from the buffer.
-
-## `uint16_t horizon::general::IDataBuffer::readUInt16()`
-
-Reads uint16 value (2 bytes), from the buffer, according to the byte order.
-
-### Return value
-
-The value read from the buffer.
-
-## `uint32_t horizon::general::IDataBuffer::readUInt32()`
-
-Reads uint32 value (4 bytes) from the buffer according to the byte order.
-
-### Return value
-
-The value read from the buffer.
-
-## `uint64_t horizon::general::IDataBuffer::readUInt64()`
-
-Reads uint64 value (8 bytes), from the buffer, according to the byte order.
-
-### Return value
-
-The value read from the buffer.
-
-## `void horizon::general::IDataBuffer::readIntoRawData(void *, size_t)`
-
-Reads into pre-allocated memory, of a specified size, will actually copy the data into your memory region.
-
-### Parameters
--- The memory region to copy the data into.-- Size of the memory region, this parameter also defined how many bytes will be copied.-
-## `std::string_view horizon::general::IDataBuffer::readString(size_t)`
-
-Reads into a string from the buffer.
-
-### Parameters
--- The number of bytes that should be read.-
-### Return value
-
-The reference to the memory region of the string.
-
-## `size_t horizon::general::IDataBuffer::getRemainingData()`
-
-Tells you how many bytes are left in the buffer.
-
-### Return value
-
-Remaining size of the buffer.
-
-## `void horizon::general::IDataBuffer::skip(size_t)`
-
-Skips X bytes in the buffer.
-
-### Parameters
--- Number of bytes to skip.
defender-for-iot References Horizon Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/references-horizon-sdk.md
- Title: Horizon SDK
-description: The Horizon SDK lets Microsoft Defender for IoT developers design dissector plugins that decode network traffic so it can be processed by automated Defender for IoT network analysis programs.
Previously updated : 11/09/2021---
-# Horizon proprietary protocol dissector
-
-Horizon is an Open Development Environment (ODE) used to secure IoT and ICS devices running proprietary protocols.
-
-This environment provides the following solutions for customers and technology partners:
--- Unlimited, full support for common, proprietary, custom protocols or protocols that deviate from any standard. --- A new level of flexibility and scope for DPI development.--- A tool that exponentially expands OT visibility and control, without the need to upgrade Defender for IoT platform versions.--- The security of allowing proprietary development without divulging sensitive information.-
-The Horizon SDK lets Microsoft Defender for IoT developers design dissector plugins that decode network traffic so it can be processed by automated Defender for IoT network analysis programs.
-
-Protocol dissectors are developed as external plugins and are integrated with an extensive range of Defender for IoT services. For example, services that provide monitoring, alerting and reporting capabilities.
-
-## Secure development environment
-
-The Horizon ODE enables development of custom or proprietary protocols that cannot be shared outside an organization. For example, because of legal regulations or corporate policies.
-
-Develop dissector plugins without:
--- revealing any proprietary information about how your protocols are defined.--- sharing any of your sensitive PCAPs.--- violating compliance regulations.-
-Contact <ms-horizon-support@microsoft.com> for information about developing protocol plugins.
-## Customization and localization
-
-The SDK supports various customization options, including:
-
- - Text for function codes.
-
- - Full localization text for alerts, events, and protocol parameters. For more information, see [Create mapping files (JSON)](#create-mapping-files-json).
-
- :::image type="content" source="media/references-horizon-sdk/localization.png" alt-text="View fully localized alerts.":::
-
-## Horizon architecture
-
-The architectural model includes three product layers.
--
-## Defender for IoT platform layer
-
-Enables immediate integration and real-time monitoring of custom dissector plugins in the Defender for IoT platform, without the need to upgrade the Defender for IoT platform version.
-
-## Defender for IoT services layer
-
-Each service is designed as a pipeline, decoupled from a specific protocol, enabling more efficient, independent development.
-
-Each service is designed as a pipeline, decoupled from a specific protocol. Services listens for traffic on the pipeline. They interact with the plugin data and the traffic captured by the sensors to index deployed protocols and analyze the traffic payload, and enable a more efficient and independent development.
-
-## Custom dissector layer
-
-Enables creation of plugins using the Defender for IoT proprietary SDK (including C++ implementation and JSON configuration) to:
--- Define how to identify the protocol--- Define how to map the fields you want to extract from the traffic, and extract them --- Define how to integrate with the Defender for IoT services-
- :::image type="content" source="media/references-horizon-sdk/layers.png" alt-text="The built-in layers.":::
-
-Defender for IoT provides basic dissectors for common protocols. You can build your dissectors on top of these protocols.
-
-## Before you begin
-
-## What this SDK contains
-
-This kit contains the header files needed for development. The development process requires basic steps and optional advanced steps, described in this SDK.
-
-Contact <ms-horizon-support@microsoft.com> for information on receiving header files and other resources.
-
-## About the environment and setup
-
-### Requirements
--- The preferred development environment is Linux. If you are developing in a Windows environment, consider using a VM with a Linux System.--- For the compilation process, use GCC 7.4.0 or higher. Use any standard class from stdlib that is supported under C++17.--- Defender for IoT version 3.0 and above.-
-### Process
-
-1. [Download](https://www.eclipse.org/) the Eclipse IDE for C/C++ Developers. You can use any other IDE you prefer. This document guides you through configuration using Eclipse IDE.
-
-1. After launching Eclipse IDE and configuring the workspace (where your projects will be stored), press **Ctrl + n**, and create it as a C++ project.
-
-1. On the next screen, set the name to the protocol you want to develop and select the project type as `Shared Library` and `AND Linux GCC`.
-
-1. Edit the project properties, under **C/C++ Build** > **Settings** > **Tool Settings** > **GCC C++ Compiler** > **Miscellaneous** > **Tick Position Independent Code**.
-
-1. Paste the example codes that you received with the SDK and compile it.
-
-1. Add the artifacts (library, config.json, and metadata) to a tar.gz file, and change the file extension to \<XXX>.hdp, where is \<XXX> is the name of the plugin.
-
-### Research
-
-Before you begin, verify that you:
--- Read the protocol specification, if available.--- Know which protocol fields you plan to extract.--- Have planned your mapping objectives.-
-## About plugin files
-
-Three files are defined during the development process.
-
-### JSON configuration file (required)
-
-This file should define the dissector ID and declarations, dependencies, integration requirements, validation parameters, and mapping definitions to translate values to names, numbers to text. For more information, see the following links:
--- [Prepare the configuration file (JSON)](#prepare-the-configuration-file-json)--- [Prepare implementation code validations](#prepare-implementation-code-validations)--- [Extract device metadata](#extract-device-metadata)--- [Connect to an indexing service (Baseline)](#connect-to-an-indexing-service-baseline)-
-### Implementation code: C++ (required)
-
-The Implementation Code (CPP) parses raw traffic, and maps it to values such as services, classes, and function codes. It extracts the layer fields and maps them to their index names from the JSON configuration files. The fields to extract from CPP are defined in config file. for more information, see [Prepare the implementation code (C++)](#prepare-the-implementation-code-c).
-
-### Mapping files (optional)
-
-You can customize plugin output text to meet the needs of your enterprise environment.
--
-You can define and update mapping files to update text without changing the code. Each file can map one or many fields:
-
- - Mapping of field values to names, for example, 1:Reset, 2:Start, 3:Stop.
-
- - Mapping text to support multiple languages.
-
-For more information, see [Create mapping files (JSON)](#create-mapping-files-json).
-
-## Create a dissector plugin (overview)
-
-1. Review the [About the environment and setup](#about-the-environment-and-setup) section.
-
-2. [Prepare the implementation code (C++)](#prepare-the-implementation-code-c). Copy the **template.cpp** file and implement an override method. For more information, see [horizon::protocol::BaseParser](#horizonprotocolbaseparser).
-
-3. [Prepare the configuration file (JSON)](#prepare-the-configuration-file-json). Copy the **template.json** file and edit it to meet your needs. Do not change the keys.
-
-4. [Prepare implementation code validations](#prepare-implementation-code-validations).
-
-## Prepare the implementation code (C++)
-
-The CPP file is a parser responsible for:
-
-- Validating the packet header and payload (for example, header length or payload structure).
-
-- Extracting data from the header and payload into defined fields.
-
-- Implementing the field extraction configured in the JSON file.
-### What to do
-
-Copy the template **.cpp** file and implement an override method. For more information, see [horizon::protocol::BaseParser](#horizonprotocolbaseparser).
-
-### Basic C++ template sample
-
-This section provides the basic protocol template, with standard functions for a sample Defender for IoT Horizon Protocol.
-
-```C++
-#include "plugin/plugin.h"
-
-namespace {
-  // Minimal parser skeleton: both overrides return empty results.
-  class CyberxHorizonSDK: public horizon::protocol::BaseParser {
-   public:
-    std::vector<uint64_t> processDissectAs(const std::map<std::string,
-                                           std::vector<std::string>> &filters) const override {
-      return std::vector<uint64_t>();
-    }
-    horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
-                                                 horizon::general::IDataBuffer &data) override {
-      return horizon::protocol::ParserResult();
-    }
-  };
-}
-
-extern "C" {
- std::shared_ptr<horizon::protocol::BaseParser> create_parser() {
- return std::make_shared<CyberxHorizonSDK>();
- }
-}
-
-```
-
-### Basic C++ template description
-
-This section provides the basic protocol template, with a description of standard functions for a sample Defender for IoT Horizon Protocol.
-
-### #include "plugin/plugin.h"
-
-The definition the plugin uses. The header file contains everything needed to complete development.
-
-### horizon::protocol::BaseParser
-
-The communication interface between the Horizon infrastructure and the plugin layer. See [Horizon architecture](#horizon-architecture) for an overview of the layers.
-
-The processLayer is the method used to process data.
-
-- The first parameter in the function code is the processing utility, used for retrieving data previously processed and for creating new fields and layers.
-
-- The second parameter in the function code is the current data passed from the previous parser.
-```C++
-horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
- horizon::general::IDataBuffer &data) override {
-
-```
-
-### create_parser
-
-Used to create the instance of your parser.
--
-## Protocol function code sample
-
-This section provides an example of how the code number (2 bytes) and the message length (4 bytes) are extracted.
-
-This is done according to the endianness supplied in the JSON configuration file; if the endianness of the protocol differs from that of the machine the sensor runs on, the values are converted automatically.
-
-A layer is also created to store data. Use the *fieldsManager* from the processing utils to create new fields. A field can have only one of the following types: *STRING*, *NUMBER*, *RAW DATA*, *ARRAY* (of a specific type), or *COMPLEX*. This layer may contain a number, raw data, or a string with an ID.
-
-In the sample below, the following two fields are extracted:
-
-- `code_number`
-
-- `header_length`
-A new layer is created, and the extracted field is copied into it.
-
-The sample below describes a specific function, which is the main logic implemented for plugin processing.
-
-```C++
-namespace {
- class CyberxHorizonProtocol: public horizon::protocol::BaseParser {
- public:
-
- horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
- horizon::general::IDataBuffer &data) override {
- uint16_t codeNumber = data.readUInt16();
- uint32_t headerLength = data.readUInt32();
-
- auto &layer = ctx.createNewLayer();
-
- ctx.getFieldsManager().create(layer,HORIZON_FIELD("code_number"),codeNumber;
- ctx.getFieldsManager().create(layer,HORIZON_FIELD("header_length"),headerLength);
- return horizon::protocol::SuccessResult();
-    }
-  };
-}
-```
-
-### Related JSON field
--
-## Prepare the configuration file (JSON)
-
-The Horizon SDK uses standard JavaScript Object Notation (JSON), a lightweight format for storing and transporting data, and does not require proprietary scripting languages.
-
-This section describes the minimal JSON configuration declarations and related structure, and provides a sample config file that defines a protocol. This protocol is automatically integrated with the device discovery service.
-
-## File structure
-
-For the file structure, see the [Sample JSON configuration file](#sample-json-configuration-file) later in this section.
--
-### What to do
-
-Copy the template `config.json` file and edit it to meet your needs. Do not change the keys. The keys are shown in the [Sample JSON configuration file](#sample-json-configuration-file).
-
-### File naming requirements
-
-The JSON Configuration file must be saved as `config.json`.
-
-### JSON Configuration file fields
-
-This section describes the JSON configuration fields you will be defining. Do not change the field *labels*.
-
-### Basic parameters
-
-This section describes basic parameters.
-
-| Parameter Label | Description | Type |
-|--|--|--|
-| **ID** | The name of the protocol. Delete the default and add the name of your protocol as it appears. | String |
-| **endianess** | Defines how the multi-byte data is encoded. Use the term "little" or "big" only. Taken from the protocol specification or traffic recording. | String |
-| **sanity_failure_codes** | These are the codes returned from the parser when there is a sanity conflict regarding the identity of the code. See magic number validation in the C++ section. | String |
-| **malformed_codes** | These are codes that have been properly identified, but an error is detected. For example, if the field length is too short or long, or a value is invalid. | String |
-| **dissect_as** | An array defining where the specific protocol traffic should arrive. | TCP/UDP, port etc. |
-| **fields** | The declaration of which fields will be extracted from the traffic. Each field has its own ID (name) and type (numeric, string, raw, array, complex). For example, the `function` field that is extracted in the implementation parser file. The fields written in the config file are the only ones that can be added to the layer. | |
-
-### Other advanced fields
-
-This section describes other fields.
-
-| Parameter Label | Description |
-|--|--|
-| **allow_lists** | You can index the protocol values and display them in Data Mining Reports. These reports reflect your network baseline. :::image type="content" source="media/references-horizon-sdk/data-mining.png" alt-text="A sample of the data mining view."::: <br /> For more information, see [Connect to an indexing service (Baseline)](#connect-to-an-indexing-service-baseline) for details. |
-| **firmware** | You can extract firmware information, define index values, and trigger firmware alerts for the plugin protocol. For more information, see [Extract firmware data](#extract-firmware-data) for details. |
-| **value_mapping** | You can customize plugin output text to meet the needs of your enterprise environment by defining and updating mapping files. For example, map to language files. Changes can easily be implemented to text without changing or impacting the code. For more information, see [Create mapping files (JSON)](#create-mapping-files-json) for details. |
-
-## Sample JSON configuration file
-
-```json
-{
- "id":"CyberX Horizon Protocol",
- "endianess": "big",
- "sanity_failure_codes": {
- "wrong magic": 0
- },
- "malformed_codes": {
- "not enough bytes": 0
- },
- "exports_dissect_as": {
- },
- "dissect_as": {
- "UDP": {
- "port": ["12345"]
- }
- },
- "fields": [
-{
- "id": "function",
- "type": "numeric"
- },
- {
- "id": "sub_function",
- "type": "numeric"
- },
- {
- "id": "name",
- "type": "string"
- },
- {
- "id": "model",
- "type": "string"
- },
- {
- "id": "version",
- "type": "numeric"
- }
- ]
-}
-```
-
-## Prepare implementation code validations
-
-This section describes implementation C++ code validation functions and provides sample code. Two layers of validation are available:
-
-- Sanity.
-
-- Malformed code.
-You don't need to create validation code in order to build a functioning plugin. If you don't prepare validation code, you can review sensor Data Mining reports as an indication of successful processing.
-
-Field values can be mapped to the text in mapping files and seamlessly updated without impacting processing.
-
-## Sanity code validations
-
-This validates that the packet transmitted matches the validation parameters of the protocol, which helps you identify the protocol within the traffic.
-
-For example, use the first 8 bytes as the *magic number*. If the sanity fails, a sanity failure response is returned.
-
-For example:
-
-```C++
-horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
-                                             horizon::general::IDataBuffer &data) override {
-  // Read the first 8 bytes and compare them against the expected magic number.
-  uint64_t magic = data.readUInt64();
-  if (magic != 0xBEEFFEEB) {
-    return horizon::protocol::SanityFailureResult(0);
-  }
-```
-
-If other relevant plugins have been deployed, the packet will be validated against them.
-
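-The `0` passed to `SanityFailureResult` presumably corresponds to the code declared under `sanity_failure_codes` in the configuration file. As a minimal sketch, that declaration, matching the [Sample JSON configuration file](#sample-json-configuration-file) shown earlier, looks like this:
-
-```json
-"sanity_failure_codes": {
-    "wrong magic": 0
-}
-```
-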
-## Malformed code validations
-
-Malformed validations are used after the protocol has been positively validated.
-
-If there is a failure to process the packets based on the protocol, a failure response is returned.
--
-## C++ sample with validations
-
-Processing is carried out according to the function code, as shown in the example below.
-
-### Function 20
-
-- It is processed as firmware.
-
-- The fields are read according to the function.
-
-- The fields are added to the layer.
-### Function 10
-
-- The function contains another sub function, which is a more specific operation.
-
-- The sub function is read and added to the layer.
-Once this is done, processing is finished. The return value indicates if the dissector layer was successfully processed. If it was, the layer becomes usable.
-
-```C++
-#include "plugin/plugin.h"
-
-#define FUNCTION_FIRMWARE_RESPONSE 20
-
-#define FUNCTION_SUBFUNCTION_REQUEST 10
-
-namespace {
-
-class CyberxHorizonSDK: public horizon::protocol::BaseParser {
-
- public:
-
- std::vector<uint64_t> processDissectAs(const std::map<std::string,
-
- std::vector<std::string>> &filters) const override {
-
- return std::vector<uint64_t>();
-
- }
-
- horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
-
- horizon::general::IDataBuffer &data) override {
-
- uint64_t magic = data.readUInt64();
-
- if (magic != 0xBEEFFEEB) {
-
- return horizon::protocol::SanityFailureResult(0);
-
- }
-
- uint16_t function = data.readUInt16();
-
- uint32_t length = data.readUInt32();
-
- if (length > data.getRemaningData()) {
-
- return horizon::protocol::MalformedResult(0);
-
- }
-
- auto &layer = ctx.createNewLayer();
-
- ctx.getFieldsManager().create(layer, HORIZON_FIELD("function"), function);
-
- switch (function) {
-
- case FUNCTION_FIRMWARE_RESPONSE: {
-
- uint8_t modelLength = data.readUInt8();
-
- std::string model = data.readString(modelLength);
-
- uint16_t firmwareVersion = data.readUInt16();
-
- uint8_t nameLength = data.readUInt8();
-
- std::string name = data.readString(nameLength);
-
- ctx.getFieldsManager().create(layer, HORIZON_FIELD("model"), model);
-
- ctx.getFieldsManager().create(layer, HORIZON_FIELD("version"), firmwareVersion);
-
- ctx.getFieldsManager().create(layer, HORIZON_FIELD("name"), name);
-
- }
-
- break;
-
- case FUNCTION_SUBFUNCTION_REQUEST: {
-
- uint8_t subFunction = data.readUInt8();
-
- ctx.getFieldsManager().create(layer, HORIZON_FIELD("sub_function"), subFunction);
-
- }
-
- break;
-
- }
-
- return horizon::protocol::SuccessResult();
-
- }
-
-};
-
-}
-
-extern "C" {
-
- std::shared_ptr<horizon::protocol::BaseParser> create_parser() {
-
- return std::make_shared<CyberxHorizonSDK>();
-
- }
-
-}
-```
-
-## Extract device metadata
-
-You can extract the following metadata on assets:
-
- - `Is_distributed_control_system` - Indicates if the protocol is part of a distributed control system. (For example, a SCADA protocol should be false.)
-
- - `Has_protocol_address` - Indicates if there is a protocol address; the specific address for the current protocol, for example, the MODBUS unit identifier.
-
- - `Is_scada_protocol` - Indicates if the protocol is specific to OT networks
-
- - `Is_router_potential` - Indicates if the protocol is used mainly by routers. For example, LLDP, CDP, or STP.
-
-In order to achieve this, the JSON configuration file needs to be updated using the metadata property.
-
-## JSON sample with metadata
-
-```json
-
-{
- "id":"CyberX Horizon Protocol",
- "endianess": "big",
- "metadata": {
- "is_distributed_control_system": false,
- "has_protocol_address": false,
- "is_scada_protocol": true,
- "is_router_potenial": false
- },
- "sanity_failure_codes": {
- "wrong magic": 0
- },
- "malformed_codes": {
- "not enough bytes": 0
- },
- "exports_dissect_as": {
- },
- "dissect_as": {
- "UDP": {
- "port": ["12345"]
- }
- },
- "fields": [
- {
- "id": "function",
- "type": "numeric"
-},
- {
- "id": "sub_function",
- "type": "numeric"
- },
- {
- "id": "name",
- "type": "string"
- },
- {
- "id": "model",
- "type": "string"
- },
- {
- "id": "version",
- "type": "numeric"
- }
- ]
-}
-
-```
-
-## Extract programming code
-
-When a programming event occurs, you can extract the code content. The extracted content lets you:
-
-- Compare code file content in different programming events.
-
-- Trigger an alert on unauthorized programming.
-
-- Trigger an event for receiving a programming code file.
- :::image type="content" source="media/references-horizon-sdk/change.png" alt-text="The programming change log.":::
-
- :::image type="content" source="media/references-horizon-sdk/view.png" alt-text="View the programming by clicking the button.":::
-
- :::image type="content" source="media/references-horizon-sdk/unauthorized.png" alt-text="The unauthorized PLC programming alert.":::
-
-In order to achieve this, the JSON configuration file needs to be updated using the `code_extraction` property.
-
-### JSON configuration fields
-
-This section describes the JSON configuration fields.
-
-- **method**
-
-  Indicates the way that programming event files are received.
-
-  ALL (each programming action will cause all the code files to be received, even if there are files without changes).
-
-- **file_type**
-
-  Indicates the code content type.
-
-  TEXT (each code file contains textual information).
-
-- **code_data_field**
-
-  Indicates the implementation field to use in order to provide the code content.
-
-  FIELD.
-
-- **code_name_field**
-
-  Indicates the implementation field to use in order to provide the name of the coding file.
-
-  FIELD.
-
-- **size_limit**
-
-  Indicates the size limit of each coding file's content in bytes. If a code file exceeds the set limit, it will be dropped. If this field is not specified, the default value is 15,000,000, that is, 15 MB.
-
-  Number.
-
-- **metadata**
-
-  Indicates additional information for a code file.
-
-  Array containing objects with two properties:
-
-  - name (String) - Indicates the metadata key. XSense currently supports only the username key.
-
-  - value (Field) - Indicates the implementation field to use in order to provide the metadata.
-
-## JSON sample with programming code
-
-```json
-{
- "id":"CyberXHorizonProtocol",
- "endianess": "big",
- "metadata": {
- "is_distributed_control_system": false,
- "has_protocol_address": false,
- "is_scada_protocol": true,
- "is_router_potenial": false
- },
- "sanity_failure_codes": {
- "wrong magic": 0
- },
- "malformed_codes": {
- "not enough bytes": 0
- },
- "exports_dissect_as": {
- },
- "dissect_as": {
- "UDP": {
- "port": ["12345"]
- }
- },
- "fields": [
- {
- "id": "function",
- "type": "numeric"
- },
- {
- "id": "sub_function",
- "type": "numeric"
- },
- {
- "id": "name",
- "type": "string"
- },
- {
- "id": "model",
- "type": "string"
- },
- {
- "id": "version",
- "type": "numeric"
- },
- {
- "id": "script",
- "type": "string"
- },
- {
- "id": "script_name",
- "type": "string"
- },
- "id": "username",
- "type": "string"
- }
- ],
-"whitelists": [
- {
- "name": "Functions",
- "alert_title": "New Activity Detected - CyberX Horizon
- Protocol Function",
- "alert_text": "There was an attempt by the source to
- invoke a new function on the destination",
- "fields": [
- {
- "name": "Source",
- "value": "IPv4.src"
- },
- {
- "name": "Destination",
- "value": "IPv4.dst"
- },
- {
- "name": "Function",
- "value": "CyberXHorizonProtocol.function"
- }
- ]
- }
- ],
-"firmware": {
- "alert_text": "Firmware was changed on a network asset.
- This may be a planned activity,
- for example an authorized maintenance procedure",
- "index_by": [
- {
- "name": "Device",
- "value": "IPv4.src",
- "owner": true
- }
- ],
- "firmware_fields": [,
- {
- "name": "Model",
- "value": "CyberXHorizonProtocol.model",
- "firmware_index": "model"
- },
- {
- "name": "Revision",
- "value": "CyberXHorizonProtocol.version",
- "firmware_index": "firmware_version"
- },
- {
- "name": "Name",
- "value": "CyberXHorizonProtocol.name"
- }
- ]
- },
-"code_extraction": {
- "method": "ALL",
- "file_type": "TEXT",
- "code_data_field": "script",
- "code_name_field": "script_name",
- "size_limit": 15000000,
- "metadata": [
- {
- "name": "username",
- "value": "username"
- }
- ]
- }
-}
-
-```
-## Custom horizon alerts
-
-Some protocol function codes might indicate an error. For example, if the protocol controls a container with a specific chemical that must always be stored at a specific temperature, a function code might indicate an error in the thermometer. If the function code is 25, for example, you can trigger an alert in the Web Console that indicates there is a problem with the container. In such a case, you can define deep packet alerts.
-
-Add the **alerts** parameter to the `config.json` of the plugin.
-
-```json
-"alerts": [{
-    "id": 1,
-    "message": "Problem with thermometer at station {IPv4.src}",
-    "title": "Thermometer problem",
-    "expression": "{CyberXHorizonProtocol.function} == 25"
-}]
-
-```
-
-## JSON configuration fields
-
-This section describes the JSON configuration fields.
-
-| Field name | Description | Possible values |
-|--|--|--|
-| **ID** | Represents a single alert ID. It must be unique in this context. | Numeric value 0 - 10000 |
-| **message** | Information displayed to the user. This field allows you to use different fields. | Use any field from your protocol, or any lower layer protocol. |
-| **title** | The alert title | |
-| **expression** | When you want this alert to pop up. | Use any numeric field found in lower layers, or the current layer.</br></br> Each field should be wrapped with `{}` so the SDK detects it as a field. The supported logical operators are:</br> == - Equal</br> <= - Less than or equal</br> >= - More than or equal</br> > - More than</br> < - Less than</br> ~= - Not equal |
-
-## More about expressions
-
-Every time the Horizon SDK evaluates the expression and it is *true*, an alert is triggered in the sensor.
-
-Multiple expressions can be included under the same alert. For example,
-
-`{CyberXHorizonProtocol.function} == 25 and {IPv4.src} == 269488144`.
-
-This expression validates the function code only when the packet's IPv4 source is 10.10.10.10; the number 269488144 is the raw numeric representation of that IP address.
-
-You can use `and` or `or` to connect expressions.
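-
-As a hedged sketch, a combined expression could appear in the `alerts` array of `config.json` as follows; the alert `id` and `title` here are illustrative assumptions:
-
-```json
-"alerts": [{
-    "id": 2,
-    "message": "Problem with thermometer at station {IPv4.src}",
-    "title": "Thermometer problem at a specific station",
-    "expression": "{CyberXHorizonProtocol.function} == 25 and {IPv4.src} == 269488144"
-}]
-```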
-
-## JSON sample custom horizon alerts
-
-```json
- "id":"CyberX Horizon Protocol",
- "endianess": "big",
- "metadata": {
- "is_distributed_control_system": false,
- "has_protocol_address": false,
- "is_scada_protocol": true,
- "is_router_potenial": false
- },
- "sanity_failure_codes": {
- "wrong magic": 0
- },
- "malformed_codes": {
- "not enough bytes": 0
- },
- "exports_dissect_as": {
- },
- "dissect_as": {
- "UDP": {
- "port": ["12345"]
- }
- },
- …………………………………….
- "alerts": [{
-   "id": 1,
-   "message": "Problem with thermometer at station {IPv4.src}",
-   "title": "Thermometer problem",
-   "expression": "{CyberXHorizonProtocol.function} == 25"
- }]
-
-```
-
-## Connect to an indexing service (Baseline)
-
-You can index the protocol values and display them in Data Mining reports.
--
-These values can later be mapped to specific texts, for example, mapping numbers to text or adding information, in any language.
--
-For more information, see [Create mapping files (JSON)](#create-mapping-files-json) for details.
-
-You can also use values from protocols previously parsed to extract additional information.
-
-For example, for a protocol that is based on TCP, you can use values from the IPv4 layer, such as the source and destination of the packet.
-
-In order to achieve this, the JSON configuration file needs to be updated using the `whitelists` property.
-
-## Allow list (data mining) fields
-
-The following Allowlist fields are available:
-
-- name - The name used for indexing.
-
-- alert_title - A short, unique title that explains the event.
-
-- alert_text - Additional information.
-Multiple Allowlists can be added, allowing complete flexibility in indexing.
-
-## JSON sample with indexing
-
-```json
-{
- "id":"CyberXHorizonProtocol",
- "endianess": "big",
- "metadata": {
- "is_distributed_control_system": false,
- "has_protocol_address": false,
- "is_scada_protocol": true,
- "is_router_potenial": false
- },
- "sanity_failure_codes": {
- "wrong magic": 0
- },
- "malformed_codes": {
- "not enough bytes": 0
- },
- "exports_dissect_as": {
- },
- "dissect_as": {
- "UDP": {
- "port": ["12345"]
- }
- },
- "fields": [
- {
- "id": "function",
- "type": "numeric"
- },
- {
- "id": "sub_function",
- "type": "numeric"
- },
- {
- "id": "name",
- "type": "string"
- },
- {
- "id": "model",
- "type": "string"
- },
- {
- "id": "version",
- "type": "numeric"
- }
- ],
-"whitelists": [
- {
- "name": "Functions",
- "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
- "alert_text": "There was an attempt by the source to invoke a new function on the destination",
- "fields": [
- {
- "name": "Source",
- "value": "IPv4.src"
- },
- {
- "name": "Destination",
- "value": "IPv4.dst"
- },
- {
- "name": "Function",
- "value": "CyberXHorizonProtocol.function"
- }
- ]
- }
- ]
-}
-```
-## Extract firmware data
-
-You can extract firmware information, define index values, and trigger firmware alerts for the plugin protocol. For example,
-
-- Extract the firmware model or version. This information can be further utilized to identify CVEs.
-
-- Trigger an alert when a new firmware version is detected.
-In order to achieve this, the JSON configuration file needs to be updated using the firmware property.
-
-## Firmware fields
-
-This section describes the JSON firmware configuration fields.
-
-- **name**
-
-  Indicates how the field is presented in the sensor console.
-
-- **value**
-
-  Indicates the implementation field to use in order to provide the data.
-
-- **firmware_index**: Select one:
-  - **model**: The device model. Enables detection of CVEs.
-  - **serial**: The device serial number. The serial number is not always available for all protocols. This value is unique per device.
-  - **rack**: Indicates the rack identifier, if the device is part of a rack.
-  - **slot**: The slot identifier, if the device is part of a rack.
-  - **module_address**: Use to present a hierarchy if the module can be presented behind another device. Applicable instead of a rack and slot combination, which is a simpler presentation.
-  - **firmware_version**: Indicates the device version. Enables detection of CVEs.
-
-- **alert_text**: Indicates text describing firmware deviations, for example, version changes.
-
-- **index_by**: Indicates the fields used to identify and index the device. In the example below, the device is identified by its IP address. In certain protocols, a more complex index can be used, for example, if another device is connected and you know how to extract its internal path. For example, the MODBUS unit ID can be used as part of the index, as a different combination of the IP address and the unit identifier.
-
-- **firmware_fields**: Indicates which fields are device metadata fields. In this example, the following are used: model, revision, and name. Each protocol can define its own firmware data.
-
-## JSON sample with firmware
-
-```json
-{
- "id":"CyberXHorizonProtocol",
- "endianess": "big",
- "metadata": {
- "is_distributed_control_system": false,
- "has_protocol_address": false,
- "is_scada_protocol": true,
- "is_router_potenial": false
- },
- "sanity_failure_codes": {
- "wrong magic": 0
- },
- "malformed_codes": {
- "not enough bytes": 0
- },
- "exports_dissect_as": {
- },
- "dissect_as": {
- "UDP": {
- "port": ["12345"]
- }
- },
- "fields": [
- {
- "id": "function",
- "type": "numeric"
- },
- {
- "id": "sub_function",
- "type": "numeric"
- },
- {
- "id": "name",
- "type": "string"
- },
- {
- "id": "model",
- "type": "string"
- },
- {
- "id": "version",
- "type": "numeric"
- }
- ],
-"whitelists": [
- {
- "name": "Functions",
- "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
- "alert_text": "There was an attempt by the source to invoke a new function on the destination",
- "fields": [
- {
- "name": "Source",
- "value": "IPv4.src"
- },
- {
- "name": "Destination",
- "value": "IPv4.dst"
- },
- {
- "name": "Function",
- "value": "CyberXHorizonProtocol.function"
- }
- ]
- },
-"firmware": {
- "alert_text": "Firmware was changed on a network asset.
- This may be a planned activity, for example an authorized maintenance procedure",
- "index_by": [
- {
- "name": "Device",
- "value": "IPv4.src",
- "owner": true
- }
- ],
- "firmware_fields": [,
- {
- "name": "Model",
- "value": "CyberXHorizonProtocol.model",
- "firmware_index": "model"
- },
- {
- "name": "Revision",
- "value": "CyberXHorizonProtocol.version",
- ΓÇ£Firmware_indexΓÇ¥: ΓÇ£firmware_versionΓÇ¥
- },
- {
- "name": "Name",
- "value": "CyberXHorizonProtocol.name"
- }
- ]
- }
-}
-
-```
-## Extract device attributes
-
-You can enhance the device information available in the Device Inventory, Data Mining, and other reports:
-
-- Name
-
-- Type
-
-- Vendor
-
-- Operating System
-
-In order to achieve this, the JSON configuration file needs to be updated using the `properties` property.
-
-You can do this after writing the basic plugin and extracting required fields.
-
-## Properties fields
-
-This section describes the JSON properties configuration fields.
-
-**config_file**
-
-Contains the file name that defines how to process each key in the `key` fields. The config file itself should be in JSON format and be included as part of the plugin protocol folder.
-
-Each key in the JSON defines the set of actions that should be done when you extract this key from a packet.
-
-Each key can have:
-
-- **Packet Data** - Indicates the properties that will be updated based on the data extracted from the packet (the implementation field used to provide that data).
-
-- **Static Data** - Indicates a predefined set of `property-value` actions that should be updated.
-The properties that can be configured in this file are:
-
-- **Name** - Indicates the device name.
-
-- **Type** - Indicates the device type.
-
-- **Vendor** - Indicates the device vendor.
-
-- **Description** - Indicates the device firmware model (lower priority than "model").
-
-- **operatingSystem** - Indicates the device operating system.
-### Fields
-
-| Field | Description |
-|--|--|
-| key | Indicates the key. |
-| value | Indicates the implementation field to use in order to provide the data. |
-| is_static_key | Indicates whether the `key` field is derived as a value from the packet or is a predefined value. |
-
-### Working with static keys only
-
-If you are working with static keys only, you don't have to configure the `config_file`. You can configure the JSON file only.
-
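-For example, a minimal sketch of a `properties` section that uses only static keys might look like the following. It assumes the `config_file` entry can simply be omitted in this case, and it reuses a field definition from the sample below:
-
-```json
-"properties": {
-    "fields": [
-        {
-            "key": "vendor",
-            "value": "CyberXHorizonProtocol.vendor",
-            "is_static_key": true
-        }
-    ]
-}
-```
-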
-## JSON sample with properties
-
-```json
-{
- "id":"CyberXHorizonProtocol",
- "endianess": "big",
- "metadata": {
- "is_distributed_control_system": false,
- "has_protocol_address": false,
- "is_scada_protocol": true,
- "is_router_potenial": false
- },
- "sanity_failure_codes": {
- "wrong magic": 0
- },
- "malformed_codes": {
- "not enough bytes": 0
- },
- "exports_dissect_as": {
- },
- "dissect_as": {
- "UDP": {
- "port": ["12345"]
- }
- },
- "fields": [
- {
- "id": "function",
- "type": "numeric"
- },
- {
- "id": "sub_function",
- "type": "numeric"
- },
- {
- "id": "name",
- "type": "string"
- },
- {
- "id": "model",
- "type": "string"
- },
- {
- "id": "version",
- "type": "numeric"
- },
- {
- "id": "vendor",
- "type": "string"
- }
- ],
-"whitelists": [
- {
- "name": "Functions",
- "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
- "alert_text": "There was an attempt by the source to invoke a new function on the destination",
- "fields": [
- {
- "name": "Source",
- "value": "IPv4.src"
- },
- {
- "name": "Destination",
- "value": "IPv4.dst"
- },
- {
- "name": "Function",
- "value": "CyberXHorizonProtocol.function"
- }
- ]
- }
- ],
-"firmware": {
- "alert_text": "Firmware was changed on a network asset.
- This may be a planned activity, for example an authorized maintenance procedure",
- "index_by": [
- {
- "name": "Device",
- "value": "IPv4.src",
- "owner": true
- }
- ],
- "firmware_fields": [,
- {
- "name": "Model",
- "value": "CyberXHorizonProtocol.model",
- "firmware_index": "model"
- },
- {
- "name": "Revision",
- "value": "CyberXHorizonProtocol.version",
- "firmware_index": "firmware_version"
- },
- {
- "name": "Name",
- "value": "CyberXHorizonProtocol.name"
- }
- ]
- },
-"properties": {
- "config_file": "config_file_example",
-"fields": [
- {
- "key": "vendor",
- "value": "CyberXHorizonProtocol.vendor",
- "is_static_key": true
- },
-{
- "key": "name",
- "value": "CyberXHorizonProtocol.vendor",
- "is_static_key": true
- }
-
-]
- }
-}
-
-```
-
-## CONFIG_FILE_EXAMPLE JSON
-
-```json
-{
-  "someKey": {
-    "staticData": {
-      "model": "FlashSystem",
-      "vendor": "IBM",
-      "type": "Storage"
-    },
-    "packetData": [
-      "name"
-    ]
-  }
-}
-
-```
-
-## Create mapping files (JSON)
-
-You can customize plugin output text to meet the needs of your enterprise environment by defining and updating mapping files. Changes can easily be implemented to text without changing or impacting the code. Each file can map one or many fields:
-
-- Mapping of field values to names, for example, 1:Reset, 2:Start, 3:Stop.
-
-- Mapping text to support multiple languages.
-Two types of mapping files can be defined.
-
- - [Simple mapping file](#simple-mapping-file).
-
- - [Dependency mapping file](#dependency-mapping-file).
-
- :::image type="content" source="media/references-horizon-sdk/localization.png" alt-text="ether net":::
-
- :::image type="content" source="media/references-horizon-sdk/unhandled.png" alt-text="A view of the unhandled alerts.":::
-
- :::image type="content" source="media/references-horizon-sdk/policy-violation.png" alt-text="A list of known policy violations.":::
-
-## File naming and storage requirements
-
-Mapping files should be saved under the metadata folder.
-
-The name of the file should match the JSON config file ID.
--
-## Simple mapping file
-
-The following sample presents a basic JSON file as a key value.
-
-When you create an Allowlist and it contains one or more of the mapped fields, the value will be converted from a number, string, or any other type into the formatted text presented in the mapping file.
-
-```json
-{
-  "10": "Read",
-  "20": "Firmware Data",
-  "3": "Write"
-}
-
-```
-
-## Dependency-mapping file
-
-To indicate that the file is a dependency file, add the keyword `dependency` to the mapping configuration.
-
-```json
-dependency": { "field": "CyberXHorizonProtocol.function" }}]
- }],
- "firmware": {
- "alert_text": "Firmware was changed on a network asset. This may be a planned activity, for example an authorized maintenance procedure",
- "index_by": [{ "name": "Device", "value": "IPv4.src", "owner": true }],
- "firmware_fields": [{ "name": "Model", "value":
-
-```
-
-The file contains a mapping between the dependent field and the dependency field, for example, between the sub function and the function. The meaning of the sub function changes according to the function supplied.
-
-In the Allowlist previously configured, there is no dependency configuration, as shown below.
-
-```json
-"whitelists": [
-{
-"name": "Functions",
-"alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
-"alert_text": "There was an attempt by the source to invoke a new function on the destination",
-"fields": [
-{
-"name": "Source",
-"value": "IPv4.src"
-},
-{
-"name": "Destination",
-"value": "IPv4.dst"
-},
-{
-"name": "Function",
-"value": "CyberXHorizonProtocol.function"
-}
-]
-}
-
-```
-
-The dependency can be based on a specific value or a field. In the example below, it is based on a field. If you base it on a value, define the exact value to be read from the mapping file.
-
-In the example below, the same value of the field is mapped differently, depending on the dependency.
-
-For example, for sub function five, the meaning changes based on the function:
-
- - If it is a read function, then five means Read Memory.
-
- - If it is a write function, the same value refers to a file.
-
- ```json
- {
-    "10": {
-      "5": "Memory",
-      "6": "File",
-      "7": "Register"
-    },
-    "3": {
-      "5": "File",
-      "7": "Memory",
-      "6": "Register"
-    }
- }
- }
-
- ```
-
-### Sample file
-
-```json
-{
- "id":"CyberXHorizonProtocol",
- "endianess": "big",
- "metadata": {"is_distributed_control_system": false, "has_protocol_address": false, "is_scada_protocol": true, "is_router_potenial": false},
- "sanity_failure_codes": { "wrong magic": 0 },
- "malformed_codes": { "not enough bytes": 0 },
- "exports_dissect_as": { },
- "dissect_as": { "UDP": { "port": ["12345"] }},
- "fields": [{ "id": "function", "type": "numeric" }, { "id": "sub_function", "type": "numeric" },
- {"id": "name", "type": "string" }, { "id": "model", "type": "string" }, { "id": "version", "type": "numeric" }],
- "whitelists": [{
- "name": "Functions",
- "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
- "alert_text": "There was an attempt by the source to invoke a new function on the destination",
- "fields": [{ "name": "Source", "value": "IPv4.src" }, { "name": "Destination", "value": "IPv4.dst" },
- { "name": "Function", "value": "CyberXHorizonProtocol.function" },
- { "name": "Sub function", "value": "CyberXHorizonProtocol.sub_function", "dependency": { "field": "CyberXHorizonProtocol.function" }}]
- }],
- "firmware": {
- "alert_text": "Firmware was changed on a network asset. This may be a planned activity, for example an authorized maintenance procedure",
- "index_by": [{ "name": "Device", "value": "IPv4.src", "owner": true }],
- "firmware_fields": [{ "name": "Model", "value": "CyberXHorizonProtocol.model", "firmware_index": "model" },
- { "name": "Revision", "value": "CyberXHorizonProtocol.version", "firmware_index": "firmware_version" },
- { "name": "Name", "value": "CyberXHorizonProtocol.name" }]
- },
- "value_mapping": {
- "CyberXHorizonProtocol.function": {
- "file": "function-mapping"
- },
- "CyberXHorizonProtocol.sub_function": {
- "dependency": true,
- "file": "sub_function-mapping"
- }
- }
-}
-
-```
-
-## JSON sample with mapping
-
-```json
-{
- "id":"CyberXHorizonProtocol",
- "endianess": "big",
- "metadata": {
- "is_distributed_control_system": false,
- "has_protocol_address": false,
- "is_scada_protocol": true,
- "is_router_potenial": false
- },
- "sanity_failure_codes": {
- "wrong magic": 0
- },
- "malformed_codes": {
- "not enough bytes": 0
- },
- "exports_dissect_as": {
- },
- "dissect_as": {
- "UDP": {
- "port": ["12345"]
- }
- },
- "fields": [
- {
- "id": "function",
- "type": "numeric"
- },
- {
- "id": "sub_function",
- "type": "numeric"
- },
- {
- "id": "name",
- "type": "string"
- },
- {
- "id": "model",
- "type": "string"
- },
- {
- "id": "version",
- "type": "numeric"
- }
- ],
-"whitelists": [
- {
- "name": "Functions",
- "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
- "alert_text": "There was an attempt by the source to invoke a new function on the destination",
- "fields": [
- {
- "name": "Source",
- "value": "IPv4.src"
- },
- {
- "name": "Destination",
- "value": "IPv4.dst"
- },
- {
- "name": "Function",
- "value": "CyberXHorizonProtocol.function"
- },
- {
- "name": "Sub function",
- "value": "CyberXHorizonProtocol.sub_function",
- "dependency": {
-  "field": "CyberXHorizonProtocol.function"
- }
- }
- ]
- }
- ],
-"firmware": {
- "alert_text": "Firmware was changed on a network asset. This may be a planned activity, for example an authorized maintenance procedure",
- "index_by": [
- {
- "name": "Device",
- "value": "IPv4.src",
- "owner": true
- }
- ],
- "firmware_fields": [,
- {
- "name": "Model",
- "value": "CyberXHorizonProtocol.model",
- "firmware_index": "model"
- },
- {
- "name": "Revision",
- "value": "CyberXHorizonProtocol.version",
- "firmware_index": "firmware_version"
- },
- {
- "name": "Name",
- "value": "CyberXHorizonProtocol.name"
- }
- ]
- },
-"value_mapping": {
- "CyberXHorizonProtocol.function": {
- "file": "function-mapping"
- },
- "CyberXHorizonProtocol.sub_function": {
- "dependency": true,
- "file": "sub_function-mapping"
- }
-}
-}
-
-```
-## Package, upload, and monitor the plugin
-
-This section describes how to:
-
- - Package your plugin.
-
- - Upload your plugin.
-
- - Monitor and debug the plugin to evaluate how well it is performing.
-
-To package the plugin:
-
-1. Add the artifacts (library, config.json, and metadata) to a `tar.gz` file.
-
-1. Change the file extension to \<XXX>.hdp, where \<XXX> is the name of the plugin.
-
-To sign in to the Horizon Console:
-
-1. Sign in to your sensor CLI as an administrator, CyberX, or Support user.
-
-2. In the file: `/var/cyberx/properties/horizon.properties` change the **ui.enabled** property to **true** (`horizon.properties:ui.enabled=true`).
-
-3. Sign in to the sensor console.
-
-4. Select the **Horizon** option from the main menu.
-
- :::image type="content" source="media/references-horizon-sdk/horizon.png" alt-text="Select the horizon option from the left side pane.":::
-
- The Horizon Console opens.
-
- :::image type="content" source="media/references-horizon-sdk/plugins.png" alt-text="A view of the Horizon console and all of its plugins.":::
-
-## Plugins pane
-
-The plugin pane lists:
-
- - Infrastructure plugins: Infrastructure plugins installed by default with Defender for IoT.
-
- - Application plugins: Application plugins installed by default with Defender for IoT and other plugins developed by Defender for IoT, or external developers.
-
-Use the toggle to enable and disable plugins that have been uploaded.
--
-### Uploading a plugin
-
-After creating and packaging your plugin, you can upload it to the Defender for IoT sensor. To achieve full coverage of your network, you should upload the plugin to each sensor in your organization.
-
-To upload:
-
-1. Sign in to your sensor.
--
-2. Select **Upload**.
-
- :::image type="content" source="media/references-horizon-sdk/upload.png" alt-text="Upload your plugins.":::
-
-3. Browse to your plugin and drag it to the plugin dialog box. Verify that the file extension is `.hdp`. The plugin loads.
-
-## Plugin status overview
-
-The Horizon console **Overview** window provides information about the plugin you uploaded and lets you disable and enable them.
--
-| Field | Description |
-|--|--|
-| Application | The name of the plugin you uploaded. |
-| :::image type="content" source="media/references-horizon-sdk/switch.png" alt-text="The on/off switch."::: | Toggle **On** or **Off** the plugin. Defender for IoT will not handle protocol traffic defined in the plugin when you toggle off the plugin. |
-| Time | The time the data was last analyzed. Updated every 5 seconds. |
-| PPS | The number of packets per second. |
-| Bandwidth | The average bandwidth detected within the last 5 seconds. |
-| Malforms | Malformed validations are used after the protocol has been positively validated. If there is a failure to process the packets based on the protocol, a failure response is returned. <br><br>This column indicates the number of malform errors in the past 5 seconds. For more information, see [Malformed code validations](#malformed-code-validations) for details. |
-| Warnings | Packets match the structure and specification but there is unexpected behavior based on the plugin warning configuration. |
-| Errors | The number of packets that failed basic protocol validations, which check that the packet matches the protocol definitions. The number displayed here indicates the number of errors detected in the past 5 seconds. For more information, see [Sanity code validations](#sanity-code-validations). |
-| :::image type="content" source="media/references-horizon-sdk/monitor.png" alt-text="The monitor icon."::: | Review details about malform and warnings detected for your plugin. |
-
-## Plugin details
-
-You can monitor real-time plugin behavior by analyzing the number of *Malform* and *Warning* events detected for your plugin. An option is available to freeze the screen and export it for further investigation.
--
-To monitor:
-
-Select the **Monitor** button for your plugin from the **Overview** window.
-
-## Next steps
-
-Set up your [Horizon API](references-horizon-api.md)
devtest-labs Enable Browser Connection Lab Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/enable-browser-connection-lab-virtual-machines.md
Last updated 11/02/2021
-# Enable browser connection to DevTest Labs VMs
+# Enable browser connection to DevTest Labs VMs with Azure Bastion
Azure DevTest Labs integrates with [Azure Bastion](../bastion/index.yml) to allow connecting to lab virtual machines (VMs) through a browser. As a lab owner, you can enable browser access to all your lab VMs through Azure Bastion.
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-securing-a-logic-app.md
When the [managed identity](../active-directory/managed-identities-azure-resourc
"uri": "@parameters('endpointUrlParam')", "authentication": { "type": "ManagedServiceIdentity",
- "identity": "SystemAssigned",
"audience": "https://management.azure.com/" }, },
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-pytorch.md
- Last updated 01/14/2020
ws = Workspace.from_config()
### Get the data
-The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We will download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html).
+The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html).
### Prepare training script
To see the packages included in the curated environment, you can write out the c
pytorch_env.save_to_directory(path=curated_env_name) ```
-Make sure the curated environment includes all the dependencies required by your training script. If not, you will have to modify the environment to include the missing dependencies. Note that if the environment is modified, you will have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, e.g.:
+Make sure the curated environment includes all the dependencies required by your training script. If not, you'll have to modify the environment to include the missing dependencies. If the environment is modified, you'll have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, for example:
```python pytorch_env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='./conda_dependencies.yml') ```
dependencies:
Create an Azure ML environment from this conda environment specification. The environment will be packaged into a Docker container at runtime.
-By default if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you will need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use, see the [Azure/AzureML-Containers](https://github.com/Azure/AzureML-Containers) GitHub repo for more information.
+By default if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you'll need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use. For more information, see [AzureML-Containers GitHub repo](https://github.com/Azure/AzureML-Containers).
```python pytorch_env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='./conda_dependencies.yml')
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-tensorflow.md
ws = Workspace.from_config()
### Create a file dataset
-A `FileDataset` object references one or multiple files in your workspace datastore or public urls. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they will be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. See the [how-to](./how-to-create-register-datasets.md) guide on the `Dataset` package for more information.
+A `FileDataset` object references one or multiple files in your workspace datastore or public urls. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they'll be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. For more information the `Dataset` package, see the [How to create register datasets article](./how-to-create-register-datasets.md).
```python from azureml.core.dataset import Dataset
To see the packages included in the curated environment, you can write out the c
tf_env.save_to_directory(path=curated_env_name) ```
-Make sure the curated environment includes all the dependencies required by your training script. If not, you will have to modify the environment to include the missing dependencies. Note that if the environment is modified, you will have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, e.g.:
+Make sure the curated environment includes all the dependencies required by your training script. If not, you'll have to modify the environment to include the missing dependencies. If the environment is modified, you'll have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, for example:
```python tf_env = Environment.from_conda_specification(name='tensorflow-2.2-gpu', file_path='./conda_dependencies.yml') ```
dependencies:
Create an Azure ML environment from this conda environment specification. The environment will be packaged into a Docker container at runtime.
-By default if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you will need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use, see the [Azure/AzureML-Containers](https://github.com/Azure/AzureML-Containers) GitHub repo for more information.
+By default if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you'll need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use. For more information, see the [Azure/AzureML-Containers](https://github.com/Azure/AzureML-Containers) GitHub repo.
```python tf_env = Environment.from_conda_specification(name='tensorflow-2.2-gpu', file_path='./conda_dependencies.yml')
As the run is executed, it goes through the following stages:
## Register or download a model
-Once you've trained the model, you can register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md). Optional: by specifying the parameters `model_framework`, `model_framework_version`, and `resource_configuration`, no-code model deployment becomes available. This allows you to directly deploy your model as a web service from the registered model, and the `ResourceConfiguration` object defines the compute resource for the web service.
+Once you've trained the model, you can register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md).
+
+Optional: by specifying the parameters `model_framework`, `model_framework_version`, and `resource_configuration`, no-code model deployment becomes available. This allows you to directly deploy your model as a web service from the registered model, and the `ResourceConfiguration` object defines the compute resource for the web service.
```Python from azureml.core import Model
The deployment how-to contains a section on registering models, but you can skip
### (Preview) No-code model deployment
-Instead of the traditional deployment route, you can also use the no-code deployment feature (preview) for TensorFlow. By registering your model as shown above with the `model_framework`, `model_framework_version`, and `resource_configuration` parameters, you can simply use the `deploy()` static function to deploy your model.
+Instead of the traditional deployment route, you can also use the no-code deployment feature (preview) for TensorFlow. By registering your model as shown above with the `model_framework`, `model_framework_version`, and `resource_configuration` parameters, you can use the `deploy()` static function to deploy your model.
```python service = Model.deploy(ws, "tensorflow-web-service", [model])
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow-cli-runs.md
To track a local run, you need to point your local machine to the Azure Machine
# [MLflow SDK](#tab/mlflow)
-The following code uses `mlflow` and the [`subprocess`](https://docs.python.org/3/library/subprocess.html) classes in Python to run the Azure Machine Learning CLI (v2) command to retrieve the unique MLFLow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
+The following code uses `mlflow` and your Azure Machine Learning workspace details to construct the unique MLFLow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
```Python import mlflow
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/what-is-new.md
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | | |
-| Offers | Added a [Revenue Dashboard](revenue-dashboard.md) to Partner Center, including a revenue report, [sample queries](analytics-sample-queries.md#revenue-report-queries), and [FAQs](/analytics-faq#revenue) page. | 2021-12-08 |
+| Offers | Added a [Revenue Dashboard](revenue-dashboard.md) to Partner Center, including a revenue report, [sample queries](analytics-sample-queries.md#revenue-report-queries), and [FAQs](/azure/marketplace/analytics-faq#revenue) page. | 2021-12-08 |
| Offers | Container and container apps offers can now use the Microsoft [Standard Contract](standard-contract.md). | 2021-11-02 | | Offers | Private plans for [SaaS offers](plan-saas-offer.md) are now available on AppSource. | 2021-10-06 | | Offers | In [Set up an Azure Marketplace subscription for hosted test drives](test-drive-azure-subscription-setup.md), for **Set up for Dynamics 365 apps on Dataverse and Power Apps**, we added a new method to remove users from your Azure tenant. | 2021-10-01 | | Offers | Setup and maintenance of Power BI Visuals is migrating from the Office Store to the commercial marketplace this month. [This FAQ](power-bi-visual-faq.yml) provides a summary of improvements to the offer submission process. To start, see [Plan a Power BI visual offer](marketplace-power-bi-visual.md).| 2021-09-21 | | Offers | While [private plans](private-plans.md) were previously only available on the Azure portal, they are now also available on Microsoft AppSource. | 2021-09-10 | | Analytics | Publishers of Azure application offers can view offer deployment health in the Quality of service (QoS) reports. QoS helps publishers understand the reasons for offer deployment failures and provides actionable insights for their remediation. For details, see [Quality of service (QoS) dashboard](quality-of-service-dashboard.md). | 2021-09-07 |
-| Policy | The SaaS customer [refund window](/marketplace/refund-policies) is now [72 hours](/marketplace-faq-publisher-guide) for all offers. | 2021-09-01 |
+| Policy | The SaaS customer [refund window](/marketplace/refund-policies) is now [72 hours](marketplace-faq-publisher-guide.yml) for all offers. | 2021-09-01 |
| Offers | Additional properties at the plan level are now available for Azure Virtual Machine offers. See the [virtual machine technical configuration properties](azure-vm-plan-overview.md#properties) article for more information. | 2021-07-26 | | Fees | Microsoft has reduced its standard store service fee to 3%. See [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md#examples-of-pricing-and-store-fees) and Common questions about payouts and taxes, "[How do I find the current Store Service Fee and the payout rate?](/partner-center/payout-faq)". | 2021-07-14 | |
Learn about important updates in the commercial marketplace program of Partner C
| | - | - | | Payouts | Updated the payment schedule on [Payout schedules and processes](/partner-center/payout-policy-details), including terminology and graphics. | 2022-01-19 | | Offers | Added a new article, [Troubleshooting Private Plans in the commercial marketplace](azure-private-plan-troubleshooting.md). | 2021-12-13 |
-| Offers | We have updated the names of [Dynamics 365](/marketplace-dynamics-365#licensing-options) offer types: <br><br> - Dynamics 365 for Customer Engagement &amp; PowerApps is now **Dynamics 365 apps on Dataverse and Power Apps** <br> - Dynamics 365 for operations is now **Dynamics 365 Operations Apps** <br> - Dynamics 365 business central is now **Dynamics 365 Business Central** | 2021-12-03 |
+| Offers | We have updated the names of [Dynamics 365](marketplace-dynamics-365.md) offer types: <br><br> - Dynamics 365 for Customer Engagement &amp; PowerApps is now **Dynamics 365 apps on Dataverse and Power Apps** <br> - Dynamics 365 for operations is now **Dynamics 365 Operations Apps** <br> - Dynamics 365 business central is now **Dynamics 365 Business Central** | 2021-12-03 |
| Policy | We've created an [FAQ topic](/legal/marketplace/mpa-faq) to answer publisher questions about the Microsoft Publisher Agreement. | 2021-09-27 | | Policy | We've updated the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). For change history, see [Microsoft Publisher Agreement Version 8.0 – October 2021 Update](/legal/marketplace/mpa-change-history-oct-2021). | 2021-09-14 | | Policy | Updated [certification](/legal/marketplace/certification-policies) policy for September; see [change history](/legal/marketplace/offer-policies-change-history). | 2021-09-10 |
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/single-server-whats-new.md
Last updated 06/17/2021
Azure Database for MySQL is a relational database service in the Microsoft cloud. The service is based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine and supports versions 5.6 (retired), 5.7, and 8.0. [Azure Database for MySQL - Single Server](./overview.md#azure-database-for-mysqlsingle-server) is a deployment mode that provides a fully managed database service with minimal requirements for database customization. The Single Server platform is designed to handle most database management functions such as patching, backups, high availability, and security, all with minimal user configuration and control. This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## February 2022
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+**Bug fixes**
+
+The MySQL client version 8.0.27 or later is now compatible with Azure Database for MySQL - Single Server. You can now connect from MySQL client version 8.0.27 or later, whether the client is mysql.exe or MySQL Workbench.
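As a minimal, hypothetical connectivity check (not part of this release note), the following Python sketch assumes the `mysql-connector-python` package and placeholder server, user, and database names; Single Server expects the `user@servername` username format.

```python
# Hypothetical example: every connection value below is a placeholder you must replace.
import mysql.connector

conn = mysql.connector.connect(
    host="<servername>.mysql.database.azure.com",
    user="<admin-user>@<servername>",   # Single Server username format
    password="<password>",
    database="<database>",
)
print(conn.is_connected())  # True if the client connected successfully
conn.close()
```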
+
+**Known Issues**
+
+Customers in Japan received two maintenance notification emails this month. The email notification sent for *05-Feb 2022* was sent by mistake, and no changes will be made to the service on that date. You can safely ignore it. We apologize for the inconvenience.
## December 2021
role-based-access-control Conditional Access Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/conditional-access-azure-management.md
- Title: Manage access to Azure management with Conditional Access in Azure AD
-description: Learn about using Conditional Access in Azure AD to manage access to Azure management.
------ Previously updated : 07/15/2019----
-# Manage access to Azure management with Conditional Access
-
-> [!CAUTION]
-> Make sure you understand how Conditional Access works before setting up a policy to manage access to Azure management. Make sure you don't create conditions that could block your own access to the portal.
-
-Conditional Access in Azure Active Directory (Azure AD) controls access to cloud apps based on specific conditions that you specify. To allow access, you create Conditional Access policies that allow or block access based on whether or not the requirements in the policy are met.
-
-Typically, you use Conditional Access to control access to your cloud apps. You can also set up policies to control access to Azure management based on certain conditions (such as sign-in risk, location, or device) and to enforce requirements like multi-factor authentication.
-
-To create a policy for Azure management, you select **Microsoft Azure Management** under **Cloud apps** when choosing the app to which to apply the policy.
-
-![Conditional Access for Azure management](./media/conditional-access-azure-management/conditional-access-azure-mgmt.png)
-
-The policy you create applies to all Azure management endpoints, including the following:
--- Azure portal-- Azure Resource Manager provider-- Classic Service Management APIs-- Azure PowerShell-- Visual Studio subscriptions administrator portal-- Azure DevOps-- Azure Data Factory portal-- Azure Event Hubs-- Azure Service Bus-- [Azure SQL Database](../azure-sql/database/conditional-access-configure.md)-- SQL Managed Instance-- Azure Synapse-
-Note that the policy applies to Azure PowerShell, which calls the Azure Resource Manager API. It does not apply to [Azure AD PowerShell](/powershell/azure/active-directory/install-adv2), which calls Microsoft Graph.
-
-For more information on how to set up a sample policy to enable Conditional Access for Microsoft Azure management, see the article [Conditional Access: Require MFA for Azure management](../active-directory/conditional-access/howto-conditional-access-policy-azure-management.md).
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| Feature | Azure | Azure Government | | -- | -- | - |
-| **Incidents** | |
+| **Incidents** | | |
|- [Automation rules](../../sentinel/automate-incident-handling-with-automation-rules.md) | Public Preview | Public Preview | | - [Cross-tenant/Cross-workspace incidents view](../../sentinel/multiple-workspace-view.md) |GA | GA | | - [Entity insights](../../sentinel/enable-entity-behavior-analytics.md) | GA | Public Preview |
The following tables display the current Microsoft Sentinel feature availability
| - [Notebook integration with Azure Synapse](../../sentinel/notebooks-with-synapse.md) | Public Preview | Not Available| | **Watchlists** | | | |- [Watchlists](../../sentinel/watchlists.md) | GA | GA |
-| **Hunting** | |
+| **Hunting** | | |
| - [Hunting](../../sentinel/hunting.md) | GA | GA | | **Content and content management** | | | | - [Content hub](../../sentinel/sentinel-solutions.md) and [solutions](../../sentinel/sentinel-solutions-catalog.md) | Public preview | Not Available|
The following tables display the current Microsoft Sentinel feature availability
| - [Anomalous Windows File Share Access Detection](../../sentinel/fusion.md) | Public Preview | Not Available | | - [Anomalous RDP Login Detection](../../sentinel/data-connectors-reference.md#configure-the-security-events--windows-security-events-connector-for-anomalous-rdp-login-detection)<br>Built-in ML detection | Public Preview | Not Available | | - [Anomalous SSH login detection](../../sentinel/connect-syslog.md#configure-the-syslog-connector-for-anomalous-ssh-login-detection)<br>Built-in ML detection | Public Preview | Not Available |
- | **Domain solution content** | | |
+| **Domain solution content** | | |
| - [Apache Log4j Vulnerability Detection](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available | | - [Cybersecurity Maturity Model Certification (CMMC)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available | | - [IoT/OT Threat Monitoring with Defender for IoT](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
The following tables display the current Microsoft Sentinel feature availability
| - [Azure Active Directory](../../sentinel/connect-azure-active-directory.md) | GA | GA | | - [Azure ADIP](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection) | GA | GA | | - [Azure DDoS Protection](../../sentinel/data-connectors-reference.md#azure-ddos-protection) | GA | GA |
-| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
-| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | Public Preview | Not Available |
-| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
| - [Azure Firewall ](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA | | - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection-preview) | Public Preview | Not Available | | - [Azure Key Vault ](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available | | - [Azure Kubernetes Services (AKS)](../../sentinel/data-connectors-reference.md#azure-kubernetes-service-aks) | Public Preview | Not Available | | - [Azure SQL Databases](../../sentinel/data-connectors-reference.md#azure-sql-databases) | GA | GA | | - [Azure WAF](../../sentinel/data-connectors-reference.md#azure-web-application-firewall-waf) | GA | GA |
+| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
+| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | Public Preview | Not Available |
+| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
| **Windows connectors** | | | | - [Windows Firewall](../../sentinel/data-connectors-reference.md#windows-firewall) | GA | GA | | - [Windows Security Events](/azure/sentinel/connect-windows-security-events) | GA | GA |
Office 365 GCC is paired with Azure Active Directory (Azure AD) in Azure. Office
| - Office 365 GCC | Public Preview | - | | - Office 365 GCC High | - | Not Available | | - Office 365 DoD | - | Not Available |
+| - **[Microsoft Power BI](../../sentinel/data-connectors-reference.md#microsoft-power-bi-preview)** | | |
+| - Office 365 GCC | Public Preview | - |
+| - Office 365 GCC High | - | Not Available |
+| - Office 365 DoD | - | Not Available |
+| - **[Microsoft Project](../../sentinel/data-connectors-reference.md#microsoft-project-preview)** | | |
+| - Office 365 GCC | Public Preview | - |
+| - Office 365 GCC High | - | Not Available |
+| - Office 365 DoD | - | Not Available |
| **[Office 365](../../sentinel/data-connectors-reference.md#microsoft-office-365)** | | | | - Office 365 GCC | GA | - | | - Office 365 GCC High | - | GA |
sentinel User Management Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/user-management-normalization-schema.md
+
+ Title: Microsoft Sentinel user management normalization schema reference (Public Preview) | Microsoft Docs
+description: This article describes the Microsoft Sentinel user management normalization schema.
++ Last updated : 02/06/2022+++
+# Microsoft Sentinel user management normalization schema reference (preview)
+
+The Microsoft Sentinel user management normalization schema is used to describe user management activities, such as creating a user or a group, changing user attribute, or adding a user to a group. Such events are reported, for example, by operating systems, directory services, identity management systems, and any other system reporting on its local user management activity.
+
+For more information about normalization in Microsoft Sentinel, see [Normalization and the Advanced SIEM Information Model (ASIM)](normalization.md).
+
+> [!IMPORTANT]
+> The user management normalization schema is currently in *preview*. This feature is provided without a service level agreement. We don't recommend it for production workloads.
+>
+> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+++
+## Schema overview
+
+The ASIM user management schema describes user management activities. The activities typically include the following entities:
+- **Actor** - the user performing the management activity.
+- **Acting Process** - the process used by the actor to perform the management activity.
+- **Src** - when the activity is performed over the network, the source device from which the activity was initiated.
+- **Target User** - the user whose account is managed.
+- **Group** - the group that the target user is added to or removed from, or that is being modified.
+
+Some activities, such as **UserCreated**, **GroupCreated**, **UserModified**, and **GroupModified**, set or update user properties. The property that is set or updated is documented in the following fields, and a hypothetical record illustrating them follows this list:
+- [EventSubType](#eventsubtype) - the name of the value that was set or updated. [UpdatedPropertyName](#updatedpropertyname) is an alias to **EventSubType** when [EventSubType](#eventsubtype) refers to one of the relevant event types.
+- [PreviousPropertyValue](#previouspropertyvalue) - the previous value of the property.
+- [NewPropertyValue](#newpropertyvalue) - the updated value of the property.
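The following hypothetical record sketches how these fields fit together for a `UserModified` activity that updates a single property. It's shown as a Python dictionary purely for illustration; the field names follow the schema tables below, while the property name and all values are invented.

```python
# Hypothetical normalized event (illustration only; the property name and values are invented):
# an actor changes the target user's "Department" property from "Sales" to "Finance".
sample_event = {
    "EventType": "UserModified",
    "EventSubType": "Department",        # surfaced through the UpdatedPropertyName alias
    "EventResult": "Success",
    "EventSeverity": "Informational",
    "EventSchema": "UserManagement",
    "EventSchemaVersion": "0.1.1",
    "ActorUsername": "CONTOSO\\jsmith",
    "ActorUsernameType": "Windows",
    "TargetUsername": "CONTOSO\\adoe",
    "TargetUsernameType": "Windows",
    "PreviousPropertyValue": "Sales",
    "NewPropertyValue": "Finance",
}
```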
+
+## Schema details
+
+### Common fields
+
+> [!IMPORTANT]
+> Fields common to all schemas are described in the [ASIM schema overview](normalization-about-schemas.md#common). The following list mentions only fields that have specific guidelines for user management events.
+>
+
+| Field | Class | Type | Description |
+||-||--|
+| **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For User Management activity, the supported values are:<br> - `UserCreated`<br> - `UserDeleted`<br> - `UserModified`<br> - `UserLocked`<br> - `UserUnlocked`<br> - `UserDisabled`<br> - `UserEnabled`<br> - `PasswordChanged`<br> - `PasswordReset`<br> - `GroupCreated`<br> - `GroupDeleted`<br> - `GroupModified`<br> - `UserAddedToGroup`<br> - `UserRemovedFromGroup`<br> - `GroupEnumerated`<br> - `UserRead`<br> - `GroupRead`<br> |
+| <a name="eventsubtype"></a>**EventSubType** | Optional | Enumerated | The following sub-types are supported:<br> - `UserRead`: Password, Hash<br> - `UserCreated`, `GroupCreated`, `UserModified`, `GroupModified`. For more information, see [UpdatedPropertyName](#updatedpropertyname) |
+| **EventResult** | Mandatory | Enumerated | While failure is possible, most systems report only successful user management events. The expected value for successful events is `Success`. |
+| **EventResultDetails** | Optional | Enumerated | The valid values are `NotAuthorized` and `Other`. |
+| **EventSeverity** | Mandatory | Enumerated | While any valid severity value is allowed, the severity of user management events is typically `Informational`. |
+| **EventSchema** | Mandatory | String | The name of the schema documented here is `UserManagement`. |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1.1`. |
+| **Dvc** fields| | | For user management events, device fields refer to the system reporting the event. This is usually the system on which the user is managed. |
+| | | | |
+
+### Updated property fields
+
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="updatedpropertyname"></a>**UpdatedPropertyName** | Alias | | Alias to [EventSubType](#eventsubtype) when the Event Type is `UserCreated`, `GroupCreated`, `UserModified`, or `GroupModified`.<br><br>Supported values are:<br>- `MultipleProperties`: Used when the activity updates multiple properties<br>- `Previous<PropertyName>`, where `<PropertyName>` is one of the supported values for `UpdatedPropertyName`. <br>- `New<PropertyName>`, where `<PropertyName>` is one of the supported values for `UpdatedPropertyName`. |
+| <a name="previouspropertyvalue"></a>**PreviousPropertyValue** | Optional | String | The previous value that was stored in the specified property. |
+| <a name="newpropertyvalue"></a>**NewPropertyValue** | Optional | String | The new value stored in the specified property. |
+|||||
+
+### Target user fields
+
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="targetuserid"></a>**TargetUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the target user. <br><br>Supported formats and types include:<br>- **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br>- **UID** (Linux): `4578`<br>- **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br>- **OktaId**: `00urjk4znu3BcncfY0h7`<br>- **AWSId**: `72643944673`<br><br>Store the ID type in the [TargetUserIdType](#targetuseridtype) field. If other IDs are available, we recommend that you normalize the field names to **TargetUserSid**, **TargetUserUid**, **TargetUserAADID**, **TargetUserOktaId**, and **TargetUserAwsId**, respectively. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `S-1-12` |
+| <a name="targetuseridtype"></a>**TargetUserIdType** | Optional | Enumerated | The type of the ID stored in the [TargetUserId](#targetuserid) field. <br><br>Supported values are `SID`, `UID`, `AADID`, `OktaId`, and `AWSId`. |
+| <a name="targetusername"></a>**TargetUsername** | Optional | String | The target username, including domain information when available. <br><br>Use one of the following formats and in the following order of priority:<br>- **Upn/Email**: `johndow@contoso.com`<br>- **Windows**: `Contoso\johndow`<br>- **DN**: `CN=Jeff Smith,OU=Sales,DC=Fabrikam,DC=COM`<br>- **Simple**: `johndow`. Use the Simple form only if domain information isn't available.<br><br>Store the Username type in the [TargetUsernameType](#targetusernametype) field. If other IDs are available, we recommend that you normalize the field names to **TargetUserUpn**, **TargetUserWindows**, and **TargetUserDn**. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `AlbertE` |
+| <a name="targetusernametype"></a>**TargetUsernameType** | Optional | Enumerated | Specifies the type of the username stored in the [TargetUsername](#targetusername) field. Supported values include `UPN`, `Windows`, `DN`, and `Simple`. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `Windows` |
+| **TargetUserType** | Optional | Enumerated | The type of target user. Supported values include:<br>- `Regular`<br>- `Machine`<br>- `Admin`<br>- `System`<br>- `Application`<br>- `Service Principal`<br>- `Other`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [TargetOriginalUserType](#targetoriginalusertype) field. |
+| <a name="targetoriginalusertype"></a>**TargetOriginalUserType** | Optional | String | The original destination user type, if provided by the source. |
+|||||
+
+### Actor fields
+
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="actoruserid"></a>**ActorUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the Actor. <br><br>Supported formats and types include:<br>- **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br>- **UID** (Linux): `4578`<br>- **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br>- **OktaId**: `00urjk4znu3BcncfY0h7`<br>- **AWSId**: `72643944673`<br><br>Store the ID type in the [ActorUserIdType](#actoruseridtype) field. If other IDs are available, we recommend that you normalize the field names to **ActorUserSid**, **ActorUserUid**, **ActorUserAadId**, **ActorUserOktaId**, and **ActorAwsId**, respectively. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: S-1-12 |
+| <a name="actoruseridtype"></a>**ActroUserIdType** | Optional | Enumerated | The type of the ID stored in the [ActorUserId](#actoruserid) field. Supported values include `SID`, `UID`, `AADID`, `OktaId`, and `AWSId`. |
+| <a name="actorusername"></a>**ActorUsername** | Mandatory | String | The Actor username, including domain information when available. <br><br>Use one of the following formats and in the following order of priority:<br>- **Upn/Email**: `johndow@contoso.com`<br>- **Windows**: `Contoso\johndow`<br>- **DN**: `CN=Jeff Smith,OU=Sales,DC=Fabrikam,DC=COM`<br>- **Simple**: `johndow`. Use the Simple form only if domain information isn't available.<br><br>Store the Username type in the [ActorUsernameType](#actorusernametype) field. If other IDs are available, we recommend that you normalize the field names to **ActorUserUpn**, **ActorUserWindows**, and **ActorUserDn**.<br><br>For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `AlbertE` |
+| <a name="user"></a>**User** | Alias | | Alias to [ActorUsername](#actorusername). |
+| <a name="actorusernametype"></a>**ActorUsernameType** | Mandatory | Enumerated | Specifies the type of the username stored in the [ActorUsername](#actorusername) field. Supported values are `UPN`, `Windows`, `DN`, and `Simple`. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `Windows` |
+| **ActorUserType** | Optional | Enumerated | The type of the Actor. Allowed values are:<br>- `Regular`<br>- `Machine`<br>- `Admin`<br>- `System`<br>- `Application`<br>- `Service Principal`<br>- `Other`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [ActorOriginalUserType](#actororiginalusertype) field. |
+| <a name="actororiginalusertype"></a>**ActorOriginalUserType** | | | The original actor user type, if provided by the source. |
+| **ActorSessionId** | Optional | String | The unique ID of the login session of the Actor. <br><br>Example: `999`<br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows this value must be numeric. <br><br>If you are using a Windows machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+|||||
+
+### Group fields
+
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="groupid"></a>**GroupId** | Optional | String | A machine-readable, alphanumeric, unique representation of the group, for activities involving a group. <br><br>Supported formats and types include:<br>- **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br>- **UID** (Linux): `4578`<br><br>Store the ID type in the [GroupIdType](#groupidtype) field. If other IDs are available, we recommend that you normalize the field names to **GroupSid** or **GroupUid**, respectively. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `S-1-12` |
+| <a name="groupidtype"></a>**GroupIdType** | Optional | Enumerated | The type of the ID stored in the [GroupId](#groupid) field. <br><br>Supported values are `SID`, and `UID`. |
+| <a name="groupname"></a>**GroupName** | Optional | String | The group name, including domain information when available, for activities involving a group. <br><br>Use one of the following formats and in the following order of priority:<br>- **Upn/Email**: `grp@contoso.com`<br>- **Windows**: `Contoso\grp`<br>- **DN**: `CN=grp,OU=Sales,DC=Fabrikam,DC=COM`<br>- **Simple**: `grp`. Use the Simple form only if domain information isn't available.<br><br>Store the group name type in the [GroupNameType](#groupnametype) field. If other IDs are available, we recommend that you normalize the field names to **GroupUpn**, **GorupNameWindows**, and **GroupDn**.<br><br>Example: `Contoso\Finance` |
+| <a name="groupnametype"></a>**GroupNameType** | Optional | Enumerated | Specifies the type of the group name stored in the [GroupName](#groupname) field. Supported values include `UPN`, `Windows`, `DN`, and `Simple`.<br><br>Example: `Windows` |
+| **GroupType** | Optional | Enumerated | The type of the group, for activities involving a group. Supported values include:<br>- `Local Distribution`<br>- `Local Security Enabled`<br>- `Global Distribution`<br>- `Global Security Enabled`<br>- `Universal Distribution`<br>- `Universal Security Enabled`<br>- `Other`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [GroupOriginalType](#grouporiginaltype) field. |
+| <a name="grouporiginaltype"></a>**GroupOriginalType** | Optional | String | The original group type, if provided by the source. |
+|||||
+
+### Source fields
+
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="src"></a>**Src** | Recommended | String | A unique identifier of the source device. <br><br>This field might alias the [SrcDvcId](#srcdvcid), [SrcHostname](#srchostname), or [SrcIpAddr](#srcipaddr) fields. <br><br>Example: `192.168.12.1` |
+| <a name="srcipaddr"></a>**SrcIpAddr** | Recommended | IP address | The IP address of the source device. This value is mandatory if **SrcHostname** is specified.<br><br>Example: `77.138.103.108` |
+| <a name="ipaddr"></a>**IpAddr** | Alias | | Alias to [SrcIpAddr](#srcipaddr). |
+| <a name="srchostname"></a> **SrcHostname** | Recommended | String | The source device hostname, excluding domain information.<br><br>Example: `DESKTOP-1282V4D` |
+|<a name="srcdomain"></a> **SrcDomain** | Recommended | String | The domain of the source device.<br><br>Example: `Contoso` |
+| <a name="srcdomaintype"></a>**SrcDomainType** | Recommended | Enumerated | The type of [SrcDomain](#srcdomain), if known. Possible values include:<br>- `Windows` (such as `contoso`)<br>- `FQDN` (such as `microsoft.com`)<br><br>Required if [SrcDomain](#srcdomain) is used. |
+| **SrcFQDN** | Optional | String | The source device hostname, including domain information when available. <br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [SrcDomainType](#srcdomaintype) field reflects the format used. <br><br>Example: `Contoso\DESKTOP-1282V4D` |
+| <a name="srcdvcid"></a>**SrcDvcId** | Optional | String | The ID of the source device as reported in the record.<br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` |
+| **SrcDvcIdType** | Optional | Enumerated | The type of [SrcDvcId](#srcdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the preceding list, and store the others in **SrcDvcAzureResourceId** and **SrcDvcMDEid**, respectively.<br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. |
+| **SrcDeviceType** | Optional | Enumerated | The type of the source device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` |
+| **SrcGeoCountry** | Optional | Country | The country associated with the source IP address.<br><br>Example: `USA` |
+| **SrcGeoRegion** | Optional | Region | The region within a country associated with the source IP address.<br><br>Example: `Vermont` |
+| **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` |
+| **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` |
+| **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` |
+| | | | |
+
+### Acting Application
+
+| Field | Class | Type | Description |
+|-|-||-|
+| **ActingAppId** | Optional | String | The ID of the application used by the actor to perform the activity, including a process, browser, or service. <br><br>For example: `0x12ae8` |
+| **ActingAppName** | Optional | String | The name of the application used by the actor to perform the activity, including a process, browser, or service. <br><br>For example: `C:\Windows\System32\svchost.exe` |
+| **ActingAppType** | Optional | Enumerated | The type of acting application. Supported values include: <br>- `Process` <br>- `Browser` <br>- `Resource` <br>- `Other` |
+| **HttpUserAgent** | Optional | String | When the activity is performed over HTTP or HTTPS, this field's value is the user_agent HTTP header provided by the acting application.<br><br>For example: `Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1` |
+|||||
+
+### Additional fields and aliases
+
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="hostname"></a>**Hostname** | Alias | | Alias to [DvcHostname](normalization-about-schemas.md#dvchostname). |
+|||||
++
+## Next steps
+
+For more information, see:
+
+- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
+
storage Blob Containers Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-containers-cli.md
Previously updated : 01/19/2022 Last updated : 02/05/2022
Azure blob storage allows you to store large amounts of unstructured object data
The Azure CLI is Azure's cross-platform command-line experience for managing Azure resources. You can use it in your browser with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows and run it locally from the command line.
-In this how-to article, you learn to use the Azure CLI to work with container objects.
+In this how-to article, you learn to use the Azure CLI with Bash to work with container objects.
## Prerequisites
To use this example, supply values for the variables and ensure that you've logg
```azurecli #!/bin/bash
-export AZURE_STORAGE_ACCOUNT="<storage-account>"
+storageAccount="<storage-account>"
containerName="demo-container-1" containerPrefix="demo-container-" # Approach 1: Create a container az storage container create \ --name $containerName \
+ --account-name $storageAccount \
--auth-mode login # Approach 2: Create containers with a loop
-for value in {2..4}
+for value in {2..5}
do az storage container create \ --name $containerPrefix$value \
+ --account-name $storageAccount \
--auth-mode login done # Approach 3: Create containers by splitting multiple values
-containerList="${containerPrefix}5 ${containerPrefix}6 ${containerPrefix}7"
+containerList="${containerPrefix}6 ${containerPrefix}7 ${containerPrefix}8"
for container in $containerList do az storage container create \ --name $container \
+ --account-name $storageAccount \
--auth-mode login done ```
Read more about the [az storage container list](/cli/azure/storage/container#az_
```azurecli-interactive #!/bin/bash
-export AZURE_STORAGE_ACCOUNT="<storage-account>"
-numResults="3"
+storageAccount="<storage-account>"
containerPrefix="demo-container-" containerName="demo-container-1"
+numResults="3"
# Approach 1: List maximum containers az storage container list \
+ --account-name $storageAccount \
--auth-mode login # Approach 2: List a defined number of named containers az storage container list \ --prefix $containerPrefix \ --num-results $numResults \
+ --account-name $storageAccount \
--auth-mode login # Approach 3: List an individual container az storage container list \ --prefix $containerPrefix \ --query "[?name=='$containerName']" \
+ --account-name $storageAccount \
--auth-mode login ```
In the following example, the first approach displays the properties of a single
```azurecli-interactive #!/bin/bash
-export AZURE_STORAGE_ACCOUNT="<storage-account>"
+storageAccount="<storage-account>"
containerPrefix="demo-container-" containerName="demo-container-1" # Show a named container's properties az storage container show \ --name $containerName \
+ --account-name $storageAccount \
--auth-mode login # List several containers and show their properties containerList=$(az storage container list \ --query "[].name" \ --prefix $containerPrefix \
+ --account-name $storageAccount \
--auth-mode login \ --output tsv)
-for item in $containerList
+
+for row in $containerList
do
- az storage container show \
- --name $item \
- --auth-mode login
+ tmpRow=$(echo $row | sed -e 's/\r//g')
+ az storage container show --name $tmpRow --account-name $storageAccount --auth-mode login
done ```
done
Users that have many thousands of objects within their storage account can quickly locate specific containers based on their metadata. To read the metadata, you'll use the `az storage container metadata show` command. To update metadata, you'll need to call the `az storage container metadata update` command. The method only accepts space-separated key-value pairs. For more information, see the [az storage container metadata](/cli/azure/storage/container/metadata) documentation.
-The example below first updates a container's metadata and afterward retrieves the container's metadata.
+The first example below updates and then retrieves a named container's metadata. The second example iterates over the list of containers matching the `--prefix` value. Containers whose names contain an even number have their metadata set with the values contained in the *metadata* variable.
```azurecli-interactive #!/bin/bash
-export AZURE_STORAGE_ACCOUNT="<storage-account>"
-containerName = "demo-container-1"
+storageAccount="<storage-account>"
+containerName="demo-container-1"
+containerPrefix="demo-container-"
# Create metadata string metadata="key=value pie=delicious"
-# Update metadata
+# Update named container metadata
az storage container metadata update \ --name $containerName \ --metadata $metadata \
+ --account-name $storageAccount \
--auth-mode login # Display metadata az storage container metadata show \ --name $containerName \
+ --account-name $storageAccount \
--auth-mode login+
+# Get list of containers
+containerList=$(az storage container list \
+ --query "[].name" \
+ --prefix $containerPrefix \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --output tsv)
+
+# Update and display metadata
+for row in $containerList
+do
+ #Get the container's number
+ tmpName=$(echo $row | sed -e 's/\r//g')
+ if [ `expr ${tmpName: ${#containerPrefix}} % 2` == 0 ]
+ then
+ az storage container metadata update \
+ --name $tmpName \
+ --metadata $metadata \
+ --account-name $storageAccount \
+ --auth-mode login
+
+ echo $tmpName
+
+ az storage container metadata show \
+ --name $tmpName \
+ --account-name $storageAccount \
+ --auth-mode login
+ fi
+done
``` ## Delete containers
Depending on your use case, you can delete a single container or a group of cont
```azurecli-interactive #!/bin/bash
-export AZURE_STORAGE_ACCOUNT="<storage-account>"
+storageAccount="<storage-account>"
containerName="demo-container-1" containerPrefix="demo-container-" # Delete a single named container az storage container delete \ --name $containerName \
+ --account-name $storageAccount \
--auth-mode login # Delete containers by iterating a loop list=$(az storage container list \ --query "[].name" \
- --auth-mode login \
--prefix $containerPrefix \
+ --account-name $storageAccount \
+ --auth-mode login \
--output tsv)
-for item in $list
+for row in $list
do
+ tmpName=$(echo $row | sed -e 's/\r//g')
az storage container delete \
- --name $item \
+ --name $tmpName \
+ --account-name $storageAccount \
--auth-mode login done ```
-If you have container soft delete enabled for your storage account, then it's possible to retrieve containers that have been deleted. If your storage account's soft delete data protection option is enabled, the `--include-deleted` parameter will return containers deleted within the associated retention period. The `--include-deleted` parameter can only be used in conjunction with the `--prefix` parameter when returning a list of containers. To learn more about soft delete, refer to the [Soft delete for containers](soft-delete-container-overview.md) article.
+If you have container soft delete enabled for your storage account, then it's possible to retrieve containers that have been deleted. If your storage account's soft delete data protection option is enabled, the `--include-deleted` parameter will return containers deleted within the associated retention period. The `--include-deleted` parameter can only be used to return containers when used with the `--prefix` parameter. To learn more about soft delete, refer to the [Soft delete for containers](soft-delete-container-overview.md) article.
Use the following example to retrieve a list of containers deleted within the storage account's associated retention period. ```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+containerPrefix="demo-container-"
+ # Retrieve a list of containers including those recently deleted az storage container list \
- --prefix $prefix \
+ --prefix $containerPrefix \
--include-deleted \
+ --account-name $storageAccount \
--auth-mode login ```
az storage container list \
As mentioned in the [List containers](#list-containers) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore containers deleted within the associated retention period. Before you can follow this example, you'll need to enable soft delete and configure it on at least one of your storage accounts.
-The following example explains how to restore a soft-deleted container with the `az storage container restore` command. You'll need to supply values for the `--name` and `--version` parameters to ensure that the correct version of the container is restored. If you don't know the version number, you can use the `az storage container list` command to retrieve it as shown in the following example.
+The following examples explain how to restore a soft-deleted container with the `az storage container restore` command. You'll need to supply values for the `--name` and `--version` parameters to ensure that the correct version of the container is restored. If you don't know the version number, you can use the `az storage container list` command to retrieve it as shown in the first example. The second example finds and restores all deleted containers within a specific storage account.
To learn more about the soft delete data protection option, refer to the [Soft delete for containers](soft-delete-container-overview.md) article. ```azurecli-interactive #!/bin/bash
-export AZURE_STORAGE_ACCOUNT="<storage-account>"
+storageAccount="<storage-account>"
containerName="demo-container-1" # Restore an individual named container containerVersion=$(az storage container list \
+ --account-name $storageAccount \
--query "[?name=='$containerName'].[version]" \ --auth-mode login \ --output tsv \
- --include-deleted)
+ --include-deleted | sed -e 's/\r//g')
az storage container restore \ --name $containerName \ --deleted-version $containerVersion \
+ --account-name $storageAccount \
--auth-mode login+
+# Restore a list of deleted containers
+containerList=$(az storage container list \
+ --account-name $storageAccount \
+ --include-deleted \
+ --auth-mode login \
+ --query "[?deleted].{name:name,version:version}" \
+ -o json)
+
+for row in $(echo "${containerList}" | jq -c '.[]' )
+do
+ tmpName=$(echo $row | jq -r '.name')
+ tmpVersion=$(echo $row | jq -r '.version')
+ az storage container restore \
+ --account-name $storageAccount \
+ --name $tmpName \
+ --deleted-version $tmpVersion \
+ --auth-mode login
+done
``` ## Get a shared access signature for a container
Azure Storage supports three types of shared access signatures: user delegation,
> [!CAUTION] > Any client that possesses a valid SAS can access data in your storage account as permitted by that SAS. It's important to protect a SAS from malicious or unintended use. Use discretion in distributing a SAS, and have a plan in place for revoking a compromised SAS.
-The following example illustrates the process of configuring a service SAS for a specific container using the `az storage container generate-sas` command. Because it is generating a service SAS, the example first retrieves the storage account key to pass as the `--account-key` value.
+The following example illustrates the process of configuring a service SAS for a specific container using the `az storage container generate-sas` command. Because it's generating a service SAS, the example first retrieves the storage account key to pass as the `--account-key` value.
The example will configure the SAS with start and expiry times and a protocol. It will also specify the **delete**, **read**, **write**, and **list** permissions in the SAS using the `--permissions` parameter. You can reference the full table of permissions in the [Create a service SAS](/rest/api/storageservices/create-service-sas) article.
+Copy and paste the Blob SAS token value in a secure location. It will only be displayed once and can't be retrieved once Bash is closed. To construct the SAS URL, append the SAS token (URI) to the URL for the storage service.
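As a hedged illustration of that last step (not part of the article's script), the following Python sketch assembles a container SAS URL from placeholder values; replace them with your storage account name, container name, and the SAS token returned by the command below.

```python
# Illustration only: every value below is a placeholder.
storage_account = "<storage-account>"
container_name = "demo-container-1"
sas_token = "<sas-token-returned-by-az-storage-container-generate-sas>"

# The SAS URL is the container URL with the SAS token appended as the query string.
sas_url = f"https://{storage_account}.blob.core.windows.net/{container_name}?{sas_token}"
print(sas_url)
```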
+ ```azurecli-interactive #!/bin/bash
-storageAccount="<storage-account-name>"
-export AZURE_STORAGE_ACCOUNT=$storageAccount
+storageAccount="<storage-account>"
containerName="demo-container-1" permissions="drwl" expiry=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
az storage container generate-sas \
--https-only \ --permissions dlrw \ --expiry $expiry \
- --account-key $accountKey
-```
-
-## Clean up resources
-
-If you want to delete the environment variables as part of this how-to article, run the following script.
-
-```azurecli
-# Remove environment variables
-unset AZURE_STORAGE_ACCOUNT
+ --account-key $accountKey \
+ --account-name $storageAccount
``` ## Next steps
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Title: Mount Azure Blob Storage by using the NFS 3.0 protocol | Microsoft Docs
-description: Learn how to mount a container in Blob storage from an Azure Virtual Machine (VM) or a client that runs on-premises by using the NFS 3.0 protocol.
+description: Learn how to mount a container in Blob Storage from an Azure virtual machine (VM) or a client that runs on-premises by using the NFS 3.0 protocol.
-# Mount Blob storage by using the Network File System (NFS) 3.0 protocol
+# Mount Blob Storage by using the Network File System (NFS) 3.0 protocol
-You can mount a container in Blob storage from a Linux-based Azure Virtual Machine (VM) or a Linux system that runs on-premises by using the NFS 3.0 protocol. This article provides step-by-step guidance. To learn more about NFS 3.0 protocol support in Blob storage, see [Network File System (NFS) 3.0 protocol support in Azure Blob Storage](network-file-system-protocol-support.md).
+This article provides guidance on how to mount a container in Azure Blob Storage from a Linux-based Azure virtual machine (VM) or a Linux system that runs on-premises by using the Network File System (NFS) 3.0 protocol. To learn more about NFS 3.0 protocol support in Blob Storage, see [Network File System (NFS) 3.0 protocol support in Azure Blob Storage](network-file-system-protocol-support.md).
-## Step 1: Create an Azure Virtual Network (VNet)
+## Step 1: Create an Azure virtual network
-Your storage account must be contained within a VNet. A VNet enables clients to securely connect to your storage account. To learn more about VNet, and how to create one, see the [Virtual Network documentation](../../virtual-network/index.yml).
+Your storage account must be contained within a virtual network. A virtual network enables clients to connect securely to your storage account. To learn more about Azure Virtual Network, and how to create a virtual network, see the [Virtual Network documentation](../../virtual-network/index.yml).
> [!NOTE]
-> Clients in the same VNet can mount containers in your account. You can also mount a container from a client that runs in an on-premises network, but you'll have to first connect your on-premises network to your VNet. See [Supported network connections](network-file-system-protocol-support.md#supported-network-connections).
+> Clients in the same virtual network can mount containers in your account. You can also mount a container from a client that runs in an on-premises network, but you'll have to first connect your on-premises network to your virtual network. See [Supported network connections](network-file-system-protocol-support.md#supported-network-connections).
## Step 2: Configure network security
-The only way to secure the data in your account is by using a VNet and other network security settings. Any other tool used to secure data including account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs) are not yet supported in accounts that have the NFS 3.0 protocol support enabled on them.
+Currently, the only way to secure the data in your storage account is by using a virtual network and other network security settings. Any other tools used to secure data, including account key authorization, Azure Active Directory (Azure AD) security, and access control lists (ACLs), are not yet supported in accounts that have the NFS 3.0 protocol support enabled on them.
To secure the data in your account, see these recommendations: [Network security recommendations for Blob storage](security-recommendations.md#networking). ## Step 3: Create and configure a storage account
-To mount a container by using NFS 3.0, You must create a storage account. You can't enable existing accounts.
+To mount a container by using NFS 3.0, you must create a storage account. You can't enable existing accounts.
-NFS 3.0 protocol is supported for standard general-purpose v2 storage accounts and for premium block blob storage accounts. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md).
+The NFS 3.0 protocol is supported for standard general-purpose v2 storage accounts and for premium block blob storage accounts. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md).
-As you configure the account, choose these values:
+To configure the account, choose these values:
|Setting | Premium performance | Standard performance |-|||
Create a container in your storage account by using any of these tools or SDKs:
||[REST](/rest/api/storageservices/create-container)| > [!NOTE]
-> By default, the root squash option of a new container is `no root squash`. But you can change that to `root squash` or `all squash`. For information about these squash options, see your operating system documentation.
+> By default, the root squash option of a new container is **No Root Squash**. But you can change that to **Root Squash** or **All Squash**. For information about these squash options, see your operating system documentation.
The following image shows the squash options as they appear in the Azure portal. > [!div class="mx-imgBorder"]
-> ![squash options in the Azure portal](./media/network-file-system-protocol-how-to/squash-options-azure-portal.png)
+> ![Screenshot that shows squash options in the Azure portal.](./media/network-file-system-protocol-how-to/squash-options-azure-portal.png)
## Step 5: Mount the container
-Create a directory on your Linux system, and then mount a container in the storage account.
+Create a directory on your Linux system, and then mount the container in the storage account.
-1. On a Linux system, create a directory.
+1. On your Linux system, create a directory:
``` mkdir -p /mnt/test ```
-2. Mount a container by using the following command.
+2. Mount the container by using the following command:
``` mount -o sec=sys,vers=3,nolock,proto=tcp <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /mnt/test
Create a directory on your Linux system, and then mount a container in the stora
- Replace the `<container-name>` placeholder with the name of your container. -- ## Resolve common errors
-|Error | Cause / resolution|
+|Error | Cause/resolution|
|||
-|`Access denied by server while mounting`|Ensure that your client is running within a supported subnet. See the [Supported network locations](network-file-system-protocol-support.md#supported-network-connections).|
-|`No such file or directory`| Make sure to type the mount command and it's parameters directly into the terminal. If you copy and paste any part of this command into the terminal from another application, hidden characters in the pasted information might cause this error to appear. This error also might appear if the account isn't enabled for NFS 3.0. |
-|`Permision denied`| The default mode of a newly created NFS v3 container is 0750. Non-root users do not have access to the volume. If access from non-root users is required, root user must change the mode to 0755. Sample command: `sudo chmod 0755 /mnt/<newcontainer>`|
-|`EINVAL ("Invalid argument"`) |This error can appear when a client attempts to:<li>Write to a blob that was created from a blob endpoint.<li>Delete a blob that has a snapshot or is in a container that has an active WORM (Write Once, Read Many) policy.|
-|`EROFS ("Read-only file system"`) |This error can appear when a client attempts to:<li>Write to a blob or delete a blob that has an active lease.<li>Write to a blob or delete a blob in a container that has an active WORM (Write Once, Read Many) policy. |
+|`Access denied by server while mounting`|Ensure that your client is running within a supported subnet. See [Supported network locations](network-file-system-protocol-support.md#supported-network-connections).|
+|`No such file or directory`| Make sure to type, rather than copy and paste, the mount command and its parameters directly into the terminal. If you copy and paste any part of this command into the terminal from another application, hidden characters in the pasted information might cause this error to appear. This error also might appear if the account isn't enabled for NFS 3.0.|
+|`Permission denied`| The default mode of a newly created NFS 3.0 container is 0750. Non-root users don't have access to the volume. If access from non-root users is required, root users must change the mode to 0755. Sample command: `sudo chmod 0755 /mnt/<newcontainer>`|
+|`EINVAL ("Invalid argument"`) |This error can appear when a client attempts to:<li>Write to a blob that was created from a blob endpoint.<li>Delete a blob that has a snapshot or is in a container that has an active WORM (write once, read many) policy.|
+|`EROFS ("Read-only file system"`) |This error can appear when a client attempts to:<li>Write to a blob or delete a blob that has an active lease.<li>Write to a blob or delete a blob in a container that has an active WORM policy. |
|`NFS3ERR_IO/EIO ("Input/output error"`) |This error can appear when a client attempts to read, write, or set attributes on blobs that are stored in the archive access tier. |
-|`OperationNotSupportedOnSymLink` error| This error can be returned during a write operation via a Blob or Azure Data Lake Storage Gen2 API. Using these APIs to write or delete symbolic links that are created by using NFS 3.0 is not allowed. Make sure to use the NFS v3 endpoint to work with symbolic links. |
-|`mount: /mnt/test: bad option;`| Install the nfs helper program using **sudo apt install nfs-common**.|
+|`OperationNotSupportedOnSymLink` error| This error can be returned during a write operation via a Blob Storage or Azure Data Lake Storage Gen2 API. Using these APIs to write or delete symbolic links that are created by using NFS 3.0 is not allowed. Make sure to use the NFS 3.0 endpoint to work with symbolic links. |
+|`mount: /mnt/test: bad option;`| Install the NFS helper program by using `sudo apt install nfs-common`.|
## See also
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace-cli.md
Previously updated : 08/25/2020 Last updated : 02/04/2022
In this quickstart, you learn to create a Synapse workspace by using the Azure C
[ ![Azure Synapse workspace web](media/quickstart-create-synapse-workspace-cli/create-workspace-cli-1.png) ](media/quickstart-create-synapse-workspace-cli/create-workspace-cli-1.png#lightbox)
+1. Once the workspace is deployed, additional permissions are required, as sketched in the example after this list.
+- In the Azure portal, assign other users of the workspace to the **Contributor** role in the workspace. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+- Assign other users the appropriate **[Synapse RBAC roles](security/synapse-workspace-synapse-rbac-roles.md)** using Synapse Studio.
+- A member of the **Owner** role of the Azure Storage account must assign the **Storage Blob Data Contributor** role to the Azure Synapse workspace MSI and other users.
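As an optional, hedged sketch (not part of the quickstart steps), these assignments could also be scripted by calling the Azure CLI from Python; the resource IDs, object IDs, workspace name, and the choice of the **Synapse Contributor** role below are placeholders and assumptions, not values from this article.

```python
# Hypothetical sketch: requires the Azure CLI to be installed and signed in.
# All IDs and names below are placeholders; adjust roles and scopes to your needs.
import subprocess

workspace_resource_id = "<synapse-workspace-resource-id>"
storage_account_resource_id = "<storage-account-resource-id>"
user_object_id = "<user-object-id>"
workspace_msi_object_id = "<workspace-managed-identity-object-id>"

# Azure RBAC: grant another user the Contributor role on the workspace resource.
subprocess.run(["az", "role", "assignment", "create",
                "--assignee", user_object_id,
                "--role", "Contributor",
                "--scope", workspace_resource_id], check=True)

# Synapse RBAC: grant a role inside the workspace.
subprocess.run(["az", "synapse", "role", "assignment", "create",
                "--workspace-name", "<workspace-name>",
                "--role", "Synapse Contributor",
                "--assignee", user_object_id], check=True)

# Let the workspace managed identity access the Data Lake Storage account.
subprocess.run(["az", "role", "assignment", "create",
                "--assignee", workspace_msi_object_id,
                "--role", "Storage Blob Data Contributor",
                "--scope", storage_account_resource_id], check=True)
```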
## Clean up resources
synapse-analytics Quickstart Create Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace-powershell.md
Previously updated : 10/19/2020 Last updated : 02/04/2022
# Quickstart: Create an Azure Synapse workspace with Azure PowerShell
-Azure PowerShell is a set of cmdlets for managing Azure resources directly from PowerShell. You can
-use it in your browser with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows.
+Azure PowerShell is a set of cmdlets for managing Azure resources directly from PowerShell. You can use it in your browser with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows.
In this quickstart, you learn to create a Synapse workspace using Azure PowerShell.
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
## Prerequisites
before you begin.
> **hierarchical namespace** at the creation of the storage account as described in > [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-powershell#create-a-storage-account).
-If you choose to use Cloud Shell, see
-[Overview of Azure Cloud Shell](../cloud-shell/overview.md) for more
-information.
+If you choose to use Cloud Shell, see [Overview of Azure Cloud Shell](../cloud-shell/overview.md) for more information.
### Install the Azure PowerShell module locally
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information
-about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
+If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
For more information about authentication with Azure PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps). ### Install the Azure Synapse PowerShell module > [!IMPORTANT]
-> While the **Az.Synapse** PowerShell module is in preview, you must install it separately using
-> the `Install-Module` cmdlet. After this PowerShell module becomes generally available, it will be
-> part of future Az PowerShell module releases and available by default from within Azure Cloud
-> Shell.
+> While the **Az.Synapse** PowerShell module is in preview, you must install it separately using the `Install-Module` cmdlet. After this PowerShell module becomes generally available, it will be part of future Az PowerShell module releases and available by default from within Azure Cloud Shell.
```azurepowershell-interactive Install-Module -Name Az.Synapse
Install-Module -Name Az.Synapse
![Azure Synapse workspace web](media/quickstart-create-synapse-workspace-powershell/create-workspace-powershell-1.png) +
+1. After the workspace is deployed, additional permissions are required.
+- In the Azure portal, assign other users of the workspace to the **Contributor** role in the workspace. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+- Assign other users the appropriate **[Synapse RBAC roles](security/synapse-workspace-synapse-rbac-roles.md)** using Synapse Studio. A command-line sketch follows this list.
+- A member of the **Owner** role of the Azure Storage account must assign the **Storage Blob Data Contributor** role to the Azure Synapse workspace MSI and other users.
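Synapse RBAC roles can also be granted from the command line instead of Synapse Studio. The following is a minimal sketch using the Azure CLI (assuming the `az synapse` command group is available in your CLI version); the workspace name, role, and assignee are placeholders.

```azurecli
# Hypothetical names: mysynapseworkspace, user@contoso.com.
# Grants the Synapse Contributor role within the workspace to another user.
az synapse role assignment create --workspace-name mysynapseworkspace \
    --role "Synapse Contributor" --assignee "user@contoso.com"
```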
+ ## Clean up resources Follow the steps below to delete the Azure Synapse workspace.
Remove-AzSynapseWorkspace -Name $SynapseWorkspaceNam -ResourceGroupName $Synapse
## Next steps
-Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or
-[create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and
-exploring your data.
+Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or [create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and exploring your data.
synapse-analytics Quickstart Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace.md
Managed identities for your Azure Synapse workspace might already have access to
* [Create a dedicated SQL pool](quickstart-create-sql-pool-studio.md) * [Create a serverless Apache Spark pool](quickstart-create-apache-spark-pool-portal.md)
-* [Use serverless SQL pool](quickstart-sql-on-demand.md)
+* [Use serverless SQL pool](quickstart-sql-on-demand.md)
synapse-analytics Quickstart Deployment Template Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-deployment-template-workspaces.md
Title: 'Quickstart: Create an Azure Synapse workspace Azure Resource Manager template (ARM template)' description: Learn how to create a Synapse workspace by using Azure Resource Manager template (ARM template). - + Previously updated : 08/07/2020 Last updated : 02/04/2022 # Quickstart: Create an Azure Synapse workspace using an ARM template
-This Azure Resource Manager template (ARM template) will create an Azure Synapse workspace with underlying Data Lake Storage. The Azure Synapse workspace is a securable collaboration boundary for analytics processes in Azure Synapse Analytics.
+This Azure Resource Manager (ARM) template will create an Azure Synapse workspace with underlying Data Lake Storage. The Azure Synapse workspace is a securable collaboration boundary for analytics processes in Azure Synapse Analytics.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
If your environment meets the prerequisites and you're familiar with using ARM t
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+To create an Azure Synapse workspace, a user must have **Azure Contributor** role and **User Access Administrator** permissions, or the **Owner** role in the subscription. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+ ## Review the template You can review the template by selecting the **Visualize** link. Then select **Edit template**.
The template defines two resources:
- **Review and Create**: Select. - **Create**: Select.
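If you prefer the command line, the same template can be deployed with the Azure CLI. The following is a minimal sketch, assuming the quickstart template has been saved locally as `azuredeploy.json` (a hypothetical file name); you can pass parameters with `--parameters`, or let the CLI prompt for the required ones.

```azurecli
# Create a resource group and deploy the template into it.
# The template file name is a placeholder for a local copy of the quickstart template.
az group create --name myResourceGroup --location eastus

az deployment group create \
    --resource-group myResourceGroup \
    --template-file azuredeploy.json
```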
+1. After the workspace is deployed, additional permissions are required.
+- In the Azure portal, assign other users of the workspace to the **Contributor** role in the workspace. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+- Assign other users the appropriate **[Synapse RBAC roles](security/synapse-workspace-synapse-rbac-roles.md)** using Synapse Studio.
+- A member of the **Owner** role of the Azure Storage account must assign the **Storage Blob Data Contributor** role to the Azure Synapse workspace MSI and other users, as sketched below.
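The storage role assignment in the last bullet can also be scripted. The following is a minimal sketch with placeholder names (`mysynapseworkspace`, `mystorageaccount`, `myResourceGroup`); it assumes the workspace has a system-assigned managed identity and that you have permission to create role assignments on the storage account.

```azurecli
# Hypothetical names throughout; adjust for your environment.
# Get the workspace managed identity and the storage account resource ID.
principalId=$(az synapse workspace show --name mysynapseworkspace \
    --resource-group myResourceGroup --query identity.principalId -o tsv)
storageId=$(az storage account show --name mystorageaccount \
    --resource-group myResourceGroup --query id -o tsv)

# Grant the workspace identity Storage Blob Data Contributor on the storage account.
az role assignment create --assignee-object-id $principalId \
    --assignee-principal-type ServicePrincipal \
    --role "Storage Blob Data Contributor" --scope $storageId
```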
+ ## Next steps
-To learn more about Azure Synapse Analytics and Azure Resource Manager, continue on to the articles below.
+To learn more about Azure Synapse Analytics and Azure Resource Manager, see the following articles:
- Read an [Overview of Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) - Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md) - [Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)+
+Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or [create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and exploring your data.
+
virtual-machine-scale-sets Cli Sample Manage Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-manage-scale-set.md
- Title: Azure CLI sample for virtual machine scale set management
-description: This sample shows how to add disks to a virtual machine scale set. You can upgrade disks and add your virtual machines to Azure AD authentication.
-- Previously updated : 02/04/2021-----
-# Create and manage virtual machine scale set
-
-Use these sample commands to prototype a virtual machine scale set by using Azure CLI.
-
-These sample commands demonstrate the following operations:
-
-* Create a virtual machine scale set.
-* Add and upgrade new or existing disks to a scale set or to an instance of the set.
-* Add scale set to Azure Active Directory (Azure AD) authentication.
--
-## Sample commands
-
-```azurecli
-# Create a resource group
-az group create --name MyResourceGroup --location eastus
-
-# Create virtual machine scale set
-az vmss create --resource-group MyResourceGroup --name myScaleSet --instance-count 2 \
- --image UbuntuLTS --upgrade-policy-mode automatic --admin-username azureuser \
- --generate-ssh-keys
-
-# Attach a new managed disk to your scale set
-az vmss disk attach --resource-group MyResourceGroup --vmss-name myScaleSet --size-gb 50
-```
-
-After you add a new data disk, format and mount the disk. For Windows virtual machines, see [Attach a managed data disk to a Windows VM by using the Azure portal](../../virtual-machines/windows/attach-managed-disk-portal.md). For Linux virtual machines, see [Add a disk to a Linux VM](../../virtual-machines/linux/add-disk.md).
-
-```azurecli
-# Attach an existing managed disk to a virtual machine instance in your scale set
-az vmss disk attach --resource-group MyResourceGroup --disk myDataDisk \
- --vmss-name myScaleSet --instance-id 0
-
-# See the instances in your virtual machine scale set
-az vmss list-instances --resource-group MyResourceGroup --name myScaleSet --output table
-
-# See the disks for your virtual machine
-az disk list --resource-group MyResourceGroup \
- --query "[*].{Name:name,Gb:diskSizeGb,Tier:accountType}" --output table
-
-# Deallocate the virtual machine
-az vmss deallocate --resource-group MyResourceGroup --name myScaleSet --instance-ids 0
-
-# Resize the disk
-az disk update --resource-group MyResourceGroup --name myDataDisk --size-gb 200
-
-# Restart the virtual machine instance
-az vmss restart --resource-group MyResourceGroup --name myScaleSet --instance-ids 0
-```
-
-To use the expanded disk, expand the underlying partition. For more information, see [Expand a disk partition and filesystem](../../virtual-machines/linux/expand-disks.md#expand-a-disk-partition-and-filesystem).
-
-This example resized a data disk. You can use this same procedure to update an OS disk. For more information for a Windows virtual machine, see [How to expand the OS drive of a virtual machine](../../virtual-machines/windows/expand-os-disk.md). For more information for Linux virtual machines, see [Expand virtual hard disks on a Linux VM with the Azure CLI](../../virtual-machines/linux/expand-disks.md).
-
-```azurecli
-# Enable managed service identity on your scale set. This is required to authenticate and interact with other Azure services using bearer tokens.
-az vmss identity assign --resource-group MyResourceGroup --name myScaleSet --role Owner \
- --scope /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup
-
-# Connect to Azure AD authentication
-az vmss extension set --resource-group MyResourceGroup --name AADLoginForWindows \
- --publisher Microsoft.Azure.ActiveDirectory --vmss-name myScaleSet
-
-# Upgrade one instance of a scale set virtual machine
-az vmss update-instances --resource-group MyResourceGroup --name myScaleSet --instance-ids 0
-
-# Remove a managed disk from the scale set
-az vmss disk detach --resource-group MyResourceGroup --vmss-name myScaleSet --lun 0
-
-# Remove a managed disk from an instance
-az vmss disk detach --resource-group MyResourceGroup --vmss-name myScaleSet --instance-id 1 --lun 0
-
-# Delete the pre-existing disk
-az disk delete --resource-group MyResourceGroup --disk myDataDisk
-```
-
-## Clean up resources
-
-After using these commands, run the following command to remove the resource group and all resources associated with it.
-
-```azurecli
-az group delete --name MyResourceGroup
-```
-
-## Azure CLI references used in this article
-
-* [az disk delete](/cli/azure/disk#az_disk_delete)
-* [az disk list](/cli/azure/disk#az_disk_list)
-* [az disk update](/cli/azure/disk#az_disk_update)
-* [az group create](/cli/azure/group#az_group_create)
-* [az vmss create](/cli/azure/vmss#az_vmss_create)
-* [az vmss deallocate](/cli/azure/vmss#az_vmss_deallocate)
-* [az vmss disk attach](/cli/azure/vmss/disk#az_vmss_disk_attach)
-* [az vmss disk detach](/cli/azure/vmss/disk#az_vmss_disk_detach)
-* [az vmss extension set](/cli/azure/vmss/extension#az_vmss_extension_set)
-* [az vmss identity assign](/cli/azure/vmss/identity#az_vmss_identity_assign)
-* [az vmss list-instances](/cli/azure/vmss#az_vmss_list_instances)
-* [az vmss restart](/cli/azure/vmss#az_vmss_restart)
-* [az vmss update-instances](/cli/azure/vmss#az_vmss_update_instances)
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
+
+ Title: Azure Hybrid Benefit for BYOS Linux VMs
+description: Learn how Azure Hybrid Benefit can help get updates from Azure infrastructure for Linux machines on Azure.
+
+documentationcenter: ''
+ Last updated : 02/06/2022
+# How Azure Hybrid Benefit for BYOS VMs (AHB BYOS) applies for Linux virtual machines
+
+>[!IMPORTANT]
+>This article is scoped to Azure Hybrid Benefit for BYOS VMs (AHB BYOS), which covers conversion of custom on-premises image VMs and RHEL or SLES BYOS VMs. For conversion of RHEL PAYG or SLES PAYG VMs, see [Azure Hybrid Benefit for PAYG VMs](./azure-hybrid-benefit-linux.md).
+
+>[!NOTE]
+>Azure Hybrid Benefit for BYOS VMs is planned to enter preview on **30 March 2022**. You can [sign up for the preview here](https://aka.ms/ahb-linux-form). You will receive an email from Microsoft once your subscriptions are enabled for the preview.
++
+Azure Hybrid Benefit for BYOS VMs is a licensing benefit that helps you get software updates and integrated support for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) directly from Azure infrastructure. This benefit is available to RHEL and SLES custom on-premises image VMs (VMs generated from on-premises images), and to RHEL and SLES Marketplace bring-your-own-subscription (BYOS) VMs.
+
+## Benefit description
+Before AHB BYOS, RHEL and SLES customers who migrated their on-premises machines to Azure by creating images of those systems and running them as VMs didn't have the flexibility to get software updates directly from Azure the way Marketplace PAYG VMs do. They still had to buy cloud access licenses from the Enterprise Linux distributors to get security support and software updates. With Azure Hybrid Benefit for BYOS VMs, you can get software updates and support for custom on-premises image VMs and for RHEL and SLES BYOS VMs, just like PAYG VMs, by paying the same software fees that are charged to PAYG VMs. In addition, these conversions happen without any redeployment, so you avoid any downtime risk.
++
+After you enable the AHB for BYOS VMs benefit on a RHEL or SLES VM, you are charged the additional software fee typically incurred on a PAYG VM, and you also start getting the software updates typically provided to a PAYG VM.
+
+You can also convert a VM that has the benefit enabled back to the BYOS billing model, which stops software billing and software updates from Azure infrastructure.
+
+## Scope of Azure Hybrid Benefit for BYOS VMs eligibility for Linux VMs
+
+**Azure Hybrid Benefit for BYOS VMs** is available for all RHEL and SLES custom on-premises image VMs as well as RHEL and SLES Marketplace BYOS VMs. For RHEL and SLES PAYG Marketplace VMs, see [AHB for PAYG VMs](./azure-hybrid-benefit-linux.md).
+
+Azure Dedicated Host instances and SQL hybrid benefits are not eligible for Azure Hybrid Benefit for BYOS VMs if you're already using the benefit with Linux VMs. Virtual Machine Scale Sets (VMSS) and Reserved Instances (RIs) are not in scope for AHB BYOS.
+
+## Get started
+
+### Red Hat customers
+
+To start using the benefit for Red Hat:
+
+1. Install the 'AHBForRHEL' extension on the virtual machine to which you want to apply the AHB BYOS benefit. This is a prerequisite for the next step. You can install the extension through the Azure portal or the Azure CLI.
+
+
+1. Depending on the software updates you want, change the license type to the relevant value. Here are the available license type values and the software updates associated with them:
+
+ | License Type | Software Updates | Allowed VMs|
+ ||||
+ | RHEL_BASE | Installs Red Hat regular/base repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
+ | RHEL_EUS | Installs Red Hat Extended Update Support (EUS) repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
+ | RHEL_SAPAPPS | Installs RHEL for SAP Business Apps repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
+ | RHEL_SAPHA | Installs RHEL for SAP with HA repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
+ | RHEL_BASESAPAPPS | Installs RHEL regular/base SAP Business Apps repositories into your virtual machine. | RHEL BYOS VMs, RHEL custom on-prem image VMs|
+ | RHEL_BASESAPHA | Installs regular/base RHEL for SAP with HA repositories into your virtual machine.| RHEL BYOS VMs, RHEL custom on-prem image VMs|
+
+1. Wait for one hour for the extension to read the license type value and install the repositories.
+
+1. You should now be connected to Azure Red Hat Update Infrastructure, and the relevant repositories will be installed on your machine.
+
+1. If the extension doesn't run on its own, you can run it on demand.
+
+1. If you want to switch back to the bring-your-own-subscription model, change the license type to 'None' and run the extension. This removes all RHUI repositories from your virtual machine and stops the billing.
+
+>[!Note]
+> In the unlikely event that the extension can't install the repositories or you run into any other issues, change the license type back to empty and contact support for help. This ensures you aren't billed for software updates.
++
+### SUSE customers
+
+To start using the benefit for SUSE:
+
+1. Install the Azure Hybrid Benefit for BYOS VMs extension on the virtual machine to which you want to apply the AHB BYOS benefit. This is a prerequisite for the next step.
+1. Depending on the software updates you want, change the license type to the relevant value. Here are the available license type values and the software updates associated with them:
+
+ | License Type | Software Updates | Allowed VMs|
+ ||||
+ | SLES_STANDARD | Installs SLES standard repositories into your virtual machine. | SLES BYOS VMs, SLES custom on-prem image VMs|
+ | SLES_SAP | Installs SLES SAP repositories into your virtual machine. | SLES SAP BYOS VMs, SLES custom on-prem image VMs|
+ | SLES_HPC | Installs SLES High Performance Compute related repositories into your virtual machine. | SLES HPC BYOS VMs, SLES custom on-prem image VMs|
+
+1. Wait for 5 minutes for the extension to read the license type value and install the repositories.
+
+1. You should now be connected to Azure SLES Update Infrastructure, and the relevant repositories will be installed on your machine.
+
+1. If the extension doesn't run on its own, you can run it on demand.
+
+1. If you want to switch back to the bring-your-own-subscription model, change the license type to 'None' and run the extension. This removes all repositories from your virtual machine and stops the billing.
+
+## Enable and disable the benefit for RHEL
+
+Install the `AHBForRHEL` extension on the virtual machine. After the extension is installed successfully,
+you can use the `az vm update` command to update the existing license type on running VMs. For RHEL VMs, run the command and set the `--license-type` parameter to one of the following: `RHEL_BASE`, `RHEL_EUS`, `RHEL_SAPHA`, `RHEL_SAPAPPS`, `RHEL_BASESAPAPPS` or `RHEL_BASESAPHA`.
++
+### CLI example to enable the benefit for RHEL
+1. Install the Azure Hybrid Benefit extension on running VM using the portal or via Azure CLI using the command below:
+ ```azurecli
+ az vm extension set -n AHBForRHEL --publisher Microsoft.Azure.AzureHybridBenefit --vm-name myVMName --resource-group myResourceGroup
+ ```
+1. Once the extension is installed successfully, change the license type based on your requirements:
+
+ ```azurecli
+ # This will enable the benefit to fetch software updates for RHEL base/regular repositories
+ az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASE
+
+ # This will enable the benefit to fetch software updates for RHEL EUS repositories
+ az vm update -g myResourceGroup -n myVmName --license-type RHEL_EUS
+
+ # This will enable the benefit to fetch software updates for RHEL SAP APPS repositories
+ az vm update -g myResourceGroup -n myVmName --license-type RHEL_SAPAPPS
+
+ # This will enable the benefit to fetch software updates for RHEL SAP HA repositories
+ az vm update -g myResourceGroup -n myVmName --license-type RHEL_SAPHA
+
+ # This will enable the benefit to fetch software updates for RHEL BASE SAP APPS repositories
+ az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASESAPAPPS
+
+ # This will enable the benefit to fetch software updates for RHEL BASE SAP HA repositories
+ az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASESAPHA
+
+ ```
+1. Wait for 5 minutes for the extension to read the license type value and install the repositories.
+
+1. You should now be connected to Azure Red Hat Update Infrastructure, and the relevant repositories will be installed on your machine. You can verify this by running the following command on your VM, which lists the installed repositories:
+ ```bash
+ yum repolist
+ ```
 1. If the extension doesn't run on its own, you can run it on demand from within the VM:
+ ```bash
+
+ ```
+
+## Enable and disable the benefit for SLES
+
+Install the `AHBForSLES` extension on the virtual machine. After the extension is installed successfully,
+you can use the `az vm update` command to update the existing license type on running VMs. For SLES VMs, run the command and set the `--license-type` parameter to one of the following: `SLES_STANDARD`, `SLES_SAP` or `SLES_HPC`.
+
+### CLI example to enable the benefit for SLES
+1. Install the Azure Hybrid Benefit extension on running VM using the portal or via Azure CLI using the command below:
+ ```azurecli
+ az vm extension set -n AHBForSLES --publisher publisherName --vm-name myVMName --resource-group myResourceGroup
+ ```
+1. Once the extension is installed successfully, change the license type based on your requirements:
+
+ ```azurecli
+ # This will enable the benefit to fetch software updates for SLES STANDARD repositories
+ az vm update -g myResourceGroup -n myVmName --license-type SLES_STANDARD
+
+ # This will enable the benefit to fetch software updates for SLES SAP repositories
+ az vm update -g myResourceGroup -n myVmName --license-type SLES_SAP
+
+ # This will enable the benefit to fetch software updates for SLES HPC repositories
+ az vm update -g myResourceGroup -n myVmName --license-type SLES_HPC
+
+ ```
+1. Wait for 5 minutes for the extension to read the license type value and install the repositories.
+
+1. You should now be connected to Azure SLES Update Infrastructure, and the relevant repositories will be installed on your machine. You can verify this by running the following command on your VM, which lists the installed repositories:
+ ```bash
+ zypper repos
+ ```
+
+### CLI example to disable the benefit
+1. Ensure that the Azure Hybrid Benefit extension is installed on your VM.
+1. To disable the benefit, run the following command:
+
+ ```azurecli
+ # This will disable the benefit on a VM
+ az vm update -g myResourceGroup -n myVmName --license-type None
+ ```
+
+## Check the AHB BYOS status of a VM
+To check the status of Azure Hybrid Benefit for BYOS VMs on a VM:
+1. Ensure that the Azure Hybrid Benefit extension is installed.
+1. You can view the Azure Hybrid Benefit status of a VM by using the Azure CLI or by using Azure Instance Metadata Service.
+
    You can use the command below for this purpose. Look for a `licenseType` field in the response. If the `licenseType` field exists and its value is one of the following, your VM has the benefit enabled (a filtered query is sketched after the command below):
    `RHEL_BASE`, `RHEL_EUS`, `RHEL_SAPAPPS`, `RHEL_SAPHA`, `RHEL_BASESAPAPPS`, `RHEL_BASESAPHA`, `SLES_STANDARD`, `SLES_SAP`, `SLES_HPC`
+
+ ```azurecli
+ az vm get-instance-view -g MyResourceGroup -n MyVm
+ ```
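    If you only want the license type rather than the full instance view, you can filter the output with a JMESPath query. This is a minimal sketch that assumes `licenseType` is populated on the VM resource when the benefit is enabled, as described above; it returns an empty result otherwise.

    ```azurecli
    # Returns only the licenseType value for the VM (empty if the benefit isn't enabled).
    az vm get-instance-view -g MyResourceGroup -n MyVm --query "licenseType" -o tsv
    ```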
+
+## Compliance
+
+### Red Hat
+
+Customers who use Azure Hybrid Benefit for BYOS VMs for RHEL agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offerings.
+
+### SUSE
+
+To use Azure Hybrid Benefit for BYOS VMs for your SLES VMs, and for information about moving from SLES PAYG to BYOS or moving from SLES BYOS to PAYG, see [SUSE Linux Enterprise and Azure Hybrid Benefit](https://aka.ms/suse-ahb).
+
+## Frequently asked questions
+*Q: What is the additional licensing cost I pay with AHB for BYOS VMs?*
+
+A: When you use AHB for BYOS VMs, you essentially convert your bring-your-own-subscription (BYOS) billing model to a pay-as-you-go (PAYG) billing model, so you pay the same software subscription cost as PAYG VMs. The table below maps the license types to the corresponding PAYG offerings on Azure and links to their pricing pages to help you understand the cost associated with AHB for BYOS VMs.
+
+| License type | Relevant PAYG VM image & Pricing Link (Keep the AHB for PAYG filter off) |
+|||
+| RHEL_BASE | [Red Hat Enterprise Linux](https://azure.microsoft.com/pricing/details/virtual-machines/red-hat/) |
+| RHEL_SAPAPPS | [RHEL for SAP Business Applications](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-business/) |
+| RHEL_SAPHA | [RHEL for SAP with HA](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-ha/) |
+| RHEL_BASESAPAPPS | [RHEL for SAP Business Applications](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-business/) |
+| RHEL_BASESAPHA | [RHEL for SAP with HA](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-ha/) |
+| RHEL_EUS | [Red Hat Enterprise Linux](https://azure.microsoft.com/pricing/details/virtual-machines/red-hat/) |
+| SLES_STANDARD | [SLES Standard](https://azure.microsoft.com/pricing/details/virtual-machines/sles-standard/) |
+| SLES_SAP | [SLES SAP](https://azure.microsoft.com/pricing/details/virtual-machines/sles-sap/) |
+| SLES_HPC | [SLES HPC](https://azure.microsoft.com/pricing/details/virtual-machines/sles-hpc-standard/) |
+
+*Q: Can I use a license type designated for RHEL (such as `RHEL_BASE`) with a SLES image, or vice versa?*
+
+A: No, you can't. Entering a license type that doesn't match the distribution running on your VM will fail, and you might end up being billed incorrectly. However, if you accidentally enter the wrong license type, you can either change the license type to empty to remove the billing, or update your VM again with the correct license type to enable the benefit.
+
+*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Can I convert the billing on these images from BYOS to PAYG?*
+
+A: Yes, this is exactly the capability that AHB for BYOS VMs provides. Follow the [steps described here](#get-started).
+
+*Q: Can I use Azure Hybrid Benefit for BYOS VMs on RHEL and SLES PAYG Marketplace VMs?*
+
+A: No, as these VMs are already pay-as-you-go (PAYG). However, with AHB v1 and v2 you can use the `RHEL_BYOS` license type for RHEL VMs and `SLES_BYOS` for SLES VMs to convert PAYG Marketplace VMs to BYOS billing. You can read more about [AHB for PAYG VMs here](./azure-hybrid-benefit-linux.md).
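For reference, that conversion is a single `az vm update` call. A minimal sketch with placeholder resource names:

```azurecli
# Converts a RHEL PAYG Marketplace VM to BYOS billing (AHB for PAYG VMs).
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BYOS

# Converts a SLES PAYG Marketplace VM to BYOS billing.
az vm update -g myResourceGroup -n myVmName --license-type SLES_BYOS
```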
+
+*Q: Can I use Azure Hybrid Benefit for BYOS VMs on virtual machine scale sets for RHEL and SLES?*
+
+A: No, Azure Hybrid Benefit for BYOS VMs is not available for virtual machine scale sets currently.
+
+*Q: Can I use Azure Hybrid Benefit for BYOS VMs on a virtual machine deployed for SQL Server on RHEL images?*
+
+A: No, you can't. There is no plan for supporting these virtual machines.
+
+*Q: Can I use Azure Hybrid Benefit for BYOS VMs on my RHEL Virtual Data Center subscription?*
+
+A: No, you cannot. VDC is not supported on Azure at all, including AHB.
+
+
+## Next steps
+* [Learn how to convert RHEL and SLES PAYG VMs to BYOS using AHB for PAYG VMs](./azure-hybrid-benefit-linux.md)
+
+* [Learn how to create and update VMs and add license types (RHEL_BYOS, SLES_BYOS) for Azure Hybrid Benefit by using the Azure CLI](/cli/azure/vm)
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Title: Azure Hybrid Benefit and Linux VMs
+ Title: Azure Hybrid Benefit for PAYG Linux VMs
description: Learn how Azure Hybrid Benefit can help you save money on your Linux virtual machines running on Azure. documentationcenter: '' -+
Last updated 09/22/2020
-# How Azure Hybrid Benefit applies for Linux virtual machines
+# How Azure Hybrid Benefit for PAYG Marketplace VMs applies for Linux virtual machines
+
+>[!IMPORTANT]
+>This article is scoped to Azure Hybrid Benefit for PAYG Marketplace VMs, which covers conversion of RHEL PAYG or SLES PAYG VMs to the BYOS billing model. For conversion of custom on-premises image VMs and RHEL or SLES BYOS VMs to the PAYG billing model, see [Azure Hybrid Benefit for BYOS VMs](./azure-hybrid-benefit-byos-linux.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) in the cloud. With this benefit, you pay for only the infrastructure costs of your VM because your RHEL or SLES subscription covers the software fee. The benefit is available for all RHEL and SLES Marketplace pay-as-you-go (PAYG) images.
+Azure Hybrid Benefit for PAYG VMs is a licensing benefit that helps you to significantly reduce the costs of running your Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) in the cloud. With this benefit, you pay for only the infrastructure costs of your VM because your RHEL or SLES subscription covers the software fee. The benefit is available for all RHEL and SLES Marketplace pay-as-you-go (PAYG) images.
Azure Hybrid Benefit for Linux VMs is now publicly available. ## Benefit description
-Through Azure Hybrid Benefit, you can migrate your on-premises RHEL and SLES servers to Azure by converting existing RHEL and SLES PAYG VMs on Azure to bring-your-own-subscription (BYOS) billing. Typically, VMs deployed from PAYG images on Azure will charge both an infrastructure fee and a software fee. With Azure Hybrid Benefit, PAYG VMs can be converted to a BYOS billing model without a redeployment, so you can avoid any downtime risk.
+Through Azure Hybrid Benefit for PAYG VMs, you can migrate your on-premises RHEL and SLES servers to Azure by converting existing RHEL and SLES PAYG VMs on Azure to bring-your-own-subscription (BYOS) billing. Typically, VMs deployed from PAYG images on Azure will charge both an infrastructure fee and a software fee. With Azure Hybrid Benefit, PAYG VMs can be converted to a BYOS billing model without a redeployment, so you can avoid any downtime risk.
:::image type="content" source="./media/ahb-linux/azure-hybrid-benefit-cost.png" alt-text="Azure Hybrid Benefit cost visualization on Linux VMs.":::
After you enable the benefit on RHEL or SLES VM, you'll no longer be charged for
You can also choose to convert a VM that has had the benefit enabled on it back to a PAYG billing model.
-## Scope of Azure Hybrid Benefit eligibility for Linux VMs
+## Scope of Azure Hybrid Benefit for PAYG VMs
+
+**Azure Hybrid Benefit for PAYG VMs** is available for all RHEL and SLES PAYG images from Azure Marketplace.
-Azure Hybrid Benefit is available for all RHEL and SLES PAYG images from Azure Marketplace. The benefit is not yet available for RHEL or SLES BYOS images or custom images from Azure Marketplace.
+**Azure Hybrid Benefit for BYOS VMs** is available for RHEL or SLES BYOS images or custom images from Azure Marketplace. You can [read more about AHB for BYOS VMs here.](./azure-hybrid-benefit-byos-linux.md)
Azure Dedicated Host instances, and SQL hybrid benefits are not eligible for Azure Hybrid Benefit if you're already using the benefit with Linux VMs.
Azure Dedicated Host instances, and SQL hybrid benefits are not eligible for Azu
### Red Hat customers
-Azure Hybrid Benefit for RHEL is available to Red Hat customers who meet both of these criteria:
+Azure Hybrid Benefit for PAYG VMs for RHEL is available to Red Hat customers who meet both of these criteria:
- Have active or unused RHEL subscriptions that are eligible for use in Azure - Have enabled one or more of those subscriptions for use in Azure with the [Red Hat Cloud Access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program
To start using the benefit for Red Hat:
1. Enable one or more of your eligible RHEL subscriptions for use in Azure by using the [Red Hat Cloud Access customer interface](https://access.redhat.com/management/cloud). The Azure subscriptions that you provide during the Red Hat Cloud Access enablement process will then be permitted to use the Azure Hybrid Benefit feature.
-1. Apply Azure Hybrid Benefit to any of your existing RHEL PAYG VMs and any new RHEL VMs that you deploy from Azure Marketplace PAYG images. You can use Azure portal or Azure CLI for enabling the benefit.
+1. Apply Azure Hybrid Benefit for PAYG VMs to any of your existing RHEL PAYG VMs and any new RHEL VMs that you deploy from Azure Marketplace PAYG images. You can use Azure portal or Azure CLI for enabling the benefit.
1. Follow recommended [next steps](https://access.redhat.com/articles/5419341) for configuring update sources for your RHEL VMs and for RHEL subscription compliance guidelines. ### SUSE customers
-Azure Hybrid Benefit for SUSE is available to customers who have:
+Azure Hybrid Benefit for PAYG VMs for SUSE is available to customers who have:
- Unused SUSE subscriptions that are eligible to use in Azure. - One or more active SUSE subscriptions to use on-premises that should be moved to Azure.
$(az vm list -g MyResourceGroup --query "[].id" -o tsv)
az vm list -o json | jq '.[] | {VMName: .name, ResourceID: .id}' ```
-## Apply the Azure Hybrid Benefit at VM create time
-In addition to applying the Azure Hybrid Benefit to existing pay-as-you-go VMs, you can invoke it at the time of VM creation. The benefits of doing so are threefold:
+## Apply the Azure Hybrid Benefit for PAYG VMs at VM create time
+In addition to applying the Azure Hybrid Benefit for PAYG VMs to existing pay-as-you-go VMs, you can invoke it at the time of VM creation; a CLI sketch follows the list below. The benefits of doing so are threefold:
- You can provision both PAYG and BYOS VMs by using the same image and process. - It enables future licensing mode changes, something not available with a BYOS-only image or if you bring your own VM. - The VM will be connected to Red Hat Update Infrastructure (RHUI) by default, to ensure that it remains up to date and secure. You can change the update mechanism after deployment at any time.
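The following is a minimal sketch of applying the benefit at creation time with the Azure CLI. The resource names and the image URN are placeholders, and the `--license-type` value must match the distribution of the image you deploy.

```azurecli
# Placeholder names and image URN; adjust for your environment.
# Creates a RHEL PAYG Marketplace VM with the benefit applied at creation time.
az vm create --resource-group myResourceGroup --name myVM \
    --image RedHat:RHEL:8-lvm-gen2:latest \
    --license-type RHEL_BYOS \
    --admin-username azureuser --generate-ssh-keys
```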
-## Check the Azure Hybrid Benefit status of a VM
-You can view the Azure Hybrid Benefit status of a VM by using the Azure CLI or by using Azure Instance Metadata Service.
+## Check the Azure Hybrid Benefit for PAYG VMs status of a VM
+You can view the Azure Hybrid Benefit for PAYG VMs status of a VM by using the Azure CLI or by using Azure Instance Metadata Service.
### Azure CLI
From within the VM itself, you can query the attested metadata in Azure Instance
### Red Hat
-Customers who use Azure Hybrid Benefit for RHEL agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offerings.
+Customers who use Azure Hybrid Benefit for PAYG RHEL VMs agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offerings.
-Customers who use Azure Hybrid Benefit for RHEL have three options for providing software updates and patches to those VMs:
+Customers who use Azure Hybrid Benefit for PAYG RHEL VMs have three options for providing software updates and patches to those VMs:
- [Red Hat Update Infrastructure](../workloads/redhat/redhat-rhui.md) (default option) - Red Hat Satellite Server - Red Hat Subscription Manager
-Customers who choose the RHUI option can continue to use RHUI as the main update source for their Azure Hybrid Benefit RHEL VMs without attaching RHEL subscriptions to those VMs. Customers who choose the RHUI option are responsible for ensuring RHEL subscription compliance.
+Customers who choose the RHUI option can continue to use RHUI as the main update source for their Azure Hybrid Benefit for PAYG RHEL VMs without attaching RHEL subscriptions to those VMs. Customers who choose the RHUI option are responsible for ensuring RHEL subscription compliance.
-Customers who choose either Red Hat Satellite Server or Red Hat Subscription Manager should remove the RHUI configuration and then attach a Cloud Access enabled RHEL subscription to their Azure Hybrid Benefit RHEL VMs.
+Customers who choose either Red Hat Satellite Server or Red Hat Subscription Manager should remove the RHUI configuration and then attach a Cloud Access enabled RHEL subscription to their Azure Hybrid Benefit for PAYG RHEL VMs.
-For more information about Red Hat subscription compliance, software updates, and sources for Azure Hybrid Benefit RHEL VMs, see the [Red Hat article about using RHEL subscriptions with Azure Hybrid Benefit](https://access.redhat.com/articles/5419341).
+For more information about Red Hat subscription compliance, software updates, and sources for Azure Hybrid Benefit for PAYG RHEL VMs, see the [Red Hat article about using RHEL subscriptions with Azure Hybrid Benefit](https://access.redhat.com/articles/5419341).
### SUSE
-To use Azure Hybrid Benefit for your SLES VMs, and for information about moving from SLES PAYG to BYOS or moving from SLES BYOS to PAYG, see [SUSE Linux Enterprise and Azure Hybrid Benefit](https://aka.ms/suse-ahb).
+To use Azure Hybrid Benefit for PAYG SLES VMs, and for information about moving from SLES PAYG to BYOS or moving from SLES BYOS to PAYG, see [SUSE Linux Enterprise and Azure Hybrid Benefit](https://aka.ms/suse-ahb).
-Customers who use Azure Hybrid Benefit need to move the Cloud Update Infrastructure to one of three options that provide software updates and patches to those VMs:
+Customers who use Azure Hybrid Benefit for PAYG SLES VMs need to move the Cloud Update Infrastructure to one of three options that provide software updates and patches to those VMs:
- [SUSE Customer Center](https://scc.suse.com) - SUSE Manager - SUSE Repository Mirroring Tool (RMT)
-## Azure Hybrid Benefit on Reserved Instances
+## Azure Hybrid Benefit for PAYG VMs on Reserved Instances
-Azure Reservations (Azure Reserved Virtual Machine Instances) help you save money by committing to one-year or three-year plans for multiple products. You can learn more about [Reserved instances here](../../cost-management-billing/reservations/save-compute-costs-reservations.md). The Azure Hybrid Benefit is available for [Reserved Virtual Machine Instance(RIs)](../../cost-management-billing/reservations/save-compute-costs-reservations.md#charges-covered-by-reservation).
+Azure Reservations (Azure Reserved Virtual Machine Instances) help you save money by committing to one-year or three-year plans for multiple products. You can learn more about [Reserved instances here](../../cost-management-billing/reservations/save-compute-costs-reservations.md). The Azure Hybrid Benefit for PAYG VMs is available for [Reserved Virtual Machine Instances (RIs)](../../cost-management-billing/reservations/save-compute-costs-reservations.md#charges-covered-by-reservation).
This means that if you have purchased compute at a discounted rate by using RIs, you can apply the AHB benefit to the licensing costs for RHEL and SUSE on top of it. The steps to apply the AHB benefit to an RI instance remain exactly the same as they are for a regular VM. ![AHB for RIs](./media/azure-hybrid-benefit/reserved-instances.png) >[!NOTE]
->If you have already purchased reservations for RHEL or SUSE PAYG software on Azure Marketplace, please wait for the reservation tenure to complete before using the Azure Hybrid Benefit..
+>If you have already purchased reservations for RHEL or SUSE PAYG software on Azure Marketplace, please wait for the reservation tenure to complete before using the Azure Hybrid Benefit for PAYG VMs.
## Frequently asked questions
A: It might take some time for your Red Hat Cloud Access subscription registrati
*Q: I've deployed a VM by using RHEL BYOS "golden image." Can I convert the billing on these images from BYOS to PAYG?*
-A: No, you can't. Azure Hybrid Benefit supports conversion only on pay-as-you-go images.
+A: Yes, you can use the Azure Hybrid Benefit for BYOS VMs capability to do this. You can [learn more about this capability here.](./azure-hybrid-benefit-byos-linux.md)
*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Can I convert the billing on these images from BYOS to PAYG?*
-A: No, you can't. The Azure Hybrid Benefit capability is currently available only to RHEL and SLES images in Azure Marketplace.
+A: Yes, you can use the Azure Hybrid Benefit for BYOS VMs capability to do this. You can [learn more about this capability here.](./azure-hybrid-benefit-byos-linux.md)
*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Do I need to do anything to benefit from Azure Hybrid Benefit?* A: No, you don't. RHEL or SLES images that you upload are already considered BYOS, and you're charged only for Azure infrastructure costs. You're responsible for RHEL subscription costs, just as you are for your on-premises environments.
-*Q: Can I use Azure Hybrid Benefit on VMs deployed from Azure Marketplace RHEL and SLES SAP images?*
+*Q: Can I use Azure Hybrid Benefit for PAYG VMs for Azure Marketplace RHEL and SLES SAP images?*
A: Yes, you can. You can use the license type of `RHEL_BYOS` for RHEL VMs and `SLES_BYOS` for conversions of VMs deployed from Azure Marketplace RHEL and SLES SAP images.
-*Q: Can I use Azure Hybrid Benefit on virtual machine scale sets for RHEL and SLES?*
+*Q: Can I use Azure Hybrid Benefit for PAYG VMs on virtual machine scale sets for RHEL and SLES?*
A: Yes, Azure Hybrid Benefit on virtual machine scale sets for RHEL and SLES is available to all users. You can [learn more about this benefit and how to use it here](../../virtual-machine-scale-sets/azure-hybrid-benefit-linux.md).
-*Q: Can I use Azure Hybrid Benefit on reserved instances for RHEL and SLES?*
+*Q: Can I use Azure Hybrid Benefit for PAYG VMs on reserved instances for RHEL and SLES?*
-A: AHB can be used with reserved instances for Pay-as-you-Go RHEL and SLES. It cannot be used with pre-paid annualized RHEL or SLES subscriptions purchased through Azure.
+A: Yes, Azure Hybrid Benefit for PAYG VMs on reserved instance for RHEL and SLES is available to all users. You can [learn more about this benefit and how to use it here](#azure-hybrid-benefit-for-payg-vms-on-reserved-instances).
-*Q: Can I use Azure Hybrid Benefit on a virtual machine deployed for SQL Server on RHEL images?*
+*Q: Can I use Azure Hybrid Benefit for PAYG VMs on a virtual machine deployed for SQL Server on RHEL images?*
A: No, you can't. There is no plan for supporting these virtual machines.